It's a 15-year-old snippet from Stack Overflow and a gist. Somebody paid for this article.
That's the point though: LLMs recycle junk information, including some potentially dangerous information, without any indication of the context. In a regular search of the web or of Stack Overflow, you'd probably see people commenting on how the code is vulnerable, but an LLM doesn't necessarily pass that warning along while still delivering the code.
I'm fine with reading the comments and not copy-pasting code I haven't read, but apparently that's too much to ask nowadays.
Yeah, and this particular vulnerability is pretty obvious to even a moderately experienced developer. You'd really have to be pasting without thinking to let this one slip by.
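The article's snippet isn't quoted here, so purely as a hypothetical illustration of the kind of "obvious" vulnerability being discussed: string-built SQL, the classic pattern old Stack Overflow answers are full of.

```python
import sqlite3

def get_user(conn: sqlite3.Connection, username: str):
    # VULNERABLE (hypothetical example, not the article's snippet): user input
    # is concatenated straight into the SQL string, inviting injection.
    query = "SELECT * FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

def get_user_fixed(conn: sqlite3.Connection, username: str):
    # The fix an upvoted comment would point out: a parameterized query,
    # so the driver handles escaping instead of the string itself.
    return conn.execute("SELECT * FROM users WHERE name = ?", (username,)).fetchall()
```

The point is that in a search you'd see the warning comments next to the first version; an LLM can hand you that version cold.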
It's also that we used to have a dialogue between people about the code, sometimes even with historical background when the creator of the library or the author of some RFC standard got involved. There's a much broader discussion of the topic if you look at the Stack Overflow thread mentioned. Now the dialogue is between an entry-level dev and an AI with the attention span of a six-year-old with ADHD. That doesn't teach the human anything, because it's a tool for solving a particular problem, nothing more. That's not how learning works.