- cross-posted to:
- aboringdystopia@lemmy.world
Remember: AI chatbots are designed to maximize engagement, not speak the truth. Telling a methhead to do more meth is called customer capture.
The LLM models themselves aren’t; they don’t really have goals or discriminate.
The AI chatbots that are built on top of those models absolutely are, and it’s no secret.
What confuses me is that the article points to Llama 3, which is a Meta-owned model, but not to any specific chatbot.
This could be an official Facebook AI (do they have one?), but it could also be: “Bro, I used this self-hosted model to build a therapist, wanna try it for your meth problem?”
Heck, I could even see a dealer pretending to help customers who are trying to kick it.
For all we know, they could have self-hosted “Llama3.1_NightmareExtreme_RPG-StoryHorror8B_Q4_K_M” and instructed it to take on the role of a therapist.
Sounds a lot like a drug dealer’s business model. How ironic
You don’t look so good… Here, try some meth—that always perks you right up. Sobriety? Oh, sure, if you want a solution that takes a long time, but don’t you wanna feel better now???
You avoided meth so well! To reward yourself, you could try some meth
I feel like humanity is stupid. Over and over again we develop new technologies, make breakthroughs, and instead of calmly evaluating them and making sure they’re safe, we just jump blindly on the bandwagon and adopt them for everything, everywhere. Just like with asbestos, plastics, and now LLMs.
Fucking idiots.