

That’s why you need to know the caveats of the tool you are using.
LLMs hallucinate. People who want to use them need to know where they are more prone to hallucinate: wherever the data about the topic you are asking about is fuzzier. If you ask for the capital of France, it’s highly unlikely you’ll get a hallucination; if you ask for the hair color of the second spouse of the fourth president of the Third French Republic, you probably will get a hallucination.
And you need to know what you are using it for. If it’s for roleplay or any other non-critical matter, you may not care about hallucinations. If you use them for important things, you need to know that the output has to be human-reviewed before using it. For some tasks the human review is worth it, since it’s still faster than writing from scratch; for others it may not be, and then an LLM should not be used for that task.
As an example, I was just writing an LSP library for an API and I tried getting the LLM to generate it from the source documentation. I had my doubts, as the source documentation is quite a bit bigger than my context size. I tried anyway, but I quickly saw that hallucinations were all over the place and hard to fix, so I gave up and have been doing it myself entirely. But before that I did ask the LLM how to even start writing such a thing, as it was the first time I’d done this, and the answer was quite on point, probably saving me several hours of searching online trying to figure out how to do it.
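The “bigger than my context size” check above can be sanity-checked up front. A minimal sketch, using the common rule of thumb of roughly 4 characters per token (the real ratio and the window size both depend on the model and tokenizer you actually use):

```python
# Rough heuristic check: will this documentation fit in a model's
# context window? Assumes ~4 characters per token, which is only a
# rule of thumb; the 128k default is a hypothetical window size.

def fits_in_context(text: str, context_tokens: int = 128_000) -> bool:
    estimated_tokens = len(text) / 4  # crude chars-per-token estimate
    return estimated_tokens <= context_tokens

doc = "word " * 1_000_000  # ~5M characters, far beyond a 128k window
print(fits_in_context(doc))  # prints False for this oversized input
```

If this says the docs don’t fit, you already know the model will be working from a truncated or summarized view, which is exactly where hallucinations creep in.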
It’s all about knowing the tool you are using, same as anything in this world.
Didn’t you get the memo? This week we are hating on crabs.