In large groups, yes. It’s just statistics. For example, I can’t tell whether any given flipped coin will land heads or tails, but I can tell you that out of 100 million flipped coins, about 50 million will be tails.
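If you want to see that in action, here’s a minimal numpy sketch of the same coin-flip experiment (the numbers in the comments follow from the binomial formula, not from the source):

```python
import numpy as np

# One draw from a binomial distribution is equivalent to
# flipping 100 million fair coins and counting the tails.
rng = np.random.default_rng()
tails = rng.binomial(n=100_000_000, p=0.5)

# The standard deviation is sqrt(n * p * (1 - p)) = 5,000, so the
# count almost always lands within ~15,000 of 50 million.
print(tails)
```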
We are, but only the truly simple-minded can be thoroughly swayed and turned into antisocial beasts of propaganda, tasked with toil and consumption. So there’s no need to vilify “the algorithms” or their results… there’s nothing wrong with YouTube recommending me a Japanese “Careless Whisper” cover from the 80s based on my previous input. 😅
Oh, you are so mistaken. Propaganda, which is essentially advertising for political stances, takes a toll on us all. You just don’t notice it because modern propaganda targets the subconscious more than the conscious mind, and many people have weaker defenses around their subconscious than around their conscious thinking.
On top of that, you’re vastly underestimating how pliable the human mind usually is. When it’s presented with a single credible idea, something like a viral infection takes hold: the idea replicates and can grow exponentially until it reaches a kind of saturation point.
Yet you are right that we must not give up on confronting ourselves with these kinds of messages in order to find the truth. Dialogue is the essential foundation of democracy. Only dialogue can reveal the truth.
Fun fact: LLMs that strictly generate the most likely output are seen as boring and vacuous by human readers, so developers add a bit of controlled randomness via a parameter they call “temperature”.
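Here’s a minimal sketch of what that knob actually does, assuming you have the model’s raw scores (logits) for each candidate token; all names and numbers are illustrative:

```python
import numpy as np

def sample_next_token(logits, temperature=1.0, rng=None):
    """Sample one token id from raw model scores (logits).

    Low temperature pushes the distribution toward the single most
    likely token (predictable, 'boring'); high temperature flattens
    it so less likely tokens get through (more surprising output).
    """
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits, dtype=float) / temperature  # the whole trick
    scaled -= scaled.max()                                  # numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()           # softmax
    return rng.choice(len(probs), p=probs)

# Hypothetical logits for four candidate tokens:
logits = [4.0, 3.5, 1.0, 0.5]
print(sample_next_token(logits, temperature=0.2))  # heavily favors token 0
print(sample_next_token(logits, temperature=1.5))  # spreads picks across the rest
```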
It’s that unpredictable element that makes LLMs seem humanlike—not the predictable part that’s just functioning as a carrier signal.
The unpredictable element is also why they absolutely suck at being the reliable sources of accurate information that they are being advertised to be.
Yeah, humans are wrong a lot of the time, but if AI is going to be forced into everything, it should at least be more reliable than the average human.
That’s not it. Even without any added randomness they would still be wrong constantly. The problem is inherent to LLMs: they don’t actually understand your question or even their own response. The output is just the most probable jumble of words that would follow the question.
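To make that concrete, here’s a toy sketch with made-up probabilities; the point is that “most probable continuation” and “true answer” are different things:

```python
import numpy as np

# Suppose these are the model's next-token probabilities after the
# prompt "The capital of Australia is" (hypothetical numbers):
vocab = ["Sydney", "Canberra", "Melbourne", "Paris"]
probs = np.array([0.55, 0.30, 0.10, 0.05])

# Greedy decoding (no randomness at all, i.e. temperature -> 0) still
# returns the statistically common continuation, not the true one.
print(vocab[int(np.argmax(probs))])  # -> "Sydney" (the capital is Canberra)
```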
First of all, it doesn’t matter whether you think AI can replace human workers. It only matters whether companies think AI can replace human workers.
Secondly, you’re assuming that humans typically understand the question at stake. You’ve clearly never met, or been, an underpaid, overworked employee who doesn’t give a flying fuck about the daily bullshit.