• AbouBenAdhem@lemmy.world · 4 days ago

    Fun fact: LLMs that strictly generate the most predictable output read as boring and vacuous to human readers, so programmers deliberately add a bit of randomness at sampling time, controlled by a parameter they call “temperature” (toy sketch below).

    It’s that unpredictable element that makes LLMs seem humanlike—not the predictable part that’s just functioning as a carrier signal.
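
    For anyone curious how that knob actually works, here’s a minimal NumPy sketch of temperature-scaled sampling. It’s a hypothetical illustration, not any particular model’s implementation; the `logits` stand in for a model’s raw next-token scores:

    ```python
    import numpy as np

    def sample_token(logits, temperature=1.0):
        """Sample a token id from raw next-token logits.

        temperature < 1 sharpens the distribution (more predictable text),
        temperature > 1 flattens it (more surprising text), and
        temperature -> 0 approaches deterministic argmax decoding.
        """
        if temperature == 0:
            return int(np.argmax(logits))      # fully predictable output
        scaled = np.asarray(logits, dtype=np.float64) / temperature
        scaled -= scaled.max()                 # subtract max for numerical stability
        probs = np.exp(scaled) / np.exp(scaled).sum()
        return int(np.random.choice(len(probs), p=probs))
    ```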

    • snooggums@lemmy.world · 4 days ago

      The unpredictable element is also why they absolutely suck at being the reliable sources of accurate information they’re advertised to be.

      Yeah, humans are wrong a lot of the time, but an AI that’s being forced into everything should be more reliable than the average human.

      • rhombus@sh.itjust.works · 3 days ago

        That’s not it. Even without any added variability, they would still be wrong all the time. The issue is inherent to LLMs: they don’t actually understand your questions or even their own responses. It’s just the most probable jumble of words that would follow the question.
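
        To make that concrete, here’s a toy greedy-decoding loop (`model` is a hypothetical stand-in for any LLM’s next-token scorer, not a real API): even with zero randomness, each step just appends whichever token is most probable, and nothing in the loop ever checks facts or meaning.

        ```python
        import numpy as np

        def greedy_decode(model, prompt_tokens, max_new=50):
            # `model(tokens)` is assumed to return a score (logit) for every
            # vocabulary token; real LLMs differ in detail but not in spirit.
            tokens = list(prompt_tokens)
            for _ in range(max_new):
                logits = model(tokens)                  # score all continuations
                tokens.append(int(np.argmax(logits)))   # pick the most probable one
            return tokens
        ```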

        • gandalf_der_12te@discuss.tchncs.de · 2 days ago

          First of all, it doesn’t matter whether you think that AI can replace human workers. It only matters whether companies think that AI can replace human workers.

          Secondly, you’re assuming that humans typically understand the question at hand. You’ve clearly never met, or been, an underpaid, overworked employee who doesn’t give a flying fuck about the daily bullshit.