As LLMs become the go-to for quick answers, fewer people are posting questions on forums or social media. This shift could make online searches less fruitful in the future, with fewer discussions and solutions available publicly. Imagine troubleshooting a tech issue and finding nothing online because everyone else asked an LLM instead. You do the same, but the LLM only knows the manual, offering no further help. Stuck, you contact tech support, wait weeks for a reply, and the cycle continues—no new training data for LLMs or new pages for search engines to index. Could this lead to a future where both search results and LLMs are less effective?

    • chaosCruiser@futurology.todayOP
      18 days ago

Sure does, but somehow many of the answers still work well enough. In many contexts, the hallucinations are only speed bumps, not show-stopping disasters.

      • oakey66@lemmy.world
        18 days ago

        It told people to put glue in their pizza to make the dough chewy. It’s pretty fucking awful.

        • chaosCruiser@futurology.todayOP
          18 days ago

          Copilot wrote me some code that totally does not work. I pointed out the bug and told it exactly how to fix the problem. It said it fixed it and gave me the exact same buggy trash code again. Yes, it can be pretty awful. LLMs fail in some totally absurd and unexpected ways. On the other hand, it knows the documentation of every function, but somehow still fails at some trivial tasks. It’s just bizarre.

          • oakey66@lemmy.world
            18 days ago

It does this because it inherently hallucinates. It’s just an analytical letter guesser that sounds human because it amalgamates text and predicts the next word. It’s ingested so much input that it can sound human, but it has no concept of right and wrong, even when you tell it that it’s wrong. It doesn’t understand anything. That’s why it sucks, and that’s why it will always suck. It will not replace search because it makes shit up. I use it for coding here and there as well, and it’s just making up functions that don’t exist or attributing functions to packages that aren’t real.
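The “predicts the next word” point can be sketched with a toy bigram model (purely an illustration, vastly simpler than a real LLM): it emits whichever word most often followed the current one in its training text, with no notion of whether the result is true.

```python
# Toy sketch of statistical next-word prediction (NOT a real LLM):
# the "model" only counts which word most often follows another,
# so it produces fluent-looking continuations with no concept of truth.
from collections import Counter, defaultdict

corpus = "the function returns a value the function raises an error".split()

# Count bigram frequencies: word -> Counter of the words that follow it.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most common follower, right or wrong."""
    options = follows.get(word)
    return options.most_common(1)[0][0] if options else None

print(predict_next("the"))  # "function" — the most frequent follower
```

Greedy frequency-based guessing like this is why telling the model it’s wrong doesn’t help: there is no fact-checking step anywhere, only statistics over past text.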

  • Seasoned_Greetings@lemm.ee
    17 days ago

    My 70 year old boss and his 50 year old business partner just today generated a set of instructions for scanning to a thumb drive on a specific model of printer.

    They obviously missed the “AI Generated” tag on the Google search and couldn’t figure out why the instructions cited the exact model but told them to press buttons and navigate menus that didn’t exist.

These are average people, and they didn’t realize they were even using AI, much less how unreliable it can be.

I think there’s going to be a place for forums to discuss niche problems for as long as AI just means advanced LLMs and not actual intelligence.

    • chaosCruiser@futurology.todayOP
      17 days ago

When diagnosing software-related tech problems with proper instructions, there’s always the risk of finding outdated tips. You may be advised to press buttons that no longer exist in the version you’re currently using.

      With hardware though, that’s unlikely to happen, as long as the model numbers match. However, when relying on AI generated instructions, anything is possible.

      • embed_me@programming.dev
        15 days ago

It’s not so simple with hardware either. Although less frequent, hardware also has variants, and their nuances are easily missed by LLMs.