• atzanteol@sh.itjust.works · 8 days ago

    This is just brilliant. Every ridiculous argument addressed perfectly.

    “but you have no idea what the code is”

    Are you a vibe-coding YouTuber? Can you not read code? If so: astute point. Otherwise: what the fuck is wrong with you?

    You’ve always been responsible for what you merge to main. You were five years ago. And you are tomorrow, whether or not you use an LLM.

    I want to scream every time somebody brings up “but it writes code that doesn’t work”, and all I can think is “what the fuck is wrong with you that you’re merging code that doesn’t work?” LLMs do not remove your responsibility as a developer to create a working product.

    • SouthFresh@lemmy.world · 8 days ago

      I’ve played with QwenCoder2.5, Qwen3, and Devstral.

    Holy shit, are they bad. Seriously, consistently bad at coding. Initializing variables that are never used, importing and calling functions/methods that don’t exist. It’s fucking pathetic.
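
    To make that concrete, here’s a made-up sketch of the two failure modes I mean (invented names, not verbatim model output): a variable that gets initialized and never touched again, and a call to a helper that doesn’t exist anywhere.

    ```python
    # Hypothetical sketch of the failure modes described above; not actual model output.

    def total_price(items):
        tax_rate = 0.08  # initialized, then never referenced again
        total = sum(item["price"] for item in items)
        # The models also liked calling helpers that simply don't exist, e.g.:
        # total = round_currency(total)  # no such function in the codebase or any import
        return total
    ```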

      • atzanteol@sh.itjust.works · 8 days ago

        I don’t know what to tell ya - GPT-4o does a really good job. Feel free to simply blame “AI slop” for everything, though.

        • Endmaker@ani.social · 2 days ago (edited)

          Kinda late to the party but based on my day-to-day usage of ChatGPT, 4o is rubbish when it comes to coding.

          Now o4-mini-high on the other hand - that’s the good stuff (most of the time).

        • ikt@aussie.zone (OP) · 7 days ago

          yep I’ve been told Gemini is the new hot shit, really hoping local models can catch up