• brucethemoose@lemmy.world
    11 days ago

    > Local models are not capable of coding yet, despite what benchmarks say. Even if they get what you’re trying to do, they spew out so many syntax errors and tool-calling problems that it’s a complete waste of time.

    I disagree with this. Qwen Coder 32B and its successors have been fantastic in certain niches with the right settings.

    If you apply a grammar template and/or pre-fill the start of the response, drop the temperature dramatically, and keep the actual outputs short, it’s night and day compared to ‘regular’ chatbot usage.
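    As a rough sketch of those settings, here is what a constrained request to a local OpenAI-compatible server (e.g. a llama.cpp or similar backend) might look like. The model name, prompt, and endpoint conventions are assumptions for illustration; the point is the low temperature, the short output cap, and pre-filling the assistant turn so the model continues code instead of chatting:

    ```python
    # Hypothetical sketch: building a constrained completion request for a
    # local Qwen Coder model behind an OpenAI-compatible API.
    # Model name and prompt wording are illustrative assumptions.
    def build_request(snippet: str) -> dict:
        return {
            "model": "qwen2.5-coder-32b",  # assumed local model name
            "temperature": 0.1,            # drop the temperature a ton
            "max_tokens": 128,             # keep the actual outputs short
            "messages": [
                {"role": "user", "content": f"Fill in the TODO:\n{snippet}"},
                # Pre-fill the start of the assistant's reply; some local
                # backends will continue from this instead of starting fresh,
                # which keeps the model in "emit code" mode.
                {"role": "assistant", "content": "```python\n"},
            ],
        }

    payload = build_request("def add(a, b):\n    # TODO")
    ```

    Backends that support GBNF-style grammars can constrain the output even harder, but even the low-temperature/short-output combination alone cuts down on rambling.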

    TBH, one of the biggest problems with LLMs is that they’re treated as chatbot genies, with all sorts of performance-degrading workarounds, rather than as tools for filling in little bits of text (which is what language models were originally conceived for).