• 2 Posts
  • 7 Comments
Joined 16 days ago
Cake day: May 31st, 2025

  • can trust moderators to make a judgement call on who’s being a dick

    I think there’s something to that. I’ve only been here a short time, but it looks to me (so far) like the mods are doing a good job, or maybe the community is better behaved than certain others, or both. Whatever problems are here, it’s nothing like the toxicity of some online spaces. Thus, I am content to trust the mod team to make judgment calls, with guidelines to nudge people toward good behaviours and set the tone they want the group to have. A balance between rules and flexibility, if you will.

    Social media seems to drive everyone toward polarization. XYZ is either the best thing that has ever happened, or the worst. The big sites weaponize that using algorithms. But even without the algorithms, it’s human nature and happens on its own to a lesser degree. I think it should be part of social media literacy to be cognizant of that, and try to guard ourselves against it. Personally I’m really enjoying experimenting with LLMs so far, and I think they can enrich people’s lives in nice ways. At the same time, I also believe there are major social risks to this technology, and it’s worth thinking about those. Both can be true at once.



  • Mistral (24B) models are really bad at long context, but this is not always the case. I find that Qwen 32B and Gemma 27B are solid at 32K

    It looks like the Harbinger RPG model I’m using (from Latitude Games) is based on Mistral 24B, so maybe it inherits that limitation? I like it in other ways. It was trained on RPG games, which seems to help for my use case. I did try some general-purpose / vanilla models and felt they were not as good at D&D-type scenarios.

    It looks like Latitude also has a 70B Wayfarer model. Maybe it would do better at bigger contexts. I have several networked machines with 40GB of VRAM between all of them, and I can just squeak an IQ4_XS 70B quant into that unholy assembly if I run 24000 context (before the SWA patch, so maybe more now). I will try it! The drawback is speed: 70B models are slow on my setup, about 8 t/s at startup.
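
    Just to sanity-check the memory math before I try it, here is a rough back-of-the-envelope sketch in Python. The architecture numbers (layer count, KV heads, head size) are assumptions based on typical 70B Llama-style models, not anything specific to Wayfarer, IQ4_XS at ~4.25 bits/weight is an approximation, and the default assumes a q8_0-quantized KV cache; real usage adds compute buffers and per-GPU overhead on top.

    ```python
    # Rough VRAM estimate: quantized weights plus KV cache.
    # All numbers below are assumptions/approximations -- adjust for the real model.

    def weight_gb(n_params_b=70, bits_per_weight=4.25):
        """IQ4_XS averages roughly 4.25 bits per weight."""
        return n_params_b * 1e9 * bits_per_weight / 8 / 1e9

    def kv_cache_gb(n_ctx=24000, n_layers=80, n_kv_heads=8, head_dim=128,
                    bytes_per_elem=1):
        """K and V per layer per token; bytes_per_elem=1 assumes a q8_0-style
        quantized cache (use 2 for fp16)."""
        per_token = 2 * n_layers * n_kv_heads * head_dim * bytes_per_elem
        return n_ctx * per_token / 1e9

    w, kv = weight_gb(), kv_cache_gb()
    print(f"weights ~{w:.1f} GB, KV cache ~{kv:.1f} GB, total ~{w + kv:.1f} GB")
    ```

    That total lands right around the 40GB mark even before compute buffers, which matches how tight the squeeze is.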


  • Ah, great idea about using a low temp for rules and a high temp for creativity. I guess I can easily change it in the frontend, although I also set the temp when I start the server, and I’m not sure which one takes priority. Hopefully the frontend does, so I can tweak it easily.
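
    For my own reference, this is what the temperature knob does to the token distribution. It’s a toy softmax, not llama.cpp’s actual sampler code: a low temp sharpens the distribution so the top token nearly always wins (good for rules adjudication), while a high temp flattens it (good for creative narration). The logits are made up.

    ```python
    import math

    def softmax_with_temperature(logits, temp):
        """Divide logits by temp before softmax: low temp sharpens the
        distribution, high temp flattens it toward uniform."""
        scaled = [x / temp for x in logits]
        m = max(scaled)                      # subtract max for numerical stability
        exps = [math.exp(x - m) for x in scaled]
        total = sum(exps)
        return [e / total for e in exps]

    logits = [2.0, 1.0, 0.5]                 # pretend these are the top-3 token logits
    for t in (0.3, 0.8, 1.5):
        print(t, [round(p, 2) for p in softmax_with_temperature(logits, t)])
    ```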

    Also, your post got me thinking about the DRY sampler, which I’m using but which might be causing trouble in cases where the model legitimately should repeat itself, like an !inventory or !spells command. I might try either disabling it or adding a custom sequence breaker, like the ! character.
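
    To make sure I understand what the breaker would do, I wrote a toy version of the DRY idea. This is not SillyTavern’s or llama.cpp’s actual implementation, and the parameter defaults are only illustrative: a candidate token gets penalized when emitting it would extend a run of tokens that already appeared earlier in the context, and sequence breakers cut the match short.

    ```python
    def dry_penalty(context_tokens, candidate, multiplier=0.8, base=1.75,
                    allowed_length=2, sequence_breakers=("\n", ":", '"', "*", "!")):
        """Penalty for `candidate` if appending it would extend a repeated run."""
        best = 0
        n = len(context_tokens)
        for i in range(n):                    # earlier occurrences of the candidate
            if context_tokens[i] != candidate:
                continue
            length = 0
            # Walk backwards, comparing what preceded the earlier occurrence with
            # the tail of the current context; stop at any sequence breaker.
            while (length < i
                   and context_tokens[i - 1 - length] == context_tokens[n - 1 - length]
                   and context_tokens[i - 1 - length] not in sequence_breakers):
                length += 1
            best = max(best, length)
        if best < allowed_length:
            return 0.0
        return multiplier * base ** (best - allowed_length)

    tokens = "! inventory sword rope torch ! inventory".split()
    # With "!" as a breaker the match stops at the command marker, so repeating
    # the inventory listing after a second "!inventory" costs nothing:
    print(dry_penalty(tokens, "sword"))                              # 0.0
    # Without the "!" breaker, the same continuation picks up a penalty:
    print(dry_penalty(tokens, "sword", sequence_breakers=("\n",)))   # 0.8
    ```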

    I think ST can show token probabilities, so I’ll try that too, thanks. I have so much to learn! I really should try other frontends though. ST is powerful in a lot of ways like dynamic management of the context, but there are other things I don’t like as much. It attaches a lot of info to a character that I don’t feel should be a property of a character. And all my D&D scenarios so far have been just me + 1 AI char, because even though ST has a “group chat” feature, I feel like it’s cumbersome and kind of annoying. It feels like the frontend was first designed around one AI char only, and then something got glued on to work around that limitation.





  • Thanks for your comments and thoughts! I appreciate hearing from more experienced people.

    I feel like a little bit of prompt engineering would go a long way.

    Yah, probably so. I tried to write a system prompt to steer the model toward what I wanted, but it’s going to take a lot more refinement and experimenting to dial it in. I like your idea of asking it to be unforgiving about rules. I hadn’t put anything like that in.

    That’s a great idea about putting a D&D manual, or at least the important parts, into a RAG system. RAG is on my list of things to learn; I know what it is, I just haven’t tried it yet.
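
    Here’s roughly what I have in mind for the retrieval half, as a sketch: chunk the rules text, embed the chunks, and pull the closest ones into the prompt. It assumes the sentence-transformers package with a small local embedding model, and the rule snippets are just placeholders; a real setup would want chunk overlap, caching, and a proper vector store.

    ```python
    # Minimal retrieval sketch for a rules reference (e.g. chunks of the SRD).
    import numpy as np
    from sentence_transformers import SentenceTransformer

    embedder = SentenceTransformer("all-MiniLM-L6-v2")

    rule_chunks = [
        "Opportunity attacks: you can make one when a hostile creature you can see moves out of your reach...",
        "Grappling: the target must be no more than one size larger than you...",
        "Spell slots: casting a spell of 1st level or higher expends a slot of that level or higher...",
    ]
    chunk_vecs = embedder.encode(rule_chunks, normalize_embeddings=True)

    def retrieve(question, k=2):
        """Return the k chunks most similar to the question (cosine similarity,
        which is a plain dot product on normalized vectors)."""
        q = embedder.encode([question], normalize_embeddings=True)[0]
        top = np.argsort(chunk_vecs @ q)[::-1][:k]
        return [rule_chunks[i] for i in top]

    hits = retrieve("Can the goblin attack me when I run past it?")
    print("Relevant rules:\n" + "\n".join(f"- {h}" for h in hits))  # inject above the user turn
    ```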

    I’ve definitely seen the quality of output start to decline around 16K context, even on models that claim to support 128K. The system prompt also seems more effective while the context is still small, say 4K tokens; as the context grows, the model becomes less and less inclined to follow it. I’ve been guessing this is because any given piece of a larger context becomes more dilute, but I don’t really know.
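
    Whether attention actually behaves that way I have no idea, but the raw proportions at least point in the same direction. Assuming a system prompt of, say, 800 tokens:

    ```python
    # Share of the context window occupied by the system prompt as history grows.
    system_prompt_tokens = 800                 # assumed size, just for illustration
    for history in (4_000, 16_000, 32_000):
        share = system_prompt_tokens / (system_prompt_tokens + history)
        print(f"{history:>6} history tokens -> system prompt is {share:.1%} of the context")
    ```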

    For those reasons, I’m trying to use summarization to keep the context size under control, but I haven’t found a good approach yet. SillyTavern has an automatic summary-injection feature, but either I’m misunderstanding it or I don’t like how it works, so I end up doing it manually.
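
    The manual approach I keep circling back to looks something like this sketch: once the history passes a token budget, have the model compress the oldest turns into a summary and keep only that summary plus the recent turns. It assumes llama-server’s OpenAI-compatible endpoint on the default port; the budget numbers and the crude 4-characters-per-token estimate are arbitrary.

    ```python
    # Rolling summarization sketch: compress old turns once the history gets big.
    import requests

    API = "http://localhost:8080/v1/chat/completions"

    def token_estimate(text):
        return len(text) // 4                  # crude ~4 chars/token heuristic

    def summarize(turns):
        prompt = ("Summarize this D&D session log, keeping character state, "
                  "inventory, quest progress, and unresolved threads:\n\n"
                  + "\n".join(turns))
        r = requests.post(API, json={"messages": [{"role": "user", "content": prompt}],
                                     "temperature": 0.3})
        return r.json()["choices"][0]["message"]["content"]

    def compact_history(turns, budget_tokens=12_000, keep_recent=20):
        if sum(token_estimate(t) for t in turns) <= budget_tokens:
            return turns
        old, recent = turns[:-keep_recent], turns[-keep_recent:]
        return ["[Summary of earlier play] " + summarize(old)] + recent
    ```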

    I tried a few CoT models, but not since I moved to ST as a frontend. I was using them with the standard llama-server web interface, which is a rather simple affair. My problem was that the thinking output spammed up the context, leaving much less space for my own use; each think block was something like 500-800 tokens. It looks like ST might be able to keep only the most recent think block in the context, so I need to do more experimenting. The other problem was that the thinking could just take a lot of time.
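
    If ST can’t do it, the pruning itself seems simple enough to script: strip the think blocks out of every assistant turn except the most recent one before the history goes back to the model. A sketch (the tag name varies by model, so <think> here is an assumption):

    ```python
    # Keep only the latest reasoning block in the chat history.
    import re

    THINK_RE = re.compile(r"<think>.*?</think>\s*", flags=re.DOTALL)

    def prune_think_blocks(messages):
        """messages: list of {'role': ..., 'content': ...} dicts, oldest first."""
        last_assistant = max(
            (i for i, m in enumerate(messages) if m["role"] == "assistant"),
            default=None,
        )
        pruned = []
        for i, m in enumerate(messages):
            if m["role"] == "assistant" and i != last_assistant:
                m = {**m, "content": THINK_RE.sub("", m["content"])}
            pruned.append(m)
        return pruned
    ```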