queermunist she/her
/u/outwrangle before everything went to shit in 2020, /u/emma_lazarus for a while after that, now I’m all queermunist!
- 0 Posts
- 54 Comments
queermunist she/her@lemmy.ml to Showerthoughts@lemmy.world • Accessibility is the only moral use that Generative AI can have nowadays • 1 · 3 days ago
First you say “you’re not mad at GenAI as a technology, you’re mad at Capitalism,” and then when I agree with you, you move the goalposts to argue against it? Are you just arguing for its own sake?
It’s like I never left Reddit!
Talking to you was a mistake. You’re just trying to “win” the conversation, you don’t actually care about anything I have to say.
I’m done. You can have the last word.
queermunist she/her@lemmy.ml to Showerthoughts@lemmy.world • Accessibility is the only moral use that Generative AI can have nowadays • 1 · 3 days ago
And you can’t separate the technology from its historical and material context. We could say “capitalist LLMs” instead of just “LLMs” every time we talk about the technology, but is that useful?
queermunist she/her@lemmy.ml to Showerthoughts@lemmy.world • Accessibility is the only moral use that Generative AI can have nowadays • 1 · 3 days ago
The problem is capitalism, but the technology was produced under capitalism and you’re using this technology under capitalism. So, a distinction without a difference.
queermunist she/her@lemmy.ml to Technology@lemmy.world • YouTube might slow down your videos if you block ads • English • 3 · 5 days ago
So that you don’t find something else to do.
queermunist she/her@lemmy.ml to You Should Know@lemmy.world • YSK: Non-violent protests are 2x more likely to succeed and no non-violent movement that has involved more than 3.5% of a population has ever failed • 211 · 5 days ago
You talk about research, so I’m curious: has any nonviolent campaign succeeded without an accompanying violent campaign?
queermunist she/her@lemmy.ml to Technology@lemmy.world • Is Google about to destroy the web? • English • 101 · 5 days ago
The mistakes compound. It starts with one, but if no mistakes are ever corrected then it won’t just be this one. I’d rather we don’t create a new dialect. So, let’s just nip it in the bud, correct all simple mistakes and ensure communications remain clear for everyone. It’s not even a big deal, someone just pointed out a minor mistake.
I called it stupid because you made a big deal about this, and I got emotional. It really isn’t! It’s just a small correction and we could have all moved on, but no, you had to die on this molehill and now I’m going to ruin my day being mad at this stupid fucking bullshit.
We should all work together to be understood. It’s good that people help each other communicate more clearly.
Corrections are how we reduce lingual entropy. Being corrected shouldn’t be embarrassing or shameful; we should welcome corrections so we can be better understood.
Language is collaborative, we’re always working to be better understood and to help each other be better understood.
If no one was ever corrected about anything, language would drift so badly we’d lose the ability to communicate. Try reading Old English: before standardization, people would just do whatever they wanted. It ranges from barely legible to gibberish.
It’s more effort than a straight read.
I didn’t correct anyone, by the way. I’m just a different person griping about how much it sucks to have to communicate with people who don’t care about being understood.
And you’re right, correcting people is even more work! So on top of the work of translating their stupid post we now have to tell them they were wrong so they don’t do this to us again. If they aren’t ever corrected they’ll just keep being wrong and we’ll have to keep translating their posts.
The alternative is to block them so we never see their posts ever again, which honestly is a better idea. It’s not like we’re missing out.
It’s frustrating to translate from what they said to what they mean. It’s more effort on my part and this is my free time, I don’t want to work.
Just communicate as clearly as you can.
queermunist she/her@lemmy.ml to AI@lemmy.ml • Should there be a law mandating all AI generated content be tagged? • 1 · 7 days ago
What do you mean by “retrain your model”?
An example of this is deepseek-r1’s “1776” variant, where someone uncensored it, and now it will talk freely about Tiananmen Square.
I guess this is more accurately called “post-training” instead of “re-training” but my point stands.
If it’s possible, hold the model’s creators responsible.
Requiring US commercial vendors to implement fingerprinting would disadvantage them against open source models, and against vendors from other countries (like DeepSeek) who wouldn’t comply.
China is very willing to regulate AI development. If the US and China would actually cooperate we’d be able to work together to get a handle on this technology and its development. Also, it really looks like the US is the problem. They’re the ones who don’t want to regulate, they’re the ones that don’t want to cooperate, and they’re the ones with the most problematic companies.
And “open source” models just aren’t as problematic as proprietary models. Training a model is still something that requires massive amounts of data, compute, energy, etc. The open source models are going to be much smaller, weaker, and more specialized and, as a result, less dangerous or in need of regulation anyway.
But I wouldn’t cry too hard if commercial vendors all failed and were replaced with open source, so if open source really can outcompete them, then I welcome it. That seems like a really good side effect of regulating the commercial vendors!
The current US government is very unlikely to try in the first place.
Well, if we’re limiting ourselves to what is likely, nothing will happen.
There will never be any regulations, they won’t even try.
queermunist she/her@lemmy.ml to AI@lemmy.ml • Should there be a law mandating all AI generated content be tagged? • 1 · 8 days ago
It’s practical for a government to regulate Microsoft, Google, Amazon, OpenAI, etc. Who cares if they can’t catch everything? Focusing on the biggest problems is perfectly fine imo, the worst offenders are the biggest companies as usual.
Your company’s AI model got retrained and used in a way that violates regulations? Whelp, looks like your company is liable for that. Oh, that wasn’t done by your company or anyone involved? Too fucking bad, should have made it harder to retrain your model.
And if they resist, break them on the fucking wheel.

You act like it’s impossible and so we shouldn’t even try, which is honestly just an anti-regulation talking point that is trotted out for literally everything.
queermunist she/her@lemmy.ml to AI@lemmy.ml • Should there be a law mandating all AI generated content be tagged? • 1 · 9 days ago
Training AI models is completely different, though. That requires massive amounts of compute and data and electricity and water, and that’s all very easy for the government to track.
That’s the history of memes.
So at first, people were making fun of a bad artist with a bad comic. In that context it wasn’t edgelord material; he was an adult and could handle being the butt of a joke.
Then, the meme gained a life of its own (its context was itself) and that old context ceased to matter. It wasn’t about the artist anymore, but without context it still wasn’t edgelord material.
At no point was the joke edgy because the joke was never about miscarriage.
The context is that people recognize the meme. That’s basically it. The joke is itself.
But also I don’t really care if some artist is annoyed that his early, bad artwork became the butt of a joke. He was in his 20s when he started that comic, let’s not pretend like this is bullying a child’s comics or something.
Well, by now it’s just a self referential meme like all memes. The joke is itself, the context doesn’t even matter anymore.
It’s still not a joke about miscarriage.
You ever notice how every other animal manages to survive without water bottles? It was like that for most of human existence, before we figured out water skins and wooden cups and clay jugs.