queermunist she/her

/u/outwrangle before everything went to shit in 2020, /u/emma_lazarus for a while after that, now I’m all queermunist!

  • 0 Posts
  • 41 Comments
Joined 2 years ago
Cake day: July 10th, 2023

  • What do you mean by “retrain your model”?

    An example of this is DeepSeek-R1’s “1776” variant, where someone uncensored it, and now it will talk freely about Tiananmen Square.

    I guess this is more accurately called “post-training” instead of “re-training” but my point stands.

    If it’s possible, hold the model’s creators responsible.

    Requiring US commercial vendors to implement fingerprinting would disadvantage them against open source models, and against vendors from other countries (like DeepSeek) who wouldn’t comply.

    China is very willing to regulate AI development. If the US and China actually cooperated, we’d be able to get a handle on this technology and its development together. Also, it really looks like the US is the problem. They’re the ones who don’t want to regulate, they’re the ones who don’t want to cooperate, and they’re the ones with the most problematic companies.

    And “open source” models just aren’t as problematic as proprietary models. Training a model still requires massive amounts of data, compute, energy, etc. The open source models are going to be much smaller, weaker, and more specialized, and as a result less dangerous and less in need of regulation anyway.

    But I wouldn’t cry too hard if commercial vendors all failed and were replaced with open source, so if open source really can outcompete them then I welcome it. That seems like a really good side effect of regulating the commercial vendors!

    The current US government is very unlikely to try in the first place

    Well if we’re limiting ourselves to what is likely, nothing will happen.

    There will never be any regulations, they won’t even try.



  • It’s practical for a government to regulate Microsoft, Google, Amazon, OpenAI, etc. Who cares if they can’t catch everything? Focusing on the biggest problems is perfectly fine imo; the worst offenders are the biggest companies, as usual.

    Your company’s AI model got retrained and used in a way that violates regulations? Whelp, looks like your company is liable for that. Oh, that wasn’t done by your company or anyone involved? Too fucking bad, should have made it harder to retrain your model.

    And if they resist, break them on the fucking wheel.

    You act like it’s impossible and so we shouldn’t even try, which is honestly just an anti-regulation talking point that is trotted out for literally everything.



  • That’s the history of memes.

    So at first, people were making fun of a bad artist with a bad comic. In that context it wasn’t edgelord material; he was an adult and could handle being the butt of a joke.

    Then, the meme gained a life of its own (its context was itself) and that old context ceased to matter. It wasn’t about the artist anymore, but without context it still wasn’t edgelord material.

    At no point was the joke edgy because the joke was never about miscarriage.


  • queermunist she/her@lemmy.ml to memes@lemmy.world · Loss

    The context is that people recognize the meme. That’s basically it. The joke is itself.

    But also I don’t really care if some artist is annoyed that his early, bad artwork became the butt of a joke. He was in his 20s when he started that comic, let’s not pretend like this is bullying a child’s comics or something.



  • queermunist she/her@lemmy.ml to memes@lemmy.world · Loss
    edited · 4 days ago

    You might have missed the joke?

    The joke isn’t that miscarriage is funny. That would be edgelord material.

    The joke is that B^Uckley is a hack writer and hack artist and his comics aren’t funny and look like shit. It’s making fun of him.


  • China’s commitment to peaceful internal development certainly means they won’t help revolutionary communism abroad directly, at least for now, but they still heighten the contradictions. Normal people will look at their growing economy and compare it to our stagnant economy and become agitated.

    And then there’s BRICS destroying the reserve currency status of the US dollar and bringing about a multipolar world.

    Closer to home, there’s the internal decline of the US that will soon make it impossible for it to meddle in countries with socialist and anticolonial movements. Imagine a world where the US couldn’t kill leaders like Sukarno or Lumumba or Allende, nor could it invade Korea or Vietnam. It will try, of course, but the age of hegemony is over.

    That’s the future I see, and it makes me optimistic.

    Quite off topic for an AI thread, though!






  • Force AI models to embed some kind of metadata in all the material they generate. Training AI models is a massive undertaking; it’s not like vendors can hide what they’re doing. We know who is training these models and where their data centers are, so a regulatory agency would certainly be able to force them to comply.

    In the US this could be done through the FCC; in other countries the power can be vested in the regulatory bodies that already control communications and broadcasting.

    The penalty? Break them on the fucking wheel.
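As a toy illustration of what “metadata in all their material” could mean for text output: one simple (and easily stripped) approach is hiding a provenance tag in generated text with zero-width Unicode characters. This is just a sketch of the idea, not any vendor’s actual scheme, and the tag name below is made up; real proposals use far more robust statistical watermarks.

```python
# Toy provenance tagging via zero-width characters.
# Assumes the carrier text contains no zero-width characters of its own.
ZW0, ZW1 = "\u200b", "\u200c"  # zero-width space / zero-width non-joiner as bits 0 and 1

def embed_tag(text: str, tag: str) -> str:
    """Append the tag, bit by bit, as invisible characters."""
    bits = "".join(f"{byte:08b}" for byte in tag.encode("utf-8"))
    marker = "".join(ZW1 if b == "1" else ZW0 for b in bits)
    return text + marker  # renders identically to the original text

def extract_tag(text: str) -> str:
    """Collect the hidden bits and decode them back into the tag."""
    bits = "".join("1" if ch == ZW1 else "0" for ch in text if ch in (ZW0, ZW1))
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode("utf-8", errors="ignore")

stamped = embed_tag("The model's answer goes here.", "vendor:acme/model:v1")
assert extract_tag(stamped) == "vendor:acme/model:v1"
```

Note the weakness: anyone can strip the marker by normalizing the text, which is exactly why this kind of rule only realistically bites the big, identifiable vendors discussed above.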


  • I’ve hated Reddit since back when shitredditsays was in its heyday (even though I was also an incessant poster), and after they banned the Chapo subreddit I retreated to only really using a few local subreddits to keep track of local news.

    When the exodus started two-ish years ago I jumped on the bandwagon and haven’t logged into Reddit since.

    I call y’all reddit.world as a joke but you aren’t nearly as bad.