Pro@programming.dev to Technology@lemmy.world · English · edited 8 days ago

Meta plans to replace humans with AI to automate up to 90% of its privacy and integrity risk assessments, including in sensitive areas like violent content

text.npr.org
AstralPath@lemmy.ca · 8 days ago
Honestly, I’ve always thought the best use case for AI is moderating NSFL content online. No one should have to see that horrific shit.

ouch@lemmy.world · 8 days ago
What about false positives? Or a process to challenge them?
But yes, I agree with the general idea.

Beej Jorgensen@lemmy.sdf.org · 8 days ago
> Or a process to challenge them?
😂😂😂😔

tarknassus@lemmy.world · 8 days ago
They will probably use the YouTube model - “you’re wrong and that’s it”.

HowAbt2day@futurology.today · 8 days ago
Not suitable for Lemmy?

blargle@sh.itjust.works · 8 days ago
Not sufficiently fascist leaning. It’s coming, Palantir’s just waiting for the go-ahead…
> Honestly, I’ve always thought the best use case for AI is moderating NSFL content online.

Bsky already does that.