In the age of AI, it's no surprise that companies are rolling out AI moderation and recommendation systems. But how do they make sure those systems aren't biased against users? That's the question at the heart of a post by Emma Bates, CEO of the social media platform Diem.
"By empowering users to contribute to the annotation and labeling of content, platforms can not only improve transparency but also [mitigate] biases inherent in automated moderation systems," she writes on Medium.
That's why she's created Diem AI, a platform that allows users to ask Diem AI their "personal, pressing, funny, and important questions to discover both factual information (AI summary) and validation (community conversations)."
Bates says Diem AI will also "help present conversational data differently, in a less hierarchical way, for moderation."
Diem is one of many social media platforms rolling out AI moderation and recommendation systems, but Bates writes that "many of them expressed similar concerns regarding the ethical implications of AI, especially in the context of content moderation."