Charlotte Trueman
Senior Writer

Microsoft launches AI content safety service

news
Oct 17, 2023 | 2 mins
Artificial Intelligence, Cloud Computing, Microsoft

Microsoft’s Azure AI Content Safety service includes image and text detection to identify and grade content based on the likelihood that it will cause harm.

Credit: Shutterstock

Microsoft has announced the general availability of its Azure AI Content Safety, a new service that helps users detect and filter harmful AI- and user-generated content across applications and services.

The service includes text and image detection and identifies content that Microsoft terms “offensive, risky, or undesirable,” including profanity, adult content, gore, violence, and certain types of speech.

“By focusing on content safety, we can create a safer digital environment that promotes responsible use of AI and safeguards the well-being of individuals and society as a whole,” wrote Louise Han, product manager for Azure Anomaly Detector, in a blog post announcing the launch.

Azure AI Content Safety can handle various content categories, languages, and threats to moderate both text and visual content. It also offers image features that use AI algorithms to scan, analyze, and moderate visual content, ensuring what Microsoft terms 360-degree comprehensive safety measures.

The service can also moderate content across multiple languages and applies a severity metric that grades specific content on a scale from 0 to 7.

Content graded 0-1 is deemed to be safe and appropriate for all audiences, while content that expresses prejudiced, judgmental, or opinionated views is graded 2-3, or low.

Medium severity content, graded 4-5, contains offensive, insulting, mocking, or intimidating language, or explicit attacks against identity groups. High severity content, graded 6-7, contains the harmful and explicit promotion of harmful acts, or endorses or glorifies extreme forms of harmful activity towards identity groups.
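The four severity bands described above can be sketched as a simple mapping. This is an illustrative sketch only, not Microsoft's implementation; the function name and band labels are chosen here for clarity, while the cutoffs follow the article's description.

```python
def severity_band(score: int) -> str:
    """Map a 0-7 content severity score to a named band.

    Illustrative helper (hypothetical name); cutoffs follow the
    article's description of Azure AI Content Safety grading.
    """
    if not 0 <= score <= 7:
        raise ValueError("severity score must be between 0 and 7")
    if score <= 1:
        return "safe"    # appropriate for all audiences
    if score <= 3:
        return "low"     # prejudiced, judgmental, or opinionated views
    if score <= 5:
        return "medium"  # offensive, insulting, mocking, or intimidating
    return "high"        # explicit promotion or glorification of harm
```

A caller would typically compare the returned band against a per-application threshold, filtering anything at or above "medium", for example.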

Azure AI Content Safety also uses multicategory filtering to identify and categorize harmful content across a number of critical domains, including hate, violence, self-harm, and sexual content.
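As a rough sketch, a text-moderation request against the service might be assembled as below. The endpoint path, API version, and payload field names here are assumptions based on common Azure Cognitive Services conventions, not details confirmed by the article; only the four category names come from the text above.

```python
import json

def build_analyze_text_request(resource: str, api_key: str, text: str):
    """Build the URL, headers, and JSON body for a text moderation call.

    Hypothetical helper: the endpoint path, api-version, and body
    field names are assumptions, not a documented contract.
    """
    url = (f"https://{resource}.cognitiveservices.azure.com"
           "/contentsafety/text:analyze?api-version=2023-10-01")
    headers = {
        "Ocp-Apim-Subscription-Key": api_key,  # standard Azure auth header
        "Content-Type": "application/json",
    }
    body = {
        "text": text,
        # The four critical domains named in the article.
        "categories": ["Hate", "Violence", "SelfHarm", "Sexual"],
    }
    return url, headers, json.dumps(body)
```

The returned triple could then be passed to any HTTP client; the response would carry a severity score per requested category.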

“[When it comes to online safety] it is crucial to consider more than just human-generated content, especially as AI-generated content becomes prevalent,” Han wrote. “Ensuring the accuracy, reliability, and absence of harmful or inappropriate materials in AI-generated outputs is essential. Content safety not only protects users from misinformation and potential harm but also upholds ethical standards and builds trust in AI technologies.”

Azure AI Content Safety is priced on a pay-as-you-go basis. Interested users can check out pricing options on the Azure AI Content Safety pricing page.


Charlotte Trueman is a staff writer at Computerworld. She joined IDG in 2016 after graduating with a degree in English and American Literature from the University of Kent. Trueman covers collaboration, focusing on videoconferencing, productivity software, future of work and issues around diversity and inclusion in the tech sector.
