Microsoft Announces New Tool to Detect and Correct Hallucinated Content in AI Output
Azure AI Content Safety is Microsoft's AI service that detects harmful AI-generated content across apps, services, and platforms. It provides both text and image APIs, allowing developers to identify unwanted content.
The Groundedness detection API in Azure AI Content Safety can identify whether a large language model's response is grounded in a user-provided source document. Because current large language models can generate inaccurate or fabricated information (hallucinations), this API helps developers flag such content in AI output.
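For illustration, here is a minimal sketch of calling the Groundedness detection API over REST from Python. The endpoint path, API version, request fields, and environment variable names are assumptions based on Microsoft's preview documentation and may differ for your resource; verify them against the current Azure AI Content Safety reference before relying on them.

```python
import os
import requests

# Assumed configuration: point these at your own Azure AI Content Safety resource.
ENDPOINT = os.environ["CONTENT_SAFETY_ENDPOINT"]  # e.g. https://<resource>.cognitiveservices.azure.com
API_KEY = os.environ["CONTENT_SAFETY_KEY"]


def detect_groundedness(text: str, sources: list[str], query: str) -> dict:
    """Ask the Groundedness detection API whether `text` is grounded in `sources`.

    The path, API version, and field names below follow Microsoft's preview
    documentation as of the announcement and may change.
    """
    url = f"{ENDPOINT}/contentsafety/text:detectGroundedness?api-version=2024-09-15-preview"
    payload = {
        "domain": "Generic",          # domain of the content
        "task": "QnA",                # or "Summarization"
        "qna": {"query": query},      # the user question the LLM answered
        "text": text,                 # the LLM response to check
        "groundingSources": sources,  # the documents the response should be based on
    }
    headers = {
        "Ocp-Apim-Subscription-Key": API_KEY,
        "Content-Type": "application/json",
    }
    resp = requests.post(url, json=payload, headers=headers, timeout=30)
    resp.raise_for_status()
    return resp.json()  # reports whether ungrounded content was detected, and where


if __name__ == "__main__":
    result = detect_groundedness(
        text="The refund window is 90 days.",
        sources=["Our policy allows refunds within 30 days of purchase."],
        query="How long is the refund window?",
    )
    print(result)
```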
Today, Microsoft announced a preview of a correction capability, which lets developers detect and correct ungrounded ("hallucinated") content in AI output in near real time, so that end users receive more factually accurate AI-generated content. Here's how it works (a sketch of the corresponding API request follows these steps):
The application developer enables the correction capability.
When an ungrounded sentence is detected, a new request is sent to the generative AI model to correct it.
The LLM assesses the ungrounded sentence against the grounding document.
Sentences with no content related to the grounding document may be filtered out entirely.
If the content is present in the grounding document, the foundation model rewrites the ungrounded sentence to align with the document.
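Building on the sketch above, the correction step appears to be requested by adding a correction flag and an Azure OpenAI deployment the service can use to rewrite ungrounded sentences. The field names and environment variables below are assumptions drawn from the preview documentation and should be verified against the current API reference.

```python
def detect_and_correct(text: str, sources: list[str], query: str) -> dict:
    """Detect ungrounded sentences and ask the service to rewrite them.

    Reuses ENDPOINT, API_KEY, os, and requests from the earlier sketch.
    """
    url = f"{ENDPOINT}/contentsafety/text:detectGroundedness?api-version=2024-09-15-preview"
    payload = {
        "domain": "Generic",
        "task": "QnA",
        "qna": {"query": query},
        "text": text,
        "groundingSources": sources,
        "correction": True,  # ask the service to rewrite ungrounded sentences
        # An Azure OpenAI deployment used for the rewrite step (placeholder names).
        "llmResource": {
            "resourceType": "AzureOpenAI",
            "azureOpenAIEndpoint": os.environ["AOAI_ENDPOINT"],
            "azureOpenAIDeploymentName": os.environ["AOAI_DEPLOYMENT"],
        },
    }
    headers = {
        "Ocp-Apim-Subscription-Key": API_KEY,
        "Content-Type": "application/json",
    }
    resp = requests.post(url, json=payload, headers=headers, timeout=60)
    resp.raise_for_status()
    return resp.json()  # expected to include corrected text when ungrounded content is found
```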
In addition to the correction capability, Microsoft also announced a public preview of hybrid Azure AI Content Safety (AACS). This lets developers deploy content safety checks both in the cloud and on device: the AACS embedded SDK runs content safety checks in real time directly on the device, even without an internet connection.
Finally, Microsoft announced a preview of Protected Material Detection for Code, which can be used with code-generating AI applications to detect whether an LLM is generating protected code. This capability was previously only available through the Azure OpenAI Service; Microsoft is now making it available to customers for use with other code-generating AI models.
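For completeness, here is a hedged sketch of what a call to this check might look like, reusing the ENDPOINT, API_KEY, and imports from the first example. The endpoint name and request shape are assumptions based on the preview documentation and may not match the final API surface.

```python
def detect_protected_code(code_snippet: str) -> dict:
    """Screen LLM-generated code for protected material (preview API, names assumed)."""
    url = f"{ENDPOINT}/contentsafety/text:detectProtectedMaterialForCode?api-version=2024-09-15-preview"
    payload = {"code": code_snippet}  # the generated code to screen
    headers = {
        "Ocp-Apim-Subscription-Key": API_KEY,
        "Content-Type": "application/json",
    }
    resp = requests.post(url, json=payload, headers=headers, timeout=30)
    resp.raise_for_status()
    return resp.json()  # indicates whether protected material was detected
```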