Microsoft’s new safety system can catch hallucinations in its customers’ AI apps

1 month ago 19
Microsoft logo Illustration: The Verge

Sarah Bird, Microsoft’s chief product officer of responsible AI, tells The Verge in an interview that her team has designed several new safety features that will be easy to use for Azure customers who aren’t hiring groups of red teamers to test the AI services they built. Microsoft says these LLM-powered tools can detect potential vulnerabilities, monitor for hallucinations “that are plausible yet unsupported,” and block malicious prompts in real time for Azure AI customers working with any model hosted on the platform.

“We know that customers don’t all have deep expertise in prompt injection attacks or hateful content, so the evaluation system generates the prompts needed to simulate these types of attacks. Customers can then get a...

Continue reading…

*** Disclaimer: This Article is auto-aggregated by a Rss Api Program and has not been created or edited by Bdtype.

(Note: This is an unedited and auto-generated story from Syndicated News Rss Api. News.bdtype.com Staff may not have modified or edited the content body.

Please visit the Source Website that deserves the credit and responsibility for creating this content.)

Watch Live | Source Article