Connect with us

Hi, what are you looking for?

Webinar Expert TeamWebinar Expert Team

Tech News

Microsoft’s new safety system can catch hallucinations in its customers’ AI apps

Microsoft logo
Illustration: The Verge

Sarah Bird, Microsoft’s chief product officer of responsible AI, tells The Verge in an interview that her team has designed several new safety features that will be easy to use for Azure customers who aren’t hiring groups of red teamers to test the AI services they built. Microsoft says these LLM-powered tools can detect potential vulnerabilities, monitor for hallucinations “that are plausible yet unsupported,” and block malicious prompts in real time for Azure AI customers working with any model hosted on the platform.

“We know that customers don’t all have deep expertise in prompt injection attacks or hateful content, so the evaluation system generates the prompts needed to simulate these types of attacks. Customers can then get a…

Continue reading…

You May Also Like

Tech News

The Dbrand Ghost Case for the iPhone 15 Pro. | Image: Dbrand Dbrand is scrapping plans to fix its anti-yellowing Ghost Case, but not...

Editor's Pick

Patrick G. Eddington I’ve been in Washington over 30 years, but sometimes even I can be stunned by the short memories and shortsightedness of members of...

Editor's Pick

I’ve always liked to look at certain points during a bull market or bear market where the character of the market could change based...

Editor's Pick

We were perusing the Advance-Decline Line charts that we have and noticed something that surprised us. The Nasdaq Advance-Decline Line is trending lower while...

Copyright © 2024 All Rights Reserved.