Explaining Responsible AI: Why AI sometimes gets it wrong

AI models learn from vast amounts of information, but they sometimes produce hallucinations, also known as "ungrounded content," by altering or adding to that data.
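
To make the idea of "ungrounded content" concrete, here is a minimal sketch of one naive groundedness check: flag sentences in a model's answer that share few words with the source text they should be grounded in. The function names, the 0.3 overlap threshold, and the method itself are illustrative assumptions, not Microsoft's actual tooling; real detectors use far stronger techniques such as entailment models, but the core idea of checking output against sources is the same.

def token_set(text: str) -> set[str]:
    """Lowercase word set with surrounding punctuation stripped."""
    words = set()
    for raw in text.split():
        word = raw.strip(".,!?;:\"'()").lower()
        if word:
            words.add(word)
    return words

def ungrounded_sentences(answer: str, source: str, threshold: float = 0.3) -> list[str]:
    """Return answer sentences whose word overlap with the source falls below threshold."""
    source_words = token_set(source)
    flagged = []
    for sentence in answer.split("."):
        words = token_set(sentence)
        if not words:
            continue
        overlap = len(words & source_words) / len(words)
        if overlap < threshold:
            flagged.append(sentence.strip())
    return flagged

source = "The Eiffel Tower is 330 metres tall and was completed in 1889."
answer = "The Eiffel Tower was completed in 1889. It was designed by Leonardo da Vinci."
print(ungrounded_sentences(answer, source))
# ['It was designed by Leonardo da Vinci']

The first sentence overlaps heavily with the source and passes; the second invents a claim the source never supports, so it is flagged as ungrounded.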

Learn more about the tools we have put in place to measure, detect, and reduce inaccuracies and ungrounded content: https://news.microsoft.com/source/features/company-news/why-ai-sometimes-gets-it-wrong-and-big-strides-to-address-it/

Subscribe to Microsoft on YouTube here: https://aka.ms/SubscribeToYouTube

Follow us on social:
LinkedIn: https://www.linkedin.com/company/microsoft/
Twitter: https://twitter.com/Microsoft
Facebook: https://www.facebook.com/Microsoft/
Instagram: https://www.instagram.com/microsoft/

For more about Microsoft, our technology, and our mission, visit https://aka.ms/microsoftstories
Category: Artificial Intelligence
