
Microsoft has introduced a new tool called Correction to tackle factually incorrect AI-generated text. Part of Microsoft's Azure AI Content Safety API, it aims to automatically revise erroneous passages by comparing them against a designated source of truth.

While Correction may help improve the reliability of AI-generated content, experts caution that it does not address the root cause of hallucinations in text-generating models. These models tend to make errors because they are statistical systems that predict words based on patterns in the training data, rather than actually “knowing” the information.
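To see why pattern-based prediction can produce fluent but false statements, consider the toy bigram model below. It is a deliberately tiny sketch, nothing like a production language model, but it shows the core failure mode: the model emits whichever word is statistically likely to come next, with no notion of whether the resulting sentence is true.

```python
import random
from collections import Counter, defaultdict

# Toy corpus: the model will learn word-to-word patterns, nothing more.
corpus = (
    "the capital of france is paris . "
    "the capital of italy is rome . "
    "the capital of spain is madrid ."
).split()

# Count bigram frequencies: how often each word follows another.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Sample the next word in proportion to its training frequency."""
    counts = follows[word]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# Generate a fluent-sounding continuation from a prompt word.
word, output = "the", ["the"]
for _ in range(6):
    word = predict_next(word)
    output.append(word)
print(" ".join(output))
# Possible output: "the capital of france is rome ." -- grammatical,
# confidently stated, and wrong, because "is" is followed by paris,
# rome, and madrid equally often in the training data.
```

Real models condition on far more context than one preceding word, but the underlying mechanism is the same statistical one, which is why hallucinations cannot simply be trained away.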

The Correction tool consists of two models: a classifier that detects hallucinated spans and a language model that rewrites them using specified grounding documents. While Microsoft claims Correction can enhance the trustworthiness of AI-generated content, experts like Os Keyes caution that filtering output this way has inherent limits and could introduce new problems of its own.
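For developers, this check is exposed through the Content Safety API's groundedness detection endpoint. The Python sketch below shows roughly what a call might look like; the endpoint path, API version string, the `correction` request flag, and the response field names are assumptions based on Microsoft's public preview documentation at the time of writing, and should be verified against the current Azure AI Content Safety reference before use.

```python
import requests

# Assumed values -- replace with your own Azure resource. The API
# version and "correction" flag reflect the preview docs and may change.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
API_VERSION = "2024-09-15-preview"  # assumption: preview with correction
KEY = "<your-content-safety-key>"

url = f"{ENDPOINT}/contentsafety/text:detectGroundedness?api-version={API_VERSION}"

payload = {
    "domain": "Generic",
    "task": "Summarization",
    # The AI-generated text to check against the grounding source.
    "text": "Microsoft was founded in 1980 by Bill Gates and Paul Allen.",
    # The "source of truth" the classifier compares the text against.
    "groundingSources": [
        "Microsoft was founded by Bill Gates and Paul Allen on April 4, 1975."
    ],
    # Assumption: opting in to the new correction capability.
    "correction": True,
}

resp = requests.post(
    url,
    headers={"Ocp-Apim-Subscription-Key": KEY, "Content-Type": "application/json"},
    json=payload,
    timeout=30,
)
resp.raise_for_status()
result = resp.json()

# The response flags ungrounded spans; with correction enabled it should
# also return revised text (field names per preview docs, unverified).
print(result.get("ungroundedDetected"))
print(result.get("correctionText"))
```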

Mike Cook, a research fellow specializing in AI, warns that while Correction may catch some errors, it could also lull users into a false sense of security. Relying on tools like Correction to patch inaccuracies, he argues, may compound the trust and explainability problems that already surround AI.

In addition to ethical concerns, there are business implications to consider. Microsoft is offering Correction itself for free, but the groundedness detection it depends on to flag hallucinations is free only up to a usage limit. This pricing model could put pressure on businesses that rely on AI tools, especially those already worried about the accuracy of AI-generated content.

Overall, the introduction of Correction highlights the ongoing challenge of building trustworthy and reliable AI tools. As businesses adopt AI more widely, the accuracy and transparency of AI-generated content remain key concerns for developers and users alike. Microsoft's effort is a step in that direction, but the systems behind it will need continued monitoring and improvement to keep the risks of AI-generated content in check.