Putting trust to the test
Hopkins researchers unveil new uncertainty quantification methods in an effort to promote appropriate trust in AI use.
Artificial intelligence and machine learning can help users sift through terabytes of data to arrive at conclusions driven by relevant information, prior results, and statistics. But how much should we trust those conclusions, especially given AI's tendency to hallucinate, or simply make things up?
Consider an AI health care system that predicts a patient's diagnosis or risk of disease. A doctor's decision to trust that system's advice can have a major impact on the patient's health outcomes; in such high-stakes scenarios, the cost of a hallucination could be someone's life.
AI predictions can vary in trustworthiness for any number of reasons: maybe the data are bad to begin with, or the system hasn't seen enough examples like the situation it's facing to have learned what to do. Additionally, most of the modern deep neural networks behind AI predict a single outcome: whichever answer earns the highest score under the model. In health care, that means the system described above would return only one probable diagnosis out of the many possibilities. But just how certain is the AI that it has the correct answer?
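To see how a single "best answer" can hide uncertainty, here is a minimal sketch of how a typical classifier turns raw scores into one prediction. This is an illustration, not the Hopkins researchers' method; the diagnosis labels, scores, and softmax step are assumptions made up for the example.

```python
import numpy as np

# Hypothetical raw scores ("logits") from a toy diagnosis classifier.
# The labels and numbers are invented purely for illustration.
diagnoses = ["flu", "pneumonia", "bronchitis", "covid-19"]
logits = np.array([2.1, 1.9, 0.3, 1.8])

# Softmax converts raw scores into a probability distribution over classes.
probs = np.exp(logits - logits.max())
probs /= probs.sum()

# A typical deep network reports only the single highest-scoring class...
top = int(np.argmax(probs))
print(f"Predicted diagnosis: {diagnoses[top]} ({probs[top]:.0%})")

# ...but the full distribution shows how close the alternatives are,
# which is exactly the uncertainty a lone "best answer" conceals.
for name, p in sorted(zip(diagnoses, probs), key=lambda x: -x[1]):
    print(f"  {name}: {p:.0%}")
```

In this made-up case the top prediction wins with only about 37 percent probability, barely ahead of the runner-up, which is precisely the kind of near-tie that uncertainty quantification methods are designed to surface.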