The Coalition for Health AI (CHAI) announced in a progress update that it will meet this month to finalize its consensus-driven framework and share recommendations by the end of the year.
WHY IT MATTERS
CHAI met in December to develop consensus and mutual understanding, with goals to temper the rush to buy AI and machine learning products in healthcare and to arm health IT decision-makers with academic research and vetted guidelines that help them choose technologies that are reliable and deliver value.
Through October 14, CHAI is accepting public comments on its work on testability, usability and safety, developed at a workshop with subject matter experts from healthcare and other industries that the organization held in July.
Previously, CHAI produced a lengthy paper on bias, fairness and equity based on a two-day meeting, with public comment accepted through the end of last month. The result will be a framework, the Guidelines for the Responsible Use of Artificial Intelligence in Healthcare, that intentionally advances assurance of trustworthy AI, safety and security, according to the October 6 progress update.
"The application of artificial intelligence brings tremendous benefit to patient care, but also may exacerbate inequities in healthcare," Dr. John Halamka, president of the Mayo Clinic Platform and a co-founder of the coalition, said in the update.
The coalition says it is also working to build a toolkit and guidelines spanning the patient care journey, from chatbots to patient records, so that patients are not harmed by algorithmic bias.
"Guidelines for the ethical use of AI solutions cannot come too late. Our coalition's experts share a commitment to ensuring that patient-centered, stakeholder-informed guidelines can achieve equitable outcomes for the entire population," Halamka noted in the update.
The progress update comes on the heels of this week's release of the White House Blueprint for an AI Bill of Rights.
Founded by Change Healthcare, Duke AI Health, Google, Johns Hopkins University, Mayo Clinic, Microsoft, MITRE, Stanford Medicine, UC Berkeley and UC San Francisco, among others, CHAI is observed by the U.S. Food and Drug Administration, the National Institutes of Health and, now, the Office of the National Coordinator for Health Information Technology, according to the announcement.
Some member organizations are also part of the Health AI Partnership, led by the Duke Institute for Health Innovation, which develops guidelines and an open source curriculum based on best practices for AI cybersecurity. DIHI is currently seeking funding proposals from faculty, staff, students and trainees across Duke University and the Duke University Health System for automation-related innovation projects that enhance healthcare operational efficiency.
ONC is focused on the evolving space, discussing in its blog series what it might take to get the best out of algorithms in order to drive innovation, increase competition, and improve patient and population care.
"What we know from studies so far is that predictive technology based on AI/machine learning may positively or negatively affect patient safety, create or propagate bias, and increase or decrease costs. In short, the results have been mixed. But the interest ..." ONC authors Kathryn Marchesini, Jeff Smith and Jordan Everson wrote in a June blog post.
A national need is driving a national health AI framework that promotes transparency and trustworthiness, according to Dr. Brian Anderson, a co-founder of the coalition and chief digital health physician at MITRE.
"The enthusiastic engagement of leading academic health systems, technology organizations and federal observers demonstrates the great national interest in ensuring that AI in health serves us all," he said in the progress update.
The AI coalition was also launched to address flawed software that poses risks of harm to clinicians and patients, and to advance understanding of how AI is spreading across the healthcare industry.
CHAI researchers are also preparing to develop an online curriculum to help educate health IT leaders, setting standards for how staff are trained and how AI systems are supported and maintained.
"These systems can embed systemic bias in care delivery, vendors can market performance claims that differ from real-world performance, and the software exists in a state with little or no best-practice guidance," according to a CHAI launch statement.
But by setting equity and efficiency goals up front in the machine learning process, and designing systems to achieve those goals, many in the healthcare field believe that skewed outcomes can be prevented and the benefits of artificial intelligence realized in healthcare delivery and patient care.
"It is inspiring to see the commitment of the White House and the U.S. Department of Health and Human Services to instilling ethical standards in AI," Halamka said in the update.
“As a coalition, we share many of the same goals, including removing bias in algorithms that focus on health, and we look forward to providing our support and expertise as the policy process progresses,” he said.
Andrea Fox is Senior Editor at Healthcare IT News.
Healthcare IT News is a HIMSS publication.