How is AI in Healthcare Being Regulated in 2024? [Opportunities and Risks]

Zeeshan Ali


Artificial intelligence (AI) is increasingly used in vital healthcare applications. From diagnostics and remote patient monitoring to screening medical imaging for abnormalities, the applications of AI in healthcare are immense: it can simplify processes, help professionals manage their workloads, and provide patients with information.

However, using these systems can also directly affect the quality of patient care and, therefore, patient health. Although healthcare is already a highly regulated sector, with healthcare professionals required to follow rigorous rules to maintain standards of patient care and well-being, AI in healthcare can introduce new risks that can cause widespread harm if left uncontrolled.

As such, policymakers worldwide are moving to regulate AI in vital applications such as healthcare, although different jurisdictions are taking different approaches.



How is AI in Healthcare Being Regulated?

In this blog post, we examine how AI in healthcare would be governed under the EU AI Act, California Assembly Bill 331, DC's Stop Discrimination by Algorithms Act, and the Algorithmic Accountability Act.

EU AI Act

The EU AI Act is a comprehensive piece of legislation that aims to set a global benchmark for regulating AI with its risk-based approach, under which obligations are aligned with the risk posed by the system.

Under this risk-based approach:

  • Systems deemed to pose an unacceptable level of risk are banned from the EU market.
  • High-risk systems must meet strict requirements before they can be offered in the EU.
  • Limited-risk systems are subject to transparency requirements.
  • Minimal-risk systems are covered only by voluntary frameworks.
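To make the tiered structure concrete, here is a minimal sketch in Python of how a compliance tool might encode the mapping from risk tier to consequence. The tier names follow the Act, but the obligation summaries are our own paraphrase, not statutory text.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers of the EU AI Act's risk-based approach."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Paraphrased consequence of each tier: a simplification, not statutory text
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "Prohibited: may not be offered on the EU market",
    RiskTier.HIGH: "Strict requirements (e.g., risk management, Articles 9-15)",
    RiskTier.LIMITED: "Transparency requirements",
    RiskTier.MINIMAL: "Voluntary frameworks only",
}

for tier in RiskTier:
    print(f"{tier.value:>12}: {OBLIGATIONS[tier]}")
```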

Is AI in healthcare a high-risk application under the EU AI Act?

The main use cases deemed to pose a significant risk to health, safety, and fundamental rights, and therefore classified as high-risk under the AI Act, are listed in Annex III. While systems used in healthcare are not specifically named as a high-risk application in Annex III, aspects of healthcare are included.

This covers:

  • AI systems used to make decisions about eligibility for health and life insurance;
  • systems used to assess and categorize emergency calls and to coordinate the dispatch of emergency services, including emergency healthcare patient triage systems; and
  • systems used by public authorities to assess eligibility for public assistance benefits and services, including healthcare services.

Moreover, AI systems that are safety components of products, or are themselves products, falling within the scope of certain EU regulations listed in Annex II are regarded as high-risk if they are required to undergo a third-party conformity assessment under the relevant harmonization law.

This covers medical devices that fall under the Medical Devices Regulation (EU) 2017/745 (MDR) and the In Vitro Diagnostic Medical Devices Regulation (EU) 2017/746 (IVDR). Systems regarded as high-risk must adhere to the obligations laid out in Articles 9-15, including establishing a risk management system.

While standards are still being devised for these obligations, Recital 27 states that requirements for high-risk systems should consider sectoral legislation, including the MDR and IVDR, meaning that specific provisions for AI systems used in healthcare could fall under these laws.

Support for the development of AI-driven healthcare systems under the EU AI Act

The AI Act does not only lay out limitations and duties for AI-driven healthcare systems. Recital 28 also emphasizes the need for diagnostic systems, and those used to support healthcare decisions, to be dependable and precise.

To support this, Recital 45 states that the European Health Data Space will provide access to health data for training algorithms in a privacy-protecting, secure, transparent, and prompt manner, complemented by institutional governance.

Furthermore, Article 54 forbids the processing of personal data in regulatory sandboxes unless the sandbox is designed to protect the public interest in specified areas, including public health, for activities such as disease detection, diagnosis, prevention, control, and treatment. This gives special permission to use such data in the interest of maintaining healthcare standards.

AI in healthcare under the Algorithmic Accountability Act

In the US, by contrast, horizontal legislation that aims to regulate multiple use cases at once is less advanced. First introduced in 2019 and reintroduced in 2021, the Algorithmic Accountability Act aimed to require impact assessments of algorithms used to make critical decisions, covering issues such as bias and privacy, along with continuous testing and monitoring mechanisms.

This covers systems used to make decisions that have a legal, material, or otherwise significant effect on a consumer's life regarding their access to, or the cost, terms, or availability of, services such as healthcare, including mental healthcare, dental care, and vision care.
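As a rough illustration of what such an impact assessment might record, consider the following sketch; the field names and example values are hypothetical and are not drawn from the bill's text.

```python
from dataclasses import dataclass

@dataclass
class ImpactAssessment:
    """Illustrative impact-assessment entry for an algorithm making a
    critical decision. Field names are hypothetical, not from the bill."""
    system_name: str
    decision_domain: str   # e.g., "healthcare", "housing", "employment"
    bias_evaluation: str   # summary of the bias testing performed
    privacy_review: str    # summary of privacy risks and safeguards
    monitoring_plan: str   # how the system is continuously tested

# Hypothetical example entry for a healthcare decision system
assessment = ImpactAssessment(
    system_name="coverage-eligibility-screener",
    decision_domain="healthcare",
    bias_evaluation="outcomes compared across demographic groups",
    privacy_review="inputs limited to fields needed for the decision",
    monitoring_plan="quarterly re-evaluation on fresh decision data",
)
print(assessment.decision_domain)  # "healthcare"
```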

Although the Algorithmic Accountability Act failed in the 117th Congress, had it been passed, it would have fallen short of the EU AI Act by not taking account of other relevant legislation governing specific sectors, such as healthcare, as the AI Act does.

As such, this could have led to conflicting or redundant obligations, showing the importance of considering other regulations in heavily regulated sectors such as healthcare and medical devices. It also did not lay out any provisions to support innovation, such as regulatory sandboxes or access to data to enable the development of cutting-edge, but still secure, AI systems for use in healthcare.

AI in healthcare under DC’s Stop Discrimination by Algorithms Act

Like the Algorithmic Accountability Act, DC's Stop Discrimination by Algorithms Act is a comprehensive piece of legislation that has been proposed twice, first in 2021 and again in 2023. Its goals differ from those of the AI Act and the Algorithmic Accountability Act, however: the DC bill concentrates on preventing discriminatory decisions made by algorithms, including AI, about important life opportunities.

These cover eligibility decisions about education, employment, housing, places of public accommodation, and insurance. Insurance would likely include health and life insurance, but healthcare is not specifically mentioned in the text, although healthcare would, in any case, still be governed by relevant state and federal laws.

AI in healthcare under California Assembly Bill 331

In January 2023, California Assembly Bill 331 was proposed to require developers and deployers of AI tools to conduct impact assessments and to require deployers to inform users when such a tool is being used.

Developers would also have to provide deployers with documentation about the tool's use and limitations. Either developers or deployers would then have to establish and maintain a governance program, with reasonable administrative and technical safeguards, to map, measure, manage, and govern the reasonably foreseeable risks of algorithmic discrimination arising from automated decision tools within the scope of the legislation.
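A governance program along these lines could be backed by a simple risk register. The sketch below is illustrative only; the class names, fields, and example values are hypothetical and are not prescribed by AB 331.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DiscriminationRisk:
    """One reasonably foreseeable risk of algorithmic discrimination."""
    description: str
    affected_group: str    # demographic group potentially affected
    severity: str          # e.g., "low", "medium", "high"
    mitigations: list[str] = field(default_factory=list)

@dataclass
class GovernanceRecord:
    """Illustrative register a deployer might keep to map, measure,
    manage, and govern risks of an automated decision tool."""
    tool_name: str
    deployer: str
    last_reviewed: date
    risks: list[DiscriminationRisk] = field(default_factory=list)

    def open_risks(self) -> list[DiscriminationRisk]:
        """Return risks that still lack any documented mitigation."""
        return [r for r in self.risks if not r.mitigations]

# Hypothetical usage: one unmitigated risk flagged for follow-up
record = GovernanceRecord(
    tool_name="benefit-eligibility-screener",
    deployer="Example Health Plan",
    last_reviewed=date(2024, 1, 15),
    risks=[DiscriminationRisk(
        description="higher false-denial rate for one demographic group",
        affected_group="group_a",
        severity="high",
    )],
)
print(len(record.open_risks()))  # 1
```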

Like the EU AI Act, AB 331 defines several categories of systems that would be covered, such as healthcare or health insurance, including mental healthcare, dental care, and vision care. Interestingly, the bill also covers systems used in reproductive health, going beyond the other regulations discussed here.

However, like the Algorithmic Accountability Act, Assembly Bill 331 does not consider how it would interact with other relevant laws already governing healthcare and other vital applications of AI.

Is horizontal legislation the right route for medical AI?

Healthcare is strongly regulated by sector-specific rules that aim to protect patient care and avoid harm. While the use of algorithms and AI within healthcare can present novel risks that would benefit from governance designed specifically to address them, horizontal legislation may not be able to achieve this sufficiently, since risk management in healthcare differs in key ways from risk management in more common business practices.

For example, using protected attributes to make decisions is prohibited in practices such as employment. However, given differences in disease symptoms and presentation among demographic groups, protected attributes may need to be actively considered in clinical decisions to ensure patients are treated most effectively.
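To see why this matters in practice, consider a toy evaluation that reports a diagnostic model's accuracy separately for each demographic group, so that differences in presentation surface rather than being averaged away. The data and group labels below are made up for illustration.

```python
from collections import defaultdict

def accuracy_by_group(records):
    """Compute diagnostic accuracy separately for each demographic group.

    `records` is an iterable of (group, predicted_label, true_label) tuples.
    Disaggregating performance can reveal when symptoms present differently
    across groups, instead of averaging the differences away.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, predicted, actual in records:
        total[group] += 1
        if predicted == actual:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Toy, made-up evaluation data: (group, model prediction, ground truth)
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 0),
    ("group_b", 1, 1), ("group_b", 0, 1), ("group_b", 0, 1),
]
print(accuracy_by_group(records))  # approx. {'group_a': 0.67, 'group_b': 0.33}
```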

Therefore, AI regulation in healthcare must be suitable and specific, avoiding harm while allowing appropriate consideration of patient demographics. Regulation in this space is still developing, but it will be crucial for ensuring that healthcare technologies are safe, effective, and fair.

As a result, solid AI governance, risk management, and compliance are essential in this domain. At Holistic AI, we are world experts in AI governance, risk, and compliance. Schedule a call to learn more about how we can help your organization.

FAQs

  • What are the regulations for AI in healthcare?

The EU's AI Act does not name healthcare as a whole as high-risk but classifies certain healthcare applications, such as emergency triage and insurance eligibility, as high-risk. In the US, several proposed bills aim to prevent algorithmic discrimination and ensure accountability across sectors, including healthcare. California AB 331 would also mandate impact assessments and governance programs for AI systems used in healthcare.

  • How is the task of overseeing AI in healthcare becoming more complex?

AI is transforming healthcare in many ways but poses new challenges and risks. Generic laws that apply to different sectors may not capture the specific issues in healthcare, such as patient safety, privacy, and quality. That’s why many experts call for customized governance to ensure that AI systems meet the high standards of patient care in a fast-changing environment.

  • How should regulatory bodies intervene as AI in the healthcare sector evolves rapidly?

Medical AI experts and regulators must collaborate to create flexible regulations that foster innovation and safeguard patients. By consulting with each other, they can design regulations that are agile and responsive to the ethical issues in healthcare, ensuring that technology advances without compromising healthcare quality.
