Risk-Based Classification of Artificial Intelligence

Artificial Intelligence (AI) is no longer a mere sci-fi project. It is woven into our daily lives in the form of recommendations on streaming platforms, autonomous vehicles, and predictive healthcare tools. As AI has become more capable and ubiquitous, the focus has shifted from the possibilities it presents to the risks it poses. How can we safeguard society by ensuring that AI is safe and used responsibly? How does one regulate an ever-evolving technology?

The answer lies in classifying AI systems as low, medium, high, or unacceptable risk. These categories are derived from the European Union’s Artificial Intelligence Act (AI Act), which sets out an extensive legal framework intended to enable the responsible use of AI technologies across all industrial sectors. Such a classification helps policymakers, businesses, and developers understand how AI systems can affect individuals and societies to varying degrees, and it allows regulators to write proportionate rules that preserve the usefulness of AI technologies while limiting their associated risks. The sections below describe the characteristics of the systems that fall into each risk category and, finally, why these distinctions matter in the development and regulation of AI.
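Before diving in, it may help to see the framework’s core idea, proportionality, expressed as a data structure. The Python sketch below maps the four tiers used in this article to illustrative obligations. The tier names follow this article’s taxonomy, and the obligation lists are simplified assumptions for illustration, not the Act’s actual legal text:

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers used in this article (a simplification of the EU AI Act)."""
    LOW = "low"                    # minimal risk: no special obligations
    MEDIUM = "medium"              # limited risk: transparency and oversight
    HIGH = "high"                  # high risk: strict conformity requirements
    UNACCEPTABLE = "unacceptable"  # prohibited outright

# Hypothetical mapping from tier to proportionate regulatory obligations.
OBLIGATIONS = {
    RiskTier.LOW: ["general privacy and data-protection law"],
    RiskTier.MEDIUM: ["transparency", "human oversight",
                      "right to contest automated decisions"],
    RiskTier.HIGH: ["comprehensive testing", "ethical review",
                    "transparency requirements", "continuous monitoring"],
    RiskTier.UNACCEPTABLE: ["banned from the market"],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the (illustrative) obligations attached to a risk tier."""
    return OBLIGATIONS[tier]

print(obligations_for(RiskTier.HIGH))
```

The design point is that once a system’s tier is determined, it should immediately tell a developer or regulator how heavy the compliance burden is.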

Low-Risk AI or Minimal-Risk AI

These can rightly be viewed as the most basic, simplest, and least intrusive AI systems in use. Although ubiquitous, these technologies operate almost imperceptibly in most people’s everyday lives. Even when such systems are highly visible, they are generally safe, low-risk, and have little to no impact on life, security, privacy, or general well-being.

Characteristics of Low-Risk AI:

Minimal Consequences: These applications carry little risk of serious consequences if they fail or malfunction, because their reach is limited to matters of convenience.

Transparently Managed: Most low-risk AI systems are straightforward and transparent in their functionality. They are often easy to understand and unlikely to produce sensitive or high-stakes outcomes.

No Major Public Safety Threats: Failures in these systems do not pose a major risk of personal injury or loss of significant rights.

Examples of Low-Risk AI:

  • Customer-service chatbots, where the AI handles routine inquiries or reschedules appointments on a customer’s behalf.
  • Recommendation systems in e-commerce and streaming, the most famous being Amazon’s and Netflix’s, which suggest products or movies that fit each user’s tastes.
  • Voice assistants, such as Siri and Alexa, which perform routine tasks like setting reminders or providing weather updates.

Because their activities are relatively low-stakes, these applications do not require heavy regulation. However, they should still adhere to general privacy and data protection regulations to ensure users’ data are kept safe and secure.

Medium-Risk AI or Limited-Risk AI

Medium-risk AI systems are riskier than the low-risk category, but they rarely involve the kind of critical decision-making that could cause catastrophic harm. They operate in more sensitive areas, which raises the stakes of error or misuse, while still allowing for human oversight or intervention.

Features of Medium-Risk AI:

Moderate Impact: A malfunctioning medium-risk AI system can cause inconvenience or discomfort, but the consequences are not severe compared with the high-risk class.

Limited Decision-Making Power: A medium-risk AI may influence decisions (credit scoring, hiring recommendations), but ultimate control and judgment are retained by humans.

Transparency and Accountability: Regulation typically requires that the decision-making process remain transparent, so that automated decisions can be explained or contested.

Examples of Medium-Risk AI:

  • Credit-scoring AI used to assess eligibility for loans and mortgages or to set interest rates, operating under human supervision.
  • Automated hiring tools used to screen resumes or conduct initial job interviews against criteria set in advance, with recruiters making the final decision.
  • Healthcare diagnostic tools that offer AI-generated advice, for example by spotting patterns in medical images or suggesting a treatment program, while the ultimate medical decision rests with human physicians.

While these cases are not as life-and-death as high-risk AI, they certainly deserve careful consideration. Medium-risk AI should be designed and implemented to minimize bias along lines such as race, gender, and class in the functions and processes it governs, and to preserve an opportunity for human intervention, such as contesting the AI’s results, as sketched below.
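As one illustration of what human oversight can look like in practice, here is a minimal Python sketch of a hypothetical medium-risk credit-scoring workflow. All names here (Decision, review, contest) are invented for this example; the key property is that the AI only recommends, a human decides, and a contested outcome forces a fresh human review:

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    """An AI recommendation that only becomes final after human review."""
    applicant_id: str
    ai_recommendation: str              # e.g. "approve" or "deny"
    human_decision: str | None = None   # set by the reviewer, never by the model
    contested: bool = False
    audit_log: list[str] = field(default_factory=list)

    def review(self, reviewer: str, outcome: str) -> None:
        """A human reviewer makes the final call; the AI only advises."""
        self.human_decision = outcome
        self.audit_log.append(f"{reviewer} decided '{outcome}' "
                              f"(AI suggested '{self.ai_recommendation}')")

    def contest(self, reason: str) -> None:
        """The applicant can contest, which reopens the case for human review."""
        self.contested = True
        self.human_decision = None      # force a fresh human review
        self.audit_log.append(f"contested: {reason}")

# Usage: the model recommends; a person decides and can be overruled on appeal.
d = Decision(applicant_id="A-17", ai_recommendation="deny")
d.review(reviewer="loan_officer_3", outcome="deny")
d.contest(reason="income data was out of date")
d.review(reviewer="senior_officer_1", outcome="approve")
print(d.human_decision, d.audit_log)
```

The audit log matters as much as the override: being able to show, after the fact, what the AI suggested and who decided otherwise is what makes the decision explainable and contestable.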

High-Risk AI

High-risk AI systems are those with the potential to cause serious harm, especially in the event of failure or misuse. These technologies often inform decisions that affect people’s lives, safety, or fundamental rights. Whether an AI is driving through a busy town-center intersection or assisting in criminal justice, the margin for error is razor-thin, and any wrong step can end in tragedy or disaster.

Characteristics of High-Risk AI:

High Impact: High-risk AI systems can inflict significant damage through malfunction, misuse, or failure to fulfill their intended function. The risks include personal injury, financial loss, or loss of life.

Autonomous Decision-Making: High-risk AI systems can make decisions in complex situations with limited or no human supervision, exercising essential decision-making power that, if it fails, can lead to accidents.

Ethical and Legal Problems: High-risk AI systems raise justifiably large ethical and legal concerns, including fairness, accountability, privacy, and the protection of basic human rights.

Examples of High-Risk AI:

  • Autonomous vehicles that rely on AI to navigate streets and make real-time driving choices affecting the safety of everyone around them.
  • AI in criminal justice, including predictive policing tools or sentencing algorithms that might unfairly target specific demographic groups or propagate biases.
  • AI in healthcare in life-critical decision-making applications, such as in robotic surgery and diagnostic systems, where an error could trigger severe health-related consequences for patients.
  • Military AI systems, such as drones or automated weapons, where decisions taken by the AI itself could cause massive loss of life or escalate hostilities.

High-risk AI systems attract strict regulation, reflecting the high stakes involved. Comprehensive testing, ethical review, transparency requirements, and continuous monitoring are required to ensure AI is used safely and in a manner consistent with human values. A simple form of such monitoring is sketched below.
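Continuous monitoring can take many forms. As one hedged illustration, the Python sketch below watches a model’s rolling rate of positive decisions and signals escalation when it drifts too far from an expected baseline. DriftMonitor and its thresholds are hypothetical and invented for this example, not requirements taken from the AI Act:

```python
import random
from collections import deque

class DriftMonitor:
    """Flags when a model's recent positive-decision rate drifts from a
    baseline: one simple ingredient of continuous monitoring.
    Thresholds here are hypothetical, not regulatory values."""

    def __init__(self, baseline_rate: float, tolerance: float = 0.10,
                 window: int = 500):
        self.baseline = baseline_rate        # expected share of positive outcomes
        self.tolerance = tolerance           # allowed deviation before alerting
        self.recent = deque(maxlen=window)   # rolling window of recent outcomes

    def record(self, positive: bool) -> bool:
        """Record one decision; return True if the system should escalate."""
        self.recent.append(1 if positive else 0)
        if len(self.recent) < self.recent.maxlen:
            return False                     # wait until the window fills
        rate = sum(self.recent) / len(self.recent)
        return abs(rate - self.baseline) > self.tolerance

monitor = DriftMonitor(baseline_rate=0.30)
# Simulate a stream of decisions whose positive rate has drifted to 50%.
for _ in range(1000):
    if monitor.record(random.random() < 0.50):
        print("drift detected: escalate to human reviewers")
        break
```

In a real deployment the alert would feed an incident process, pausing the model or routing cases to human reviewers rather than merely printing a message.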

Unacceptable Risk AI

AI classified as an unacceptable risk comprises systems assessed as too dangerous or too harmful to be deployed legally. These systems pose serious hazards to people, society, or fundamental rights, and the risks of deploying them far outweigh any benefits. The EU has classified certain AI systems as posing an unacceptable risk to fundamental human rights, dignity, or safety, and has enforced a complete ban on placing them on the market or using them.

Characteristics of Unacceptable Risk AI:

Extreme Harm: These AI systems pose such a great risk of serious harm or violation of basic human rights that they simply cannot be allowed to operate.

Irreversible Effects: Deploying these systems can have irreversible outcomes, such as the loss of human life, harm to society at large, or violations of privacy and security.

Ethics Violations: These AI systems frequently enable serious ethical violations, such as exploitation, mass surveillance, or threats to personal freedoms.

Examples of Unacceptable Risk AI:

  • AI systems for mass social scoring: systems whereby governments or organizations track and rate individuals based on their behavior or their interactions with others, often leading to discrimination or social exclusion.
  • Autonomous weapons systems able to carry out independent targeting and killing without human oversight.
  • AI used to manipulate or exploit people, for example systems that trick people into making decisions contrary to their own best interests, such as deepfakes or AI-enabled disinformation campaigns.

These AI systems are banned under the EU’s AI Act because they are too dangerous and ethically indefensible. Their deployment or use is considered intolerable, as they are incompatible with the values and legal frameworks of the EU.

The classification of AI systems as low-risk, medium-risk, high-risk, and unacceptable risk is based on the EU’s Artificial Intelligence Act (AI Act), proposed in April 2021 to create a uniform legal framework governing the development and deployment of AI across the European Union. The AI Act aims to ensure the safe, ethical, and transparent use of AI technology while promoting innovation and protecting fundamental rights.

As AI continues shaping the future, awareness of its varying risk levels will be key to addressing the numerous legal and ethical challenges it presents. Classifying AI systems into low-risk, medium-risk, high-risk, and unacceptable risk categories yields regulatory guidelines that are both equitable and workable. By matching the weight of regulation to the potential for risk and harm, governments can reasonably ensure that these technologies are used for the greater good. Because AI evolves rapidly, these classifications and frameworks will need to be updated over time so that they remain relevant to the protection of human rights and safety.

 
