The European Union’s prohibitions on “unacceptable risk” in AI have kicked in, but experts are worried about the impact.
The first phase of the European Union’s AI legislation took effect on Sunday, February 2. It comprises prohibitions on specific uses of AI considered to be “unacceptable risk.”
The EU’s AI Act was officially passed in March 2024 and went into force on August 1 of that year. Companies were given six months to adapt their activities to the changes, and February 2 marked the date from which they were expected to start complying with the first phase of the Act.
The AI Act categorizes AI technology into four groups of risk. Systems such as AI-enabled video games or spam filters are classed as “minimal risk” and are therefore unregulated. Others, like chatbots and other AI models that generate text and images, are classed as “limited risk” and will be subject to light regulation.
Systems such as AI medical tools are classed as “high risk” and will be subject to heavy regulation. The fourth and highest level of risk is tagged as “unacceptable risk.” It includes using AI to evaluate people based on social behavior or personal characteristics, deceive or manipulate people to control their behavior, or engage in predictive policing.
The EU AI Act is being implemented in stages. While the prohibitions on AI technology posing “unacceptable risk” went into force on February 2, requirements for AI systems classed as “limited risk,” such as chatbots, will take effect later in the year, in August.
The new regulations will not affect only companies based in Europe, according to experts. Any organization whose activities and output are available in the EU will have to comply with the law, which means it will also reach US businesses.
Back in September, over 100 companies signed a voluntary pact to start applying the principles of the Act before the February 2 date. These included US big tech companies Google, Amazon, and OpenAI, although others like Meta and Apple opted not to sign.
Critics have previously warned that the regulations are premature and vague. Many experts have also expressed concern that excessive red tape could hamper AI innovation and development. The US under the new Trump administration has already made an about-turn on previously proposed AI regulation as the country seeks to gain ground in its AI race with China. There are concerns that regulating AI will hamper Europe’s efforts and leave it further behind in that global race.
In the US, companies with operations in Europe will want to make sure that they stay compliant. According to analysts, a major concern for organizations is whether all guidelines, standards, and codes of conduct relating to the EU’s AI Act will be made clear and promptly released.
Businesses that violate the AI Act could face penalties of up to 7% of their worldwide annual turnover, making it essential to understand and comply with the legislation.