
EU Committees Give Green Light To The AI Act

The European Parliament’s Internal Market Committee and Civil Liberties Committee have approved new transparency and risk-management rules for artificial intelligence systems as part of the AI Act.

This is a major step in the development of AI regulation in Europe, as these are the first rules of their kind for AI. They aim to ensure that AI systems are safe, transparent, traceable, and non-discriminatory.

After the vote, co-rapporteur Brando Benifei (S&D, Italy) said:

“We are putting in place groundbreaking legislation that will have to withstand the challenge of time. It is crucial to build public confidence in the development of AI, to set out the European way of dealing with the extraordinary changes that are already taking place, and to steer the political debate on AI at the global level.

We believe that our text strikes a balance between the protection of fundamental rights and the need to provide legal certainty to businesses and foster innovation in Europe.”

Co-rapporteur Dragos Tudorache (Renew, Romania) added:

“Given the profound transformative impact of AI on our society and economy, the AI Act is very likely the most important piece of legislation in this mandate. It is the first legislation of its kind worldwide, which means that the EU can lead the way in making AI human-centric, trustworthy, and safe.

We have worked to support AI innovation in Europe and to give start-ups, SMEs, and industry room to grow and innovate, while protecting fundamental rights, strengthening democratic oversight, and ensuring a mature system of AI governance and enforcement.”

The rules follow a risk-based approach and lay out obligations for providers and users according to the level of risk an AI system may generate. AI systems that pose an unacceptable level of risk to people’s safety, such as those that deploy subliminal or purposefully manipulative techniques, exploit people’s vulnerabilities, or are used for social scoring, will be prohibited.

MEPs have also substantially amended the list of prohibited AI practices to include: real-time remote biometric identification systems in publicly accessible spaces; ‘post’ remote biometric identification systems (with exceptions for law enforcement); biometric categorisation systems using sensitive characteristics; predictive policing systems; emotion recognition systems in law enforcement, border management, the workplace, and educational institutions; and the indiscriminate scraping of biometric data from social media or surveillance camera footage to create facial recognition databases.

MEPs have also expanded the classification of high-risk areas to include harm to people’s health, safety, fundamental rights, and the environment. They also added AI systems used to influence voters in political campaigns, as well as recommender systems used by social media platforms, to the high-risk list.

To encourage AI innovation, MEPs have added exemptions to these rules for research activities and AI components provided under open-source licenses. The new law also promotes regulatory sandboxes (controlled environments established by public authorities) for testing AI before deployment.

MEPs want to strengthen citizens’ right to lodge complaints about AI systems and to receive explanations of decisions made by high-risk AI systems that significantly affect their rights. MEPs have also reformed the role of the EU AI Office, which is tasked with monitoring how the AI rulebook is implemented.

Tim Wright, Technology and AI Regulatory Partner at London-based law firm Fladgate, commented:

“With the news that EU parliamentary committees have given the go-ahead to a groundbreaking AI law, US-based AI developers will likely steal a march on their European competitors, as AI systems will need to be classified according to their potential for harm from the outset.

The US tech approach is typically to experiment first and then to retrofit to other markets and their regulatory frameworks once product-market fit is established. While this approach fosters innovation, EU-based AI developers will need to take note of the new rules and develop systems and processes that may blunt their ability to innovate.

The UK has taken a similar approach to the US, although its proximity to the EU market means UK-based developers are more likely to fall into step with the EU ruleset from the outset. However, the potential to experiment in a safe space, a regulatory sandbox, may prove very attractive.”

Before negotiations with the Council on the final form of the law can begin, the draft negotiating mandate will need to be endorsed by the whole Parliament, with a vote expected during the session from 12 to 15 June.

(Photo by Denis Sebastian Tamas on Unsplash)

Related: UK details ‘pro-innovation’ approach to AI regulation



