The European Union on Wednesday unveiled strict regulations to govern the use of artificial intelligence, a first-of-its-kind policy that outlines how companies and governments can use a technology seen as one of the most significant, but ethically fraught, scientific breakthroughs in recent memory.
Presented at a news briefing in Brussels, the draft rules would set limits around the use of artificial intelligence in a range of activities, from self-driving cars to hiring decisions, school enrollment selections and the scoring of exams. They would also cover the use of artificial intelligence by law enforcement and court systems — areas considered “high risk” because they could threaten people’s safety or fundamental rights.
Some uses would be banned altogether, including live facial recognition in public spaces, though there would be some exemptions for national security and other purposes.
The rules have far-reaching implications not only for major technology companies such as Amazon, Google, Facebook and Microsoft, which have poured resources into developing artificial intelligence, but also for scores of other companies that use the technology in health care, insurance and finance. Governments have used versions of the technology in criminal justice and in allocating public services.
Companies that violate the new regulations, which are expected to take several years to debate and implement, could face fines of up to 6 percent of global sales.
Artificial intelligence — where machines are trained to learn how to perform jobs on their own by studying huge volumes of data — is seen by technologists, business leaders and government officials as one of the world’s most transformative technologies.
But as the systems become more sophisticated, it can be harder to determine why the technology is making a decision, a problem that could get worse as computers become more powerful. Researchers have raised ethical questions about its use, suggesting that it could perpetuate existing biases in society, invade privacy or lead more jobs to be automated.
“On artificial intelligence, trust is a must, not a nice to have,” Margrethe Vestager, the European Commission executive vice president who oversees digital policy for the 27-nation bloc, said in a statement. “With these landmark rules, the E.U. is spearheading the development of new global norms to make sure A.I. can be trusted.”
In introducing the draft rules, the European Union is attempting to further establish itself as the world’s most aggressive watchdog of the technology industry. The bloc has already enacted the world’s most far-reaching data-privacy regulations, and is also debating additional antitrust and content-moderation laws.
In Washington, the risks of artificial intelligence are also being considered. This week, the Federal Trade Commission warned against the sale of artificial intelligence systems that use racially biased algorithms, or ones that could “deny people employment, housing, credit, insurance, or other benefits.”