The AI Act Should Be
Technology-Neutral
By Patrick Grady | February 1, 2023
The EU aims to implement the world’s first artificial
intelligence (AI) regulation, the Artificial Intelligence Act,
intended to allow people and businesses “to enjoy the
benefits of AI while feeling safe and protected.”1
Unfortunately, the AI Act’s broad definition of AI penalizes
technologies that do not pose novel risks.2 To resolve this,
policymakers should revise the definition of AI to apply only
to specific AI approaches that create significant challenges.
Policymakers have long valued the principle of technology neutrality, which
holds that laws and regulations should avoid privileging or penalizing one
set of technologies over another.3 Technology neutrality does not
necessarily demand that the exact same rules apply to different
technologies. For example, if policymakers believe AI systems present
novel risks not found in non-AI systems, they can and should address those
risks.
This report shows that the AI Act is not, despite the intention of the
European Commission, technology-neutral. Instead of addressing unique
concerns about uninterpretable machine learning (ML) systems—a subset
of AI systems—the Act would apply to a much broader set of AI systems that
do not need regulatory intervention.4 The result is legislation that would
create significant overreach and potential harm to the EU’s AI ecosystem. A
better definition would limit the scope of the proposed law to only those
technologies that pose novel risks.
AI development is not linear—it has gone and continues to go through
various periods of flourishing (“springs”) and stagnation (“winters”). The
last “AI winter” has passed, but the EU is falling behind its global
competitors—China and the United States—in AI research, investment, and