Buoyed by the successful rollout of its data protection law, the General Data Protection Regulation (GDPR), the European Union has now set its sights on regulating artificial intelligence (AI). Impact Investor explores the implications.
In short
- The global market for AI, including software, hardware and services, is forecast to grow by 16.4% to $327.5 billion in 2021, according to the latest IDC figures.
- But to capture this market potential, business leaders say that cost and regulatory hurdles need addressing.
- The Commission is looking for the sweet spot between norm-setting protection and innovation-fed market liberalism to regulate artificial intelligence.
- There are questions over whether it can repeat the global success of the GDPR rollout, which began in 2018.
Push past the hype and there are good reasons to be hopeful that AI can transform science, business and how governments keep people happy, healthy and safe.
This means there is no shortage of fresh opportunities for all manner of investors. But these may come with caveats.
Right now, all eyes are on the European Commission – the EU’s executive body – following its announcement on 21 April of proposed new AI regulations and guidelines to make Europe a “global hub for trustworthy AI”.
The new rules will aim to safeguard freedoms and security while encouraging innovation, investment and commercial uptake. Here, the focus on ‘trust’ stands out. What should we be worried about?
Ethical concerns
The term itself conjures up sci-fi visions of robot saviours, but most current uses of AI are far less ‘deep’ and futuristic. John McCarthy, who coined the phrase and won the 1971 Turing Award for his contributions to AI, called it “the science and engineering of making intelligent machines”.
Using the power of data, deep-learning and machine-learning (see Pathmind wiki), AI can automate complex tasks and make sense of our surroundings to improve how we live, work and interact.
Ethical concerns about AI revolve around fundamentals like fairness and avoiding biases in how it is applied, according to software giant SAS (see their AI ethics primer).
Some groups worry about AI compliance and strategies for enforcing it, others about its pervasive use by law enforcement in, for example, facial recognition systems.
Governments fear its use in shady attempts to subvert democracy or manipulate electorates, among other concerns. Rights activists and consumer groups worry about AI’s reach in things like credit scores or insurance claims.
Moreover, any of these concerns could prompt reactions from “conscientious consumers”, SAS points out. Companies (and their backers) thus need to be on their guard.
Looking for the sweet spot
The EU’s interest in AI at this relatively nascent stage should not come as a huge surprise, then. It is keen to lay some foundations and set standards which, according to its digital chief and vice-president Margrethe Vestager, “pave the way to ethical technology worldwide and ensure that the EU remains competitive”.
She says the new rules would only kick in “when the safety and fundamental rights of EU citizens are at stake”.
The Commission is looking for the sweet spot between norm-setting protection and innovation-fed market liberalism, but there are questions over whether it can repeat the global success of the GDPR rollout that began in 2018.
Digital rights advocates question whether the EU has done enough to address what The Economist (24 April) calls “mushy language and loopholes” that either already exist or will emerge as businesses set about complying with such an ambitious regulatory regime.
Business groups worry about the impact of added costs, especially for smaller tech companies and start-ups lacking legal teams and resources.
Betting on clarity
“AI as a technology offers enormous benefits, but also comes with great risks for both the integrity and health of individuals and society,” says Sophia Lagerholm, Head of Digital & Innovation at Delphi Law Firm in Sweden.
“Ambiguity only leads to legal uncertainty and slower uptake of the technology by companies, which of course will not benefit anyone.”
But with greater transparency and confidence in the technology, she believes the proposed new legal framework and actions will boost investment in AI and innovation throughout the EU.
Investors will need to weigh the benefits of greater legal clarity against any signs that regulation is driving innovation out of the AI space.
AI for good
The global market for AI, including software, hardware and services, is forecast to grow by 16.4% to $327.5 billion in 2021 and, according to the latest IDC figures, to breach the $500 billion mark by 2024, thanks to a five-year compound annual growth rate (CAGR) of 17.5%.
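As a rough sanity check, the two IDC figures are consistent with each other: compounding the 2021 forecast forward at the quoted CAGR does clear $500 billion by 2024. A minimal sketch, using only the numbers cited above (the `project` helper is illustrative, not part of IDC's methodology):

```python
def project(value_bn: float, cagr: float, years: int) -> float:
    """Compound a market value (in $bn) forward at a given annual growth rate."""
    return value_bn * (1 + cagr) ** years

# $327.5bn in 2021, growing at 17.5% a year, three years out to 2024
market_2024 = project(327.5, 0.175, 3)
print(round(market_2024, 1))  # 531.3 – comfortably past the $500bn mark
```

The same arithmetic also shows why small differences in an assumed CAGR compound quickly over a multi-year horizon.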
There is no shortage of market optimism that AI can be a force for good. A study commissioned in 2018 by chipmaker Intel found that 74% of tech business leaders believe AI can help solve long-standing environmental challenges and 92% think predictive analytics will help organisations detect issues and develop new solutions.
Intel is one of a growing number of tech firms including Europe’s NXP and Infineon that are charting applications for AI and the internet of things (IoT) – billions of connected/networked devices or ‘things’ – from industrial automation and health solutions to interconnected smart cities, transport and buildings. (see also ‘AI investing… it’s a private ‘and’ public affair’).
But to capture this market potential, business leaders say that cost and regulatory hurdles need addressing. In a BusinessWire editorial, Intel Corporation’s Director of Global Public Affairs and Sustainability Todd Brady argues that collaborative solutions are vital to unlocking the power of emerging technology and to “create positive change”.
For investors hunting for returns with a ‘positive’ impact, the Intel study’s findings offer some insights: the upfront costs of adopting AI and other tech solutions for social and environmental betterment have to be weighed against long-term benefits and risks.
Consultants at Expert.ai think businesses understand the potential value of adopting artificial intelligence but lack what they call “institutional AI knowledge”, which makes evaluating its true worth (to them, their customers, the market) rather complicated.
And yet AI adoption and investment seem undaunted by this perceived knowledge gap: “Companies and countries around the globe increasingly view the development of strong AI capabilities as imperative to staying competitive,” notes Deloitte Insights.
Need for a framework
Even before Covid-19 struck, some 37% of organisations said they had deployed AI solutions, up 270% from four years earlier, according to Gartner’s 2019 CIO Survey.
But Expert.ai underlines that “with more and more at stake, business leaders [still] need a proper framework to make smarter AI-related business decisions”.
Before it can become law, the Commission’s AI proposal goes to the European Parliament and Council. Given the intense interest and complex nature of the subject, the road to adoption could be slow and may have to factor in amendments resulting from the stakeholder consultation (ending 25 June) and debate process.
But if the final regulation receives anywhere near the level of compliance garnered by GDPR, the EU’s legal framework on AI looks set to gain traction. Watch this space!