Policymakers want to regulate AI but lack consensus on how
Commentary: AI is considered “world changing” by policymakers, but it’s unclear how to ensure positive outcomes.
According to a new Clifford Chance survey of 1,000 tech policy experts across the United States, U.K., Germany and France, policymakers are concerned about the impact of artificial intelligence, but perhaps not nearly enough. Though policymakers rightly worry about cybersecurity, it’s perhaps too easy to focus on near-term, obvious threats while the longer-term, not-obvious-at-all threats of AI get ignored.
Or, rather, not ignored so much as unresolved: there is no consensus on how to tackle emerging issues with AI.
SEE: Artificial intelligence ethics policy (TechRepublic Premium)
AI problems
When YouGov polled tech policy experts on behalf of Clifford Chance and asked which issues should be priorities for regulation (“To what extent do you think the following issues should be priorities for new legislation or regulation?”), ethical use of AI and algorithmic bias ranked well down the pecking order:
- 94%—Cybersecurity
- 92%—Data privacy, data protection and data sharing
- 90%—Sexual abuse and exploitation of minors
- 86%—Misinformation / disinformation
- 81%—Tax contribution
- 78%—Ethical use of artificial intelligence
- 78%—Creating a safe space for children
- 76%—Freedom of speech online
- 75%—Fair competition among technology companies
- 71%—Algorithmic bias and transparency
- 70%—Content moderation
- 70%—Treatment of minorities and disadvantaged
- 65%—Emotional and psychological wellbeing of users
- 62%—Treatment of gig economy workers
- 53%—Self-harm
Just 23% rate algorithmic bias, and 33% rate the ethical use of AI, as a top priority for regulation. Maybe this isn’t a big deal, except that AI (or, more accurately, machine learning) finds its way into higher-ranked priorities like data privacy and misinformation. Indeed, it’s arguably the primary catalyst for problems in these areas, not to mention the “brains” behind sophisticated cybersecurity threats.
Also, as the report authors summarize, “While artificial intelligence is perceived to be a likely net good for society and the economy, there is a concern that it will entrench existing inequalities, benefitting bigger businesses (78% positive effect from AI) more than the young (42% positive effect) or those from minority groups (23% positive effect).” This is the insidious side of AI/ML, and something I’ve highlighted before. As detailed in Anaconda’s State of Data Science 2021 report, the biggest concern data scientists have with AI today is the possibility, even likelihood, of bias in the algorithms. Such concern is well-founded, but easy to ignore. After all, it’s hard to look away from the billions of personal records that have been breached.
But a little AI/ML bias that quietly guarantees that a certain class of applicant won’t get the job? That’s easy to miss.
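To see how quietly that kind of bias can operate, here’s a minimal sketch, with synthetic decisions and hypothetical group labels of my own invention (nothing from the survey), of the sort of check that catches it: comparing selection rates across groups and flagging a low disparate impact ratio, per the “four-fifths rule” used in U.S. employment law.

```python
# Illustrative only: synthetic decisions and hypothetical group labels,
# not data from the Clifford Chance survey.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def selection_rate(records, group):
    """Fraction of applicants in `group` with a positive decision."""
    outcomes = [hired for g, hired in records if g == group]
    return sum(outcomes) / len(outcomes)

rate_a = selection_rate(decisions, "group_a")  # 0.75
rate_b = selection_rate(decisions, "group_b")  # 0.25

# Disparate impact ratio; the "four-fifths rule" treats ratios
# below 0.8 as a red flag for adverse impact.
ratio = rate_b / rate_a
print(f"Selection rates: {rate_a:.2f} vs. {rate_b:.2f} (ratio {ratio:.2f})")
if ratio < 0.8:
    print("Ratio is below the four-fifths threshold: potential adverse impact.")
```

None of this is hard to compute. The point is that nobody sees it unless someone thinks to look, and, unlike a data breach, it never makes headlines on its own.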
SEE: Open source powers AI, yet policymakers haven’t seemed to notice (TechRepublic)
But it’s arguably a much bigger deal, because what, exactly, will policymakers do through regulation to improve cybersecurity? Last I checked, hackers violate all sorts of laws to crack into corporate databases. Will another law change that? Or how about data privacy? Are we going to get another GDPR bonanza of “click here to accept cookies so you can actually do what you were hoping to do on this site” non-choices? Such regulations don’t seem to be helping anyone. (And, yes, I know that European regulators aren’t really to blame: It’s the data-hungry websites that stink.)
Speaking of GDPR, don’t be surprised that, according to the survey, policymakers like the idea of enhanced operational requirements for AI, such as mandatory notification of users every time they interact with an AI system (82% support). If that sounds a bit like GDPR, it is. And if the way we’re going to deal with potential problems with the ethical use of AI and bias is through more confusing consent pop-ups, we need to consider alternatives. Now.
Eighty-three percent of survey respondents consider AI “world changing,” but no one seems to know quite how to make it safe. As the report concludes, “The regulatory landscape for AI will likely emerge gradually, with a mixture of AI-specific and non-AI-specific binding rules, non-binding codes of practice, and sets of regulatory guidance. As more pieces are added to the puzzle, there is a risk of both geographical fragmentation and runaway regulatory hyperinflation, with multiple similar or overlapping sets of rules being generated by different bodies.”
Disclosure: I work for MongoDB, but the views expressed herein are mine.