A new report by the Lords Select Committee in the UK claims that Britain is in a strong position to be a world leader in the development of artificial intelligence. But to get there—and to keep AI safe and ethical—tech firms should follow the Committee’s newly proposed “AI Code.”
The new report was penned by the House of Lords Artificial Intelligence Committee, and it’s titled “AI in the UK: Ready, Willing and Able?” The AI Committee is proposing a path for both the British government and UK-based businesses to move forward as AI increasingly expands in power and scope. The report is particularly timely given the recent scandal surrounding Cambridge Analytica’s use of Facebook data and growing concerns that tech companies aren’t working in the public’s best interests. In recognition of both current and future risks, the Committee says technology, and AI in particular, needs to be used for the common good.
The UK has a “unique opportunity” to shape AI positively, and it’s poised to be a world leader in the development of this technology, write the authors, adding that the government should support businesses in this area, and do what’s necessary to prevent “data monopolies.” In addition, people should be educated to work alongside AI to ensure future employment prospects and to “mitigate the negative effects” of technological unemployment. Many new and unknown jobs will be created by AI, the authors say, but many will disappear.
Indeed, AI could introduce a host of new problems, leading the Committee to propose a set of principles to steer development and mitigate potential risks.
“An ethical approach ensures the public trusts this technology and sees the benefits of using it. It will also prepare them to challenge its misuse,” Chairman of the Committee Lord Clement-Jones said in a statement. “We want to make sure that this country remains a cutting-edge place to research and develop this exciting technology. However, start-ups can struggle to scale up on their own. Our recommendations for a growth fund for SMEs [small and medium-sized enterprises] and changes to the immigration system will help to do this.”
The 181-page report is wide-ranging in its recommendations, but the Committee suggests five overarching principles for a basic AI code:
Artificial intelligence should be developed for the common good and benefit of humanity.
Artificial intelligence should operate on principles of intelligibility and fairness.
Artificial intelligence should not be used to diminish the data rights or privacy of individuals, families or communities.
All citizens have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence.
The autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence.
The second point, that AI “should operate on principles of intelligibility,” will be easier said than done. It’s becoming increasingly difficult to understand why artificial intelligence does what it does, and why it reaches certain conclusions—a phenomenon known among AI developers as the “black box” problem. But the Committee is right: we should do what we can to make artificially intelligent systems as understandable as possible, and efforts are already underway in this area.
The other recommendations sound reasonable, but it’s not clear if tech firms will be compelled to follow these guidelines. The Committee isn’t asking the government to turn its AI code into law; rather, it’s hoping that lawmakers and AI developers will use them as guideposts for both the development and regulation of AI. Each industry is going to face its own unique challenges, but these guidelines, argues the Committee, should be broad enough for every field, whether it be finance or automobile manufacturing.
“The public and policymakers alike have a responsibility to understand the capabilities and limitations of this technology as it becomes an increasing part of our daily lives,” write the authors in their report. “This will require an awareness of when and where this technology is being deployed.”
To that end, the Committee is recommending the establishment of a UK AI Council, which will work with industry “to inform consumers when artificial intelligence is being used to make significant or sensitive decisions.”
The Committee also recognizes that existing legislation may be inadequate or ill-prepared to deal with situations in which AI systems malfunction, underperform, or make erroneous decisions which cause harm. The Committee is recommending that the UK Law Commission look into this “to provide clarity.”
“We also urge AI researchers and developers to be alive to the potential ethical implications of their work and the risk of their work being used for malicious purposes. We recommend that the bodies providing grants and funding to AI researchers insist that applications for such funding demonstrate an awareness of the implications of their research and how it might be misused,” writes the Committee in the report. “We also recommend that the Cabinet Office’s final Cyber Security & Technology Strategy consider the risks and opportunities of using AI in cybersecurity applications, and conduct further research into how to protect datasets from any attempts at data sabotage.”
It’s still early days for AI. What the UK is doing here is helpful inasmuch as it’s normalizing dialogue between tech developers, governments, regulators, and lawmakers. We’re not yet at the stage where AI needs to be regulated, but that day is fast approaching. These new guidelines are a step in the right direction.