- Say no new ministry needed, call for overarching ethical guide
- Immigration, education reforms needed to keep U.K. AI hotspot
An influential body of U.K. lawmakers said regulators should stop major tech companies from dominating the field of artificial intelligence, and also warned of the potential for widespread unemployment due to the technology.
British antitrust regulators should ensure that the large datasets on which artificial intelligence depends are not monopolized by a handful of large technology companies, such as Alphabet Inc., International Business Machines Corp. and Microsoft Corp., the House of Lords Select Committee on Artificial Intelligence said in a report published Monday.
"Large companies which have control over vast quantities of data must be prevented from becoming overly powerful within this landscape," the report, which followed a nine-month inquiry into all aspects of AI development in the U.K., said. The committee received 223 pieces of written evidence and interviewed 57 witnesses during the course of the investigation.
However, it stopped short of recommending the creation of an overarching new ministry to serve as a watchdog on the emerging technology.
"We don’t see the need for an overarching regulator," Timothy Clement-Jones, the chairman of the committee, said in an interview. But he said that the Financial Conduct Authority, for instance, should be aware of how insurance companies are using machine-learning algorithms to help determine someone’s premiums or how banks are using such technology to determine whether to extend credit.
The lawmakers also urged the government to be vigilant about the potential for widespread job losses due to the adoption of AI across the economy, but stopped short of endorsing any radical policy solutions, such as a universal basic income, that some have advocated.
"We believe that AI will disrupt a wide range of jobs over the coming decades, and both blue- and white-collar jobs which exist today will be put at risk," it said.
Instead, the committee said that the government must invest more heavily in adult retraining programs and called on industry to match government funding for these programs.
"The U.K. is a world leader in AI and has many opportunities available to it, but it won’t be able to take advantage of those opportunities unless we mitigate some of the risks involved," Clement-Jones said.
The government also needs to do more to ensure the U.K. continues to have enough people with machine-learning skills, the committee said. This is especially important as the U.K. prepares to leave the European Union, since many workers with AI skills in Britain are currently drawn from abroad.
Tier 1 Visas
It urged the government to further increase the number of Tier 1 visas for exceptionally talented individuals available each year, which AI researchers with PhDs can use.
The government has said it is doubling the number of Tier 1 visas available each year to 2,000, but the committee said the government should increase it again. It also said that machine-learning and artificial intelligence roles should be added to the critical skills shortage list that qualifies people for Tier 2 visas.
"We have got a skill shortage currently and we rely quite heavily on bringing those skills in from outside and so the new visa regime must really be fit for purpose," Clement-Jones said.
The report said the U.K. Ministry of Defense should change its definition of autonomous weapons systems to bring the country more in line with others. Currently, the British military defines such weapons as those "capable of understanding higher-level intent and direction," a high bar that means very few weapons currently on the market meet the standard. By contrast, other countries define such systems as those able to select targets on their own, without human intervention.
The United Nations is currently discussing whether limitations should be placed on the use of what are called "lethal autonomous weapons systems." A number of prominent figures in the development of artificial intelligence, including billionaire Elon Musk and Mustafa Suleyman, a co-founder of DeepMind, the artificial intelligence company owned by Alphabet Inc., have signed a petition calling for an outright ban on such weapons.