Alphabet and Google CEO Sundar Pichai has expressed the need for new regulation of AI, underlining the threats posed by technologies such as deepfakes and facial recognition, while emphasizing that any legislation must balance “potential harms … with social opportunities.”
A Careful Approach Is Needed
“There is no question in my mind that artificial intelligence needs to be regulated. It is too important not to,” Pichai wrote in an editorial for The Financial Times. “The only question is how to approach it.”
Although Pichai argued that new regulation is needed, he also called for a careful approach, one that might not place too many significant controls on AI. He says that for some products, such as self-driving cars, “appropriate new rules” should be implemented. In other areas, such as healthcare, however, existing frameworks can be extended to cover AI-assisted products.
“Companies such as ours cannot just build promising new technology and let market forces decide how it will be used,” writes Pichai. “It is equally incumbent on us to make sure that technology is harnessed for good and available to everyone.”
The Alphabet CEO, who leads possibly the most prominent AI company in the world, also argues that “international alignment will be critical to making global standards work,” pointing to a potential dilemma for tech companies when it comes to AI regulation.
The US and EU Diverge
Right now, US and EU plans for AI regulation seem to be heading in different directions. The White House is pushing for light-touch regulation that avoids “overreach,” so that innovation is not hampered. The EU, by contrast, is considering more direct intervention, such as an outright five-year ban on facial recognition. As with data privacy regulations, any divergence between the US and EU will mean extra costs and technical challenges for international firms such as Google.
However, Pichai’s editorial also leaves unresolved some questions about Google’s own stance on AI regulation. For example, the CEO points out that the company’s internal principles ban certain uses of the technology, “such as to support mass surveillance or violate human rights.” It is because of such concerns that Google does not sell facial recognition technology.
At the same time, Pichai is not calling on competitors, such as Amazon and others, to stop selling facial recognition. The question, then, is this: if Google believes such technologies pose a danger, why is the company not calling for direct regulation on this specific issue?
Ultimately, Google (much like government regulators) must strike a balance between the promise and the threat of AI technologies. However, as Pichai points out, “principles that remain on paper are meaningless.” Sooner or later, this call for regulation is going to have to turn into action.
Hello, I’m Anna Yeo. If you like my news coverage, please drop a good word in my inbox. I’m a journalist by profession and have been part of major reporting across the globe. I like to write crisp, factual news, and I hold a master’s degree in journalism. Feel free to contact me at [email protected]