Recently I read an interview with Google's CEO, Sundar Pichai, about his views on AI. He said that fears about AI are "very legitimate," and that "tech has to realize it just can't build it and then fix it." He acknowledged concerns that AI with "agency of its own" could harm humankind. His assessment of AI's potential downsides echoes critics who have warned about the misuse and abuse of the technology.
SpaceX and Tesla founder Elon Musk weighed in, saying that AI could prove to be "far more dangerous than nukes." Other tech companies, such as Microsoft, have also embraced the regulation of AI, both by the companies that create the technology and by the governments that oversee its use. Pichai explained that AI, if handled properly, could bring tremendous benefits, from healthcare to many other industries.
"Regulating a technology in its early days is hard, but I do think companies should self-regulate," Pichai told the Washington Post, adding that Google may not have gotten everything right when setting up its AI principles, but that it was important to start the conversation. He once said, "AI is one of the most important things humanity is working on and could prove more profound for human society than electricity or fire." Yet the race to build machines that can operate on their own is accelerating in Silicon Valley, raising concerns that the technology could harm people and eliminate jobs. Where do we draw the line?