
AI Fears

Recently I read an interview with Google's CEO, Sundar Pichai, about his views on AI. He mentioned that fears about AI are very legitimate, saying that "tech has to realize it just can't build it and then fix it." He also noted that AI developing an "agency of its own" would not harm humankind. His assessment of AI's potential downsides is similar to that of critics who have warned about the misuse and abuse of the technology.


SpaceX and Tesla founder Elon Musk weighed in, saying that AI could prove to be "far more dangerous than nukes." Other tech companies, such as Microsoft, have also embraced the regulation of AI, both by the companies that create the technology and by the governments that oversee its use. Mr. Pichai explained that AI, if handled properly, could bring tremendous benefits, from healthcare to many other industries.


"Regulating a technology in its early days is hard, but I do think companies should self-regulate", Pichai told the Washington Post. We may not have got everything right when setting up the AI principles but it was important to start a conversation. He once said. "AI is one of the most important things humanity is working on and could prove more profound for human society than electricity or fire." However, the race to build machines that can self-operate is growing in Silicon Valley and is raising a lot of concerns that the technology harms people and eliminates jobs. Where do we

draw the line?


