Elon Musk Paints Dark Picture Of A Future With Artificial Intelligence: Is He A Pessimist On The Matter?

Elon Musk sees artificial intelligence as a real danger, while others see it as an opportunity. Which side is right?


Elon Musk is telling his fellow human beings that if they do not embrace artificial intelligence and merge with machines, there might be dire consequences for their species.

In a new interview, the mogul and inventor argued that humans must merge with machines or risk becoming an endangered species.

The CEO of Tesla and SpaceX had this to say: "AI is just digital intelligence. And as the algorithms and the hardware improve, that digital intelligence will exceed biological intelligence by a substantial margin. It's obvious. We're like children in a playground ... We're not paying attention. We worry more about ... what name somebody called someone else ... than whether AI will destroy humanity. That's insane."

Musk said that while he is a strong believer in humankind, he views machines as a threat because they could come to surpass human intelligence.

He stated: "My faith in humanity has been a little shaken this year. But I’m still pro-humanity."

The businessman went on to explain: "When a species of primate, homo sapiens, became much smarter than other primates, it pushed all the other ones into a very small habitat. So there are very few mountain gorillas and orangutans and chimpanzees — monkeys in general."

Musk offered a solution via his neurotechnology company, Neuralink, whose engineers are working on what he describes as a kind of hard drive for the human brain.

He asserted: "To achieve a sort of democratization of intelligence, such that it is not monopolistically held in a purely digital form by governments and large corporations."

In a 2017 speech, Musk made similar remarks, warning that AI is the biggest risk we face as a civilization.

He stated: “Until people see robots going down the street killing people, they don’t know how to react because it seems so ethereal. AI is a rare case where I think we need to be proactive in regulation instead of reactive. Because I think by the time we are reactive in AI regulation, it’s too late.”

He added: “It [regulation] takes forever. That, in the past, has been bad but not something which represented a fundamental risk to the existence of civilization. AI is a fundamental risk to the existence of human civilization.”

Is AI a dangerous thing?