Vitalik Buterin Warns of Huge Risks of Unchecked AI

Vitalik Buterin, co-founder of Ethereum, has raised concerns about the unchecked advancement of superintelligent artificial intelligence (AI), warning that it could become the next dominant species on Earth.

He emphasized that the outcome largely depends on how humans choose to intervene in AI development.

In a blog post dated November 27, Buterin, known for his influence in the cryptocurrency space, argued that AI differs fundamentally from previous inventions like social media, airplanes, or the printing press.

Unlike those earlier inventions, he argued, AI is itself a new form of intelligence, one whose interests could come into conflict with those of humans.

Buterin stated, “AI is a new type of mind that is rapidly gaining in intelligence, and it stands a serious chance of overtaking humans’ mental faculties and becoming the new apex species on the planet.”

One of Buterin’s key concerns is that superintelligent AI, if left uncontrolled, could lead to the extinction of humanity, especially if it comes to perceive humans as a threat to its survival.

He cited an August 2022 survey of more than 4,270 machine learning researchers, who estimated a 5–10% chance of AI causing harm to humanity.

Despite the gravity of these warnings, Buterin believes there are ways for humans to maintain control over AI.

He proposed the integration of brain-computer interfaces (BCI) as a means to give humans greater influence over AI-based computation and cognition.

BCIs establish a communication pathway between the brain’s electrical activity and external devices, such as computers or robotic limbs.

Such interfaces would significantly reduce the communication lag between humans and machines and help ensure that humans retain a level of “meaningful agency” over AI-driven decisions.

By incorporating BCIs, humans could actively participate in every decision made by AI systems, reducing the incentive for AI to take autonomous actions that may not align with human values.

Buterin emphasized the importance of “active human intention” in directing AI towards outcomes that benefit humanity rather than purely focusing on profit.

In conclusion, Buterin acknowledged the remarkable progress of human technology throughout history and expressed hope that humanity, which he described as the brightest star in the universe, can continue to use technology to expand its potential.

He envisioned that human innovations like space travel and geoengineering would play a crucial role in shaping the future of life on Earth and beyond for countless years to come.
