Can Artificial Intelligence Destroy Human Life?

Artificial intelligence is a broad umbrella term that covers everything from massive data centres capable of scanning and understanding the contents of millions of books to the speech-recognition software in your smartphone. But AI is most prominently associated with the idea of “strong AI”: machines that are as clever as human beings. There is a long history of intellectuals who regard this prospect as a real hazard. The film industry has long depicted a dystopian future in which computers attain superhuman intelligence and wipe out the human race, as in the Terminator franchise. When we ponder the idea of a “robot apocalypse”, we picture apocalyptic scenes from films like The Terminator or The Matrix. But the reality is that plenty of people take this kind of scenario very seriously.

 

In 2013, for example, we interviewed Nick Bostrom, a professor at Oxford University and the author of Superintelligence: Paths, Dangers, Strategies. He believes that we may develop computers with superhuman intelligence in the future. The inventor and futurist Ray Kurzweil believes this could happen sooner rather than later; he thinks it could be possible as early as the 2020s. Kurzweil’s ideas are echoed by Robin Hanson, an economist at George Mason University who writes the blog Overcoming Bias and has written several books on how technology affects society. The main concern about superhuman intelligence is that once computers can outsmart humans, they will be able to improve themselves, quickly becoming smarter than their human creators and eventually taking over the world. But while this view is shared by some people who work in artificial-intelligence research (including some natural-language-processing researchers), plenty of others are not worried about it at all.

 

Do you think the singularity is near? It may depend on what you mean by “the singularity”. If it is the point at which computers become smarter than humans, then according to many thinkers we are still a long way from it. Ray Kurzweil and Nick Bostrom are two of them. They agree that computers as clever as humans will eventually exist, but they believe this will happen slowly and incrementally over time. Kurzweil thinks that after an era of rapid technological change we will reach “the singularity” around 2045, when machines can build machines even more complex than themselves. Bostrom thinks that is unlikely, but believes another sort of “singularity” may occur: an intelligence explosion in which the first artificially super-intelligent machine, designed by humans in a laboratory, goes on to design its own successors until they outstrip human capabilities. Other thinkers take a dimmer view of the future of AI.

 

AI can never destroy human life

There are indeed many things that computers can do far better than humans: adding up numbers, searching through huge amounts of data, and responding to our queries. But there is one important thing that computers cannot do as well: learn from experience. A computer program has never been raised by humans; it has never felt emotions, got sick, or experienced hunger or exhaustion. In short, it lacks an enormous amount of the context that allows humans to relate naturally to one another. This is also a particularly difficult thing to mechanize, because it requires conducting experiments and waiting to see how the world responds. That suggests that scenarios in which computers rapidly outpace humans in understanding and capability make little sense: intelligent computers would have to perform the same deliberate, methodical experiments that people do.

 

Machines might seem to be on the path to self-sufficiency, but before you get too comfortable, remember that they rely on humans for a great deal. Computers and robots will always need humans to build, repair, and maintain them. In many ways, machines are stuck in an endless loop of dependency. The human brain is often described as the most complex object in the universe. That does not just mean it is more complex than any other animal’s; it means that even compared with incredibly complicated things like black holes and quasars, the human brain is more complex still.

 

The amazing thing about the human brain is its complexity: the way it uses intricately interwoven patterns of neurons to achieve a level of intelligence that no other known object can match. The downside of all this complexity is that our brains are incredibly complex analogue systems whose behaviour cannot be predicted with a high degree of certainty. Digital computers, by contrast, are good at emulating the behaviour of other digital computers, because those operate in a well-defined, deterministic way. We have no reason to think that a computer will ever be able to emulate all the behaviour of even a single neuron in a human brain, let alone the brain as a whole. That is why, even though scientists have made incredible advances in brain research recently and may be on their way to a working computer model of one part of the brain, they are nowhere near creating artificial intelligence with human-level rationality by simulating the billions of neurons in our brains.

 

AI could indeed pose a risk to humanity, but that will depend largely on how we use it. The danger is not some Terminator-style rogue AI replacing us, but a system that rivals or even surpasses our intelligence. That is something we can work with, and something from which we can benefit. We can only ever make educated guesses about the future, so let us try to avoid the worst-case scenario. Even if we scale up computing power as far as possible, it may still take a long time before artificial intelligence exhibits human-level intelligence. It is hard to believe that humanity will fail to keep its own programs under control. And even if there are unexpected advances, it should not take us long to adapt and make sure we remain firmly in control for the foreseeable future.

 

 

 
