I just finished reading two fascinating books: The Singularity Is Near by Ray Kurzweil and Superintelligence: Paths, Dangers, Strategies by Nick Bostrom. Both send us into the future, where the exponential development of robots and other technologies has changed our societies completely. As far as I understand, Ray works at Google and Nick works at Oxford University. Both have already done more than most people achieve in a lifetime, but their descriptions of the future make me wonder.
Ray describes a future where artificial intelligence (AI) has eclipsed human intelligence. Piece by piece, nanorobots and other machines will take over our bodies and transform us into cyborgs. This transformation will happen around 2045 according to Ray, meaning we have 28 years until humanity changes beyond recognition. According to the “Law of Accelerating Returns,” computers will be able to design new technologies themselves, making development accelerate even further. Thanks to becoming cyborgs, we will also become super smart, Ray says. At the same time, nanorobots could rebel and quickly send us into oblivion.
Nick describes a future where superintelligence will arrive around 2105. By then, machines will be able to learn and act without needing humans to guide them. Most, if not all, jobs will be handled by robots and machines. This will, in turn, leave the majority of humans without jobs, so their basic needs will have to be provided for by others. Meanwhile, the rich will be super rich, since they control much of the production. A great thing about Nick’s book is that it reflects even more on the philosophical questions that surround these major developments. For example, building the International Space Station (ISS) brought together people from the US and Russia, showing others that former rivals could work together. We humans will need the same kind of collaboration when creating a superintelligent future, says Nick.
Once I had read these somewhat bombastic descriptions of our future, questions arose:
- Does a projection made by engineers create such a future simply by projecting it? Or will what they describe happen anyway? Given the massive amount of attention Ray and Nick receive, I am not sure.
- Do we want to walk down this path just because we can? Yes, better treatment of diseases is welcome, but designing machines that are smarter than us?
- What do we mean by saying that something is intelligent? Is it descriptive, or normative?
- Does high intelligence equal happiness? Most probably not. Just look at some of the brightest people in history: many of them led miserable lives or even took their own lives. Also, many people suffering from depression do so because they see, know, and feel more than others who instead shut down their feelings. Therefore, I wonder what happens when we reach for superintelligence. Will we see Super Depression?
- What happens when the machines start copying not only our strengths, but also our weaknesses? As described by The Verge and the Guardian, AI can pick up racial and gender bias. As Tim Ferriss and many of his podcast guests have said: we humans are deeply flawed animals and sorry excuses for creatures living on Earth, but we have our highlights. Just pick up any history book to see the brutal force with which we have destroyed our planet and other species. What if AI starts mimicking this?
- Things don’t just happen by themselves. We can train each generation to think ethically about what should happen.
- Where are the alternative futuristic descriptions of everlasting happiness, art, wine, and music?
Ray and Nick have written two fascinating books, and I will now complement them by reading Homo Deus: A Brief History of Tomorrow by Yuval Noah Harari. There, humans agree to give up meaning in exchange for power, and the development creates what he refers to as a “useless class” plus a new religion called ‘Dataism.’ I am not sure it will be uplifting to read, but maybe I will feel more intelligent after reading that book too. And perhaps therein lies all the difference.