Homo Deus

About the book

Book author: Yuval Noah Harari

Homo Deus is a book that discusses where humanity is going by drawing on threads from the past. It is the successor to another very good book by the same author, Sapiens: A Brief History of Humankind.

Yuval theorises that humanity is on the path to transcend Homo sapiens by becoming the next thing: Homo Deus. Homo Deus is what will become of us once we have mastered the next wave of revolutionary, god-like technologies, such as artificial intelligence, genetic engineering and whatnot.

The book is written in the format of providing interesting answers to a series of interesting questions. It is clear we are in an environment completely foreign to the one humanity was born into. We have gone from famine and hunter-gatherer tribes to an industrialized society in a couple of thousand years. We seem to be motivated by improving – but will that improvement ever stop?

The book is rather long, so in order for me to write down my favourite takeaways, the reflection part will be longer than usual.

Reflection and takeaways

I have been fascinated with gene editing ever since experimenting with creating antibiotic-resistant bacteria in school (which was dangerous in hindsight). I have been resistant to the idea of tampering with the human genome because it is a Pandora’s box: normal kids will never be able to compete with the superchildren, and I am sure it will become a barrier between the rich and the poor. Secondly, it may irreversibly alter whatever evolution is, if it ever had a goal. I am pretty certain there is no goal other than stochastic optimization, but it would be tragic to find that out too late, after we had altered something that should not have been altered. For all my skepticism, however, I had never considered this simple killer to the “keep it natural” argument: what if you fertilize 100 eggs and just pick the best one? Technically you have not modified anything, because that egg could have been born anyway. You are just messing with chance, not cutting with gene scissors. But if that is not breaking the rules, why would gene-scissoring be?

Harari has a theory that humanity has done so well because we can cooperate at scale by believing in collective fantasies – religion, or belief in some societal construct. I am inclined to agree. If you reflect on this idea for a few minutes, it really is absurdly strange. Money is just invented, yet many people spend their lives around it. A degree is just a paper. A grade is just a number. Social status is projection. It is your belief in them that makes them valid, and you cannot get very far in life if you do not believe in them. Interestingly, you can also believe too far: if you believed in the Christian afterlife, it may have made sense and felt meaningful to join the Crusades and die a “noble” death. I have said this about snapstreaks and Instagram likes for years: they are nothing but numbers in a database somewhere – they should not have meaning to you. The argument can be stretched quite far, because almost everything today is invented.

There is a movie that explores this a little bit, Triangle of Sadness, where a bunch of billionaires and tech CEOs on a yacht end up stranded on an island. Suddenly they realise that they are useless, while the crew members who were “nobodies” have practical skills – the only thing that is useful on the island. And so the balance of power shifts completely.

Globalisation has made everything more connected. A dystopian side effect of this concerns paradigm-shifting events happening where you are not. When there are Ebola outbreaks in Africa, people elsewhere bring out their smartphones and call their stockbrokers to invest in Ebola medicine and vaccine companies. The horrible event is used as an opportunity to make money. But maybe the cash flow will increase the research output - who knows.

Another section dealt with human ignorance and the birth of science - why almost everything now is powered by growth and progress. I found this quote profound:

“The greatest scientific discovery was the discovery of ignorance. Once humans realised how little they knew of the world, they suddenly had a very good reason to seek new knowledge, which opened up the scientific road to progress.”

Suddenly, all hands were on deck to discover and grow, because we have no idea what is going on. What happens in the future? How far can this be stretched? As an AI guy I find this profound, because the pursuit of knowledge and intelligence is on the cusp of a recursive discovery of itself. Intelligence is already decoupling from consciousness, which means the two are distinct: you can have overwhelming intelligence without consciousness. I guess that kind of follows, because you can have consciousness without much intelligence, as with some animals, but it was not really obvious to me until I had the thought. Sadly, that means humans will have less economic value in the future; only time can tell what will happen. On an optimistic note, Harari explains that there is a sort of yin and yang at work today: wherever there is the most power, there is the most outcry about ethics. That is good and democratic. Hopefully AI ethics becomes a widely thought-through subject, or we will all be slain and consumed by an intelligence that must calculate digits of Pi at all costs.

A segment about the brain and simulation follows. A soldier did not perform so well in a combat simulation in virtual reality, so scientists removed her sense of self-doubt, and she then performed absolutely flawlessly. Sally, the subject, said that being absolutely clear in all her actions, without any remorse or doubt, was the most euphoric feeling she had ever felt, and she wanted to experience it again. It got me thinking about how well I would perform if I never had doubt about what I was doing. I am sure I would be several orders of magnitude more productive and efficient in life. The price is losing virtue, because courage is the only virtue that cannot be faked. Maybe the human part is doing your best, bravely, unlike the machines.

Another explored idea is a dystopian version of the sunk cost fallacy. Harari mentions the story of Don Quixote, the famous deranged old man who believed he was fighting giants but in reality jousted windmills. What would happen if he killed a human being? Harari says there are three distinct outcomes:

  1. nothing happens – he simply does not care and continues.
  2. he wakes up from his delusion and feels terribly ashamed.
  3. he continues with his fantasy and doubles down on it, because admitting he is wrong exposes him to the realisation that it was a pointless death.

The problem can be generalised. In a war context, the fallacy in 3) becomes “our boys didn’t die in vain” – so let’s send more troops. Just think about the ongoing Russian invasion of Ukraine. Giving up the war would mean all of the deaths were meaningless, which makes stopping much harder. That is inconceivable – how can it have been a pointless waste? And so it continues. It is not the first time, and it will probably not be the last.

The last section of the book treats Dataism, a sort of post-transhumanist religion. Given that beings and their consciousness are biological machines running algorithms, that these can be controlled or created, and that you can have intelligence without consciousness, where are we going? With ever-growing amounts of data and the increasing importance of algorithms, will we just end up serving some sort of information flow? Is that our final purpose? Hopefully not. I’m more of the “purpose of human life is to live your best experience” type, even if I get left behind at the station by the train heading into the machine future.

Why did I pick it up

Many people have told me that I would enjoy this book. I liked Sapiens by Yuval too, so it was inevitable that I would read it.

Verdict

3.55. Some thoughts were profound, but some sections were not particularly interesting. The book also took quite some time for me to read, which is a bad sign.