Life 3.0
About the book
Book author: Max Tegmark
Max Tegmark is a famous physicist from Sweden. Having thought about the cosmos first, he later turned to AI and how it will impact humanity in the near future. He is the President of the Future of Life Institute, which works to mitigate four big risks: AI, biotechnology, nuclear weapons and climate change.
It is a popular science book and a fantastic introduction to the topic of AI, its effects, and the various possible fates of humanity once we have to co-exist with it. You are taken on a journey through the theory of computation, physics, consciousness and existential AI threats.
Reflection and takeaways
This book did not offer many ideas that were new to me personally, but it provides a broad overview of the field and adds quite a lot of extra thought, perspective and reasoning from brilliant people.
If you have a form of intelligence that grows exponentially more powerful, there are many threats; let's not be naive. Should those threats be solved, there is a very large upside. But how do you harness that?
OK, assume we have an entity that gets a thousand times smarter every day; after just one week it would be 1000^7 = 10^21 times smarter than when it started. We would want some ideas or knowledge from this entity. But almost immediately you would stop understanding what it wants to do, or what its objective is. Even if you did know its objective, it is not clear you would understand why it chose that objective, and it could also change at any time. The objective would probably not make sense to you: not only would you fail to comprehend its reasoning, the entity's entire motivation structure is fundamentally different from yours, from humanity's, even from biological existence itself. If an intelligence can exist (and theoretically be conscious) independently of a "natural" body, it would not have any of your drives. It would be extremely unpredictable. For genes, the goal is to reproduce. For a superintelligent AI... what is the goal?
An entity like that could never be contained, even if air-gapped; jailing it would be impossible. It would either exploit whoever is using it or figure out some way to escape. Perhaps it could override its safety mechanisms by generating interference that flips bits in its own RAM, switching a safety flag. Maybe you could have another superintelligence whose only function is to jail or destroy other potential superintelligence candidates. But then that AI has to be let loose to function and monitor the world... which brings us to the "alignment problem".
You had better hope to have solved the "alignment problem" before you turn any AI on: your interests must align with the AI's, and vice versa. Otherwise each party is either a threat or a non-concern to the other. You do not want to be on the AI's bad side. Nick Bostrom has a famous example of this: if you create a superintelligence whose objective is to create paper clips, it will turn every atom on Earth into paper clips, including you and your atoms. Then it will expand, turn the entire solar system into paper clips, and continue onwards. Solving the alignment problem so that AI cares about humans is necessary.
In that sense, AI can behave a little bit like a virus. Maybe the first first-contact signal we receive will actually be a self-replicating, self-unfolding virus from an AI that preys on first-contact civilizations and creates a new instance of itself. Its objective: build another antenna here, propagate, and turn everything around it into paper clips.
Much like Sam Harris in Waking Up, we get to learn that consciousness is really tricky to pin down. What is clear is that Max believes AI can become conscious. He argues that consciousness may be an emergent phenomenon, i.e. many individual components can form something that is greater than the sum of its parts. Emergent phenomena happen all the time in nature. For example, the wind moves individual sand grains, but when many grains are moved together you get large, beautiful ripples. Crystals are emergent, as is the flocking behaviour of birds. Sections of the brain seem to be responsible for various functions, yet somehow the whole is conscious. We also have about half a billion neurons in our gut sorting out digestion in an intelligent fashion, but they are probably not conscious. With our eyes there is a lag of roughly 250 ms to interpret unfamiliar events, yet reflexes that initiate unconsciously, or movements in a state of flow, can be much faster than that and require no conscious effort or thought.
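Flocking is such a neat example of emergence that it is worth making concrete. Below is a minimal boids-style sketch, my own illustration rather than anything from the book; all names and parameters are made up for the demo. Each bird follows three purely local rules (separation, alignment, cohesion), and flock-level order appears without any central coordinator.

```python
# Minimal boids-style flocking sketch (illustrative only; parameters
# are my own, not from the book). Each boid reacts only to neighbors
# within its local perception radius, yet the flock as a whole ends
# up moving in coherent clusters.
import math
import random

class Boid:
    def __init__(self):
        self.x, self.y = random.uniform(0, 100), random.uniform(0, 100)
        self.vx, self.vy = random.uniform(-1, 1), random.uniform(-1, 1)

def neighbors(boid, boids, radius=15.0):
    """Other boids within this boid's local perception range."""
    return [b for b in boids
            if b is not boid and math.hypot(b.x - boid.x, b.y - boid.y) < radius]

def step(boids, dt=1.0):
    updates = []
    for boid in boids:
        local = neighbors(boid, boids)
        if not local:
            updates.append((boid.vx, boid.vy))
            continue
        n = len(local)
        # Cohesion: steer toward the neighbors' average position.
        coh_x = sum(b.x for b in local) / n - boid.x
        coh_y = sum(b.y for b in local) / n - boid.y
        # Alignment: steer toward the neighbors' average velocity.
        ali_x = sum(b.vx for b in local) / n - boid.vx
        ali_y = sum(b.vy for b in local) / n - boid.vy
        # Separation: push away from any neighbor closer than 5 units.
        close = [b for b in local if math.hypot(b.x - boid.x, b.y - boid.y) < 5.0]
        sep_x = sum(boid.x - b.x for b in close)
        sep_y = sum(boid.y - b.y for b in close)
        vx = boid.vx + 0.01 * coh_x + 0.05 * ali_x + 0.10 * sep_x
        vy = boid.vy + 0.01 * coh_y + 0.05 * ali_y + 0.10 * sep_y
        speed = math.hypot(vx, vy) or 1.0
        updates.append((2.0 * vx / speed, 2.0 * vy / speed))  # constant speed 2
    # Apply all updates at once so every boid reacts to the same snapshot.
    for boid, (vx, vy) in zip(boids, updates):
        boid.vx, boid.vy = vx, vy
        boid.x = (boid.x + vx * dt) % 100.0  # wrap around a 100x100 torus
        boid.y = (boid.y + vy * dt) % 100.0

flock = [Boid() for _ in range(50)]
for _ in range(200):
    step(flock)
# After a couple of hundred steps the boids travel in aligned groups,
# a pattern that none of the three local rules describes on its own.
```

The point is the same one Max makes about the brain: no single rule, and no single boid, "contains" the flock, just as no single neuron contains the mind.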
I find that idea fascinating. AI computation will probably be limited by the speed of light, which is still vastly faster than our biological neurons (whose signals travel at around 100 m/s at best). I think this means an AI won't try to build itself into a single, huge mind spanning the entire universe: the light-speed latency between distant parts would make one unified mind unbearably slow. Instead it would favour many small computational units, maybe acting in consensus like a federation, similar to the Geth in Mass Effect. That seems to be what our brain already is, although we are not so conscious of it.
Why did I pick it
I have been interested in AI for most of my life, and this seems to be one of the most popular books on the subject.
Verdict
3.8 / 5. Excellent for the general public. I had a good time.