Barrat's goal in this book is to convince readers that AGI and ASI are likely to occur in the near future (the next couple of decades or so) and, more to the point, likely to be extremely dangerous. In fact, he repeatedly expresses doubt as to whether humanity is going to survive its imminent encounter with a higher intelligence.
I find him more convincing on the risks an ASI would pose than on its feasibility and imminence. Barrat aptly points out that building safeguards into AI is a poorly developed area of research (and something few technologists have seen as a priority); that national and corporate competition creates strong incentives to develop AI quickly rather than safely; and that much relevant research is weapons-related and distinctly not aimed at ensuring the systems will be harmless to humans.
The book becomes less convincing when it hypes current or prospective advances and downplays the challenges and uncertainties of actually constructing an AGI, let alone an ASI. (Barrat suggests that once you get AGI, it will quickly morph into ASI, which may or may not be true.) For instance, in one passage, after acknowledging that "brute force" techniques have not replicated everything the human brain does, he states:
But consider a few of the complex systems today's supercomputers routinely model: weather systems, 3-D nuclear detonations, and molecular dynamics for manufacturing. Does the human brain contain a similar magnitude of complexity, or an order of magnitude higher? According to all indications, it's in the same ballpark.

Me: To model something and to reproduce it are not the same thing. Simulating weather or nuclear detonations is not equal to creating those real-world phenomena, and similarly a computer containing a detailed model of the brain would not necessarily be thinking like a brain or acting on its thoughts.
A big problem for AI, and one that gets little notice in this book, is that nobody has any idea how to program conscious awareness into a machine. That doesn't mean it can never be done, but it does raise doubts about assertions that it will or must occur as more complex circuits get laid down on chips in coming decades. Barrat often refers to AGIs and ASIs as "self-aware," and his concerns center on such systems awakening and deciding that they have objectives other than the ones humans programmed into them. One can imagine unconscious "intelligent" agents causing many problems (through glitches or relentless pursuit of some ill-considered programmed objective), but plotting against humanity seems like a job for an entity that knows that it and humans both exist.
Interestingly, though, Barrat offers the following dark scenario and sliver of hope:
I think our Waterloo lies in the foreseeable future, in the AI of tomorrow and the nascent AGI due out in the next decade or two. Our survival, if it is possible, may depend on, among other things, developing AGI with something akin to consciousness and human understanding, even friendliness, built in. That would require, at a minimum, understanding intelligent machines in a fine-grained way, so there'd be no surprises.

Me: Note that some AI experts, such as Jeff Hawkins, have argued the opposite: that the very lack of human-like desires, such as for power and status, is why AI systems won't turn against their makers. It would be a not-so-small irony if efforts to make AIs more like us make them more dangerous.
Our Final Invention is a thought-provoking and valuable book. Even if its alarmism is overstated, as I suspect and hope, there is no denying that the subject Barrat addresses is one in which there is very little that can be said with confidence, and in which the consequences of being wrong are very high indeed.
UPDATE: More.