The bold futurist and bestselling author explores the limitless potential of reverse-engineering the human brain
Ray Kurzweil is arguably today’s most influential—and often controversial—futurist. In How to Create a Mind, Kurzweil presents a provocative exploration of the most important project in human-machine civilization—reverse engineering the brain to understand precisely how it works and using that knowledge to create even more intelligent machines.
Kurzweil discusses how the brain functions, how the mind emerges from the brain, and the implications of vastly increasing the powers of our intelligence in addressing the world’s problems. He thoughtfully examines emotional and moral intelligence and the origins of consciousness and envisions the radical possibilities of our merging with the intelligent technology we are creating.
Certain to be one of the most widely discussed and debated science books of the year, How to Create a Mind is sure to take its place alongside Kurzweil’s previous classics, which include Fantastic Voyage: Live Long Enough to Live Forever and The Age of Spiritual Machines.
Ray Kurzweil is a world-class inventor, thinker, and futurist, with a thirty-five-year track record of accurate predictions. He has been a leading developer in artificial intelligence for 61 years, longer than any other living person. He was the principal inventor of the first CCD flat-bed scanner, omni-font optical character recognition, print-to-speech reading machine for the blind, text-to-speech synthesizer, music synthesizer capable of recreating the grand piano and other orchestral instruments, and commercially marketed large-vocabulary speech recognition software. Ray received a Grammy Award for outstanding achievement in music technology; he is the recipient of the National Medal of Technology and was inducted into the National Inventors Hall of Fame. He has written five best-selling books, including The Singularity Is Near and How to Create a Mind, both New York Times best sellers, and Danielle: Chronicles of a Superheroine, winner of multiple young adult fiction awards. His forthcoming book, The Singularity Is Nearer, will be released June 25, 2024. He is a Principal Researcher and AI Visionary at Google.
How to Create a Mind: The Secret of Human Thought Revealed, Ray Kurzweil
How to Create a Mind: The Secret of Human Thought Revealed is a non-fiction book about brains, both human and artificial, by the inventor and futurist Ray Kurzweil. It was first published on November 13, 2012.
Kurzweil describes a series of thought experiments which suggest to him that the brain contains a hierarchy of pattern recognizers. Based on this he introduces his Pattern Recognition Theory of Mind (PRTM).
He says the neocortex contains 300 million very general pattern recognition circuits and argues that they are responsible for most aspects of human thought. He also suggests that the brain is a "recursive probabilistic fractal" whose line of code is represented within the 30-100 million bytes of compressed code in the genome.
Date of first reading: December 9, 2018
Title: The Future of Brain Simulation: How to Create a Non-Biological Brain; Author: Ray Kurzweil; Translator: Hossein Kashefi Amiri; Tehran: Ayandeh Pazhouh, 1396; 330 pages; ISBN 9786007265574; Subjects: Artificial intelligence; Consciousness; Brain; American authors; 21st century
The Future of Brain Simulation: How to Create a Non-Biological Brain is about the secret of the human mind and about the brain, both natural and artificial, as discussed by the scientist, inventor, and futurist Ray Kurzweil. The book was first published on November 13, 2012, in a single volume by Viking. In it the author argues that the human mind is made up of a hierarchy of pattern recognizers that use a statistical model to learn, store, and retrieve information, and so on.
Date of last update: 09/06/1399 Solar Hijri; A. Sharbiani
I saw this book while browsing around in a local book store and the title really caught my eye. Kurzweil was a name I already knew and there were good reviews from some very well known people printed on the back - I bought it. However, after just the first few chapters I was beginning to get the feeling that I had wasted my $25, and nearer the end I felt that I had wasted my time as well. By the end of the book I felt that it was a real waste of the paper it was printed on.
Kurzweil started off by giving a very brief description of how his Hierarchical Hidden Markov Model (HHMM) has made his speech recognition software so successful, and then of the billions he has made from it. He goes on to boast that if others adopted the same model they'd build far superior machines. He then progresses to speculate that if only our brains worked like his speech recognition software we'd have far superior minds. At this point I felt that this section read very much like an excerpt of some talk he gave, sprinkled onto his marketing brochure - totally devoid of the useful information you'd expect from a book with such a title. It was as though he sent a draft to his panel of patent lawyers to remove anything that could give away anything at all about the technology he employs in his company. (This was where I felt $25 poorer.)
In the middle of the book he seemed to have lost his focus and started talking about random topics. This to me felt as if he handed over the writing to his interns. (Now I was feeling like my time was wasted.)
Towards the end he was like some over-the-hill has-been after too much wine, rambling about the philosophy of consciousness, identity, free will, etc., while focusing on his law of accelerating returns (LOAR). This, I must say, is a gross misuse of the term "law". He rambled on about how his graphs of data on computing power, data capacity, and price were laws in the sense of the laws of thermodynamics. These are data extrapolations, Mr. Kurzweil! They can hardly be compared to physical laws. (Now I really felt this book was a waste of the paper it was printed on.)
All through the book I was wondering why, with a title like this, there were no references to the very interesting (and publicly accessible) research done at the Allen Institute. As it turns out, the reason was in the final chapter - Paul Allen had earlier criticised the "laws" in one of Kurzweil's earlier books as not being actual physical laws. I'm imagining that at this point he must have been really drunk to dedicate a whole chapter to saying how wrong Paul Allen was.
I don't normally write lengthy reviews of books that I've read, but this was so bad that I felt obliged to warn others not to waste their money and time (and save some trees in the process). I'd give this book zero stars if I could.
How to Create a Mind: The Secret of Human Thought Revealed by Ray Kurzweil
"How to Create a Mind" is a very interesting book that presents the pattern recognition theory of mind (PRTM), which describes the basic algorithm of the neocortex (the region of the brain responsible for perception, memory, and critical thinking). It is the author's contention that the brain can be reverse engineered due to the power of its simplicity, and that such knowledge would allow us to create true artificial intelligence. The one and only futurist, prize-winning scientist, and author Ray Kurzweil takes the reader on a journey through the brain and the future of artificial intelligence. This enlightening 352-page book is composed of the following eleven chapters: 1. Thought Experiments on the World, 2. Thought Experiments on Thinking, 3. A Model of the Neocortex: The Pattern Recognition Theory of Mind, 4. The Biological Neocortex, 5. The Old Brain, 6. Transcendent Abilities, 7. The Biologically Inspired Digital Neocortex, 8. The Mind as Computer, 9. Thought Experiments on the Mind, 10. The Law of Accelerating Returns Applied to the Brain, and 11. Objections.
Positives: 1. Well researched and well-written book. The author's uncanny ability to make very difficult subjects accessible to the masses. 2. A great topic in the "mind" of a great thinker. 3. Great use of charts and diagrams. 4. A wonderful job of describing how thinking works. 5. Thought-provoking questions and answers based on a combination of sound science and educated speculation. 6. The art of recreating brain processes in machines. "There is more parallel between brains and computers than may be apparent." Great stuff! 7. Great information on how memories truly work. 8. Hierarchies of units of functionality in natural systems. 9. How the neocortex must work. The Pattern Recognition Theory of Mind (PRTM), the main thesis of this book. The importance of redundancy. Plenty of details. 10. Evolution…it does a brain good. Legos will never be the same for me again. 11. The neocortex as a great metaphor machine. Projects underway to simulate the human brain, such as Markram's Blue Brain Project. 12. Speech recognition and Markov models. The author provides a lot of excellent examples. 13. The four key concepts of the universality and feasibility of computation and its applicability to our thinking. 14. A fascinating look at split-brain patients. The "society of mind." The concept of free will: "We are apparently very eager to explain and rationalize our actions, even when we didn't actually make the decisions that led to them." Profound, with many implications indeed. 15. The issue of identity. 16. The brain's ability to predict the future. The author's own predictive track record referenced. 17. The law of accelerating returns (LOAR), where it applies, and why we should train ourselves to think exponentially. 18. The author provides and analyzes objections to his thesis. In defense of his ideas. Going after Allen's "scientist's pessimism." 19. The evolution of our knowledge. 20. Great notes and links.
Negatives: 1. The book is uneven. That is, some chapters cover certain topics in depth while others suffer from a lack of depth. Some of this is understandable, as it relates to the limitations of what we currently know, but I feel the book could have been reformatted into smaller chapters or subchapters. The book bogs down a little in the middle sections. 2. Technically, I disagree with the notion that evolution always leads to more complexity. Yes on survival, but not necessarily on complexity. 3. The author has a tendency to cross-market his products a tad much. It may come across as "look at me…" 4. A bit repetitive. 5. Sometimes leaves you with more questions than answers, but that may not be a bad thing… 6. No formal separate bibliography.
In summary, overall I enjoyed this book. Regardless of your stance on the feasibility of artificial intelligence, no one brings it like Ray Kurzweil. His enthusiasm and dedication are admirable. The author provides his basic thesis of how the brain works and a path to achieving true artificial intelligence and all that it implies. Fascinating in parts, bogs down in other sections, but ultimately satisfying. I highly recommend it!
Further suggestions: "Subliminal" by Leonard Mlodinow, "The Believing Brain: From Ghosts and Gods to Politics and Conspiracies---How We Construct Beliefs and Reinforce Them as Truths..." by Michael Shermer, "The Scientific American Brave New Brain: How Neuroscience, Brain-Machine Interfaces, Neuroimaging, Psychopharmacology, Epigenetics, the Internet, and ... and Enhancing the Future of Mental Power..." by Judith Horstman, "The Blank Slate: The Modern Denial of Human Nature" by Steven Pinker, "Who's in Charge?" and "Human: The Science Behind What Makes Us Unique" by Michael S. Gazzaniga, "Hardwired Behavior: What Neuroscience Reveals about Morality" by Laurence Tancredi, "Braintrust: What Neuroscience Tells Us about Morality" by Patricia S. Churchland, "Paranormality" by Richard Wiseman, "The Myth of Free Will" by Cris Evatt, "SuperSense" by Bruce M. Hood, and "The Brain and the Meaning of Life" by Paul Thagard.
I like Kurzweil. But I thought he did a little too much boasting and did not provide enough details.
First half of the book: it appears that we can model the brain with hierarchical hidden Markov models better than we can with neural nets. Some back of the envelope calculations show that Hidden Markov models may contribute to the functioning of the brain. Ok, so far so good.
Second half of the book: wildly uneven coverage of a wide range of topics in neuroscience and philosophy, such as identity, free will, and consciousness.
Kurzweil likes to frequently mention all of the contributions that he has made to AI. I think this could have been toned down a little bit. Back in year XXX, I was one of the first to do XYZ.
He has some good ideas in the first part, but I don’t think he comes close to explaining how to create a mind.
Kurzweil's book offers an overview of the biological brain and briefly surveys some attempts at replicating its structure or function inside the computer. He also offers his own high-level ideas, which are mostly a restatement of what can already be found in other books (such as Hawkins' On Intelligence) with a few modifications (he admits this himself at one point, for which he gets bonus points). Finally, he applies his Law of Accelerating Returns (LOAR) to the field of AI and produces some predictions for the future of this field.
The good: Nice thought experiments section, nice overview of the biological brain (both the old brain and the cortex, and their functions), reasonably OK philosophical mumbo jumbo parts about consciousness and whether it is possible for a computer to be a mind (if you're into that), some analysis of relevant computational trends. By the end, you're almost convinced we're almost there!
The bad: First, his own theories are extremely vague and half-baked (though I forgive this; if he knew more he would be busier with things other than writing this book) and essentially reduce to some form of Hierarchical Hidden Markov Model. That's not especially exciting; I think most researchers in the field will agree on such high-level things. I also find it puzzling that he claims to be talking about the mind in its entirety, but then his exposition focuses almost entirely on temporal modeling/prediction aspects and mostly ignores a lot of other magical components of a mind, such as a flexible and efficient knowledge representation/inference engine, or a reinforcement-learning-like actor/critic system that surely exists somewhere at the core of all of our learning and reasoning.
All in all, I would recommend this book to anyone who's interested in some pointers to our efforts to replicate a brain in the computer, who wants to learn a bit about the biological brain, or who's into the philosophy of it all.
One of the most interesting books this year, which describes in a simple and understandable way the development of the human mind, the structure of the brain, and the possibility of building artificial intelligence based on pattern recognition.
-> Playing 5-minute brain games before reading increases reading speed by 30-40%.
Somehow I had imagined this differently. After going through the material, I personally got rather little out of it. That's mainly because I'm not all that interested in the technical details of AI systems. He devotes extensive consideration to training a brain, whether it is biological or made of software: how does a hierarchical (digital or biological) pattern recognition system learn?! On the other hand, while Kurzweil writes in a very accessible style - the book reads like sliced bread - the sentences still ramble on and on and tempt you to find everything else more interesting at that moment. Put another way: if I were sitting on the couch with Kurzweil of an evening, hoping for a stimulating discussion, I'd have nodded off by the third beer. He does too much advertising for his projects, books, and companies; the book reads a bit like a sales pitch. He also only skims the topics. For me it's more a rough collection of ideas than a deep engagement with a specific subject.
At the beginning of the book Kurzweil deals in detail with the neocortex and the brain's pattern recognition. The neocortex is composed of highly repetitive structures that allow humans to create arbitrarily complex structures of ideas.
Memories are stored as sequences of redundant patterns.
"We can recognize a pattern even if only part of it is perceived (seen, heard, felt), and even if it contains deviations. Our ability of recognition is evidently capable of detecting invariant features of patterns and of grasping them even when the patterns themselves have noticeably changed in their properties."
The redundancy is necessary because a certain unreliability is inherent in the neural circuits.
"The conscious experience of our perceptions is, in essence, determined by our interpretations." "Accordingly, the neocortex anticipates what it expects to encounter. Imagining the future is one of the main reasons we have a neocortex at all."
The hierarchical structures are a crucial aspect of the neocortex. They have proven to be a survival advantage. Learning processes take place within just a few days. The learning of language also happens hierarchically. Recursion as a decisive ability: "...assembling small parts into a larger piece, then using that as part of an even larger structure, and continuing this process iteratively."
Our thoughts are represented primarily in neocortical patterns. Translating these into comprehensible language is a challenge. The production of language is a hierarchy of linear patterns in the brain.
Creativity: associating; having no fear of even the most outlandish ideas; breaking with cultural norms; loosening professional taboos; effectively activating a larger amount of neocortex for a task - no specialization!
Kurzweil sees the advantage of a software-based neocortex in the following:
"One constraint of the human neocortex is that there is no process there that eliminates or critically reviews contradictory ideas. This explains why human thinking is often blatantly inconsistent. The mechanism we have for invoking so-called critical thinking is weak, and this capability is used less often than it should be. In a software-based neocortex we could build in a process that identifies inconsistencies for further review."
The "hierarchical hidden Markov model" algorithm (a statistical model that can extract patterns from a body of data) takes up a lot of space in the text. Hidden Markov models are used, for example, in speech synthesis. "They encode the probability that specific sound patterns are found in each phoneme, how the individual phonemes influence one another, and the likely order of the phonemes. The system can also include probability networks at higher levels of the language structure - for example, the order of words, the inclusion of idioms, and so on, up ever higher levels of the language hierarchy.
They prune superfluous connections and model the expected magnitude distribution of each input (on a continuum) by computing the probability that the pattern in question is present."
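To make the quoted description a bit more concrete, here is a minimal sketch of a plain (non-hierarchical) hidden Markov model scored with the forward algorithm. Everything in it - the phoneme states, the observation codes, and all probabilities - is invented for illustration; it is not Kurzweil's HHMM, only the basic building block the quote describes.

```python
# Minimal HMM sketch: hidden states are phonemes, observations are discretized
# sound patterns. All labels and probabilities are made up for illustration;
# Kurzweil's hierarchical HMMs stack several such layers (phonemes -> words -> phrases).
import numpy as np

phonemes = ["h", "e", "l", "o"]               # hidden states

# P(next phoneme | current phoneme): "the likely order of the phonemes"
transition = np.array([
    [0.10, 0.80, 0.05, 0.05],
    [0.05, 0.10, 0.80, 0.05],
    [0.05, 0.05, 0.30, 0.60],
    [0.05, 0.05, 0.10, 0.80],
])
# P(sound pattern | phoneme): "specific sound patterns found in each phoneme"
emission = np.array([
    [0.7, 0.2, 0.1],
    [0.1, 0.8, 0.1],
    [0.2, 0.2, 0.6],
    [0.1, 0.3, 0.6],
])
initial = np.array([0.85, 0.05, 0.05, 0.05])

def sequence_probability(observations):
    """Forward algorithm: probability that the model generated the observed sounds."""
    alpha = initial * emission[:, observations[0]]
    for obs in observations[1:]:
        alpha = (alpha @ transition) * emission[:, obs]
    return alpha.sum()

print(sequence_probability([0, 1, 2, 2]))     # a plausible "h-e-l-o"-like sequence
print(sequence_probability([2, 2, 0, 0]))     # an implausible ordering scores much lower
```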
In the final chapters he turns to qualia. What is free will? Does it even exist? Here, the remarks on Stephen Wolfram were particularly interesting for me:
"So although our decisions may be predetermined (because our bodies and brains are part of a deterministic universe), they are nonetheless fundamentally unpredictable, because we live in (and are part of) a Class IV automaton. We cannot predict the future of a Class IV automaton; we can only let the future unfold. For Dr. Wolfram, this is enough to allow for the possibility of free will."
If you don’t know much about the current state of artificial intelligence, brain science, or the philosophy of consciousness, and don’t mind a little bit of technical discussion, Kurzweil does a fine job of articulating the current rapid convergence between these areas of understanding. However, if you already know the basics, this book probably isn’t going to do much to expand your own consciousness.
Speaking as a software engineer who has a fascination with AI, I largely agree with Kurzweil's glowing assessments about the future of machine intelligence, though I'd probably push his timeframe back a few decades and could do with a bit less of his self-promotion. Though there's a lot we still don't understand about how the human brain operates, neuroscience and computer science are starting to form the same fundamental insights about how intelligence "works", whether it's represented as neurons or a mathematical process. In a truly intelligent machine, data from the outside world is taken in by a large, hierarchical array of pattern-recognizers, which gradually rewire themselves to better anticipate the messy-but-hierarchical patterns of the real world (visual squiggles to letters, letters to words, words to syntax, syntax to meanings, meanings to relationships, relationships to concepts, concepts to insights -- and back down again). To some extent, the software world has already made useful progress in this direction.
However, most of the insights Kurzweil offers aren’t anything new. Indeed, most of what he says was explored in Jeff Hawkins's 2004 book, On Intelligence, and in academia before that. Briefly stated, the hierarchical architecture of the human brain’s neocortex is the major engine of human intelligence, and it seems to start out mostly as a blank slate, a generalized learning machine that builds neural connections through experience, eventually forming a complex inductive model of reality, which constantly makes predictions about what comes next. Kurzweil shares some of his own successes solving certain kinds of problems decades ago, but the new ideas he advances seem somewhat vague and underdeveloped (maybe he’s saving the nuts and bolts for his new job at Google).
Still, there's plenty here for a general audience, when he gets away from the geekery. Kurzweil is passionate and pretty convincing about his belief that even limited gains in awareness of how the human brain works still provide AI researchers with some powerful springboards, and that, conversely, advances (or missteps) in AI teach us more about the brain. As he points out in discussing Watson, the IBM computer system that famously won on Jeopardy after acquiring most of its knowledge from scanning natural-language documents (the sampling of questions it got right is impressive), things have already come a long way. And there's no reason to believe that the rapid convergence won't continue, especially in the post-cloud computing world. After all, the specific, idiosyncratic way our monkey-rat-lizard brains were shaped to think as our ancestors crawled/darted/clambered around undoubtedly isn't the only way an evolutionary process can discover thought.
There’s also a succinct but informative history of the field of AI, with brief overviews of significant thinkers and developments. And Kurzweil wades a little bit into the philosophy of consciousness, exploring some of its more paradoxical aspects in light of what science knows about the human brain. For example, it's been shown that the two cerebral hemispheres, in patients with a severed connection, operate almost as two separate brains. Yet each one still seems to think it has a conscious link to the other. Maybe such individuals are more like two people in one body, but don't realize it? Eerie, huh? His other thought experiments are nothing new, but still fun. Everyone should know what the Chinese Room is.
Finally, there’s a section in which Kurzweil responds to critics, and calls out a few flagrant misunderstandings of his ideas. While it’s debatable how on-target his past predictions about technology have been, as far as I’m concerned, if he was even halfway right, then he’ll be fully right soon enough.
Overall, I think I would recommend this book most to AI neophytes who haven’t read anything by Kurzweil before. His enthusiasm for the topic can be quite inspiring. For other readers, especially those who have read On Intelligence, I don’t think you’re missing anything essential. I’d probably give this one 4 stars for the former audience, 2.5 for the latter, 3.5 overall.
Well, I am simply in love with Kurzweil. How could I not be? This was one of the best books on Philosophy of Mind that I could imagine reading. Early on in the book, Kurzweil respectfully disagreed with Steven Pinker, imo setting himself apart from the good-genes crew (Dawkins et al.). He went on to take his lucky reader on a tour of the future of the mind, teaching them about everything that has been done to date to try to create a mind.
In 2008, I took a cognitive science class that featured a lot of Kurzweil's work, as well as many other things included in this book. I later took two courses in Philosophy of Mind. All of these courses focused heavily on AI. I loved those classes so very much and this book brought everything flooding back.
You will be treated to the role Hidden Markov Models (HMMs) play in speech dictation. In fact, this very book was written not by hand, but was dictated using Dragon Dictation (which is a product of HMMs). Kurzweil also provided his reader with a short but excellent history of Philosophy of Mind by including Jackson's Mary (the Knowledge Argument), Searle's Chinese room, Chalmers's zombies, and Dennett's ideas about all of that. I was sad that he didn't include Andy Clark, but even with that oversight, it was one of the best and most relatable summaries of Philosophy of Mind that I have read. He took out the jargon and, instead, made every concept easy enough for a middle schooler to grasp, yet interesting enough for academics.
Kurzweil chose the most interesting bits of neuroscience to include in this book, all of which are still exciting in 2016. I can only imagine what I would have felt like if I had read this book in 2012. I would have been blown away.
The efforts to create a mind have been ongoing for decades. There is no stopping it, much to the chagrin of many. If you want to be informed about how this process works, read this and Kevin Kelly's The Inevitable. They pair nicely with one another.
This is a fascinating look into how our brains operate, and how the first synthetic brains have been operating, and will operate as they become more sophisticated (and, eventually, sentient).
Somewhat repetitive at the start, but it develops very well. Kurzweil's perspective is quite different from that of most writers and gave me much of what I was looking for. Since he worked programming artificial-intelligence algorithms, he can speak like few others about pattern recognition and learning oriented around it. It's a good book for this interface between brain and technology, which should become ever more common.
*A full executive summary of this book is available here:
When IBM's Deep Blue defeated humanity's greatest chess player Garry Kasparov in 1997 it marked a major turning point in the progress of artificial intelligence (AI). A still more impressive turning point in AI was achieved in 2011 when another creation of IBM named Watson defeated Jeopardy! phenoms Ken Jennings and Brad Rutter at their own game. As time marches on and technology advances we can easily envision still more impressive feats coming out of AI. And yet when it comes to the prospect of a computer ever actually matching human intelligence in all of its complexity and intricacy, we may find ourselves skeptical that this could ever be fully achieved. There seems to be a fundamental difference between the way a human mind works and the way even the most sophisticated machine works--a qualitative difference that could never be breached. Famous inventor and futurist Ray Kurzweil begs to differ.
To begin with--despite the richness and complexity of human thought--Kurzweil argues that the underlying principles and neuro-networks that are responsible for higher-order thinking are actually relatively simple, and in fact fully replicable. Indeed, for Kurzweil, our most sophisticated AI machines are already beginning to employ the same principles and are mimicking the same neuro-structures that are present in the human brain.
Beginning with the brain, Kurzweil argues that recent advances in neuroscience indicate that the neocortex (whence our higher-level thinking comes) operates according to a sophisticated (though relatively straightforward) pattern recognition scheme. This pattern recognition scheme is hierarchical in nature, such that lower-level patterns representing discrete bits of input (coming in from the surrounding environment) combine to trigger higher-level patterns that represent more general categories that are more abstract in nature. The hierarchical structure is innate, but the specific categories and meta-categories are filled in by way of learning. Also, the direction of information travel is not only from the bottom up, but also from the top down, such that the activation of higher-order patterns can trigger lower-order ones, and there is feedback between the varying levels. (The theory that sees the brain operating in this way is referred to as the Pattern Recognition Theory of the Mind or PRTM).
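The bottom-up/top-down flow described above can be illustrated with a small toy sketch. The class, the stroke/letter example, and the thresholds are all invented; this is only meant to show the two directions of signalling (recognition going up, expectation coming down), not the PRTM itself.

```python
# Toy sketch of hierarchical pattern recognizers (not Kurzweil's implementation):
# a recognizer fires when enough of its child patterns have been seen, and a
# higher-level expectation primes its children so they fire on weaker evidence.

class PatternRecognizer:
    def __init__(self, name, children=None, threshold=0.9):
        self.name = name
        self.children = children or []    # lower-level patterns this one is made of
        self.threshold = threshold        # fraction of children needed to fire
        self.expected = False             # set by a top-down prediction

    def recognize(self, observed):
        """Bottom-up pass: do the observed low-level features activate this pattern?"""
        if not self.children:             # leaf recognizer: matches a raw feature directly
            return self.name in observed
        hits = sum(child.recognize(observed) for child in self.children)
        needed = self.threshold * len(self.children)
        if self.expected:                 # top-down expectation lowers the evidence needed
            needed *= 0.5
        return hits >= needed

    def predict(self):
        """Top-down pass: tell children they are expected, i.e. prime them."""
        for child in self.children:
            child.expected = True
            child.predict()

# Strokes -> letter "A" -> a (toy) word built from it
strokes = [PatternRecognizer(s) for s in ("/", "\\", "-")]
letter_a = PatternRecognizer("A", children=strokes)
word = PatternRecognizer("APPLE", children=[letter_a], threshold=1.0)

print(letter_a.recognize({"/", "\\"}))    # False: 2 of 3 strokes is not enough on its own
word.predict()                            # the higher level expects the word, priming "A"
print(letter_a.recognize({"/", "\\"}))    # True: top-down expectation fills the gap
```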
As Kurzweil points out, this pattern recognition scheme is actually remarkably similar to the technology that our most sophisticated AI machines are already using. Indeed, not only are these machines designed to process information in a hierarchical way (just as our brain is), but machines such as Watson (and even Siri, the voice recognition software available on the iPhone), are structured in such a way that they are capable of learning from the environment. For example, Watson was able to modify its software based on the information it gathered from reading the entire Wikipedia file. (The technology that these machines are using is known as the hierarchical hidden Markov model or HHMM, and Kurzweil was himself a part of developing this technology in the 1980's and 1990's.)
Given that our AI machines are now running according to the same principles as our brains, and given the exponential rate at which all information-based technologies advance, Kurzweil predicts a time when computers will in fact be capable of matching human thought--right down to having such features as consciousness, identity and free will (Kurzweil's specific prediction here is that this will occur by the year 2029).
What's more, because computer technology does not have some of the limitations inherent in biological systems, Kurzweil predicts a time when computers will even vastly outstrip human capabilities. Of course, since we use our tools as a natural extension of ourselves (figuratively, but sometimes also literally), this will also be a time when our own capabilities will vastly outstrip our capabilities of today. Ultimately, Kurzweil thinks, we will simply use the markedly superior computer technology to replace our outdated neurochemistry (as we now replace a limb with a prosthetic), and thus fully merge with our machines (a state that Kurzweil refers to as the singularity). This is the argument that Kurzweil makes in his new book 'How to Create a Mind: The Secret of Human Thought Revealed'.
Kurzweil lays out his arguments very clearly, and he does have a knack for explaining some very difficult concepts in a very simple way. My only objection to the book is that there is a fair bit of repetition, and some of the philosophical arguments (on such things as consciousness, identity and free will) drag on longer than need be. All in all there is much of interest to be learned both about artificial intelligence and neuroscience. A full executive summary of this book is available here: A podcast discussion of the book will be available soon.
According to him, the basic function of the higher parts of the brain is to act as a simple pattern recognizer - and that is the fundamental algorithm of the neocortex. He claims that the brain, and especially the neocortex, shows enormous repetition, and that one could even "...safely say that there is more complexity in a single neuron than in the overall structure of the neocortex." And from this simple thesis he forges what is in fact a complete theory of the emergence of consciousness and, no less important, of how it can be transferred into digital space with all its advantages and disadvantages.
I consider myself a singularity skeptic, and I'm definitely not convinced by Kurzweil's so-called "Law of Accelerating Returns", but starry-eyed idealism about the future aside, this book is quite well-reasoned and well-argued. I've seen firsthand how deep learning applications can deliver some pretty amazing results, and it's hardly a stretch to say that this can only get better faster as long as Moore's Law holds (which could end tomorrow or a century from now).
But honestly what surprised me the most out of this book was how willing Kurzweil was to grapple with philosophical issues, as opposed to merely technical ones. Though I seem to recall he once or twice attributed quotes or ideas to the wrong people, the ideas were fully formed, and highly relevant. I've definitely reevaluated my opinion of Kurzweil, and I think I'll check out his other stuff.
Kurzweil is not for everyone, but he is for me. He covers a wide range of topics, from how the brain works, quantum physics, logical positivism, and Ludwig Wittgenstein, up to what it really means to be human.
I get a little glassy-eyed during the descriptions of the brain and its interactions, but he explains them as well as anyone, and I could follow them, though not well enough to repeat them to others. When he's talking about what constitutes a thinking human, though, is where he really excels and excites, and I can and will repeat his thoughts on that stuff to others.
The narrator really added to the book's enjoyment. I thought he narrated the book exactly the way the author would have while writing it.
This was mostly boring and repetitive. It can be summed up as follows: "The human brain can be modelled as a series of pattern recognizers. It is possible to recreate these pattern recognizers in software. If technology keeps advancing at the current exponential rate, we will soon be able to model a human brain with software - and since this model will be indistinguishable from the original we will be able to speak of this creation as being a conscious intelligent entity" No need to read most of the book which is just conjecture mixed with actual scientific evidence to support this argument. Also Kurzweil really wants the reader to think of him as an Einstein/Darwin type revolutionising a branch of science?
Beyond some spurious dialog of computer modeling, the book is cleanly written and well-argued. The chapter on consciousness offers an amazing discussion of how a computer can (or can’t) replicate a human mind. The author finishes by taking on objections to his ideas. Highly recommended.
While the brain has been considered by many to be beyond the scope of comprehension, history is replete with claims of what couldn’t be done. How to Create a Mind offers a thoroughly supported argument for the eventual reverse engineering of the human brain.
Very interesting look at how to create a mind. One of the most fascinating parts, and a real-world experience I now understand better, is how the Dragon speech-to-text engine was created. In the last couple of years, I have been working with dictation applications and the struggles we have had with the tool. You often hear, "Why doesn't it understand what I'm saying?" Listening to this book, I now understand how the fundamentals of recognition were constructed and why folks may be struggling. Very interesting, and not something I expected to learn from this book.
Neuroscience has had such an impact on the development of AI, in particular with regard to the development of deep neural networks. But what if the goal was to mimic the mind? To do so would require fulfilling a few key criteria: processing sensory information; crucially, understanding the information fed in; and possessing some level of consciousness (sufficient to pass a Turing test).
There were a few elements of the sensory-processing section which were quite fascinating - for one, the idea that you could solve the issue of trying to create a model cochlea (for detecting specific frequencies of sound) by untying a Gordian knot: by creating lots of band-pass filters with a technique suited for pattern matching, we can quite easily select for the detection of specific frequencies of sound.
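As a rough illustration of the band-pass idea (not the book's cochlea model), here is a small sketch that splits a synthetic signal's spectrum into a few frequency bands and reports the energy in each; the band edges and test tones are arbitrary choices.

```python
# Crude "filter bank" sketch: measure the energy in a few frequency bands,
# the way a model cochlea would report which frequencies are present.
import numpy as np

fs = 16000                                    # sample rate in Hz
t = np.arange(0, 0.1, 1 / fs)
signal = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 2000 * t)

spectrum = np.abs(np.fft.rfft(signal)) ** 2   # power spectrum
freqs = np.fft.rfftfreq(len(signal), 1 / fs)

bands = [(100, 800), (800, 1500), (1500, 3000)]   # invented "hair cell" frequency ranges
for low, high in bands:
    energy = spectrum[(freqs >= low) & (freqs < high)].sum()
    print(f"{low}-{high} Hz: {energy:.1f}")
# The 100-800 Hz and 1500-3000 Hz bands light up (the 440 Hz and 2000 Hz tones);
# the middle band stays quiet.
```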
The plasticity of the central nervous system is a remarkable thing; it’s seen in so many places, from simple spinal circuits, to the hippocampus, where we store memories. Computer software systems have a superb capacity for mimicking this. For example, the UIMA programme in IBM’s Watson does so in quite an interesting way; it looks at the multiple programmes running within it, and optimises overall function by playing around with individual systems, to affect how they work as a unit.
The final chapter on the law of accelerating returns was a useful reminder that, as humans, we are quite bad at thinking in exponential terms, although most of the world changes exponentially (just look at $GME!).
There was a nice discussion of why intelligence persists in humans towards the end. Kurzweil analyses how the development of intelligence let humans use time more effectively, accomplishing more tasks in less time, which would have been a characteristic favoured by the blind watchmaker of evolution. I actually wrote about this a few years back, focusing more on neural circuitry than selection pressures, but still found this to be quite an interesting exploration of what I would say are similar ideas.
My only real issue with this book was a lack of discussion on how we could use brain motor control as a stepping stone for understanding how to optimise AI motor control.
Overall, still quite a good book for anyone who is interested in AI or neuroscience.
So this was probably the fattest, densest science book I've read this year. As a programmer, I want to understand the theory behind the latest advancements in AI/machine learning, but as a normal human, I'm fascinated by the brain and all these concepts (consciousness, identity, the mechanics of memory, etc.) science hasn't quite figured out.
I'll say this about Ray Kurzweil just from reading this book: this guy has been in the field for more than 30 years and is highly respected, and in his writing he comes across as a guy who likes to toot his own horn. I guess it's just normal if you've been in the field for this long, but he loves to refer to his own accomplishments. When he doesn't do that, though, his line of argumentation is very convincing, and the model he puts forward in the beginning of the book, the hierarchical model of pattern recognition (turtles all the way down), was quite revelatory for me personally. I don't know if it's his theory, but no matter. This model answers quite a lot of questions I've asked myself and stirred me to form deeper, more involved lines of thought, so for that alone this book is already worth it.
I hear that there's criticism of this theory, as you'd expect, but for a layman it's a good starting point to get into the subject matter.
After that, the book moves towards more philosophical stuff. It's thought provoking, sure, but a lot of it wasn't that groundbreaking to me, though still enjoyable. I'll say the best bits are in the first half for sure.
The last chapter is just him refuting some guy who once criticized an essay of Kurzweil's, and it was jarring how petty it felt. He literally went over a bunch of arguments that person made against Kurzweil's essay and refuted them, and put that in print (I presume; I'm reading the ebook version). The last chapter feels utterly unnecessary and just underlined my impression that Kurzweil is a bit of a twat haha
I had always dismissed Kurzweil's theories about "strong" artificial intelligence as wishful thinking, but this book changed my mind. I'm not quite as optimistic about scaling things up to human adult levels, but reading this book gave me newfound respect for his ideas and the evidence and theories he uses to back them up. I had no idea how powerful "hidden Markov models" are for solving problems, and Kurzweil makes a good argument that neocortical pattern recognition (essentially a form of probabilistic prediction making) is computationally close enough to these hidden Markov models that, if you put together 300 million such pattern recognizers and gave the result the entire internet to "grow up" in, then you could create a reasonable approximation of the intelligence worth wanting: categorizing, pattern-recognizing problem solvers with huge memories and lightning speed. Also, when Kurzweil delved into heady philosophical territory he held his own fairly well and exposed many of the fallacious and sadly misinformed criticisms of his views, many of which I once held myself due to a lack of familiarity with what his views actually amount to, which are more modest than his vocal popularizers would have you believe. Granted, this is the only book of his I have read, so I can't pretend to stand behind all his ideas, but the AI stuff in this book seemed solid to me. His view of an "intelligent mind" is really a modified form of Jeff Hawkins's theory of neocortical intelligence as a giant, massively redundant, hierarchical, recursive, and self-learning memory-prediction machine. 4/5 stars.
In How to Create a Mind, Ray Kurzweil argues that the human mind is composed of a hierarchy of pattern recognizers that use a statistical model to learn, store, and retrieve information. He then goes on to argue that this model can be used to develop artificially intelligent machines, and that in fact huge strides have already been made towards this goal in machines such as Watson (the computer that handily defeated Ken Jennings at Jeopardy!).
This may seem dry, but this book has engaged my imagination in ways that few novels have. He finishes with the philosophical and social implications that such advances in technology could have and addresses potential objections to his arguments. I found myself stopping on occasion to reflect on what I have read. Highly recommended for those interested in cognitive psychology or in artificial intelligence.
As my friends well know, a great deal of my neocortex is dedicated to pattern recognition in search of ways to prevent the robot apocalypse. Kurzweil paints a bright picture over a frightening future where humans and computer minds blur and robots overtake the world. When the Kurzweiltron 3000 (controlled by a copy of Ray's consciousness) has been destroyed and I stand on top of a pile of mangled rivets and torn metal, I'll rip the neocortex extender out of my forehead (allowing my amygdala to let me feel feelings again) and shout "Kurzweil, you magnificent bastard, I read your book!"
The thing about fiction is that I accept errors or lack of reference as long as the story is interesting. In nonfiction, I need all of those elements there. So, when you're completely ripping off Plato, maybe you should give him a hat tip (and not just vaguely 100 pages later about an entirely different topic).
A fascinating weave of neuroscience, artificial intelligence, and the philosophy of mind.
Kurzweil presents the pattern recognition theory of mind (PRTM), which holds that the fundamental unit of computation in the brain is a group of ~100 neurons in the neocortex that recognises a pattern. The clever part is that these patterns can exist within arbitrarily complex hierarchies, containing "pointers" to other patterns, and feeding input/output to both sub-patterns and parents.
It's a nice theory and seems to explain some empirical findings in neuroscience, though I would defer to a domain expert to comment more on that. Some of his arguments seem quite suspicious: for example, he often justifies his view of how the brain works by implementing the procedure on a computer - if it works on a computer, that is likely how it works in the brain.
The second half of the book is somewhat disorganised, covering the whole gamut of standard topics within the philosophy of mind: computation, consciousness, free will, identity, etc. Nevertheless, it's a highly accessible introduction containing plenty of references to famous thought experiments and philosophical writings.
The chapter relating to the "Law of Accelerating Returns" is scary. Due to the linear nature of our neocortices, humans consistently misunderstand exponential growth trends. As a result, the future will come much faster than people realise. Kurzweil convincingly argues that within the next twenty years, we will have reached the singularity and that by the end of this century, cybernetic enhancements will be the norm.
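A trivial sketch of the point about linear intuition versus exponential trends (the step counts are chosen only for illustration):

```python
# Toy comparison of linear vs. exponential progress: same number of steps,
# one adds a fixed amount per step, the other doubles per step.
for steps in (10, 20, 30):
    linear = steps * 1            # walking 1 unit per step
    exponential = 2 ** steps      # doubling each step
    print(f"after {steps} steps: linear={linear}, exponential={exponential:,}")
# After 30 steps the exponential path is over a billion, which is why
# thirty exponential steps feel nothing like thirty linear ones.
```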
Well worth a read if you are interested in neuroscience, AI, philosophy, or futurism. This book is an excellent complement to sci-fi books dealing with uploading consciousness (e.g. Ubik, Greg Egan's short stories) - which may not be fiction for much longer!
This book was fascinating and mildly terrifying. Kurzweil's main point is the Pattern Recognition Theory of Mind: the idea that the human brain is nothing more than a series of pattern recognizers and mechanisms for interpreting and acting on those patterns. The suggestion that all of a human's experience (yes, including consciousness) can be reduced to and explained by such a system, and the subsequent implication that this system can be modeled with machines, is depressing, if not a little insulting. But it's tough to argue with.
Kurzweil does a decent job of explaining complex concepts in a way that is somewhat understandable by the layperson, although I'm not convinced that it was necessary to provide explanations - even simplified ones - of every topic introduced (yes, it was cool to feel like I understood vector quantization, but no, it was not necessary for my understanding of the book overall). Additionally, while I appreciated the discussion of consciousness near the end of the book (I was worried he would conveniently avoid the topic), it was overly philosophical and dense - I needed multiple sittings to get through chapter 9.
I'm also glad Kurzweil addressed (successfully, in my opinion) objections to his theories and predictions (although it was a little awkward for him to call out Paul Allen all over the place - I think he could have made his points without naming any names). This book was well-organized and certainly comprehensive, and it has made me welcome the idea of one day becoming a cyborg.
I've never thought about combining a biological examination of the neocortex, the study of language recognition (and speech recognition), the development of Artificial Intelligence, and a dive into some of the trickier questions of consciousness, free will, and identity.
Yet, that is exactly what Kurzweil does in this book.
His arguments regarding the functioning of the human mind, and our attempts to mimic and improve upon those processes are compelling, even if at the time of writing proof was lacking in some areas.
I'm not always a fan of Kurzweil. He spends a lot of time talking about himself and his own experiences in the field of AI. In some cases, I feel it's appropriate: he has done incredible work. Moreover, it does make it clear that in many places in the book, Kurzweil is sharing his opinions and not facts.
But there are places where it becomes unnecessary. A whole chapter is dedicated to objections, not necessarily focused on the topic at hand, but on articles written against his theories. Likewise, I'm aware he has an excellent track record at prediction. I don't need to hear about it again.
I'd very much recommend this book for people looking to get a background in AI, or even human cognition. Particularly recommended for those interested in how the two intersect.
I'd picked this book up assuming that it'd be just about artificial intelligence.
But along with AI, it also taught me the biology of a human brain, the definition of consciousness and the philosophy of life.
Read these lines: "True mind reading, therefore, would necessitate not just detecting the activations of the relevant axons in a person's brain, but examining essentially her entire neocortex with all of its memories to understand these activations." How beautifully and simply we understand how the brain works with just this one line.
And how about this? "At the far end of the story of love, a loved one becomes a major part of our neocortex. After decades of being together, a virtual other exists in the neocortex such that we can anticipate every step of what our lover will say and do. Our neocortical patterns are filled with the thoughts and patterns that reflect who they are. When we lose that person, we literally lose part of ourselves. This is not just a metaphor—all of the vast pattern recognizers that are filled with the patterns reflecting the person we love suddenly change their nature. Although they can be considered a precious way to keep that person alive within ourselves, the vast neocortical patterns of a lost loved one turn suddenly from triggers of delight to triggers of mourning." Tell me if this is not the best definition of love you've ever read.
Ray Kurzweil really pushes the boundary of our understanding of the brain, and goes as far as claiming that the brain is a much simpler structure than we think. He proposes a basic structure comprised of several neurons that accounts for all learning in the brain. He then explains how he thinks we will be able to simulate this structure using computers and eventually create machines who can think and even be deemed conscious. Some of his claims might be wild, but they definitely spark curiosity and open the reader's mind to a world that might look very very different in a few years.
These are my key takeaways from the book.
- Kurzweil emphasizes a nonstandard approach to brain modeling that is close to current neurological research and less related to the latest deep learning model designs. One will not find elaborations on the latest LSTMs or other "hot topics" in the book. Kurzweil wants to take a top-down approach and model the brain as realistically as possible. He goes into depth about how the brain is physically wired, how many neurons are in individual sections, how large these sections are, how many sections there are, and how they are connected. For example, Kurzweil says that the main learning comes not from inter-neuron connections (individual neurons that flexibly attach themselves to each other) but from inter-section connections of neuron clusters with about 100 neurons each. Kurzweil also explicitly states that he follows a multi-disciplinary approach. (pp. 115-116) As I have a computer science background, the book seemed to lack technical solidness at first, but this was because of my stereotypes about what a book about the mind should be like.
- Evolution led to increasing abstraction. While physics was the first relevant field of study, chemistry became relevant when molecules formed from atoms. Then DNA evolved, making biology a useful field of study. Then neurology comes into focus as we travel down the evolutionary history. (p. 2)
- The mammalian brain is capable of hierarchical thinking by using the neocortex. (pp. 2-3)
- Brain architecture: The neocortex is complex from the outside, but it is built with significant redundancy, making its design less complex than expected. This is due to repeating (hierarchical) patterns. (p. 11) "There are about a half million cortical columns in a human neocortex, each occupying a space about two millimeters high and a half millimeter wide and containing about 60,000 neurons (resulting in a total of about 30 billion neurons in the neocortex). A rough estimate is that each pattern recognizer within a cortical column contains about 100 neurons, so there are on the order of 300 million pattern recognizers in total in the neocortex." (p. 38) Kurzweil states that there are about 10^15 connections in the neocortex, but they only take up 25 million bytes of information in the genome (after lossless compression) (p. 90), while most of these 25 million bytes constitute biological information (p. 155). A human brain is estimated (by Kurzweil) to be able to recognize a low 8-digit number of patterns. (p. 40) A neocortical pattern recognition module consists of: dendrites that send signals in and out; an axon (output); an expected pattern (signal from above); a size parameter; a weight; the expected variability of the lower-level pattern; inhibitory signals from above; and inhibitory signals from below. (pp. 42, 66-68) Kurzweil's description of the hierarchical nature of the pattern recognition design is reminiscent of convolutional neural networks. (pp. 43-47) Language is just one higher-level pattern in our brain. Our thoughts are not necessarily in language format and need to be converted into this higher level. (p. 56) Kurzweil citing Donald Hebb: "When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased." (p. 80) "The central assumption in Hebb's theory is that the basic unit of learning in the neocortex is the neuron. The pattern recognition theory of mind that I articulate in this book is based on a different fundamental unit: not the neuron itself, but rather an assembly of neurons, which I estimate to number around a hundred. The wiring and synaptic strength within each unit are relatively stable and determined genetically--that is, the organization within each pattern recognition module is determined by genetic design. Learning takes place in the creation of connections between these units, not within them, and probably in the synaptic strengths of those interunit connections." (p. 80) The brain's physical structure is 2D, not 3D. (p. 82) Kurzweil compares neocortex modules to FPGAs. (p. 83) "The brain does not have sufficient flexibility so that each neocortical pattern recognition module can simply link to any other module... an actual physical connection must be made, composed of an axon connecting to a dendrite." (p. 90) "Signals go up and down the conceptual hierarchy. A signal going up means, 'I've detected a pattern.' A signal going down means, 'I'm expecting a pattern to occur,' and is essentially a prediction. Both upward and downward signals can be either excitatory or inhibitory." (p. 91) The optic nerve carries only ten to twelve output channels, each with only limited information. For example, one channel is responsible for detecting edges. (pp. 94-95) This sparse coding makes sure that the neocortex is not overwhelmed. (pp. 95-96) The brain is slow, but parallel. (p. 195) Today's computers are at least 10 million times faster than a brain, but the overall memory and throughput requirements of the brain are immense. (p. 195)
- Hippocampus: Each side of the brain has one hippocampus. The hippocampus can remember novel events. If the neocortex sees unknown patterns, it will forward them to the hippocampus. The hippocampus then creates pointers to the neocortex. (p. 101) The hippocampus enables the short-term memory of the brain. It plays the memory sequence to the neocortex over and over again and thereby forms long-term memories. (p. 102)
- Movement is controlled by the cerebellum and the neocortex, whereby the neocortex has taken over most functions from the "old" cerebellum. (p. 103)
- "Finding a metaphor is the process of recognizing a pattern despite differences in detail and context--an activity we undertake trivially every moment of our lives." (p. 115)
- Similar to Elon Musk (maybe he got this idea from Kurzweil?), Kurzweil argues that the devices we use today are already extensions of the brain and that we will soon interface our brains with technical devices via direct neural connections. (pp. 116-117)
- The main contribution of the neocortex is that it sped up the process of learning. Complex learning is no longer only possible over many generations, but also within the lifespan of a single entity. (p. 122)
- There are already brain scans that allow scientists to follow individual connections through the brain, and attempts to fully simulate regions of the brain or even the whole brain. (pp. 129-130) For full brain emulation, one needs scanning, translation, and simulation. (p. 130) The Cyc project aims to aggregate all commonsense knowledge. (p. 162)
- Vector quantization is a way to preprocess visual information, arguably similar to how the brain preprocesses optic information. (pp. 138-141)
- Kurzweil strongly advocates the use of hidden Markov models to simulate how different layers (pattern recognizers) interact. (p. 143) "Today, the HHMM together with its mathematical cousins makes up a major portion of the world of AI." (p. 155) Evolutionary genetic algorithms could be used to determine hyperparameters of pattern recognizers. (p. 147) Watson uses UIMA (unstructured information management architecture) to bundle the intelligence of its hundreds of subsystems. The special element of Watson's design is that subsystems can contribute without providing a definite answer; the subsystems can also help to narrow down the answer. The subsystems consist of hidden Markov model variants, rule-based approaches, and other models. (pp. 167-168) "Neuromorphic" chips and similar approaches to simulating the behavior of the brain's parts can be more efficient than emulating the brain only in software. (p. 195)
- LISP was popular in the AI community of the 70s and 80s because of the hierarchical nature it supported. The elements of a LISP list can store other lists and even allow for recursion. (p. 154)
- Rule-based systems can help a system to be learned on the fly (self-labeling). Complex models usually take much data to be accurate, while rule-based systems do not. A combination of statistical and rule-based systems leads to an optimized learning curve. (pp. 164-165)
- One of the key differences I found when comparing Kurzweil's architecture to today's state-of-the-art deep learning is that pattern recognizers (which are roughly comparable to neural network layers, I would say) do not just feed forward, but also backward. They feed backward and confirm that the pattern is "expected". (p. 173)
- Ideas that Kurzweil proposes to add to an artificial brain: a critical thinking module, which would perform a continual background scan to resolve cognitive contradictions. Unlike humans, the AI could then avoid holding conflicting views. Also, he would add a module that identifies open questions in each field and searches for answers in other fields. Another background process would be metaphor search. (pp. 176-177)
- "Simply repeating information is the easiest way to achieve arbitrarily high accuracy rates from low-accuracy channels, but it is not the most efficient approach. Shannon's paper, which established the field of information theory, presented optimal methods of error detection and correction codes that can achieve any target accuracy through any nonrandom channel." (p. 184) The brain uses Shannon's principle too (redundancy). (p. 185) (See the toy sketch after this list.) The left and the right brain are to a large part redundant. A human can often function reasonably well after one side of the brain is removed. (pp. 224-225) Each side of a human brain is probably conscious by itself. (p. 227)
- Predictions: Kurzweil predicts that the first artificial humans will appear in 2029 and become "routine" in the 2030s. (p. 210) If one were to clone a person into a robot so that this robot would be a 100% perfect emulation of the person, we would probably accept this robot as an independent entity. But we would also say that this robot is not exactly this person; it is a clone. Now if we take the same person and gradually replace all of its brain and other organs, while keeping its self-awareness at all times, until it becomes fully artificial, we would probably say that this is still the same person. But this is contradictory if the second robot is exactly the same robot as the first (cloned) robot. The human body also gradually replaces itself (replacement of cells and of molecules within cells). (pp. 242-245) People will replace their organs more and more until their thinking is completely in the cloud. (p. 247) Exponential growth (of computing power) will continue. Our current circuit technology (and the associated paradigm, Moore's law) will be replaced by another breakthrough paradigm that resolves the current limitations of circuits. (p. 255) The final limit to the physics of computation will probably be reached by the end of the century. This limit is defined by molecular computing. We still have a trillion-fold increase to go until we reach this limit. (p. 256) It is not true that only hardware is improving: Professor Martin Grötschel of the Konrad-Zuse-Zentrum für Informationstechnik Berlin observed that for a particular linear programming algorithm, efficiency improved between 1988 and 2003 by a factor of 1,000 due to hardware and by another factor of 43,000 due to better software. (p. 269) The "scientist's pessimism" seems to have been present for a long time in the computer science community; the progress that CPUs made over the last decades was not seen as possible by many people back then. (p. 272) Kurzweil thinks that the question of whether we can break the speed of light will be a key question at the beginning of the twenty-second century, as this will determine how quickly we can expand in the universe. (p. 281)
- Some of the intelligent machines that we will build may not behave humanlike, but may still be "conscious". (p. 213) Entities that are not convincingly conscious, or do not even try to be, could still be conscious. (p. 215)
- Wittgenstein first proposed that the discussion about consciousness is circular, but Wittgenstein's later writings say that this discussion is really important. (pp. 220-221)
- Freedom of will: Vilayanur Subramanian Ramachandran says that "free won't" is a better term. Actions are already prepared unconsciously; sometimes the human can stop the action, but action does not normally form purely consciously. (p. 230) Wolfram says that the universe is deterministic, but we cannot have enough computing power to compute the future. Therefore, it is not really deterministic for us. (p. 239)
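As a small aside on the Shannon redundancy point in the notes above, here is a toy sketch of the simplest redundancy scheme - repeat the bit and take a majority vote. The 10% channel error rate and the number of copies are arbitrary choices for illustration.

```python
# Toy repetition code: repeating a noisy bit and taking a majority vote
# drives the error rate down, at the cost of extra transmissions.
import random

random.seed(0)
p_flip = 0.10                                  # per-transmission error rate

def send(bit):
    """One pass through a noisy channel."""
    return bit ^ (random.random() < p_flip)

def send_redundant(bit, copies=5):
    """Repeat the bit and decode by majority vote."""
    votes = [send(bit) for _ in range(copies)]
    return int(sum(votes) > copies / 2)

trials = 100_000
errors_single = sum(send(1) != 1 for _ in range(trials))
errors_voted = sum(send_redundant(1) != 1 for _ in range(trials))
print(errors_single / trials)                  # roughly 0.10
print(errors_voted / trials)                   # roughly 0.009 with 5 copies: far more reliable
```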