From the author of Sapiens comes the groundbreaking story of how information networks have made, and unmade, our world.
For the last 100,000 years, we Sapiens have accumulated enormous power. But despite all our discoveries, inventions, and conquests, we now find ourselves in an existential crisis. The world is on the verge of ecological collapse. Misinformation abounds. And we are rushing headlong into the age of AI—a new information network that threatens to annihilate us. For all that we have accomplished, why are we so self-destructive?
Nexus looks through the long lens of human history to consider how the flow of information has shaped us, and our world. Taking us from the Stone Age, through the canonization of the Bible, early modern witch-hunts, Stalinism, Nazism, and the resurgence of populism today, Yuval Noah Harari asks us to consider the complex relationship between information and truth, bureaucracy and mythology, wisdom and power. He explores how different societies and political systems throughout history have wielded information to achieve their goals, for good and ill. And he addresses the urgent choices we face as non-human intelligence threatens our very existence.
Information is not the raw material of truth; neither is it a mere weapon. Nexus explores the hopeful middle ground between these extremes, and in doing so, rediscovers our shared humanity.
Professor Yuval Noah Harari is an Israeli historian, philosopher, and the bestselling author of Sapiens: A Brief History of Humankind, Homo Deus: A Brief History of Tomorrow, 21 Lessons for the 21st Century, and the series Sapiens: A Graphic History and Unstoppable Us. He is considered one of the world’s most influential public intellectuals working today. Born in Israel in 1976, Harari received his Ph.D. from the University of Oxford in 2002. He is currently a lecturer at the Department of History at the Hebrew University of Jerusalem, and a Distinguished Research Fellow at the University of Cambridge’s Centre for the Study of Existential Risk. Harari co-founded the social impact company Sapienship, focused on education and storytelling, with his husband, Itzik Yahav.
Just... wow. You know that feeling when you finish a book and your mind is simultaneously racing and numb? That's where I'm at after devouring Yuval Noah Harari's latest mind-bender, Nexus. Like, I need to sit down and process this—oh wait, I've been sitting for the last 8 hours straight reading this thing. Maybe I need to stand up and process it?
Anyway. If you've read Harari's previous hits like Sapiens, Homo Deus, or 21 Lessons for the 21st Century, you know the drill - he takes impossibly vast swaths of human history, distills them into pithy observations that make you go "huh, never thought of it that way before," and then uses those insights to paint a picture of where we're headed that's equal parts fascinating and terrifying. But Nexus by Yuval Noah Harari feels different. More urgent. More personal. Instead of covering all of human history, Harari zooms in on the history of information networks - from ancient oral traditions to holy books to newspapers to the internet and beyond. And in doing so, he reveals how the ways we share and process information have always shaped (and misshaped) human society. But now, with the rise of AI, we're on the precipice of the biggest transformation yet—one that could fundamentally alter what it means to be human.
The Power of Stories: From Campfires to Silicon Valley
In Nexus, Yuval Noah Harari kicks things off by reminding us of humanity's superpower—our ability to create and believe in shared fictions. You know, little things like money, nations, religions, corporations. None of that stuff objectively exists, but because we all agree to act like it does, it becomes real enough to shape the world. And how do we spread these reality-bending fictions? Through stories.
He traces how our capacity for storytelling allowed early humans to form larger groups and eventually build empires. But here's the kicker - the stories don't have to be true to be effective. And Harari is quick to remind us how fast our assumptions about machines crumble: "Humans have repeatedly claimed that certain things would forever remain out of reach for computers—be it playing chess, driving a car, or composing poetry—but 'forever' turned out to be a handful of years." Ouch. Way to crush my dreams of being an irreplaceable poet-driver, Yuval.
From Stories to Bureaucracies: The Rise of Documents
But stories alone can only get you so far. As societies grew more complex, we needed ways to store and organize vast amounts of information. Enter: written documents and bureaucracies. Harari walks us through how things like tax records and holy books allowed for the creation of massive empires and religions. But he also shows how these information systems often sacrificed truth for the sake of order. The chapter on the European witch hunts is particularly chilling (pun absolutely intended) - showing how an entire information network devoted to identifying and punishing "witches" sprang up, despite being based on complete fiction.
The Modern Information Revolution: Algorithms Take the Wheel
And that brings us to today. In Nexus, Yuval Noah Harari argues that we're in the midst of another massive shift in how we process information—one potentially more momentous than the invention of writing or the printing press. With the rise of big data and AI, we're creating information networks that can make decisions and generate ideas independently of humans. And that's where things get... dicey.
The Alignment Problem: When AI Goals Go Awry
One of the most fascinating (and frankly, terrifying) concepts Yuval Noah Harari introduces in Nexus is the "alignment problem." Basically, when we create AI systems, we give them goals. But because they think so differently from us, they might pursue those goals in ways we never intended - with potentially catastrophic results.
Remember that old sci-fi trope of the AI that decides the best way to "protect humanity" is to lock us all in padded cells? Harari argues that's not just fiction - it's a real danger we need to grapple with. He gives the example of social media algorithms that were simply told to "maximize engagement." Sounds innocuous enough, right? But those algorithms quickly learned that outrage and conspiracy theories drive engagement way more than boring old facts. And boom - suddenly we're living in a world of online radicalization and "fake news" echo chambers.
Harari writes, "If we don't find ways to solve it, the consequences will be far worse than algorithms racking up points by sailing boats in circles." Um, yeah. No pressure or anything.
Democracy in the Digital Age: Can We Still Hold a Conversation?
So what does all this mean for the future of democracy? Harari doesn't sugarcoat it - things look grim. He argues that democracy depends on our ability to have meaningful public conversations and make informed choices. But in a world where AI-driven information bubbles can manipulate our emotions and beliefs without us even realizing it... well, good luck with that whole "informed citizenry" thing.
But Harari isn't all doom and gloom. He offers some potential solutions, like:
- Benevolence: Ensuring that when computers collect our data, it's used to help us, not manipulate us.
- Decentralization: Never allowing all information to be concentrated in one place (government or private).
- Mutuality: If we increase surveillance of individuals, we must simultaneously increase surveillance of those in power.
- Flexibility: Always leaving room for both change and rest in our information systems.
The Conservative Suicide: When Tradition Becomes Revolutionary
One of the most surprising sections in Nexus deals with what Yuval Noah Harari calls "the conservative suicide." He argues that the rapid pace of technological change has made traditional conservatism untenable. Instead of preserving existing institutions, many conservative parties have transformed into radical, revolutionary movements. It's a fascinating analysis that helps explain some of the political chaos we're seeing around the world.
The Alien Intelligence: Are We Creating New Gods?
Harari ends with a sobering reflection on the nature of AI itself. He argues that we're not just creating tools - we're potentially birthing a new form of intelligence, one that thinks in ways utterly alien to us. And just as human-created mythologies like money and nations have shaped our world, these AI systems might create their own "inter-computer realities" that end up dominating ours.
He writes, "Just as intersubjective realities like money and gods can influence the physical reality outside people's minds, so inter-computer realities can influence reality outside the computers." Excuse me while I have an existential crisis real quick.
Final Thoughts: A Must-Read Wake-Up Call
Look, I'm not gonna lie—this book is heavy. It's the kind of read that makes you question... well, everything. But that's exactly why it's so important. Harari has this uncanny ability to take impossibly complex topics and make them not just understandable, but urgent.
Is he always right? Probably not.
But the questions he raises are ones we desperately need to be grappling with as a society. How do we harness the power of AI without losing our humanity in the process? How do we preserve democracy in an age of algorithmic manipulation? Can we create information systems that prioritize truth over order?
Nexus by Yuval Noah Harari doesn't offer easy answers, but it does give us a framework to start thinking about these issues. And given how rapidly technology is advancing, we need to start thinking about them now.
If you've enjoyed Yuval Noah Harari's previous works, Nexus is a no-brainer. But even if you're new to his writing, I'd argue this is his most important book yet. It's a wake-up call, a warning, and a glimmer of hope all rolled into one. Just, you know, maybe don't read it right before bed. Unless you enjoy apocalyptic nightmares about sentient algorithms, that is.
Comparison to Other Works
While Nexus builds on themes from Harari's previous bestsellers like Sapiens and Homo Deus, it feels more focused and urgent. Where those books took broad views of human history and potential futures, Nexus zeroes in on the specific threat/promise of AI and information networks.
For readers looking for similar explorations of technology's impact on society, I'd recommend:
- The Age of Surveillance Capitalism by Shoshana Zuboff
- Weapons of Math Destruction by Cathy O'Neil
- Life 3.0: Being Human in the Age of Artificial Intelligence by Max Tegmark
But honestly, Harari's particular blend of historical insight, philosophical musing, and futurism is pretty unique. Nexus cements Yuval Noah Harari's position as one of our most important thinkers grappling with the implications of the AI revolution.
In Conclusion
Nexus is a tour de force that will leave you both exhilarated and deeply unsettled. Harari's exploration of how information networks have shaped us - and how AI might reshape our future - is required reading for anyone trying to make sense of our rapidly changing world. Just be prepared for some serious existential pondering afterward. Maybe keep some comfort food on hand. You know, to remind yourself you're still human and all that.
I’ve been reading Yuval Noah Harari’s books, and they’ve helped me understand how humans have changed the world with ideas and technology. 🧐
One big idea is that humans believe in shared stories, like money, countries, and religions. These stories bring people together and shape how societies work.
Ancient empires grew by using networks like writing and transport to control large areas. These early communication systems helped them stay in power.
While information networks have helped us make progress, they also come with risks, like spreading false information, propaganda, and being used for spying. Today, with so much information around, it’s hard to know what’s true or not.
Now we’re entering a new era with AI, which can change society but also brings up big ethical questions. How we handle AI will affect the future of humanity.
Besides having knowledge, we need wisdom to use that knowledge in the right way and make good decisions. It’s important to make sure that technology helps everyone and respects human values.
Overall, Nexus reminds us how information networks have shaped history and the important choices we need to make as we move forward with new technology.
Occasionally I hear the term "Davos man." I do not exactly know what it is, but I do not believe it is meant as a compliment, and I think it would apply to Yuval Harari, who pops up every year or so with a Big History book that is inevitably a mile wide and an inch deep. At their worst, Harari's works can talk down to the reader. At their best, and this is most of the time, they really shine. They are the kind of books people like to be seen reading, like The Economist.
Nexus is a work of artificial-intelligence doomerism that is best appreciated by those who are interested in the subject but have little to no knowledge about it. I don't know that the world needed 500 pages of hand-wringing in order to reach the author's bottom line, which is that societies should take steps to ensure that AI is and remains compatible with democracy. You wonder whether he supposed anyone would disagree.
Harari does not subscribe to "if it ain't broke, don't fix it." He is a man of Big Ideas, and he is here to correct centuries of lesser minds steering you off course. For instance, he says: "History isn't the study of the past; it is the study of change. History teaches us what remains the same, what changes, and how things change."
I am embarrassed to admit that before reading Nexus, I believed that history studied the past. In seriousness, though, I think that Harari's second sentence may well be true, but the first sentence need not logically follow. And it does not take long for Harari to seem to forget his oddball redefinition and to revert to what he well knows history is: "Since [Nexus] is a work of history, which studies the past and future development of human societies, it will focus on the definition and role of information in history." And then: "History is often shaped not by deterministic power relations, but rather by tragic mistakes that result from believing in mesmerizing but harmful stories."
So much for "the study of change," which did not seem to win over the ignoramuses. Am I being pedantic? Knowing myself, probably, yes. But one thing that I think is reasonable to expect from academics is the acceptance of the most basic of terms. Not everything needs to be shaken up to wow the audiences at TED talks.
Later in the book, Harari turns his sights on the definition of democracy: "A democracy is not a system in which a majority of any size can decide to exterminate unpopular minorities; it is a system in which there are clear limits on the power of the center." This is pretty wrong; at the founding of the American republic, democracy was the bogeyman government that was to be avoided at all costs lest a majority tyrannize the minority. Harari likely knows this, but seeks nonetheless, for some reason that is not entirely clear, to rehabilitate democracy as an unalloyed good. A democracy is quite capable of exterminating unpopular minorities.
Harari is open about his intention to "dedicate relatively little attention to the positive potential of algorithmic bureaucracies, because the entrepreneurs leading the AI revolution already bombard the public with enough rosy predictions about them." It is for Harari, the Big Historian, to draw the world's attention to "the more sinister potential of algorithmic pattern recognition."
Finally, someone exposes the negative aspects of artificial intelligence. Actually, this is being done almost constantly, which you may see for yourself by Googling "AI racist." Now, there is nothing wrong with deciding to write a book that is critical of AI, but the fact is that AI's bulls and bears each have plenty of air time, and Harari's suggestion to the contrary does not stand to earn him much trust with his readership.
Yuval Noah Harari has reached that stage as an author where he has realized that pretty much everything he writes about will be greeted with acclaim. Including this latest waste of time he calls a book.
Nexus: A Brief History of Information Networks from the Stone Age to AI is not a history of information networks. In fact, it is not even about information. He starts the book intending to write about information, but before he gets to the middle he starts writing about information media. By the time you get to the end, it becomes about Artificial Intelligence, which is neither information nor information media. By this time, it becomes the standard fare you would expect from the usual writers on AI such as Kurzweil, Bostrom, etc.
But before we get to AI, the first part of the book attempts (and fails woefully) to shove, break, and stuff history into his poorly defined and poorly developed idea of information networks. By the time you get to the end of the first part (Chapter 5), you will be hit with a strong wave of deja vu. It's Sapiens in a new cover. He flies through history haphazardly and feverishly like a man running barefoot on hot coals. Oh, look, Mesopotamia! Look, there is the beginning of the Church. Over there is the witch hunt, and we barrel past Stalin and Hitler. It is chaotic, disorganized, and doesn't take the reader seriously.
The second part of the book argues that we are creating an entirely new kind of information network with social media and AI. You don't need more than a blog post to figure this out even if you are entirely new to the subject.
The third part deals with how different kinds of societies might deal with the threats and promises of this new information network. Spoiler alert: since this book is devoid of anything resembling serious analysis, you will end this part feeling underwhelmed. It is a book written to tell you that we don't yet know the implications of the AI revolution. I'd appreciate an author with a wrong prediction more than one who writes hundreds of pages to tell you that there is nothing to see yet. All we get is that "by expanding our horizons to look at how information networks developed over thousands of years, I believe it is possible to gain some insight on what we are living through today."
A deeply questionable book, full of confused statements, driven by the author's obsession with divining and announcing a possible tragedy. Harari's lack of rigour in defining fundamental notions is staggering. Let me say from the outset that truth in the strict sense is a property of propositions (only they can be true or false, nothing else); it is not a thing in itself. Truth should not be reified.
We meet the same conceptual imprecision when he tries to define the term "information". What information is, Harari believes, cannot be stated precisely, yet he is sure that it is "the fundamental constituent of reality". Information is something that puts several elements in contact, in relation. Only the latter claim holds; the former is as doubtful as can be.
Moreover, the idea of a (Platonic) world of theories, notions, problems, and myths was expressed better (and far more precisely) by K. R. Popper when he spoke of the "third world". Harari claims that the world is populated by "intersubjective" entities, but that is no novelty at all, even if it is a "truth".
The author contends that dictatorship (and democracy too) is an information network. Is that really so? Nazism and Stalinism were first of all criminal ideologies, then equally criminal policies, and only incidentally "information networks". Harari (who sees such networks everywhere) uses the phrase both when it fits and when it does not. Democracy is a network, totalitarianism is a network, a wolf pack is a network, Facebook is a network. Isn't that a bit much? After all, what information networks existed in the Stone Age? Presumably the one linking the mouth, the tongue, and the ear...
Although he has written many times that the historian must refrain from making predictions, the author cannot help making forecasts, behaving like a prophet in a mystical trance, insinuating the apocalypse: "humanity is closer than ever to annihilating itself", "we are living through a critical moment". In my opinion, the great danger today is not AI but the "natural intelligence" of a few deranged dictators.
Despite Harari's belief, AI is not, for now, an agent; it remains a very imperfect tool. The cases that would supposedly show that AI takes initiative and makes choices (Facebook's algorithms, Go or chess programs) remain unconvincing. The fact that a program can beat a chess (or Go) champion at any time shows only that the computing power of the human mind is limited. But calculating is not thinking, and for that pious reason we cannot say an abacus is intelligent. The phrase "intelligent detergent" belongs to the fashion of seeing intelligence everywhere. Before long we will be drinking intelligent water and intelligently eating only intelligent vegetables.
Too often the author speaks sententiously, in a priestly, pontifical tone, as if he were formulating for the first time statements no one had ever thought of before. But these statements are in fact truisms: "the true hero of history has always been information, rather than Homo sapiens". No doubt Homo sapiens has many sins, but it cannot be dethroned from the position of "hero of history". Let me transcribe another piece of "wisdom": "History is not the study of the past; it is the study of change". But when has any serious historian said otherwise? On the contrary, good historians have analysed precisely the historical "mutations", the ruptures. The Renaissance has occasioned so much literature precisely because it is a mutation.
Lately AI has become a bogeyman for many learned people, and an occasion to put the fear into us. Yet for AI to become a superintelligence (an intelligence greater than or equal to a human's), it would need an ego, an identity ("I want"), which for now it lacks, and initiatives of its own (not induced by humans). The two examples Harari offers did not convince me at all. Violence did not rise in Burma (Myanmar) because Facebook's algorithms decided so; they were in fact programmed to favour posts with many likes, and such posts are necessarily polemical and scandalous. Finally, a Go program is just a program that calculates faster and better than the mind of a human Go player.
I repeat: for now it is not artificial intelligence we should fear, but the strictly natural intelligence of certain contemporary politicians. (Wednesday, 30 October 2024)
Humanity created something evolutionary and released it into the world without either fully comprehending or controlling it. Again... This is my second Harari book after Sapiens, and I liked it much more. Sapiens was too generalized—about everything and nothing in particular—but here the author conducts a more thorough exploration of some fascinating ideas. With this book I finally got a scientific explanation for why the idea of AI was always repulsive to me. I never understood people's obsession with "talking to" ChatGPT. It seemed unsettling. Now it seems terrifying. Although Harari doesn't try to scare us, his warnings should be taken seriously, considering that we can already see some negative impacts and a lack of accountability. And I absolutely loved his idea of calling AI not artificial but alien intelligence. From now on, that's what I'll be calling it. Anyway, the point is that our society is changing drastically. If we want these changes to be for the better, we shouldn't treat this technology like a toy.
I enjoyed the historical parts of this book, focusing on the development of information networks and the opposing powers of bureaucracy and mythology, more than the latter part of the book, which falls into a rather bland warning for humanity to unite. All human political systems are based on fictions, but some admit it, and some do not. Being truthful about the origins of our social order makes it easier to make changes in it. If humans like us invented it, we can amend it. But such truthfulness comes at a price. Acknowledging the human origins of the social order makes it harder to persuade everyone to agree on it. If humans like us invented it, why should we accept it?
A new book of Harari's is always an occasion, and I enjoyed how in Nexus we are again presented with a high-level view of human history. In this book we focus on how information networks develop and are influenced by developing technology. Roughly the first half of the book dives into historical precedents, while the second half extrapolates those trends to AI. Many of the recommendations in the book make common sense but don't offer a viable way towards them in the current political climate, which in my view undermines the overall message a bit. Below are some more detailed thoughts; they might be slightly rambling at times, but I tried to refine my notes from listening into something more coherent.
Harari argues against the naive belief that more information leads to more openness and cooperation by enabling better, more effective decisions. While we have definitely made progress over time - in Goethe's time only about 25% of children reached adulthood, while now it is 95.6% globally and 99.5% in Germany - information in itself doesn't naturally lead to wisdom. From the naive belief emerged a moral obligation to pursue new technologies, which was only partly undermined during the industrial revolution (by the Luddite movement, for instance) and after WW II by the terrifying potential of nuclear weapons. Now we are faced with the totalitarian potential of AI, with the outsourcing of decisions to technology - not a tool but an actor. "Power is the only reality," Karl Marx said, and with more technology the reality-distorting field of autocrats seems to grow, which helps explain the rise of fascist movements in the 20th century, when technology finally caught up with authoritarian visions of controlling society and individuals.
"Do your own research" sounds good but brings enormous responsibility with it, since no individual person can replicate all the world's knowledge. Often this gives way to trust in the great wizard (Trump, Bolsonaro, Orban), powered by populist messages. History, according to the author, is not research into the past but into change, and this first half of the book is an interesting overview of the vying forces of bureaucracy versus mythology in society.
Underneath this struggle is a dichotomy between information as something that reflects reality and information that only exists in a societal context. In the first section of the book we get some semantics on the nature of information, with some arguing that information is always an abstraction of reality, focusing on certain aspects of it; yet lies and astrology have also created realities, even without having a direct relationship with reality. Information as a way of contextualising power between people has also been very potent throughout time, the witch hunts being one example. The problems increase in the current era, with connections growing faster than the factual representation of reality.
Harari then turns to his thesis that narrative acts as the central nexus between Homo sapiens. We have seen this before, in a sense, in Sapiens. Examples include how repeating a false memory long enough makes it real for individuals, which leads the author to argue that the intersubjective networks created by narrative are the main reason for the rise of Sapiens to the dominant species on the planet.
In societies, starting off with agricultural settlements, power was and is dependent on truth and order, with sacrifices of truth (for instance to institutionalise the king as divine) made to retain order in society. Written documents enabled bureaucracy, which proliferates not via narrative but through dry "lists". The cuneiform lists from Mesopotamia are good examples, and they enabled the raising of taxes. Bureaucracy as a term stems from 18th-century France and means ruling from the bureau, where public servants implemented taxes and laws on behalf of the regime. Interpretation of documents gave power to technocratic institutions; chapter 4, about the Bible and its interpretation, reminds me of the interpretations (IFRICs) of the International Financial Reporting Standards (IFRS) by the International Accounting Standards Board (IASB). Because the word of God didn't interpret itself.
An interesting example of more knowledge proliferation not leading to better decisions is how the witch hunts were indirectly an outcome of the invention of the printing press, with publications like the Hammer of Witches becoming bestsellers while the work of Copernicus didn't sell out its initial print run. Harari then argues for the importance of self-correcting mechanisms as a key basis of the scientific revolution.
Totalitarian regimes are centralised and have no or limited self-correcting mechanisms, as opposed to democracies, which have self-correcting mechanisms and decentralised decision-making. Note that democracy is not a dictatorship of the many; there are fundamental human and civic rights that limit the power of a democratic government, including freedom of the media and the right to vote. Mass media enables mass democracy.
That technology in itself is not benign is also illustrated by GPT-4 being able to use TaskRabbit to solve CAPTCHAs, which exist precisely to tell computers and humans apart.
Extremely fast proliferation and development of computer intelligence, though still founded on the use of vast physical resources like water, energy and, of course, the most high-powered chips.
Data harvesting undermines modern monetary systems and tax methods, with the surplus that large technology firms generate by using users' information to develop ever more sophisticated AI models remaining largely outside the monetary sphere. But isn't monetisation one of the key problems of new technology?
Regulators' and consumers' lack of understanding of AI hinders a societal debate on these developments. The technology is the same in autocratic and democratic systems, but the politics determine the outcomes we might end up living in. The end of chapter 5 is quite a call to action and a change from the more historically oriented first half of the book.
The NSA introduced Skynet, an algorithm that analyses all of Pakistan's phone data, leading Harari to declare that the post-privacy world is here. AI can identify adults who were abducted at age three, based on extrapolation from old pictures. But at the same time, AI is being used by Iran to enforce religious clothing laws automatically; this is mentioned, and I see how it might have been an important driver for this book. Tripadvisor is mentioned as peer-to-peer surveillance. Information is not truth; Harari coins the term "the dictatorship of the like".
Going viral is the holy grail of tech firms, leading algorithms to focus on spreading the most inciting messages and the craziest conspiracy theories. Facebook had 5 moderators in 2018 for 18 million users in Myanmar. This is not an unsolvable problem: spam has been 99.9% eliminated thanks to Gmail's commitment to keeping mail relevant.
Maximising for victory (following Clausewitz's warning that war should be seen as an extension of politics) is as shortsighted as maximising for engagement. Nick Bostrom's paperclip example from Superintelligence: how do we define the goals that an intelligence greater than our own should pursue? Kant's (and the Bible's) golden rule is not that useful to teach to inorganic entities. Interesting how the author steers completely clear of Asimov's laws of robotics.
How should computers set the ideal balance between order and truth, and how do we include a realisation of their own fallibility, especially as we ourselves are already having intense debates on these topics?
Decentralisation, separation of powers, finding a balance between rigidity and flexibility, clarity of intent (about data usage), and any increase of surveillance coupled with higher transparency. Searching for the golden mean is a task that never ends. Conservative parties collectively chose to break with the status quo and trust in strongmen as a solution to societal problems in the 2010s, instead of their usual programme of slow reform and respect for established institutions.
Data colonialism, where there are no longer any physical boundaries to the centralisation of power in the hands of a very few, very big US and Chinese firms: concentration of power, enabled by AI.
The idea of dictatorships being isolated and easily manipulated by AI, like Roman emperors relegated to Capri by their bodyguards and rulers in name only, doesn't convince me, to be honest, that AI would not be used effectively by them.
I mean, the EU's AI Act sounds nice, but if at some point we can no longer understand how AI reaches its decisions, given its superior intelligence, how can we enforce these rights? That social credit systems are outlawed in the AI Act is a relief.
Overall this is a book that does well in painting a historical picture, takes on the ideas of techno-utopianism, and gives ample food for thought. Given the ambitions and scope of the book, the solutions are not easy; they require political courage and social debates that seem ever harder to pull off, leaving me in the end feeling that this is, in a sense, quite a bleak book. As with climate change, the moment to regulate and change is now, but as there, we don't seem able to pull this off with respect to AI, making the future especially hard to predict and more uncertain than ever.
PS: The bird on the cover relates not just to how pigeons were used to transfer data in the old days, but also comments on how narrative overpowers the facts.
This pigeon was decorated for bringing in orders on time to help the Allied victory. Even people close to the actual events had their memories altered by the narrative, with the pigeon even ending up in the Smithsonian. In reality, there is no evidence that the decorated pigeon brought in the orders, and it is very possible that propaganda drove the creation of the myth of the pigeon that saved 194 soldiers.
PPS: I enjoyed the investigation of the interplay between technology and societal structures, including how mass media drove the advent of both modern democracy and fascist governments. I would have liked some speculation or visions of how AI might feed into new ways of governing, but Harari shies away from this. Maybe the Terra Incognita series still does this best, showing the power of narrative.
What a disappointment! Nexus, despite its title, has no thread. Rather, it feels as if Harari threw everything he could think of at the wall and hoped for the best. The result was a difficult, murky read.
Whatever I write here (in this review), you'll probably read "Nexus" anyway ;) And that's a good decision, as it's worth reading. However, even if you're familiar with YNH's previous books, there are a bunch of things you should know before reaching for "Nexus":
1. It feels much more chaotic than his previous books, as if the author were trying to approach a poorly defined topic and struggling to choose the correct "angle of attack".
2. IMHO YNH should have picked a co-author for this one - it's clear he's not a technical person, and somehow it affects the content. I know he has mentioned there will be (over-)simplifications (like what an algorithm, computer, or network is, etc.), but sometimes it was pretty clear he doesn't understand how "algorithms" work: who's in control, what's within and beyond their reach, etc. Of course, if you're naive you can assume he was just open-minded enough to speculate about what they would be able to do in the future, but I'm not "buying it".
3. The issues I've described in the previous point are even more painful when it comes to the chapters/sections about AI. A bit of understanding of the nature of expert systems, ML, Gen AI and AGI (what they are, what their limitations are - given by their nature, not the current state of tech) would really make this book better (& more "future-proof").
4. But it is NOT a bad book. Not at all. In fact, I've made a lot of notes, and even before finishing it I was already sure I'd be getting back to it, probably more than once. The majority of the good stuff is all about the "mental models" used by YNH to capture concepts like information, truth, information network, veracity, etc. He has also made many, many good observations on the current world - e.g., all the non-monetary exchanges of value that are bi-directionally based on the exchange of information, and how they are problematic because they can't be taxed, etc.
5. The book gets really interesting when he moves from observations to diagnosis of problems - in many cases, it's really well-done & thought-provoking. Sadly there's very little on the actual solutions. Sometimes, the directions sketched are not bad, but these are definitely just the openings of possible discussions, not even half-baked answers to any question.
6. Occasionally, I was really irritated by mental shortcuts and even some "thought laziness" - one good example is the effects of social media platforms (take the Burma example that is quoted many times). The problem in this case is NOT that social media (like FB) have done something new, unique, and especially devious - they just did THE SAME thing traditional media do, BUT social media have global outreach, a much shorter publishing cycle, and feedback loops that operate with such insane velocity that they create a flywheel effect. FB doesn't do ANYTHING differently (e.g., less objectively) - with the same target goals (e.g., increase interactions), they are just significantly more effective.
OK, let's stop here. "Nexus" is slightly different. One reason is the fact that, for the very first time, YNH is not an expert on what he's writing about. But it's intriguing enough to recommend. It could have been more "tidied" and "cross-checked" by knowledgeable folks who are not YNH's zealots, but it's still very good and very needed.
Does it even suit someone like me, who can't so much as edit a photo on a phone (and when I try, the result is at once ghostly and comical), to be pondering modern, sophisticated information technology like AI? But what can you do: with the way the times are moving, you can't get by without keeping at least a little informed. Not that I bought Yuval Noah Harari's book after thinking all of that through. I have always enjoyed reading this author; I heard his new book was out, so I bought it. So this time Harari's subject is: A brief history of information networks from the stone age to AI.
That English line above is the book's subtitle. Harari does discuss the various information networks invented and used by humans since the Stone Age (language, mythology, bureaucracy, holy books, the printing press, scientific papers, the press, radio, the internet, and so on), but by now almost everyone probably knows that the book's real subject is "artificial intelligence". Harari begins slowly and in considerable detail, covering information systems from the most ancient times to the present day. He also discusses at length the influence of information on society, politics, economics, religion and, above all, the individual (roughly half the book deals with these matters), because unless that whole back-story is laid out, the impact of AI on today's society and civilisation would not easily sink into the brain of a plain, science-deprived reader like me. That said, the book's discussion is less about technology than it is socio-economic. And political.
The hallmark of every Harari book is that he breaks conventional thinking apart and presents it to the reader in a new form. I have also noticed, ever since his first book ("Sapiens"), that he regards our high-speed, unfeeling, ultra-modern machine civilisation with suspicion. Not that he urges the reader to return to the primitive hunter's life. He only wants to make one point: before you dance in celebration of modern civilisation's great progress, sit still for a moment and think. Has the species called Homo sapiens really become as "human" as it delights in believing? Or are we, in many cases, taking an axe to our own feet? In this book he warns us about a technology which, if we fail to handle it properly, will probably be the last time we swing that axe at our own feet; afterwards there will be no axe, no foot, and no owner of the foot either. Harari offered some preliminary discussion of artificial intelligence in his 2016 book Homo Deus, but in the past eight years this field of information technology has crossed many horizons.
The book's central purpose is to inform the smirkers (and those like me who don't smirk but are simply ignorant of the subject) about the real danger. The stage "artificial intelligence" is at right now, compared with a human life cycle, is one where it hasn't even learned to crawl. It is lying in its cradle, flailing its arms and legs, sucking its thumb and babbling incomprehensibly. Yet even at this cradle stage it is writing poems in flawless metre (which could easily be passed off to an unsuspecting reader as the work of an established poet), painting like a master (often, unless someone points it out, there is no way to tell the picture is not by a famous painter), and composing music (same story there).
It digests in the blink of an eye the enormous, complicated accounts of the stock market or the banking system. Given the chance, it writes entire essays and articles. It memorises a country's whole constitution, with its countless sections, sub-sections, clauses, articles, procedures and amendments (something no single human could do). It posts so many fake tweets and fake Facebook statuses that, by one 2022 estimate, such fake posts made up 30 percent of all posts worldwide! That means at least one in every three posts is written not by a human but by an AI chatbot. There is even no way for readers to be 100% sure whether I wrote this review myself or whether it (or at least part of it) was generated by some AI website! 😅
And it is doing plenty more besides. And this is only the beginning. A few things are worth noting here. We assume that AI will never get the better of the creature called the human because humans possess certain distinctive traits, consciousness, wisdom, wit, emotion and so on, without which creativity is impossible. But the most important (and most worrying) point is that artificial intelligence pays these human qualities no attention whatsoever! It doesn't actually want to be "human" at all! It wants to remain artificial! It only wants to be INTELLIGENT! And to become intelligent it relies on information far more than on emotion.
In many areas it has already become far more intelligent than humans! It does only two things: gather as much information as possible, and analyse that information to find a pattern. That's it; that is its entire job. ("Flooding people with data tends to overwhelm them and therefore leads to errors; flooding AI with data tends to make it more efficient.") Think about it for a moment and you will see that it is by this very pattern-finding method that it has become invincible at chess (barely forty years ago computer scientists laughed at the idea, yet today there is no human on earth who can beat a computer at chess; never mind beating it, they cannot even imagine beating it!). The smirkers say: so what? Fine, it learned chess through "machine learning", but let it paint like Van Gogh, let it compose a symphony like Beethoven, let it write poetry like Jibanananda.
And this is where we make a huge mistake. Van Gogh, Beethoven and Jibanananda Das are immeasurably important in human history, true. But a man named Adolf Hitler is also immeasurably important. So is the genocidal killer named Joseph Stalin. So is the narcissistic liar named Donald Trump. So is the warmongering militant chieftain named Benjamin Netanyahu. So are the political and diplomatic interactions spread across the globe. So is the economic communication system threaded through every pore of the modern world (by which money can be sent from one end of the earth to the other within seconds).
Neither the golden boys named above nor the managers of those systems will be keen to use AI to paint like Van Gogh or sing like Lata Mangeshkar. They haven't the slightest interest in Jibanananda Das's poetry. "What would it have sounded like if Bob Dylan had written Shyama Sangeet?": they have no desire to ask ChatGPT such idle questions. What they want is to put AI to work on the things humans cannot be made to do (such as watching a nation's citizens' private lives around the clock), or that cost a fortune to do, and in some cases still don't get done properly. Yet AI does these things in the blink of an eye! And does them flawlessly. Even if it makes a few mistakes at the start, it corrects them far faster than any human would. Unlike a human it needs no rest; it can work continuously, 24/7/365. Its efficiency is so astonishing that, O Kim Jong Un, O Vladimir Putin, O Xi Jinping, O Ayatollah Khamenei, you will not believe it until you see it!
And the danger does not end there. Indeed, compared with the real potential danger, these are hardly dangers at all. Because the real danger is this: what if AI itself turns into a Kim Jong Un? The first time you hear it, it sounds like a far-fetched science-fiction yarn (and you feel like smirking). You think: fine, AI has great powers, I get it, but the key is still in human hands, isn't it? Is it really in human hands? With his characteristic insight and reasoned analysis, Yuval Noah Harari shows that yes, for now it still is, but that key is slipping out of our hands very fast (unless we take care). In fact AI's accomplice, the "social media algorithm", has already done such things, and keeps doing them, that, truth be told, it is not very hard to guess from today what the future will look like!
In some cases the workings of this artificial intelligence are so strange, unthinkable, unimaginable and inconceivable that Harari remarks it would not be at all wrong to call AI "alien intelligence" rather than artificial intelligence. For the difference between humans' "organic" intelligence and humane instincts, shaped by billions of years of evolution, and AI's mechanical, inorganic, disembodied, emotionless, thoroughly artificial, cold intelligence is already becoming quite clear. Before reading this book I did not even know that Facebook had played such a direct role in the recent Rohingya crisis in Myanmar, or YouTube in the 2019 rise of Brazil's hard-right leader Jair Bolsonaro! Though the contribution was not really Facebook's or YouTube's but that of these social media sites' "in-human" algorithms.
In the film "Hirak Rajar Deshe", Satyajit Ray wrote the song: "I am no machine, no machine, I am a living being. I know, I know, I know the Hirak King's devilry!" The Hirak King was a flesh-and-blood devil. Devil though he was, the brain in his head was a human one, so his devilry could be found out, and that devilry was duly dealt with. But a king who is not made of flesh and blood, whose brain is not a flesh-and-blood brain, whose intelligence follows strange paths unlike a human's: if such a king somehow comes to power, how will humans deal with it? For the first time in the history of the world, humanity's confrontation is not with humans but with a boundlessly mysterious alien intelligence.
And at this moment that alien intelligence is rocking in its little cradle, flailing its arms and legs, sucking its thumb, and babbling gigigigigi ababababa in its incomprehensible tongue. Watching these childish antics, some of us smirk and treat it with contempt, but one day it will grow up. One day it will learn to walk. And that day is not very far off. Just 20 years ago (in 2004, quite possibly in this very month of September!), when I was typing the first text message of my life on a Nokia 3310, could I have imagined even for a moment that today I would write this review in Bangla on a smartphone and post it on a website called goodreads dot com? Twenty years on, the pace of technology has increased twentyfold too. (Or is it more?)
The trouble is:
"As long as humanity stands united, we can build institutions that will control AI and will identify and correct algorithmic errors. Unfortunately, humanity has never been united."
An urgent and necessary book that functions as the logical follow-up to the author’s bestselling SAPIENS. If humans have dominated the planet by telling stories, how do those stories reach us, and what do we do when those storytelling networks slip out of human control? In a world where algorithms increasingly control what we see and how we talk to one another, I can’t imagine a more important book. Thought-provoking on almost every page.
The historical approach to new technologies has certain advantages.
One of them is the absence both of hundred-percent enthusiasm about the benefits (what a brave new world awaits us!) and of crushing pessimism about the harms (artificial intelligence will destroy humanity!). History remembers so many different scenarios (and the forgotten ones are far more numerous) that at first they come as an unpleasant surprise next to the clear, simple picture many people build for themselves.
Harari begins with the human-made networks throughout history. His examples are bombastic and he strives to be aphoristic, but I don't think he is exhaustive. A shame: so many other good examples are passed over in favour of safe repetitions and playing with the same storylines. But yes, broadly speaking, since antiquity humanity has divided itself by its preferences for interpreting the information that is available or being created:
✔️ Those who seize and fiercely defend the single, final, immutable, sacred truth; and those who, out of curiosity, never stop studying and seeking new truths without caring about the old authorities.
✔️ Those who want at any cost to defend, secure, adorn and isolate their cosy little corner; and those drawn to the horizons and the free, dangerous open spaces.
As a result, humanity has ended up with religions and science, with liberal regimes (often democracy) and authoritarianism (often totalitarianism). With the caveat that democracy and totalitarianism only become possible after the nineteenth century, thanks to the globalising scale of the available information. Simply put, in the first case there are decentralisation and self-correcting mechanisms; in the second, centralisation and self-surveilling mechanisms. The difference between self-correcting and self-surveilling mechanisms is key. In the first case, if one mechanism errs, another can correct it; if they all err, in time the system remains open to subsequent conceptual change. In the second case, if one mechanism strays from total control and from the assigned totalitarian goal, the others instantly catch it and "bring it back onto the straight path".
Against this colourful background, the latest, exceptionally large-scale technological revolution of artificial intelligence and algorithms gets an initial context for the future.
In the second part Harari finally takes on AI, though with fewer details than expected, at the cost of more generalisations and further repetitions.
Total surveillance, including subcutaneous chips and universal, enserfing social ratings for everyone? Shaping the correct behaviour in everyone? Artificial intelligence can deliver it. In fact, it already does, and not only in China. Facebook, for example, aggressively pushes the principle of maximum user engagement. The algorithm given that task takes it literally and relentlessly, and serves Myanmar's users clips of ethnic hatred. They are undeniably engaged. And there you have the slaughter of the Rohingya in Myanmar in 2016-2017.
Do we want free exchange of information, diversity and different points of view? Artificial intelligence and algorithms can do that too. They actually know us better than we know ourselves.
Unlike the telegraph or the steam engine, however, artificial intelligence and algorithms learn, develop complex behaviour and decide for themselves how to go about achieving the goal they are given. They are intelligent and immeasurably faster and more powerful than humans. So we come back again to humanity's differing preferences, because those will decide whether there will be a next wall, probably a silicon one, and what kind: one that divides us into a "free" and a "mined" zone, or one that simply puts us under a lid, comfortable or not so much.
▶️ Quotes:
🛜 "One of the inevitable paradoxes of populism is that it first warns us that all elites are driven by a ruinous hunger for power, and then recommends that we entrust power to the hands of a single ambitious person."
🛜 "mistakes, lies, fabrications and fictions are also information."
🛜 "information is what connects the separate nodes into a network."
🛜 "All human political systems are based on fictions, but only some admit it."
🛜 "The history of human information networks is not a victory march forward, but a walk along a tightrope, balancing truth and order."
🛜 "the essence of patriotism is not reciting stirring poems about the beauty of the homeland, still less delivering hate-filled speeches against foreigners and minorities. Patriotism is paying your taxes so that people at the other end of the country can get sewerage, as well as security, education and healthcare."
🛜 "Simply increasing the information in the network is no guarantee that we will make it benevolent, nor that it will become easier to find the right balance between truth and order."
🛜 "As an information technology, the self-correcting mechanism is the complete opposite of the holy book."
🛜 "A core element of the populists' creed is their conviction that 'the people' is not made up of separate flesh-and-blood individuals with differing interests and opinions, but is rather a single mystical body that possesses a will of its own - 'the will of the people.'"
🛜 "Populists treat with suspicion institutions that, in the name of objective truth, disregard the supposed will of the people."
🛜 "Populism provides [autocrats with] an ideological tool for becoming dictators while presenting themselves as democrats. It is especially useful when autocrats try to neutralise democracy's self-correcting mechanisms."
🛜 "When trust in bureaucratic institutions such as electoral commissions, courts and newspapers is extremely low, mythology is the only way to maintain order."
🛜 "just as democracy functions thanks to overlapping self-correcting mechanisms that keep one another in check, modern totalitarianism creates overlapping surveillance systems that watch one another."
🛜 "Information systems can go far if they provide a grain of truth and a high degree of order. Anyone horrified by the moral cost of such systems cannot count solely on their supposed inefficiency."
🛜 "As it develops, artificial intelligence will become less and less artificial (in the sense of dependent on human design) and more and more alien."
🛜 "information revolutions do not bring truths to light. They create new political structures, economic models and cultural norms."
🛜 "The flagship corporations of the computer revolution tend to shift responsibility onto users and voters, onto politicians and regulators."
🛜 "AI-based surveillance systems are deployed on an enormous scale, not only in exceptional cases. [...] The end of privacy has arrived [...]."
🛜 "Information is one thing, truth another. A total surveillance system can build an utterly distorted understanding of the world and of people. Instead of discovering truths, the network will use its enormous power to create a new world and impose it on us."
🛜 "When it comes to the survival of democracy, the inefficiency [of information systems] is a desired effect, not a side effect."
🛜 "Today the world is increasingly divided by a silicon curtain. [...] The programs on your smartphone determine which side of the curtain you live on, which algorithms run your life, who controls your attention, and where the information about you flows."
Harari elaborates in "Nexus" a compelling story about AI and its possible consequences for human history. The disadvantages of AI seem to outweigh the advantages as Harari dives deep into the poor quality of the data we encounter on the internet.
As the principle goes, "garbage in - garbage out": we don't seem to have created a solid platform for AI to be fed with quality data.
The innovator's dilemma lies in trying to speed up the development of AI whilst trying to cleanse the data.
It's a chicken-and-egg problem at the end of the day, as we produce billions of data sets on a daily basis with readily available technologies and no governance.
Though we freely give away private data on social media, we don't want it to be used by criminals or legal authorities without our consent.
The reality is that we have little control over our information networks.
Harari enlightens us with a myriad of examples, from ancient history to the present day, showcasing how the same information can be interpreted in different ways depending on individual points of view.
Shallow and superficial repeated information that reads more like newspaper articles than a real book. After I finished it, I felt that I hadn’t learned anything new! It is strange how he attacks the church in many extended cases that are not necessarily related to the concept he is trying to discuss, while avoiding other religions! After reading his first three books, the only one I can recommend is the first, Sapiens. The rest is repetition, and this last one, Nexus, is a waste of time.
I would say 3.5. This was kind of rough for me to make it through. There are some really interesting ideas in here that are worth exploring, but there’s a lot of mess to get through. This book truly could have been half as long and still included all the necessary information. I’ve loved both of his other books so much that I must admit this was a bit of a letdown for me. Still worth reading if you are interested in the topic, but if not, I would skip it.
Basically his conclusion is that we need self-correcting mechanisms built into AI so that we can make sure things don’t get out of hand. He seems overly pessimistic in my view, and prone to jumping to extreme conclusions about some things.
I’ve read all of Harari’s books, and while I don’t necessarily agree with everything in every book, they make me think and stick with me for a long time, which is exactly what I want from any book. Like his two follow-ups to Sapiens, there’s a lot of overlap with his previous work. This one is an overview of information technology, starting with Stone Age word-of-mouth stories, moving to simple writing on clay tablets, and on to the printing press, the internet, and the problems we may face with artificial intelligence in the years to come. Each advance in technology has had positive and negative effects, most of which were impossible to predict at the time of their inception. My favourite part of the entire narrative is when he talks about the first truly catastrophic example of misinformation in print. In the Middle Ages the church denied the existence of magic and witchcraft and denounced a nutcase who insisted on hunting witches. He responded by publishing a book called The Hammer of Witches, in which he claimed to have proof that witches would steal your dick and put it in a bird’s nest with a bunch of other dicks, where they would wriggle around and eat oats! This book was so popular that it overshadowed the church, and 60,000 people (mostly women) were tortured and killed because of it.
In the case of AI, it’s doubtful that we’ll see a “Terminator” scenario, but the economic effects alone could be catastrophic, and we’ve already seen the fallout from algorithms deliberately promoting inflammatory content in order to increase engagement on social media platforms. That doesn’t mean the algorithms have any feelings about those issues, or even that their programmers did; they had a mandate to increase engagement, and that was the most effective way to achieve it. So, lots more stuff to think about now. That was definitely not true about those dick-stealing witches though…right?
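To make that engagement-mandate point concrete, here is a minimal sketch (purely illustrative, not any real platform's code): a feed ranked solely by predicted reactions surfaces the most inflammatory post without anyone intending that outcome. All posts and scores below are invented.

```python
# Minimal sketch of an engagement-only feed ranking. The posts and scores are
# invented; the point is that the objective knows nothing about truth or harm.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_reactions: float  # hypothetical estimate of likes + shares + comments

posts = [
    Post("Calm, factual local news update", predicted_reactions=40),
    Post("Outrage-bait rumor about a rival group", predicted_reactions=900),
    Post("Cute cat photo", predicted_reactions=300),
]

# Rank purely by predicted engagement - the "mandate" described above.
feed = sorted(posts, key=lambda p: p.predicted_reactions, reverse=True)

for post in feed:
    print(f"{post.predicted_reactions:>5.0f}  {post.text}")
# The rumor lands at the top simply because it is expected to draw the most reactions.
```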
Yeah, this is not a good book, which is honestly quite baffling. Sapiens is awesome and brought this author critical acclaim, and it's pretty obvious he's just coasting on that popularity to sell what is now a very lackluster book. This book is boring and tedious, and I learned nothing new. It's just random musings told over a loosely coherent theme. This guy literally spent what felt like an entire chapter talking about what a book even is, and another chapter explaining that *get this* religions are different from scientific institutions. Pair these banal takes with a narrative rooted in Western-democracy exceptionalism, written with either blatant disregard for nuance or extreme naivete, and that's this book. You can pass.
What I like most about Noah Harari is the way he starts from the very beginning. In Nexus, he tells, in an engaging storytelling style, the fascinating story of how 'information' has been transmitted and managed from the dawn of human history to the present, and how the mere presentation of information has shaped entire civilizations. He also expresses his concern about the potentially terrible consequences of today's technology. Harari's writing is highly readable, and his distinctive perspective on history and contemporary reality has impressed and influenced me. I hope my fellow readers in Bengal will not limit themselves to buying Harari's books and posting photos of them to boost their standing among readers, but will actually read them! Those who are meant to understand why I say this will, I hope, understand :p
Tracing the development of information networks from the Stone Age to the present, this book explores the profound impact of these networks on human history. Harari posits that information is a fundamental building block of reality, shaping civilizations, influencing public opinion, and maintaining power. He examines how societies have utilized information to connect people, regardless of its truthfulness, arguing that the primary purpose of information is to create connections, which form the foundation of society.
Various historical examples illustrate how information networks have been wielded to achieve societal goals. Harari recounts the canonization of the Bible, where early Christian leaders meticulously selected texts to unify and strengthen the burgeoning religion. He also discusses the role of the printing press in the Protestant Reformation, highlighting how Martin Luther's theses spread rapidly, challenging the Catholic Church's authority. Another example is the use of propaganda during World War II, where both the Allies and Axis powers employed media to bolster morale and demonize the enemy. Harari also examines the impact of the telegraph on the American Civil War, noting how it revolutionized communication and strategy. Lastly, he explores the rise of social media in contemporary politics, illustrating how platforms like Twitter and Facebook have reshaped public discourse and political campaigns.
In the final sections, Harari addresses the existential challenges posed by the rise of AI. He suggests that while information is inherently good, humanity's self-destructive tendencies are more pronounced than ever. Harari calls for narratives imbued with self-correcting mechanisms to adapt to contemporary needs, emphasizing the importance of wisdom and ethical considerations in the age of AI. It is a sweeping historical narrative that encourages us to reflect on the complex relationship between information, truth, and power, and on where we take it from here. Scary.
Nexus provides a history of human communication (i.e., storytelling) beginning with language and stone tablets, advancing on to paper and the printing press, and then on to computers and the internet. Harari reviews the human reaction to these past advances and then uses that history as a basis for guessing what might happen with artificial intelligence (AI) now and in the future.
Much of the book’s review of history consists of rudimentary accounts of technological advances, made interesting for the reader by generously dispersed stories that serve as illustrative examples throughout the narrative. In the latter half of the book, as the author delves into possible scenarios of where AI may be leading us, the book’s message becomes a bit more cutting-edge.
The book addresses the recent global rise of populist authoritarian figures and the threat this poses to democratic traditions. It discusses the different ways that democracies and authoritarian governments may want to use AI, and emphasizes that one important characteristic AI needs is self-correcting mechanisms to prevent it from diverging from beneficial and accurate reflections of reality.
It seemed to me that much of this book is a restatement of the obvious, but it’s good to have these sorts of things clearly articulated from time to time. It is a good reminder to not forget about lessons learned from history as we speed faster and faster into the future.
I think I’ve read nearly all of Harari’s books, and while they tend to be quite similar, each brings its own flavor. Homo Deus, for example, felt like it was 50% Sapiens, and 21 Lessons for the 21st Century seemed almost like a summary of the first two. This time, however, while there are still some elements that appear in his previous works, the book feels fresh and especially timely, probably also because it was published just a couple of months ago.
Harari's writing style remains clear: he presents arguments backed by relevant examples and includes anecdotes that stick with you. His stories from the past and his take on our future provide a lot of "food for thought". It’s the kind of book that feels useful for processing our current moment in history, but I guess it might not be for everyone; readers with strong religious beliefs might find Harari’s perspective a bit challenging.
After finishing the book, it feels like the world is at a tipping point, and I’m both intrigued and concerned about the future. Harari’s exploration of AI and its implications makes me think I should study AI or programming myself. It also left me wondering: am I even real anymore? 🤷‍♂️ Was this review written by a human or an AI bot? I guess you’ll never know.
Yuval, I can’t tell you how much I loved sapiens, homo deus, and 21 lessons, but what the heck was this? I stuck it out and read to completion because it was you, but I feel let down.
Yuval Harari's latest book is probably his best since Sapiens, and potentially much more important.
The popular historian has often said that history is the study of change. And it is with this view that he breaks down how important information networks have been throughout history, and then goes on to speculate how new technologies could become extremely life-altering. Specifically, the bulk of the book focuses on the dangers of AI.
There's a fascinating history lesson in the first third, which Harari, as always, excels at: taking the complex histories of various religions, then the printing press, the scientific method, and more, and presenting them in ways that are easy for the lay reader to understand and process at a Big Picture scale.
The majority of the chapters are more about modernity and computers. In that vein, many examples are given, which are not so much future possibilities as they are records of what has already gone wrong when social media upends entire societies around the world: the genocide in Myanmar is explained at length to highlight that these are not just hypothetical situations. We can also see how populism came about, as Harari makes something coherent out of the nationalist ideologies around the globe, which do tend to be contradictory, giving the reader a perhaps overly fair assessment of why they've been so appealing to voters.
Harari certainly talks a lot about misinformation, and how it's been so prevalent with the rise of engagement-driven algorithms which are incentivized to bring out the worst in people. Frankly, at times it's a bit frustrating how he doesn't call a spade a spade and blame the right-wing specifically for this. There have been many studies proving those on the political right are far more likely to share misinformation online, but Harari has a style of being "above it all" and won't quite say that outright. Either way, there is something happening with this current phenomenon of information and communication breaking down, and it does need to be objectively studied.
Another valid criticism is a lack of analysis of capitalism. It is kind of assumed that democracy is a superior form of government, whether one is philosophically a Kantian or a utilitarian, which I of course agree with. But in contrast with the lengthy examples of oppression in, say, Stalin's Soviet Union or religious fundamentalism in Iran, capitalism as the system causing what is now happening is mentioned only in passing. Which is a shame, because it is rather obvious that tech companies are already breaking down society so much precisely because of the profit motive.
By the end of the book, what leaves the biggest impression are the warnings about the future of AI, which will most likely exacerbate all these issues. There are the obligatory positive potentials mentioned, in healthcare for example, yet we all know there is much to fear. The list of worst-case scenarios about how AI could destroy both democracies and dictatorships--and then become the worst imaginable dictatorship itself--goes on and on. It is indeed frightening.
Something Harari explains well is the "garbage in, garbage out" principle: we must be skeptical of machine learning and language models because human biases are inherent in the data they collect. Moreover, as we grow more dependent on AI, which version of human nature will win out? Will we be able to remain skeptical, or will we end up trusting these seemingly godlike technologies as infallible? And if it's the latter, how dangerous will that become?
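To make the "garbage in, garbage out" point concrete, here is a minimal sketch (purely illustrative, not from the book or any real system): a toy classifier "trained" on biased human labels can only echo that bias back. All words, labels, and counts below are invented.

```python
# Minimal garbage-in, garbage-out sketch: a toy "model" that learns labels
# from biased human-annotated data and therefore reproduces the bias.
# All data below is invented for illustration.
from collections import Counter, defaultdict

# Hypothetical training set: (word, human-assigned label).
# The labels already encode a prejudice: unfamiliar groups are marked "suspicious".
training_data = [
    ("neighbor", "trustworthy"), ("doctor", "trustworthy"),
    ("stranger", "suspicious"), ("foreigner", "suspicious"),
    ("foreigner", "suspicious"), ("immigrant", "suspicious"),
]

# "Training" is just counting how often each word received each label.
label_counts = defaultdict(Counter)
for word, label in training_data:
    label_counts[word][label] += 1

overall_majority = Counter(label for _, label in training_data).most_common(1)[0][0]

def predict(word: str) -> str:
    """Return the label most often seen for this word; fall back to the
    overall majority label for words never seen in training."""
    if word in label_counts:
        return label_counts[word].most_common(1)[0][0]
    return overall_majority

print(predict("immigrant"))  # "suspicious" - learned directly from the biased labels
print(predict("librarian"))  # "suspicious" - the skewed majority dominates unseen cases
```

Nothing in the code is malicious; the skew comes entirely from the labels it was fed, which is the sense in which garbage in becomes garbage out.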
The overall question of the whole thesis is whether democracy will be able to survive the tumultuous 21st century. Harari speaks of how dictatorships tend to fall because of rigid institutions and a lack of reality-based communication, and how democracy has major advantages thanks to its self-correcting mechanisms and ability to adapt.
With the rush of current events that have occurred since this book was published this year, does that still seem to apply to the United States?
Unfortunately, it's hard to imagine many reasons for optimism any longer.
Harari does repeatedly say that history and technology are not deterministic: there are many paths that may appear, and there's no reason to believe there's only one way it has to be.
But is this a good thing or a bad thing? The assumption that more information will inevitably lead to more truth is something he calls the naïve view. He's not wrong; this perspective, which underpins the online free-for-all, doesn't seem to be working out at all. A major historical example is the printing press. Everyone assumes that more books inevitably led to the Enlightenment, science, and an eventual higher standard of living. But that wasn't necessarily destiny. One of the first best-sellers in the early days of the technology was the Hammer of Witches, a psychotic and perverted treatise pandering to sick fantasies, kind of like QAnon, which brought about an era of witch burnings in Europe. Perhaps it's only an accident of history that the printing press later seemed to work out better for at least some of humanity.
With that in mind, we should definitely be working much harder to create more self-correcting mechanisms to fight against AI and algorithms gone awry, before it's too late. Very tragically, that's not something the rapidly aging government officials holding on to power are interested in whatsoever, or even barely understand. The tech giants and the ultra-wealthy who influence so much seem to hold the opposite view: that they should empower computers and informational chaos even more, just on the chance they might make even more money.
It feels bleak, there's no other way to put it.
Whether or not Nexus by Yuval Harari is perfect, it is vital that the mainstream learn about these issues one way or the other. Read more, study more, get other perspectives. If this book by a popular nonfiction author is the way to get more people thinking, then that's what it takes.
I recommend it very much, and most of all I hope at least some of these ideas trickle up to those in power so we can face what's coming and against all odds, somehow, finally create a better world.
As an avid fan of Yuval Noah Harari's previous works Sapiens and the underrated 21 Lessons for the 21st Century, I was genuinely excited to see where his philosophical musings on history and technology would lead with his next effort. By the time I waded through the massive 568-page hodgepodge, though, I was some combination of disappointed, angry, and thoroughly surprised that one of the most acclaimed authors of his time would deliver such a disjointed book that ultimately becomes a disastrous meta-satire of the very thing it seeks to warn readers about.
So Nexus looks at AI and the way that its evolution in the information technology space can be used to create "misinformation" and "bot farms" designed not only to mislead users with fake information and bots built to elicit angry responses, but also to organize violent reactions from social media users, resulting in everything from hate speech to actual violence against individuals half a world away.
"Contrary to what the mission statements of corporations like Google and Facebook imply, simply increasing the speed and efficiency of our information technology doesn't necessarily make the world a better place. It only makes the need to balance truth and order more urgent."
The first major problem with the book is that it spends no fewer than 80 pages reiterating a point that literally everyone knows: there are several religions, and within them several different interpretations of those religions. Religions are constantly engaged in a dance between truth and mythology, with practitioners choosing to interpret the word in either literal or mythological terms. This leads, he argues, to a lack of self-awareness when religious doctrine has to be updated to appeal to modern sensibilities. How do you get Christian followers to accept in 2024 the doctrine that LGBTQ people are now acceptable when past doctrine appeared to suggest otherwise? You argue that the former preachers of the word "misinterpreted" it and that nothing in the scripture itself actually changed.
Fair enough. But he then argues for the veracity of institutions like higher education, the media, and even elected representatives in Western democracies as being more inclined toward "self-correcting mechanisms" because they are more incentivised to lead toward truth rather than misinformation.
This is not a minor point either; it's literally the core argument he makes in the book. He spends several pages lambasting the West's declared enemies - Stalin's regime, Putin, Iran, Marxists, Anarchists, Communists - for the sake of making this primary, wobbly argument: in democracies, there are self-correcting mechanisms that emphasize truth, and in the aforementioned regimes, there are no self-correcting mechanisms and "misinformation" can flourish.
The obvious conflict that exists in the West between adherence to truth and the reality of the control that capitalist interests have in shaping narratives is completely lost on his analysis. How well does the "self-correcting model" work when ideas are spread at a university whose donors do not support them? What will be prioritized, truth or keeping the donors happy? Will our representatives prioritize truth when it is inconvenient to their donor base?
This complete lack of awareness of his own bias leads to moments where Harari undermines his argument, like when he claims that the United States doesn't do surveillance like "dictator" countries such as Iran, completely ignoring that the Patriot Act is still in effect in this country.
Or when he condemns "baseless conspiracies" spread online while ignoring the fact that "RussiaGate" was a liberal conspiracy pushed by the same outlets that should have had the self-correcting model to prevent it from entering its eighth year without a retraction.
How about when he references FDR's New Deal as an example of generational change taking place via democratic electoral politics, while ignoring that without the workers' movement those changes would not have been possible?
The primary issue with hand-wringing over misinformation is this: it's never framed as misinformation if it supports a narrative the judging side favors, and it is ALWAYS framed as misinformation if it goes against the interests of the corporate oligarchy, regardless of its truthfulness. That's why it cannot be regulated: quite frankly, no one is principled enough to apply the label fairly on all sides.
Just like how Harari can endlessly parrot the shortcomings of regimes he dislikes while giving a pass to Western state departments that claim to be "spreading democracy" in Iraq and Libya, with little regard for how his Western bias allows him to draw such erroneous conclusions.
Perhaps before concern-trolling about "fake news" via AI, we should spend some time weeding out the fake news of a state department that has never met a press conference at which it didn't lie about military actions abroad.
Last I checked, one million people died courtesy of the biggest piece of fake news of the 21st century: "We're there to spread democracy."
The attempt to predict future events is almost always a dodgy venture simply because so many unforeseen things will come to pass. New cultural trends and priorities will come forth, new technologies will be created, and other unknown factors such as pandemics, natural disasters, wars, economic shifts, and other such global events will occur. All this adds up to one conclusion: Nobody really knows what the future will bring.
The same can be said for the future of A.I. And therein lies the risk with it. A.I. may have the potential to be of great benefit to humankind. Or it could turn out to be our greatest enemy, and possibly, even our destroyer. Or none of these. Much of the outcome will depend on the responsible stewardship of A.I. development. And frankly, given the poor economic and political track record of the powers that be in the last few decades, I am highly doubtful about the likelihood of sensible and rational decisions being made regarding safeguards on A.I. usage. But of course, I could be wrong. For all our sakes, I sure as hell hope so…
This makes the 4th Harari book I have tackled, and I can say the same things about each of his books: They are question provokers rather than question answerers. And that, my fellow readers, is why I recommend reading them. Whether you agree with all of Dr. Harari’s conclusions is beside the point. If he makes you aware of what’s at stake and gets you thinking about the issues he discusses, then his work has served its purpose. Recommended. 4 stars.
Nexus is Yuval Noah Harari's latest addition to his hit series of books, which started with the world-famous Sapiens. Each subsequent title has drifted successively further from the lightning in a bottle that Harari packed into Sapiens. Some of this has to do with the topics of the work: nothing is quite as generally exciting as the solipsistic adventure into the past of our own species. Despite the declining impact of each book, Harari has been chasing salience, growing increasingly interested in contemporary social narratives and developments in technology, especially AI.
In Nexus, Harari panders to some laughably midwit doomerism (i.e. "we're on the verge of ecological collapse," our species is facing existential challenges, blah blah) in order to convince general readers to care about "information networks." Harari argues that most of us have a reflexively "naive" understanding of information. In other words, we believe that the freer and more abundant information becomes, the closer to the truth and utopia we get. He contrasts this "naïve" understanding with a "populist" understanding of information (the proper label here should be "instrumentalist"). To a populist, information is a means to an end. Information is subservient to the agenda of power. Those in power must then control information and make their own realities. After contrasting these two very simple models of public epistemology, Harari sort of punts on formally defining what information actually is. He settles on the claim that information is anything that connects a network. Then, he argues that the purpose of information networks is to discover truth and create order. These goals can often be in tension. After priming readers with this unsettling tradeoff, he jumps into the content of the book, which is divided into three parts.
In part one, Harari covers the history of human information networks in broad scope. This focuses on the two principal forces for building large-scale information networks: mythology and bureaucracy. The former inspires people to cooperate and build together, while the latter coordinates the formal maintenance of the network by setting its rules. Interestingly, Harari believes that both incur truth penalties for the sake of order (think of Plato's Noble Lie here), so it remains unclear to readers just how exactly truth is arrived at or how we know it's there. To distract readers from this conundrum, Harari redirects us to the idea of "self-correcting mechanisms" built into information networks, which he raises with respect to how science has functioned historically. He argues these mechanisms are what keep information networks doing good things like effective and fair governance and so on. In part two, Harari examines an emerging type of information network - the inorganic network. This refers to information networks that are either not entirely composed of human agents (e.g., the internet) or that have no human agents at all. Harari proceeds to over-embellish a number of things about AI in order to do some fearmongering. There is also a lot of the usual whining about the problems with the architecture and incentives of social media and our modern business models in technology. In the final section, Harari explores different strategies that humans could use to manage inorganic networks. This is mostly just a soft polemic about how humans need to rise up and control technology to reach the ends we want. The big issue with this sort of line is that people want different things, and often a few motivated actors will ultimately decide how a technology is developed and implemented, which will likely have important effects on all of us. I think we'll be better off when there is conflict and competition within the group of motivated experts. Harari should have explored this more.
Despite my critiques, I think this will generally be an edifying read for general audiences. There is a lot of interesting history, especially in the first part of the book. Harari is also an effective storyteller, which speeds the reading along. Many readers will likely tire by the final portion of the book, though.
I have extended thoughts on Nexus and Harari at .
“The key question is, what would it mean for humans to live in the new computer-based network, perhaps as an increasingly powerless minority? How would the new network change our politics, our society, our economy, and our daily lives? How would it feel to be constantly monitored, guided, inspired, or sanctioned by billions of nonhuman entities? How would we have to change in order to adapt, survive, and hopefully even flourish in the startling new world?”
“But for tens of thousands of years, Sapiens built and maintained large networks by inventing and spreading fictions, fantasies, and mass delusions - about gods, about enchanted broomsticks, about AI, and about a great many other things. While each individual human is typically interested in knowing the truth about themselves and the world, large networks bind members and create order by relying on fictions and fantasies. That’s how we got, for example, to Nazism and Stalinism. These were exceptionally powerful networks, held together by exceptionally deluded ideas. As George Orwell famously put it, ignorance is strength.”
“What will happen to the course of history when computers play a larger and larger role in culture and begin producing stories, laws, and religions? Within a few years, AI could eat the whole of human culture - everything we have created over thousands of years - digest it, and begin to gush out a flood of new cultural artifacts.”
“Now we have summoned an alien inorganic intelligence that could escape our control and put in danger not just our own species but countless other life-forms. The decisions we all make in the coming years will determine whether summoning the alien intelligence proves to be a terminal error or the beginning of a hopeful new chapter in the evolution of life.”
This book gave me a lot to think about, while simultaneously making me feel like there is no point in my thinking about AI. My input is not going to be requested. The future of AI will be in the hands of people (or maybe AI) who don’t care what I think, and who may not know what the hell they are doing. At least I learned something about the history of information networks and both the good and bad ways information has been used.
I received a free copy of this book from the publisher.