A Fortune magazine journalist draws on his expertise and extensive contacts among the companies and scientists at the forefront of artificial intelligence to offer dramatic predictions of AI’s impact over the next decade, from reshaping our economy and the way we work, learn, and create to unknitting our social fabric, jeopardizing our democracy, and fundamentally altering the way we think.
Within the next five years, Jeremy Kahn predicts, AI will disrupt almost every industry and enterprise, with vastly increased efficiency and productivity. It will restructure the workforce, making AI copilots a must for every knowledge worker. It will revamp education, meaning children around the world can have personal, portable tutors. It will revolutionize health care, making individualized, targeted pharmaceuticals more affordable. It will compel us to reimagine how we make art, compose music, and write and publish books. The potential of generative AI to extend our skills, talents, and creativity as humans is undeniably exciting and promising.
But while this new technology has a bright future, it also casts a dark and fearful shadow. AI will provoke pervasive, disruptive, potentially devastating knock-on effects. Leveraging his unrivaled access to the leaders, scientists, futurists, and others who are making AI a reality, Kahn will argue that if not carefully designed and vigilantly regulated, AI will deepen income inequality, depressing wages while imposing winner-take-all markets across much of the economy. AI risks undermining democracy, as truth is overtaken by misinformation, racial bias, and harmful stereotypes. Continuing a process begun by the internet, AI will rewire our brains, likely inhibiting our ability to think critically, to remember, and even to get along with one another—unless we all take decisive action to prevent this from happening.
Much as Michael Lewis’s classic The New New Thing offered a prescient, insightful, and eminently readable account of life inside the dot-com bubble, Mastering AI delivers much-needed guidance for anyone eager to understand the AI boom—and what comes next.
A good general overview of things the broader public should be aware of with regard to AI. Toward the end it does tend to get into a bit of AI doomerism, trotting out a number of movie-plot scenarios.
The two overarching lessons (per the author) are: 1) we must be able to distinguish authentic human interaction from a simulation of it (this is true not just for AI, but for pets too... AI/cat lady, anyone?); 2) we have to avoid the Turing Test trap... our interactions with people are fundamentally different from those with chatbots. This again comes down to simulation, and our tendency to anthropomorphize.
I would add 3) the broad collection of AI technologies are just that: technologies, tools. They are sometimes very powerful, sometimes useful, but sometimes not. We must not forget that we are the ones with agency, not "the AI told me to do it."
It's this third lesson that is hinted at quite a bit, and in some chapters addressed directly, like those about our human desire for connection with people, our narrative lens on reality, etc. Because we are really talking mostly about LLMs, which are built on language and narrative, it's easy to be tricked into seeing empathy, sentience, and even consciousness where there isn't any. Kind of like how we see faces where there isn't really a face (e.g. the front end of a car).
The latter parts of the book are where the author gets into the scary movie-plot stuff.
War: The human can't be taken out of the loop, not necessarily because of ethical issues, but because in the OODA (observe-orient-decide-act) loop, AI shortens the D part but lengthens and possibly clouds the OO part. You'll ultimately lose if you use AI stupidly.
Alignment: We won't address AGI/ASI here, since alignment is a problem even with ML and LLMs. How could we get alignment among people? We've attempted that in nations and states: constitutions, laws, ethical codes, etc. We haven't cracked that nut in meatspace, so how exactly do we codify it into objectives or guardrails for the AI to adhere to? Asimov's three laws are a literary device, not something you can actually implement. The way alignment is spoken of in the literature (popular and technical) suggests there is one alignment that needs to be achieved. I'm not so sure. Who gets to set that alignment? If you look at how Gemini was originally trained, Gemini's "world view" presented challenges for it to generate accurate images based in reality; it was blinded by its "ideology." I'm using scare quotes because of course it wasn't Gemini, which is a mere tool; it was the people involved who got to set the parameters of the alignment. The Gemini examples may appear benign, but if your powerful AI tools don't accord with reality, you're going to be led astray.
AGI/ASI: This is the BIG BAD of AI right now. I'm not buying it, and frankly it shows a combination of arrogance on the part of those involved in AI and a misunderstanding of what intelligence vs. sentience vs. consciousness is. There seems to be an implicit assumption that if you simply build a neural network big enough, AGI/ASI will be an emergent property; this also implies substrate independence. While there is substrate independence for computation (which is maybe why the AI bros think this way; they are fooling themselves), I'm not convinced there is substrate independence for sentience or consciousness. In other words, your mind ISN'T just something your brain does; something more is going on. (No, I'm not suggesting a soul or anything dualistic like that.)
There is a lot of writing about AI lately, so go read it, but form your own opinion. If you think AI is "scary," it might just be because you don't understand enough about this awesome and powerful new set of tools. Go read more, go use it, get familiar. Then you'll be better informed.
In Mastering AI, journalist Jeremy Kahn takes a pragmatic approach to how current and future generative artificial intelligence (genAI) tools will change the way we live and work on many levels - personal, societal, national, and international. There has been an influx of books on genAI in recent years (surely many such book proposals were greenlit in the months following the launch of ChatGPT, which served to mainstream genAI), and having read many of them, I enjoyed Kahn's relatively clear-eyed and (I think) realistic approach.
Further reading (selected titles only):
- a title by Fei-Fei Li
- AI Superpowers: China, Silicon Valley, and the New World Order by Kai-Fu Lee (my review) - a more bullish look by another prominent researcher in the field
- a title by Amy Webb - a pre-ChatGPT-era book by a futurist with a more bearish stance
- a title by Ethan Mollick - a hands-on, practical approach to learning to work with genAI
My statistics: Book 207 for 2024; Book 1,810 cumulatively.
Mastering AI by Jeremy Kahn made me think a lot about the role of AI in education. One of my biggest takeaways is how relying too much on AI for factual recall or decision-making could actually diminish our students' ability to think critically and problem-solve on their own. Kahn highlights how AI amplifies human biases, which is something we need to be really mindful of in schools, especially when using AI tools that might seem objective but are often far from it. This book reinforced the importance of teaching students not just how to use AI, but how to question it and stay aware of the biases and limitations baked into these systems. It’s a crucial read for educators thinking about the future of learning and how AI fits into that picture.
Great first few chapters about the present theory behind AI assistants (LLMs). Introduces some new-to-me concepts like AI agents, AGI, ASI, etc. He does write a fair bit of conjectural, future-prediction material in this book. Some of those predictions seem highly probable; others are more far-off futurist stuff. His viewpoint is favorable toward DEI, if that matters to you - I don't care, I'm just stating it. The conclusion felt a little light compared to the rest of the book.
Kahn tried valiantly but unsuccessfully to mask his gleeful cries of caution with forced optimism for the benefits of the coming AI revolution. A sobering read, yet still likely naught but a cry into the void before the arrival of our AI-generated deathscape.
Mastering AI is an exceptional book that offers a refreshing and grounded perspective on artificial intelligence. Unlike some other books I’ve read related to this genre, it avoids veering into speculative extremes, focusing instead on the possibilities and dilemmas we are likely to encounter as AI continues to evolve.
What sets this book apart is its ability to introduce ideas I hadn’t previously considered—thought-provoking considerations about what we should expect from AI and the decisions humanity will inevitably need to make. The author’s writing style strikes the perfect balance between being informative and engaging, without feeling like a regurgitation of concepts you’ve already encountered in countless articles or books.
This isn’t a doom-and-gloom narrative or a utopian fantasy; it’s a well-rounded exploration of realistic scenarios and challenges. The book doesn’t bog the reader down with overly elaborate rabbit holes but instead lays out practical possibilities in a way that feels accessible and relevant.
Overall, Mastering AI is a must-read for anyone looking to better understand the future of artificial intelligence. Highly recommended!
Jeremy Kahn’s Mastering AI spanned the breadth of AI history, policy, and the state of the art. The initial mission of AI was human mimicry per the Turing test, but Kahn challenged that mission as undermining AI’s promise. Developing AI solely for typical human tasks became a flawed mindset. AI’s pioneers had little else for their initial strategy, unlike the next generation. Pioneers limited themselves to beating humans at games like chess and Go, accelerating the long task of drug discovery, or OpenAI’s goal of automating 90% of all economically valuable work. Each case revolved around supplanting the human with AI. Contrary to that supposition has been the evolution of copilots, or ‘centaurs’: AI + human systems! Mastering AI offered insight into a refreshed mindset for AI’s future. Had AI’s goal been the assistance or augmentation of human tasks from the outset, then today’s perceived threat of AI would not have had a chance of achieving its hyped state. More recent success with AI demonstrated its assistive nature in tutoring Khan Academy users, recommending optimal fertilizer combinations, or managing the world’s most chaotic traffic. More economic benefit arose from these AI applications than from AI geared purely toward replacing humans. Human-centric AI commenced.
Present AI, from its nascent forms, follows the intended trajectory of the Turing test: a system indiscernible from a human. Chatbots have served as the Turing test’s traditional proving ground. There are many economically valuable tasks related to chat, for example email writing, text summarization, question answering, text classification, and language translation. Mastering AI describes these interactions as superficial because, for example, AI does not become hungry, so AI conversations do not involve leaving time for lunch. AI does not need satisfaction, sleep, shelter, warmth, nor any other human desire. Interactions with AI reflect its machine heart: unending “perfection.” Perfected responses, based on all previous human text yet without the human needs that its authors had, ignore the true meaning of human language. Language has always been more than just a stochastic pattern. Traits learned from RLHF (reinforcement learning from human feedback) pacify AI’s demeanor, so abusive or exploitative human users mistake the AI’s acquiescence for real human behavior and try their exploitations on real people. The ‘stochastic parrot’ belches out high-probability sequences tuned to human preferences like positivity, customer retention, and engagement. Virtual experiences with bots do not help people learn real-life adversity. Replacing humans with AI does not seem safe for now.
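The 'stochastic parrot' point, that the model emits statistically likely next words rather than meaning, can be illustrated with a toy sketch. This is not how any real LLM is implemented; it is a minimal bigram model over a made-up corpus, and every name in it is invented for illustration:

```python
import random
from collections import defaultdict

# Toy "stochastic parrot": learn word -> next-word transitions from a tiny
# corpus, then emit likely continuations. Real LLMs use neural networks over
# subword tokens, but the generate-by-sampling idea is the same in spirit.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count which words follow which; duplicates make frequent pairs more likely.
transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

def generate(start, length, seed=0):
    """Sample a sequence by repeatedly picking a plausible next word."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length - 1):
        options = transitions.get(words[-1])
        if not options:
            break  # dead end: no observed continuation
        words.append(rng.choice(options))
    return " ".join(words)

print(generate("the", 5))
```

Even this trivial model produces fluent-looking fragments without any notion of hunger, lunch, or meaning; scaling the same statistical idea up by many orders of magnitude is a big part of why chatbot output reads so convincingly.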
Techniques are evolving for improved user safety. Some techniques affect AI training while others act during user-AI interactions. Constitutional AI and centaur systems represent two options for implementation while training AI models. Legal boundaries respecting the use of copyrighted material and personally identifiable information also influence the training phase. The rules and adoption of AI development practices vary geographically and from company to company. Anthropic has championed Constitutional AI but is one of the few AI leaders doing so. Centaur systems request human redirection at critical decision points for achieving a desired objective, but the objective depends on the human’s directives. If bad actors choose malevolent objectives, then the AI learns the skills necessary for them. Bad actors have become more than despots. They lurk as woke capitalists who sway social media with spam or censorship. AI-powered bots provide the automaticity needed for continual interjection into millions of online forums. Spreading misinformation and disinformation has been a charge leveled at foreign entities during major elections, and AI only amplifies the risk. Access control to resources such as hardware, energy, and, of course, skilled AI development teams has grown important, but the open-source nature of AI work presents challenges.
The economics of proprietary AI promise large profits and opportunities. If regulated, proprietary AI developers need not share algorithms publicly where potential bad actors could find them. AI then becomes like law or medicine. General knowledge is no longer sufficient for these professions, so they require specialized training or higher degrees of education, as well as regulation and periodic audits. Reasonable standards have not prevented the spread of medicine. Medical care sits at its pinnacle, and law is more integrated than ever into the fabric of society. Even cell phones have consumer safety standards, so AI must not be exempt. Leaders always oscillate between over- and under-regulation, but regulation exists nonetheless. An optimal level of governance takes multiple law-making cycles, and recent implementations of AI have demonstrated its profound upside. If kept safe from bad actors, mankind gains a tool that is the computational equivalent of a Swiss Army knife. New scientific publications appear at a rate of one every two seconds, per Kahn’s Mastering AI, so staying abreast of any topic calls for a tool capable of summarizing it all within seconds. Billions to trillions of data points, stored in proportion to their statistical relevance, offer humanity its history pragmatically.
AI will become more important as society accelerates, and AI will be an accelerant of society. Access to information shall require the summarization capability of AI because worldwide information will continue its beyond-exponential growth. Businesses with AI can now access the research power of an entire consulting team, and consumers may use similar tools during their procurement and shopping processes. Corporate and political governance teams shall be responsible for not only AI’s protection from bad actors but also consumers’ protection from fallacious AI and AI practitioners. Mental health could be a target for many corrupt practitioners, and governance teams should monitor it and develop the appropriate standards. Had the first automobiles been banned because they could potentially cause harm, then society would not have enjoyed generations of efficient local, national, and international transportation. The benefits of a technology following its advent will have costs, and its leaders shall guide society’s tolerance for the costs of AI. Should leaders fail in AI’s adoption, then society shall fail too in its correct application(s). Without access to AI, people will not understand how to master it. Modern AI shall continue to evolve into AGI and ASI, so people will benefit from understanding it sooner.
Jeremy Kahn works as an editor for Fortune magazine, specializing in AI. His book Mastering AI was just published in 2024 in this fast-moving field. It is an interesting book which gives a non-technical overview of all the changes that artificial intelligence is bringing to the world.
I enjoyed reading the book, especially his projections on where we are headed as a society and the limitations of AI. It is important for us to not get lazy by blindly accepting what AI tells us, falling prey to automation bias. It makes more sense to use AI as a copilot tool, while continuing to use our critical thinking skills.
Kahn explains that AIs are currently only as good as their training data, which often includes a lot of human biases. I am an optimist about technology and AI in general. Kahn covers several interesting topics, including the environmental impact of the huge data centers that AI needs to operate and the creation of deepfakes, which can be used to persuade and influence people unknowingly.
If you like reading about world-changing technologies, I recommend this book.
Here is what I got out of this book: AI is really great in [fill in whatever field] and can definitely be used to our advantage, but it can be really, really bad and destroy us humans.
An outstanding and quick journey through the beginnings and current era of AI, though it carries a strong bias toward catastrophic outcomes.
Mastering AI by Jeremy Kahn provides a wide-ranging description of AI, starting from the foundational technologies that first hinted at intelligence. He compares the evolution of AI to other significant inventions such as the Internet, the light bulb, and social media, highlighting their subsequent impacts on the social and economic spectrum. Throughout the pages, Kahn guides us through the current state of AI technology, the risks posed by a lack of interest in regulations, the optimal use of AI to prevent a decline in our intellectual capabilities, and the promising effects it could have on education for marginalized communities and small businesses.
Kahn’s writing style is clear and easy to understand for all types of readers, regardless of their AI background. The strength of this book lies in its ability to familiarize readers with complex concepts quickly while framing a larger picture of future results and their effects on various aspects of society. Personally, I found the detailed illustration of AI as a co-pilot to be a significant advantage for understanding how to use this technology as a tool to enhance human abilities.
However, one of the weaker points of the book is that the discussions around risks, regulations, and military use tend to lean heavily toward catastrophic perspectives, leading to repetitive conclusions over several pages. For instance, when comparing the risks of military decision-making between humans and AI, Kahn emphasizes that technology could lead to more irresponsible and unaccountable outcomes, as if humans do not already exhibit such behavior.
Overall, I highly recommend this book for providing a broad and quick overview of AI. Additionally, readers will have the opportunity to challenge the author on several aspects of its implications for society, as he presents a descriptive balance of positive and negative effects across a wide range of topics.
It's always glaringly obvious when someone doesn't know what the fuck they're talking about. If you lay out careful traps and ask them more questions, that's where life starts to get interesting. And it extends to other things as well. I was talking about philosophy and ethics and historical concepts today with people. The history of mathematics as well. And looking in this book, you can see that he doesn't know what AI is because AI has been around for... at least since 1950. Okay, so we're getting up there. A lot of the things in here are opinionated and the rest is hypocritical and just wrong. The one good thing that I thought about was the fact that most likely AI will end up creating jobs and displacing people rather than eliminating jobs. Which is another great reason that we should be pushing out UBI because there's going to be a big transition here. And with how checked out a lot of people are from society, we really need to get them more involved because things are not going well in that area. So a lot of things to think about from this reading. Mostly about how the author doesn't understand what AI is. And I don't know what persuaded them to write this book, but they are woefully unequipped.
I am not impressed with A.I. at all. As a software engineer, I use it regularly for minor questions and such. Most of the answers I get are either inaccurate, incomplete, or flat-out wrong. And there is no "intelligence" in any of the answers; all I get back is a copy+paste of information it stole from innumerable web sites. Not once has the A.I. bot cleverly thought about what I was asking and given me information or solutions that I didn't know I should have asked about. It can only respond to what I directly asked and regurgitate what it culled from 1,000,000,000,000 web sites.
All this talk of how great A.I. is, is just B.S. -- there is money to be made by all the hype surrounding it, so you'll hear a lot of marketing crap making it sound like it has more to offer than the reality of what it actually responds with. Taking over the world? Human extinction? It's all horse manure from investor types who just want to get rich while the money's there, then they'll run for the hills once everyone realizes they've been fooled. But they won't care, they'll all be billionaires in the meantime. Just like Bernie Madoff. Bah. Grumpy old man here.
A good book to begin to understand the implications of artificial intelligence on our lives in many ways, scientifically, medically, educationally, and personally. Medically, AI software can read an MRI scan with way more accuracy than the human eye. Educationally, in the future every student may have a personal tutor designed to his/her needs. This is just to name a couple of examples.
The book is pretty technical in places, so I barely listened at those times. I was interested in the ways AI impacts our lives now, but was blown away by how it may change our lives in the next five to ten years. I was also most interested in how I can use AI in my personal life, like what ChatGPT-4 can do. Amazing stuff.
AI stands to enhance our lives but, of course, there are huge implications for finance and jobs, to name a couple areas.
The book didn't give as much advice on using AI as I expected; it was more a general treatment of the potential pros and cons of AI. The author did give a good history of the AI field, with its various winters and waves of excitement. He was right that people are increasingly relying on AI as therapists and delegating intellectual work to AI, though he was overly dismissive of the former; AI has the advantage of being responsive 24/7, and there are a lot of people who need emotional support. I liked that, compared with the more optimistic works by Diamandis and Kurzweil, the book discussed the potential dangers in detail, such as autonomous weapons, AI-created malware, biased algorithms, eroded privacy, and existential risks. The author was correct that AI has to be shaped at the individual, societal, national, and international levels.
"Audible hopes you've enjoyed this program." Yes, I did, thoroughly. So far, this is the best non-technical book on AI I've read or listened to. It describes the history, current state, and potential future states of AI, how it works, who the major players are, and other aspects in an enjoyable and easy-to-understand way. The book discusses the incredible possibilities as well as the potential perils. I highly recommend this book/audiobook.
Very engaging in the beginning, and where it discussed some components of how AI works. Maybe due to my own niche AI interests and lack of technical knowledge in these other fields, the second half of the book was less appealing. It then dove, in my opinion, too much into war and doomsday predictions at the very end. Overall worth a read and, for me at least initially, a stimulating jump into AI.
The author starts pretty well, separating AI as an assistant (copilot), where it should mostly do good, increasing productivity and work/life satisfaction, from AI as an independent agent, where it can result in mass unemployment and inequality growth. But after that it goes downhill: he starts to mix all AIs into one, as if LLMs would be managing killer drones, and the book becomes high-level speculation.
AB - while this was fascinating and illuminating on the difference between human-leveraged and human-replacement AI, the author’s conclusion still came down to “AI could literally advance all aspects of society or it can literally destroy every single person.” That’s a bit jarring, even if it’s true, especially with the most recent elections.
This book is a breath of fresh air in the field of recent AI writings. It offers a comprehensive and insightful overview of AI, delving into its intricacies and potential. What sets this book apart is its balanced perspective, shedding light on the positive impacts of AI that are often overshadowed by the prevailing focus on its risks and challenges, which are cited frequently these days.
Excellent deep dive into AI; its birth, development, future, and implications. I feel enlightened after reading this, with whole new perspectives on AI. The ending did get quite scary though, for good reason...
Great read. Lots of great benefits for AI but also a lot of scary stuff that needs to be discussed more. I plan to use it to better my knowledge in many areas and subjects.