ISBN: 1541774752 · ISBN13: 9781541774759 · ASIN: 1541774752
Avg rating: 3.92 (145 ratings) · Published: unknown · This edition: Oct 18, 2022
My rating: it was amazing
“The Equality Machine” isn’t the kind of book that screams for attention. It doesn’t indulge in scare tactics or paint artificial intelligence as either a savior or a destroyer. Instead, it offers something harder to find: clarity. The author takes a calm, evidence-based approach to AI’s role in addressing, and sometimes reinforcing, human biases. This restraint might not win clicks or outrage-fueled debates, but it does something more important: it equips readers with tools to think critically about technology’s potential and pitfalls.

One of the book’s standout points is its exploration of how AI can uncover biases that humans often miss. For instance, hiring algorithms have been scrutinized for perpetuating discrimination, but the author flips this narrative. They show how these systems can also expose patterns of bias that recruiters might overlook. A human evaluator might unconsciously favor candidates from certain schools or backgrounds without even realizing it. AI, on the other hand, can flag these tendencies, offering a chance to correct them. Examples like this, from loan approvals skewed by geography to medical diagnoses influenced by race, recur throughout the book.

Another strength is the argument that fixing biases in AI is often simpler than tackling them in human systems. Human decision-makers rarely like being questioned on their own judgments. An algorithm, however, can be audited and adjusted. If a model shows racial disparities in loan approvals, you can retrain it. You can’t easily “reprogram” a person entrenched in societal prejudices. The author makes this point not to glorify AI but to highlight its unique advantage: it can be debugged. The process isn’t foolproof or automatic, but it is far easier to improve where there is will.

The book also dives into the ethical gray areas of bias correction. Not all biases are inherently bad. A basketball coach prioritizing height or speed isn’t discriminatory, which is easy to understand.
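The auditability point can be made concrete with a toy sketch (mine, not the book's): given a model's yes/no decisions and a group attribute, a demographic-parity gap is a single line of arithmetic. The simulated approval rates below are purely illustrative.

```python
import numpy as np

# Hypothetical audit data: approval decisions for 1,000 applicants.
rng = np.random.default_rng(1)
group = rng.integers(0, 2, 1000)                # two demographic groups, 0 and 1
# Simulate a biased model: group 1 is approved less often than group 0.
approved = rng.random(1000) < np.where(group == 0, 0.60, 0.45)

rate0 = approved[group == 0].mean()             # approval rate, group 0
rate1 = approved[group == 1].mean()             # approval rate, group 1
disparity = abs(rate0 - rate1)                  # demographic-parity gap
print(rate0, rate1, disparity)
```

A human panel cannot be measured this way, but a model can, and once the gap is measured, it can be retrained against.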
A harder case to accept might be a factory foreman building a cohesive team by emphasizing shared cultural experiences. AI makes such choices explicit. Synthetic data plays a key role here, although it is not discussed in the book, which was written at a time when its capabilities were less apparent. By creating balanced datasets free from historical inequities, AI lets users decide which biases to keep, reduce, or discard. The emphasis is on intentionality: you can’t fix what you don’t acknowledge, and AI shines a light on these trade-offs.

What’s refreshing is how the book counters the reflex to blame technology for problems rooted in society. Critics often attack AI for failing to meet idealistic standards, but the author counters that fairness is inherently messy. Should an algorithm prioritize diversity over qualifications? Should it reflect societal norms or challenge them? These are moral, social, political, and even context-dependent economic questions more than technological ones. Blaming AI often unites people with diverse and irreconcilable views on thorny subjects. Technology may make situations worse from the viewpoint of one side decrying the evils of the other, but the same technology could help both sides see the problems more clearly, if they so desire, leading to more constructive resolutions.

The book’s pragmatic approach will bore many readers. Those seeking dramatic warnings about AI’s dangers may find the tone too measured. But for those tired of sensationalism, it is a breath of fresh air. The author doesn’t shy away from complexity. She is happy to reach practical conclusions with less-than-perfect, non-utopian scenarios. Ultimately, the book succeeds because it refuses to oversimplify in order to capture the theoretical high ground. AI is not merely the proverbial weapon that one can blame or not blame when abused; it is a powerful tool that equality-seeking people can use to bring about effective changes that would be impossible otherwise. For anyone interested in AI’s role in shaping society, this is a must-read, not because it is flashy or alarmist, but because it is logical and honest.
Read count: 1 · Started: Apr 14, 2025 · Read: Apr 18, 2025 · Added: Apr 17, 2025 · Format: Hardcover

---
ISBN: 0593185749 · ISBN13: 9780593185742 · ASIN: 0593185749
Avg rating: 4.36 (495 ratings) · Published: unknown · This edition: Jul 16, 2024
My rating: it was amazing
"Why Machines Learn" offers a unique perspective on the evolution of machine learning, making it a valuable resource for a specific audience. While the book earned a 5-star rating from this reviewer, it's important to understand its strengths and weaknesses to determine whether it is the right fit for a given reader.

The book's core strength lies in its comprehensive exploration of the mathematical foundations of machine learning, particularly in the pre-deep-learning era. For those unfamiliar with the field, machine learning (ML) is a subfield of artificial intelligence that enables computers to parse data rigorously and far more elaborately than regressions and the other simple linear statistical methods that are grossly inadequate in practical life. This book fearlessly dives into the mathematical underpinnings of various ML algorithms, providing a reasonably accessible journey through the concepts for the well-initiated. It diligently and adequately covers a significant period, from the late 1960s until the arrival of convolutional neural networks (CNNs), working through the key phases as defined by the arrival of major concepts like Principal Component Analysis, Support Vector Machines, backpropagation, gradient descent, and the like.

However, the book's mathematical depth can be a double-edged sword. Readers far removed from the field might find the heavy emphasis on mathematical formalism overwhelming. Conversely, experts deeply entrenched in modern machine learning may find much of the content too basic. The book's brevity also means it dedicates limited space to newer, dominant methods. The book hits a sweet spot for a specific group, like this reviewer: those with a solid understanding of pre-transformer ML concepts will find it a refreshing and insightful read. It allows for a rapid yet thorough review of familiar territory, offering new perspectives by presenting these concepts in close proximity.

In some ways, and this is in this reviewer's own language, machine learning is about optimizing and squeezing "degrees of freedom" (DoF) to analyze data without overusing them. In ML terms, it means combining concepts like Principal Component Analysis (PCA), which reduces DoF, with techniques like kernel methods, which increase DoF, and Support Vector Machines (SVMs), which aim to find optimal solutions within this complex space, to name some of the concepts from the book.

A key challenge in traditional machine learning was balancing the trade-off between overfitting and underfitting. Overfitting, where a model memorizes data instead of learning general patterns, led to poor generalization, while underfitting, where a model was too simple to capture meaningful trends, reduced its predictive power. The appropriate term for underfitting in this context is underparameterization, which describes a model that lacks sufficient flexibility to capture complex patterns in the data. Classical models addressed this challenge through regularization techniques, dimensionality reduction, and careful selection of model complexity. These strategies ensured that models remained interpretable and grounded in statistical rigor, balancing variance and bias. The book shines in tracing how these methods, rooted in a disciplined, almost statistical approach, sought to find the "best" hyperplane to categorize data.

Before the rise of neural networks, machine learning was deeply rooted in mathematical and statistical formulations. All these methods were built on deterministic optimization techniques with clear mathematical foundations. While the terminology used in statistics differed from that of machine learning, the core principles remained aligned, focusing on structured ways to represent data and optimize models.
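The DoF-expanding side of that interplay can be sketched in a few lines (a toy illustration of mine, not an example from the book): an explicit degree-2 feature map, which is what a polynomial kernel computes implicitly, adds degrees of freedom and turns a problem a linear classifier cannot solve into one it can.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two concentric rings: not linearly separable in the original 2-D space.
n = 200
r = np.r_[np.ones(n // 2), 2 * np.ones(n // 2)]            # radii 1 and 2
theta = rng.uniform(0, 2 * np.pi, n)
X = np.c_[r * np.cos(theta), r * np.sin(theta)] + 0.05 * rng.normal(size=(n, 2))
y = np.r_[-np.ones(n // 2), np.ones(n // 2)]               # ring labels

def linear_fit_accuracy(F, y):
    """Least-squares linear classifier on features F; returns training accuracy."""
    F1 = np.c_[F, np.ones(len(F))]                         # add a bias column
    w, *_ = np.linalg.lstsq(F1, y, rcond=None)
    return np.mean(np.sign(F1 @ w) == y)

# Raw 2-D features: no line can separate the rings (near-chance accuracy).
acc_raw = linear_fit_accuracy(X, y)

# Degree-2 polynomial feature map: the extra degrees of freedom make the
# rings separable, since x1**2 + x2**2 recovers the squared radius.
Phi = np.c_[X, X[:, 0] ** 2, X[:, 1] ** 2, X[:, 0] * X[:, 1]]
acc_kernel = linear_fit_accuracy(Phi, y)

print(acc_raw, acc_kernel)
```

An SVM with a polynomial or RBF kernel performs this expansion implicitly, then searches the enlarged space for the optimal separating hyperplane.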
Techniques like kernel methods enabled transformations that made complex data separable, and optimization-driven classifiers such as SVMs sought to maximize decision boundaries under rigorous mathematical constraints. The reliance on well-defined equations and statistical properties made these approaches fundamentally different from the heuristic-driven methods that would later emerge with deep learning. The book is primarily about this disciplined era of ML.

The book's biggest missed opportunity lies in its extremely light treatment of the shift towards more heuristic approaches in modern machine learning, specifically the deep learning revolution. Neural networks, especially deep neural networks, challenged the traditional variance-bias tradeoff. Their emergence introduced a fundamental shift in how models were designed and trained. On the surface, increasing the number of layers in a neural network seemed analogous to adding kernel transformations in traditional ML, as both approaches expanded the feature space and allowed for greater model expressiveness. However, neural networks diverged sharply due to their reliance on heuristics and stochastic processes. Instead of deterministic optimization, they employed randomly initialized weights, stochastic gradient descent, and non-convex loss functions, making their learning process far less predictable. Unlike classical models, where solutions were mathematically determined, neural networks produced different outcomes depending on initialization and training conditions, fundamentally changing how learning was approached.

What the book does not discuss in sufficient detail is that the role of randomization in deep learning further distinguished it from earlier machine learning methods. Weight initialization was no longer based on deterministic rules but on probabilistic methods, leading to different learning trajectories even for the same dataset. Training deep networks required iterative adjustments using mini-batches of data rather than full-batch optimizations, introducing additional uncertainty. Techniques like dropout and batch normalization, which had no direct analogs in traditional ML, emerged as necessary adjustments to stabilize learning. While these approaches improved performance, they lacked the formal mathematical justifications that defined classical ML, making deep learning more of an experimental science than a strictly theoretical discipline. The book's clear bias towards theoretical justification makes it go extremely light on these aspects of the methods that matter most today.

The following is covered in the book, but only in passing relative to the space given to pre-NN concepts and to the importance of these phenomena. One of the most surprising aspects of deep learning was its ability to disrupt the classical variance-bias tradeoff. In traditional models, increasing complexity inevitably led to overfitting, but deep networks, even with billions of parameters, often generalized better than simpler models. This phenomenon, known as double descent, contradicted long-standing assumptions in statistical learning theory. It suggested that beyond a certain threshold, adding parameters improved rather than harmed performance, a counterintuitive result that remains only partially understood. Unlike earlier ML models, where generalization was explicitly controlled through mathematical constraints, deep learning seemed to achieve it through emergent properties (not everyone's favorite term), raising questions about the underlying mechanisms driving its success. Deep learning neural networks are highly complex models that defy description; this is perhaps the reason behind their light treatment. They work in ways that are not yet fully understood by theoreticians.
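The double-descent shape can be reproduced even in a tiny setting (again my own sketch, not the book's): fitting minimum-norm least squares on random ReLU features of a noisy linear target, test error spikes near the interpolation threshold (number of features equal to the number of training points) and then descends again as the model grows further. Everything here, from the data model to the feature map, is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, d = 40, 500, 5

def trial(p, rng):
    """Min-norm least squares on p random ReLU features; returns test MSE."""
    X_tr = rng.normal(size=(n_train, d))
    X_te = rng.normal(size=(n_test, d))
    beta = rng.normal(size=d)
    y_tr = X_tr @ beta + 0.5 * rng.normal(size=n_train)    # noisy linear target
    y_te = X_te @ beta
    W = rng.normal(size=(d, p))                            # random feature map
    F_tr, F_te = np.maximum(X_tr @ W, 0), np.maximum(X_te @ W, 0)
    w, *_ = np.linalg.lstsq(F_tr, y_tr, rcond=None)        # min-norm solution
    return np.mean((F_te @ w - y_te) ** 2)

# Average test error for an underparameterized, a threshold-sized, and a
# heavily overparameterized model: the middle one is typically the worst.
sizes = [10, n_train, 400]
mse = {p: np.mean([trial(p, rng) for _ in range(30)]) for p in sizes}
print(mse)
```

The classical bias-variance story predicts the 400-feature model should be the worst; in runs like this, the interpolation-threshold model is, which is exactly the puzzle the double-descent literature addresses.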
The book mentions the defiance of the tradeoff, including concepts like double descent, but could have covered the latest neural architectures far beyond mere mentions. These almost post-ML methods rule the world of AI today, but not in the book. While their mysterious workings are acknowledged, the book could have devoted at least a few chapters to the attention formulas, given their importance. Despite this weakness, this is a great book for at least certain types of enthusiasts.
Read count: 1 · Started: Feb 07, 2025 · Read: Feb 11, 2025 · Added: Feb 12, 2025 · Format: Hardcover

---
ISBN: 069124913X · ISBN13: 9780691249131 · ASIN: 069124913X
Avg rating: 3.92 (975 ratings) · Published: Sep 24, 2024 · This edition: Sep 24, 2024
My rating: did not like it
"AI Snake Oil" ambitiously seeks to dissect the capabilities and limitations of AI, but it stumbles into its own quagmire of bias and outdated perspectives. AI, whatever the term means, has limitations, along with ethical, moral, political, and other issues. Perma-critics like the book's authors, particularly when they are also domain experts, are needed: they play the vital role of holding up the mirror and, hopefully, forcing us to look for better ways in time. The book claims to be a corrective to unbridled optimism about AI's capabilities. Still, despite the best intentions, the critique is marred by its failure to keep pace with the rapid evolution of AI technologies and where they could be headed in a short time. The authors' failures stem primarily from their insistence that there is no significant change in the pace of improvements, a biased claim that falls flat almost every week these days, leading to a consistent shift in the authors' views in their public posts outside the book and to erroneous conclusions within it.

The book starts with useful sections on the difficulties of defining "AI." The authors successfully introduce a few major categories (generative, predictive, and analytic systems) and discuss how the list could contain many other types. While they rightly argue that AI means different things to different stakeholders at different times, the rest of the book uses the neatly defined categories as a license to dismiss the entire field's recent advances, particularly the overlapping capabilities and the newfound abilities to do things nothing in our mathematical or technological toolkit has been able to do before. One recent example is Google's announcement of a weather-forecast model based on LLM-type methods rather than on our knowledge of scientific equations and methods. This model is reportedly performing better than any of our science-based models; the claim, even if somewhat hyped up, illustrates that what is generative and what is predictive in AI is no longer as separate as the book makes it out to be.

In discussing the limits of and issues with predictive AI, the book lays out a framework that, while theoretically sound, lacks the depth and foresight needed to address the nuanced implications of AI in decision-making processes. It bypasses critical discussions of whether AI predictions might, in some contexts, be superior to human judgment or other current methods, even if not perfect. One can always point to individual cases where any predictive analysis fails. While these discussions are important as we strive for ever-better methods, deciding which methods are better requires a more holistic, comparative analysis of the different available tools. Yes, there could still be domains where a society or a system may want to limit the use of predictions altogether. The authors' discussions of the peripheral issues here rely excessively on individual failures of AI models' predictions rather than on larger ethical or moral reasons, and without any discussion of comparative frameworks. For instance, when the authors trash the use of AI in loan applications with examples of deserving people whom the biased models might have left out, they ignore not only how the models are getting better but also how the models are giving rise to capitalized businesses that are suddenly making loans available to hundreds of millions globally who were simply outside the system before, because of a lack of data or of previous systems' ability to serve them. This criticism applies to numerous other examples in the book, each revealing the authors' preconceived biases and positions rather than offering a useful discussion of the limitations.

The arguments fail most because the authors cannot appreciate how these systems could evolve, and whether it is desirable or possible for predictions to keep improving in specific fields even if the current AI predictions are worse than other available options. In fact, this becomes the book's central failure: its inability, or unwillingness, to grasp the exponential trajectory of AI development. The authors cling to outdated examples and isolated anecdotes to argue that AI will forever change slowly. They completely miss the "threshold crossing" phenomenon in technological evolution, wherein years of incremental progress culminate in breakthroughs that redefine entire fields, something that has happened to generative AI in the last two years. Talking about the exaggerated expectations of earlier times, say around perceptrons, while ignoring the rapid advances drawing hundreds of billions in investment and millions of the best human brains into the improvements, is nothing but submitting to one's own notions based on a simplistic reading of history.

The authors' failure to grasp the nature of technological leaps exposes a fundamental flaw in their critique. History is replete with examples where decades of slow, incremental progress culminated in revolutionary breakthroughs that changed the world overnight. Consider the protracted evolution of electricity: it was a curiosity confined to laboratories for decades, powering little more than rudimentary experiments. Then, in a transformative leap, it became the backbone of industrial civilization. Similarly, the idea of connecting machines languished for decades as incremental improvements trickled in. Yet the sudden arrival of the internet in the 1990s didn't just connect computers; it forged an entirely new digital age. Generative AI today sits at a similar crossroads. The authors' insistence on extrapolating past limitations and pace as perpetual constraints ignores the explosive ongoing developments. In mere months, what they dismiss as incremental has vaulted into domains of creativity, problem-solving, and understanding that fundamentally redefine society. It's not just a question of what AI can't do; it's about whether we recognize the thresholds it is poised to cross before we address the question of forever limitations.

The critiques of generative AI in this book are not just outdated; they were outdated the moment they were written. The authors harp on early limitations of systems like GPT, using cherry-picked examples of failures in reasoning or creativity. Yet they ignore the rapid iteration cycles that had already addressed many of these issues by the time the book was published. For instance, their skepticism about AI's ability to perform nuanced creative tasks is laughable in light of the advancements in natural language understanding, music composition, and even medical research that had occurred by then.

The authors spend an inordinate amount of time decrying the "hype" surrounding AI, accusing developers and proponents of overstating its capabilities. Yet they fall into the same trap by hyping up AI's perceived shortcomings. Their relentless focus on what AI "cannot" do blinds them to what it already does. The result is an unbalanced narrative that misleads readers into underestimating the technology's present and future impact. A lot is hyped up by those promoting AI, but the approach taken in the book is wrong. The book could have contributed to the discourse around AI by separating genuine concerns from baseless fears and uncritical optimism, but its narrow focus and flawed arguments do more harm than good. By focusing excessively on relatively trivial and solvable (if not already solved) issues, the book loses focus on far bigger issues, among both limitations and capabilities. The book is valuable in some of its examples of genuine limitations, but its tone leans towards dissuading rather than guiding potential users.
Read count: 1 · Started: Dec 18, 2024 · Read: Dec 22, 2024 · Added: Dec 22, 2024 · Format: Hardcover

---
ISBN: 059333048X · ISBN13: 9780593330487 · ASIN: 059333048X
Avg rating: 3.92 (223 ratings) · Published: unknown · This edition: Jun 13, 2023
My rating: really liked it
In humanity's relentless pursuit to comprehend the cosmos, we've journeyed from the realm of myths and mysticism to the structured methodologies and equations of pre-technology science. Moving away from arbitrary assumptions to empirical observations and logical reasoning was a fascinating and comprehensible journey for a few centuries, but still a deeply inadequate one. "The Universe in a Box" is an ambitious exploration of how modern cosmology increasingly relies on this transformative journey's next tool: computer simulations. The book commendably introduces the changing landscape of scientific inquiry. However, given the topic's profound complexity and ever-evolving nature, it merely scratches the surface, leaving much to be explored. Most of this review consists of my own thoughts, inspired by the topics covered in the book.

The neat equations that once pointed to the supremacy of our logical, deductive, and intuitive capabilities can no longer carry us forward. This is a fascinating inflection point that is difficult for many to accept. Particularly challenging is leaving the figuring-out to devices whose methods, processes, and increasingly even results we cannot comprehend. The neat mathematical models, dominated by idealized particles and simplified conditions, struggle to account for the messy, chaotic reality observed in the cosmos. This realization has propelled a paradigm shift from purely equation-based descriptions to simulation-driven explorations.

Simulations represent a new frontier in scientific inquiry. They are recursive, iterative processes that provide approximate solutions to complex problems, solutions that are constantly evolving and never truly complete. In the best cases, these computational models manipulate variables and conditions in ways that are impossible in the physical world, offering insights into phenomena that are otherwise inaccessible.
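Cosmological simulation is far beyond a snippet, but the core idea of iterative, approximate stepping (as opposed to writing down a closed-form solution) can be sketched with a toy orbit integrator. This is my illustration, not an example from the book: a leapfrog scheme, a standard workhorse of N-body codes, advancing a test particle around a unit point mass.

```python
import numpy as np

# State: position r and velocity v of a test particle around a unit mass (GM = 1).
r = np.array([1.0, 0.0])
v = np.array([0.0, 1.0])          # circular-orbit speed at radius 1
dt = 0.01

def accel(r):
    """Newtonian gravity toward the origin: a = -GM * r / |r|**3."""
    return -r / np.linalg.norm(r) ** 3

def energy(r, v):
    """Specific orbital energy: kinetic plus potential."""
    return 0.5 * v @ v - 1.0 / np.linalg.norm(r)

e0 = energy(r, v)

# Leapfrog (kick-drift-kick): no closed-form answer, just many approximate
# steps; the scheme is symplectic, so energy drifts very little over time.
for _ in range(1000):
    v = v + 0.5 * dt * accel(r)
    r = r + dt * v
    v = v + 0.5 * dt * accel(r)

e1 = energy(r, v)
print(abs(e1 - e0))               # energy drift stays tiny
```

Scale this loop up to billions of mutually interacting particles plus gas, dark matter, and feedback physics, and the output stops being something a human can narrate step by step, which is exactly the comprehensibility problem discussed below.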
However, the complexity inherent in these simulations often defies expression in human language. The multidimensional data, intricate feedback loops, and non-linear interactions challenge our traditional means of communication and understanding. This linguistic limitation poses a significant problem, not just for conveying simulation results but also for integrating them into the broader framework of scientific knowledge. When simulations operate at levels of complexity that elude straightforward explanation, they can seem opaque or inaccessible, even to experts. This opacity is evident in the difficulty of describing simulation phenomena comprehensively. While theoretical concepts may be distilled into understandable terms, the full breadth of simulation outputs often resists simplification.

The rise of machine learning and artificial intelligence has further complicated this landscape. Machines can now form hypotheses and perform higher-order calculations at speeds and scales unimaginable to human researchers. These systems can detect patterns, correlations, and entanglements within vast datasets and decide what details to include or omit in their models. The criteria for these decisions are embedded within layers of computational processes that are often indecipherable to humans, a phenomenon known as the "black box" problem. This growing reliance on machine-driven simulations raises profound questions about the nature of scientific understanding. If the methods and reasoning behind simulation models are beyond human comprehension, can we honestly claim to understand the phenomena being modeled? This challenge touches on the philosophical underpinnings of science, which traditionally values transparency, reproducibility, and explanatory power.

Despite these challenges, simulations remain tethered to empirical reality through the necessity of validation against observations. Simulation results must correspond to observed data from the past or present to be deemed credible and useful. In this sense, simulations function similarly to data mining: they extract patterns and insights from vast amounts of information, but their value is contingent upon alignment with empirical evidence. This dependency underscores a fundamental aspect of scientific inquiry: the iterative process of hypothesis, experimentation, observation, and refinement.

The limitations highlighted here are not explicitly discussed in the book, which focuses more on introducing various simulation capabilities and applications. It assumes our continued supremacy in forming hypotheses and in regulating and directing the process, although these assumptions are likely hopes rather than positions tethered to the current reality of exploding AI abilities. Still, the book succeeds in illuminating the transformative impact of simulations on cosmology, while also inadvertently showcasing how much remains unexplored and perhaps inexpressible.

In conclusion, the "end of the equation world" doesn't imply that equations are obsolete, but that they are no longer the sole or primary means of understanding complex systems. In embracing simulations, we acknowledge the limitations of reductionist approaches and the need for holistic, integrative models that can accommodate the universe's inherent complexity. However, this acceptance also requires us to grapple with the implications of relying on processes that may elude complete human understanding. As the book's last chapters speculate, as we venture deeper into the simulation age, we are not just coding the cosmos but also rewriting our place within it. We ourselves could be somebody else's simulation, or we may create simulations that automatically create their own simulations without our ever knowing. All in a loop within a loop within a loop!
Read count: 1 · Started: Nov 23, 2024 · Read: Nov 30, 2024 · Added: Nov 30, 2024 · Format: Hardcover

---
ISBN: 059373422X · ISBN13: 9780593734223 · ASIN: 059373422X
Avg rating: 4.18 (26,711 ratings) · Published: Sep 10, 2024 · This edition: Sep 10, 2024
My rating: really liked it
Nexus is an exceptional, topical book that one cannot avoid in the current age. Expectedly, it is a remarkable journey through history, technology, and the complex future we face. Whether one agrees or not (like this reviewer), the author’s ability to synthesize vast historical knowledge with contemporary concerns, particularly those surrounding artificial—or what he calls alien—intelligence, will leave most transfixed. As much as the author has warned against the power of narration or storytelling, he is the ace deployer of the same power to create the existential fear that he believes in and one all the rest should at least know of. The author is able to create a sense of urgency that few others can. The author’s views have evolved from fairly mild and relatively benign AI views expressed in Homo Deux. As discussed in my review, back then, the author was far more sanguine about AI’s benefits versus the negatives . He turns as negative as Bostrom in Superintelligence , O’Neil in Weapons , or Zuboff in Surveillance Capitalism . This reviewer will not repeat the points covered in other reviews, which are all valid, but cover other theoretical and practical issues that are important to know. While the author rightly points out that we may never know the counterfactuals of a world without these technologies, he ignores how his utopian vision is entirely impractical in the real world we live in. It will give him a license, like those employed by many polemicists since time immemorial, to criticize anything wrong or abhorrent that happens to the factors discussed in the book and blame the world for not implementing his suggestions in total. The main point is that the solutions that the author eloquently describes are non-starters. Everyone’s Utopia is different. Few at any point in history could ever agree with Plato’s vision. 
And, the same is likely to be the case with the author’s detailed suggestions, as he creates extremely arbitrary boundaries on what should be permissible and what should be banned throughout the book. What one group sees as a necessary restriction � let’s say teachers� jobs in developed countries � another may view as an unacceptable limitation on progress or freedom (say developing worlds� students without access to quality instructions). In every field where the book suggests what should be curtained, the perspective is of a particular class of people from certain types of societies. The romantic vision of a globally coordinated response to AI needs to consider the historical and geopolitical challenges that make such cooperation highly improbable. The author mentions Mearsheimer but only to prevent himself from being too negative and end on a constructive note. In the process, and through the book, he ignores how nation-states and private corporates are driven by self-interest and power competition, making them inherently incapable of achieving lasting cooperation. Even within a nation with the United States providing an obvious example, there is no guarantee of cohesive action, as domestic policy-making is often mired in conflicting interests between political parties, businesses, and the public. Natural political systems have highly competing forces—truth, rights, government control, market capitalism, and democratic freedoms. Without explicitly numbering them, the author discusses these five systemic forces that clash in all societies: the need for agreed-upon truths, inalienable rights that cannot be overridden by majority rule, governmentally imposed limits and controls, market forces and capitalist principles, and democratic rights of citizens. . Even a clash between two forces � liberal democratic and capitalistic market systems� � caused Fukuyama’s End of History vision to come unstuck. 
In passing, the author covered one example that effectively exposed the underlying weakness of everything proposed in the book. He discussed how, in the United States, all must acknowledge the scientific truths of environmental damage, and still, the voters must have the right to decide how they want to act on policies of fossil fuels. The entire book is replete with such arbitrary suggestions where the author � almost like a global benevolent ruler � would decide ad hoc what truths are, who gets to decide, who gets to be controlled, and whose benefits should get the priorities. The book's shortcomings become most apparent when considering the global context. The book acknowledges national interests but fails to see how few countries will ever agree. For example, efficiency gains that benefit one country at the expense of jobs in another are likely to be celebrated by the benefiting nation and seen as a necessity, regardless of their global impact. For example, who in a developed nation is likely to agree on a global minimum income or act against automation that may save them expenses by taking a job away from a faraway society to a better functioning and controllable machine at home? Like the author, most politicians will be acutely tuned to technologies threatening domestic industries, even if they offer broader global benefits. Given that most global societies face the common indifference to negative externalities outside their own boundaries with utterly different demographic, resources, and other prosperity-driving factors� conditions, it is naïve to assume any agreement even with protracted debates and conferences in decades. It is not just the absolutism that jars but also the book’s repeated tendencies to create its own realities to stoke fear. The author continuously talks about the pace of these models� change when he wants to paint the future where we understand less and less about the machines that take control of our lives. 
For instance, he discusses how machines began understanding chairs and cats only a few years after the problem appeared intractable, yet he would have us believe that some of the unfair biases in today's models can almost never be eliminated, or cannot represent a significant improvement on human subjectivity and bias. He similarly ignores how AI is democratizing development in some ways, bringing many erstwhile giants to their knees, even as power concentration rises in other ways.

We need books like this to educate us about the risks, and we need eloquent activists like the author to rouse everyone into action. Practically, however, we need different preparations at the individual, community, national, and global levels.
1 | Oct 06, 2024 | Oct 13, 2024 | Oct 13, 2024 | Hardcover
0399562761
| 9780399562761
| 0399562761
| 3.92
| 2,733
| Jun 25, 2024
| Jun 25, 2024
|
liked it
|
In this latest offering, the renowned futurist revisits his groundbreaking concept of the technological singularity. However, the eagerly anticipated update to the nearly 20-year-old seminal work falls short of expectations beyond the revised date on which he expects the Singularity to arrive. Instead, the book feels hastily assembled. Yes, the new date (before the end of this decade) will likely make this a must-read for many, but most, like this reviewer, will walk away not much smarter.

The book's primary shortcoming is its lack of fresh insight. Rather than delving deeper into the evolving technological landscape and its implications for the singularity, it rehashes familiar territory. The author misses a golden opportunity to better justify why he now expects machines to surpass humans in almost all respects by 2029 rather than 2045. More importantly, the book fails to discuss the implications of machines working on themselves. At the least, the updated book should have re-examined the core concepts of the singularity in light of the vast amount of new information available since the original publication.

A glaring omission is the lack of discussion of recent technological breakthroughs. The book overlooks innovations in mobile telephony and social media that the first work did not anticipate, which is perhaps forgivable, but it also skips highly topic-relevant developments in deep tech. Notably absent is any meaningful exploration of neural networks, including RNNs, CNNs, and the game-changing advent of transformer and post-transformer technologies. These advancements have profound implications for machine intelligence, intentionality, and purpose-driven AI, which are supposed to be exactly what the book is about.
Instead, the book veers into well-trodden territory, offering a broad overview of technological progress over the centuries and projecting exponential growth into the future, a topic extensively covered in numerous other books, TED Talks, and industry reports. A significant portion of the book is devoted to societal progress, summarizing work better articulated by other authors like Steven Pinker. While interesting, the author's optimism about technology's impact on employment and his speculations about future innovations across various fields don't offer much novelty either.

On the positive side, the author's unwavering optimism and his recounting of technological advancements do provide some valuable insights. His ability to picture potential future developments across various sectors is commendable, even if not groundbreaking. Overall, the book may become the book of the season for many readers, but it serves more as a general recap of well-covered subjects than as a pioneering work like its predecessor.
1 | Jun 26, 2024 | Jun 30, 2024 | Jun 30, 2024 | Hardcover
1250897939
| 4.35
| 3,695
| Nov 07, 2023
| Nov 07, 2023
|
liked it
|
As one of AI's pioneering founding mothers, Dr. Fei-Fei Li is destined for the history books. Having shaped AI's true history-defining years in the first quarter of the twenty-first century, she had a unique vantage point from which to narrate the evolution that has now turned into a revolution. The chronicle of the neural network renaissance sketched in the book should prove a trove for future scholars tracing AI's origins. Alas, the book's desultory storytelling does not do justice to the topic. In the fullness of time, it will be judged not only incomplete but inadequate.

The book is at its best when it captures how Convolutional Neural Networks (CNNs) and then Recurrent Neural Networks (RNNs) awakened the field from its winter slumber, breaking past roadblocks with ImageNet and WordNet. The author superbly shows the critical role played by her teams and their heroic labors in generating vast human-curated datasets. For some reason, the author shies away from more detailed technical and technological factors, which would have made the book a far greater source of historical importance.

The author's weaving of this history with her personal narrative does not work. The book overreaches in painting personal setbacks as the stuff of Greek tragedy. As much as the author tries, hers is not the story of an extreme underdog. She is blessed with great intellect, has always been part of the world's best institutions, and was rarely hindered because of her gender or ethnicity. Yes, she has had her occasional setbacks and personal tragedies, but not of the kind that makes compelling narratives. As with her Congressional appearance, the endeavor to make everything a rough climb leaves almost every anecdote ending in a damp squib. These episodes occasionally serve to ground some of the author's ethical-AI pursuits, but again, not in a way that sets either the goals or the work apart from those of many others.
The biggest weakness is where the book stops: just before the arrival of the LLMs, which, as we know them today, dwarf all that came before. The book is akin to an early 20th-century treatise on electromagnetism that stops at 1905 and Einstein's emergence in physics. It excels in showcasing neural networks' sporadic successes and pitfalls before they took absolute center stage, but it still reads like tales of the early Korolev rockets before Apollo's glories.
1 | Dec 04, 2023 | Dec 08, 2023 | Dec 09, 2023 | Hardcover
0593593952
| 9780593593950
| 0593593952
| 3.83
| 10,684
| Sep 05, 2023
| Sep 05, 2023
|
it was amazing
|
Before the review, it is important to mention that this reviewer has rarely agreed more with any book, on almost any topic. I have written on at least a dozen topics covered in the book, in past reviews and LinkedIn posts, with almost identical conclusions about AI's impact. Now on to the long review.

Mustafa Suleyman's The Coming Wave offers a compelling narrative of the promises and perils of artificial intelligence, emphasizing the urgency of collective action to mitigate its risks before they spiral out of control. The book excels in illustrating the far-reaching impacts of AI, although these sections are short and sporadic. It falls short on its central theme of potential solutions. As the founder of DeepMind, Suleyman provides an insider's authoritative perspective on the recent advances. AI optimism pervades the book, but since its topic is risk, the message about AI's transformative power is often too hurried.

That said, the book convincingly argues that we have moved beyond the rudimentary applications of AI, such as chatbots and photo-editing tools, to a new era where machines can think on their own. Techniques like deep learning and transformers have enabled AI to tackle complex real-world tasks, such as protein folding, autonomous driving, and understanding human languages, that were far beyond its capabilities until now. The author effectively argues that these are not incremental advances but a paradigm shift: AI can now learn and reason independently in ways that were unimaginable even a few years, or even quarters, ago. Many who feel AI has been around for years completely miss this point: various giant thresholds have been crossed, and we all need to think about AI anew to truly appreciate its impact from here on.

The main positive effect of AI is how it has begun to turbocharge innovation across sectors globally. It can deal with complexity orders of magnitude higher than ours, and rising exponentially.
In this reviewer's language, we have been tackling all of life's NP-hard problems at a level defined by the limitations of our 100W-powered biological neural networks, aka brains. Our solutions have been the best available so far, because nothing in our toolkit could process even the most elementary problems at our level of complexity, let alone anything higher. That is no longer true. In other words, machines of higher capability will take repeated new looks at the complicated problems of every domain of life, promising solutions that will revolutionize fields as far apart as synthetic biology, robotics, drug discovery, and new chemicals, and everything in between, including quantum computing, battery alternatives, superconducting candidates, and more. Whether in machine vision or micro-scale climate-tech issues, supply chains, policy regulation, or finance, none of our domains is likely to remain untouched. More importantly, the changes will be hyper-paced, with better technologies forever around the corner even before any new solution settles, all accentuated by these technologies building on each other, with quantum computing plus AI an obvious hyperscaling candidate. This ability to acquire and deploy knowledge at a more complex level makes AI a universally applicable technology. Suleyman illustrates this through examples like AlphaFold cracking protein folding, then showing how the same system can learn physics or math.

On one side, The Coming Wave refreshingly looks beyond the common Silicon Valley chat/edit examples of AI's emerging impact. On the other, the book's focus is squarely on risks. The author balances every positive prognostication with a historian's lens, cautioning against blind techno-optimism. He convincingly argues against optimists who ignore logical risks through simplistic interpretations of history based on a small number of data points, most famously those who love to cite a Malthus or a Ludd.
The author's starting point is simple: AI should not be contained, given the positives, and it cannot be contained, given the distance we have already covered in a world of competitive nations, corporations, and ego-driven people. Paradoxically, his eventual suggested solutions are too idealistic and rhetorical to have any chance of succeeding in a world where no two central authorities share the same moral, ethical, political, or legal framework, not just across nations but even within the same nation between rival parties. To this reviewer, the author should have focused on the best defensive strategies, given his technological skill set and understanding. Good, conscientious folks should continuously work on mitigating risks by building rival technologies, like anti-virus or anti-missile systems, that perpetually monitor for early signs of malfeasance. Yes, policymakers must also come together to do their best on global guardrails, but high-level global agreements are unlikely to prevent much of anything the author warns about throughout the book.

Other societal risks, like technological unemployment, are more important, but they do not get the same treatment as risks of malintent or machine error. Most discussions of these topics, including issues like bias, transparency, and privacy invasion, offer little freshness. In fairness, few books could offer definitive solutions to challenges this enormous and complex. If anything, this ambiguity leaves readers recognizing their own role and agency in shaping the AI future. The book succeeds most in giving readers a conceptual foundation for wrestling with the coming wave.

The book dwells heavily on historical examples to frame AI. While partly instructive for context, extended discussions of innovations like electricity feel dated and distracting compared to the remarkable technological forces described elsewhere.
More perspective from the bleeding edge of research could have reinforced the book's vital message about AI's transformational potential. The discussion of complexity above is my own, although it mirrors the author's.

Before leaving the review, let me add some more personal views on the same topic, although they have no connection with anything in the book. The book does not recognize that we have likely built a completely different form of intelligence using the latest neural network methods, one that is likely far superior to human intelligence and constantly improving. This new intelligence may or may not pass the Turing test. We may constantly encounter examples where humans do things better or differently, but this will not contradict our machines' new ability to deal with higher forms of complexity. It is the machines' ability to handle a higher level of complexity that is revolutionary and transformative. Ever since Turing, arguments that machines must mimic human behavior to be considered intelligent have been fundamentally flawed and, at times, regressive. We don't need to understand a dolphin's language to know that we are more intelligent, and similarly, machines don't need to mimic us to surpass us in intelligence.

Lastly, through unsupervised learning, machines create their own languages, classifications, tags, and so on to analyze structures like genes, proteins, vision, and everything else. This allows them to approach scientific questions in ways our languages, including programming languages, never could. Deciphering a gene, for example, would never be possible using our dictionaries and human-driven categorization. The same happens in LLMs through the machines' indecipherable ways of handling tokens and converting them into usable symbols. This is another methodical change with far-reaching implications everywhere.
1 | Sep 10, 2023 | Sep 11, 2023 | Sep 11, 2023 | Hardcover
0691181225
| 9780691181226
| 0691181225
| 4.15
| 126
| unknown
| May 04, 2021
|
it was amazing
|
The Self-Assembling Brain is a fascinating examination of the intersection between neurobiology and artificial intelligence. As the title suggests, the author, Peter Robin Hiesinger, posits that the brain (call it a BNN, or biological neural network, for this review) is a self-assembling system in which simple low-level rules produce incredibly complex high-level behaviors and cognition. Through dense yet lucid descriptions of cutting-edge neuroscience research, the author makes the case that understanding how the brain wires itself may hold the key to advancing AI, or artificial neural networks, henceforth ANNs (again, for this review).

The core argument underpinning the book is that neurobiology and AI are deeply intertwined fields with much to learn from each other, and it emphasizes the need for collaboration between experts across disciplines to unlock the secrets of ANNs and BNNs. In some ways, the author's views are too biased toward the potential payoff of connections between the fields. The book could have given more attention to the enormous divergence that has since grown between them, though that takes nothing away from the enormous value it provides.

The book truly shines when discussing the details of neuroscience. A particular highlight is the in-depth discussion of how simple local learning rules, evolved over millions of years, lead to the complex phenomena we associate with cognition and consciousness. Take language acquisition as an example: babies are not explicitly programmed with grammatical rules but rather absorb the statistical regularities in the speech patterns around them. The brain, a BNN, is wired to detect and internalize these regularities through brute repetition, unlike how we train ANNs these days. The book illustrates this and similar concepts through clever hypothetical dialogues between experts at the start of each chapter.
In one exchange, an AI researcher presses a neuroscientist on how children acquire language without direct instruction. The neuroscientist explains how the rapid formation and pruning of neural connections allow the BNN to build statistical models reflecting the environment. While fictional, these dialogues neatly encapsulate the core themes around self-assembly and make the later technical sections more intuitive.

An early section analyzes systems like our BNNs that are fundamentally unpredictable despite relying on simple deterministic rules. And then there is the reverse: networks of neurons in lower-level areas operate largely randomly at the individual level yet produce reliable signals in aggregate. Out of disorder emerges order. The book covers these opposing phenomena exceptionally well in describing the complexity of both kinds of neural network. It argues that grappling with such chaotic systems holds lessons for AI researchers seeking to build adaptable, resilient models: the brain achieves robustness despite, or perhaps because of, the underlying chaos and randomness percolating through its networks.

While the author makes a strong case for collaboration between neuroscience and AI, the rapid progress of artificial intelligence over the past decade suggests the arrow of learning between the two fields has reversed in crucial ways. This reviewer feels that back when ANNs were in their infancy, AI researchers had much to gain from understanding the workings of organic BNNs; insights into biological neural architecture and plasticity accelerated early ANN development. Today, however, ANNs operate unconstrained by the limitations of their organic counterparts: they do not have to be energy-efficient or constructible from genetic code, and they are not survival maximizers.
The environments and design parameters of ANNs are now so distinct that neuroscience, for all its intricacies, likely has more to learn from AI than vice versa going forward. While exceptions exist, the utility of modeling AI systems on detailed neurobiology has also diminished because our understanding of low-level brain function remains incomplete. In summary, while conceptual inspiration clearly flowed from neuroscience to AI originally, ANNs have evolved so dramatically in recent years that they operate under very different principles and design constraints than BNNs. However fascinating, the complex mechanics of actual brain processes seem unlikely to offer meaningful shortcuts for today's leading AI techniques.

Such disagreements aside, here is a book where one learns in every paragraph. The details are exhaustive but also fascinating once one begins to consider how evolution has produced a gadget of such intricacy. The book not only conveys the awe-inspiring complexity and magic of BNNs but also throws light on how we will struggle to truly understand and master ANNs despite being their creators.
1 | Aug 11, 2023 | Aug 15, 2023 | Aug 17, 2023 | Hardcover
B0C5G7M2KQ
| 3.00
| 1
| unknown
| May 16, 2023
|
liked it
|
The book is like a short school/college project report. Based on secondary research. Well-crafted.
|
1 | Aug 12, 2023 | Aug 12, 2023 | Aug 12, 2023 | Kindle Edition
0138200041
| 9780138200046
| B0BXNVM6FC
| 3.91
| 488
| unknown
| Apr 14, 2023
|
really liked it
|
The AI Revolution is an engaging and thought-provoking read, offering insightful examples that highlight the potential of artificial intelligence. The authors capture the essence of their argument in the middle of the book: "Medicine traditionally refers to a sacred relationship between a doctor and a patient, a twosome, a dyad. And I'm proposing that now we move to a triad," with an AI entity like GPT-4 as the third leg of that triangle.

Today's LLMs are likely to appear elementary in a few years. As impressive as their feats are, as shown in the book, they still have much to demonstrate before they consistently surpass the expertise of our finest medical professionals. Even if they provably outperform the average practitioner, it is natural for many of us to harbor reservations and doubt their abilities, however irrational such sentiments may be. Nevertheless, the book masterfully showcases the incredible potential of integrating GPT as the third agent in the doctor-patient dynamic: aiding diagnosis, documentation, and explanation; serving as an error handler; facilitating patient-doctor communication; optimizing planning; and enhancing overall efficiency. The possibilities are vast. Furthermore, the book hints at the future prospects of LLMs as long-term record-keepers and even contributors to drug discovery, further emphasizing their potential value. It is quite likely that healthcare and pharmaceuticals will emerge as generative AI's most significant application sectors over time.
1 | Jun 25, 2023 | Jul 11, 2023 | Jun 25, 2023 | Kindle Edition
1579550827
| 9781579550820
| B0BY59PT5Z
| 3.86
| 1,441
| unknown
| Mar 10, 2023
|
it was amazing
|
"What Is ChatGPT Doing" has the perfect combination of well-explained, highly technical detail with clear, concise, human writing (!) to make some profound points, not just about the product on everyone's lips but about a scientific discovery with the potential to change everything in humanity's future.

Superficially, the reader is taken on an enlightening journey through the intricate realm of generative artificial intelligence (AI) and ChatGPT's technical architecture, with emphasis on what makes it completely different from everything that previously carried the tag of "artificial intelligence." At the least, one comes away appreciating why this "AI" is different and why it demands study. There will always be those who try to conflate previous-generation neural network technologies with generative AI. Still, these people could be making an even worse mistake than the celebrated handset makers of 2007 who kept calling their devices "smartphones" without ever stopping to understand what had changed after the arrival of the iPhone.

However, there is more to what has changed than the arrival of a new product or a large language model. Philosophically, LLMs are a new lens through which we can dissect the mechanisms of human cognition. The enigmatic structure of the human brain, its nebulous configurations of neurons and synapses, has long defied capture in simple equations. Our brains have been black boxes, with constellations of thoughts and perceptions, ideas and inspirations forming the stars of our individual universes. We could do little more than brand them a chaotic cosmos, understood only in small parts. Yet, as Wolfram illustrates, the advent of generative AI has begun to chart a course through this complexity, offering profound insights into how we perceive everything from language to visuals and more.
Despite the differences between the inner workings of our brains and the current constructs of generative AI, the results both produce are strikingly similar. This resemblance is a testament to the role of nearby, near-term parameters ("attention") in shaping our subsequent thoughts and ideas, just as generative AI uses immediate context to generate responses. It is like looking at a painting, where each brushstroke contributes to the overall picture while also extending the meaning of what surrounds it. However, the parallels between the human brain and AI are not without much-discussed terrifying implications. They are not the topic of the book, but the explanations only cement the thought that, for ever-improving machines, passing our level of intelligence will be just a small milestone on their light-speed march.

Wolfram's book excels in describing the concept of computational irreducibility and the constraints of generative AI. There are limits to what an AI model can infer or predict about a system's future states without simulating the system step by step. In other words, there are realms in which the process of querying will have to take alternate forms, including the good old lookup. This harks back to traditional methods like reaching for a dictionary upon meeting an unfamiliar word, a process not yet fully replicated by LLMs that rely solely on training-based generative methods. This is sure to change soon.

The heart of the book lies in its elucidation of how generative AI creates its own classifications through tokens and navigates the complexity of information through attention mechanisms. This process mirrors human cognition in many ways, particularly in making LLMs their own kind of black boxes that work without our understanding, in human terms and at human levels, how. Wolfram's ability to explain the equations makes the book far better than the hundreds that spring up daily.
A must-read guide for almost everyone.
1 | May 15, 2023 | May 22, 2023 | May 15, 2023 | Kindle Edition
0393635821
| 9780393635829
| 0393635821
| 4.36
| 4,278
| Oct 06, 2020
| Oct 06, 2020
|
it was amazing
|
The Alignment Problem is the perfect book for understanding the machinery behind the generative artificial intelligence (AI) systems now taking over the world. This insightful and thought-provoking book comprehensively explains the underlying mechanisms of AI. Directly and indirectly, it also explores one of the most pressing issues of our time: how to ensure that artificial intelligence is used for good rather than for harm.

If a previous era's programming was built on conditionals and loops, variables and functions, machine learning has its equivalents in reinforcement learning, shaping, and inverse reinforcement learning (IRL). This book is a fantastic introduction to the lexicon of this emerging field. The author delves into the challenge of aligning AI with human values, arguing that AI systems have become exceedingly powerful and opaque. While the call for greater accountability and alignment is compelling, the author often states an aspirational goal, akin to the desire that no child go hungry, rather than offering concrete, actionable solutions.

This reviewer's personal takeaway from the book centers on the two developmental phases of machine learning systems. The initial phase involves constructing an AI system to accomplish broad goals through learning, imitation, and problem decomposition. This phase is reminiscent of the AlphaGo machine that mastered the game of Go by studying the history of human gameplay. In the subsequent phase, the AI system evolves to set its own objectives, generate its own data, and create its own learning processes with diminishing input from its human masters. This is the scary stage, with machines on autopilot, even devising better learning methods to do things we barely comprehend beyond the end results. AlphaGo Zero is one example; the use cases of AutoGPT could be a future one.
Most of us who love to quote Asimov's laws of robotics or the Turing test are discussing AIs in the first phase above. Discussions of topics like "alignment" are also most relevant while humans retain some control over end goals, interim steps, process creation, and inputs. We are incapable of imagining what machines could do once they internalize more and more beyond the structures we provide. On the positive side, machines could produce scientific or medicinal discoveries and solve some of our biggest problems. The list of negatives is far more inscrutable and hence frightening for most of us.

Although the desire for control is understandable, implementing safeguards may prove challenging amid competition among nations and corporations. For instance, one country's attempt to restrict AI applications in a certain sector could be undermined by providers in another jurisdiction. Additionally, machine learning algorithms are not difficult to develop and are accessible to many. As a result, small teams can rapidly create innovative products that are quickly imitated, rendering traditional regulatory approaches ineffective. Another thing one learns from the book is how most AI/ML tools (libraries, stacks, programming methods) are available to everyone, everywhere. Without a doubt, a few will emerge as commercial winners with their AI implementations, but this will be a crowded field across every aspect of AI use cases, making control extremely difficult.

In conclusion, The Alignment Problem is a timely and invaluable resource for readers looking to understand the complexities of AI alignment and its implications for society. The book is an important contribution to the ongoing conversation about AI's potential to shape our future, both positively and negatively, and a must-read for anyone interested in the subject.
1 | May 2023 | May 04, 2023 | May 06, 2023 | Hardcover
1250770742
| 9781250770745
| 1250770742
| 3.88
| 464
| Jan 2021
| Jan 19, 2021
|
did not like it
|
Superficially, "A Brief History of Artificial Intelligence" is an ambitious attempt to provide a comprehensive account of the development of artificial intelligence (AI). However, the different and evolving meanings of the central term get in the way: the book simply fails to address the rapidly evolving state of AI as it is known today. Given how the term was used at various times, the book turns into a history of computer science. It was published in 2021, yet the complete absence of generative AI, ChatGPT, Bard, or similar terms makes the history itself feel too historic!

Its treatment of AI's potential uses and abuses is similarly weak. Discussions are cursory, often recycling arguments and examples covered well in popular journals. This lack of depth is especially apparent when the author addresses weighty topics like AI consciousness or AI's impact on jobs.

The book's biggest failure lies in its predictions about the future of AI. Wooldridge's vision of what AI will be capable of in the coming decades seems painfully outdated compared to the developments we've seen in just the past few weeks. The rapid pace of AI advancement has outstripped the author's expectations, rendering many of his predictions obsolete so soon after release. With many new books soon to center on the latest path-breaking innovations, this book has lost its relevance even more.
1 | Mar 29, 2023 | Mar 30, 2023 | Mar 31, 2023 | Hardcover
