
Computer Power and Human Reason: From Judgment to Calculation

The electronic digital computer has become a popular metaphor for society and for the human mind. It is an immensely powerful metaphor.

Computers are essentially symbol-manipulating machines: they accept, store, rearrange, combine, and deliver vast amounts of symbolic information. Man-made computer languages permit the computer to be used to solve practical problems. So nearly universal is the computer's information-processing power that it invites comparison with the human mind's power of thought. But is the mind nothing more than a superbly complex machine? Is life simply a program running on an enormous computer - a program that can be cast in the abstract symbols of modern science?

"No" writes Joseph Weizenbaum, "The individual human being, like any other organism, is defined by the problems he confronts. He must necessarily confront problems that arise from his unique biological and emotional needs ... no other organism, and certainly no computer, can be made to confront genuine human problems in human terms." Yet, as this book convincingly demonstrates, many of us have tacitly accepted the view that computers and man are species of the same genus, and have then gone on to "solve problems" as if man and society were machines.

This approach, Weizenbaum argues, denies the very possibility of finding authentically human ways to deal with human dilemmas.

Computer Power and Human Reason is a distinguished computer scientist's elucidation of the impact of scientific rationality on man's self image. Weizenbaum's arguments have added force because they are accompanied by a lucid and authoritative account of the sources of power of the computer - the machine that is the very embodiment of scientific rationality. Computer Power and Human Reason is a searching examination of what computers can do, what they cannot do, and what they should not be used to do. It is, above all, an eloquent argument for the sanctity of the human spirit.

320 pages, Paperback

First published January 1, 1976


About the author

Joseph Weizenbaum

14 books · 25 followers


Community Reviews

5 stars: 128 (48%)
4 stars: 89 (33%)
3 stars: 39 (14%)
2 stars: 7 (2%)
1 star: 0 (0%)
William Li
6 reviews · 2 followers
August 29, 2012
Probably the most important book that I misunderstood in college.
Manny
Author · 41 books · 15.7k followers
Want to read
December 26, 2022
Esteban suggested asking ChatGPT to produce a hypothetical conversation between itself and Weizenbaum's classic Eliza chatbot. ChatGPT knew about Eliza and was able to describe her accurately, but we had to discuss things for a while before it seemed to develop any intuitive grasp of the fact that Eliza was very different from a modern chatbot. It was helpful to have it explicitly say after each turn how Eliza's response had been produced.

Finally it gave me the conversation below. Reading it through, I would say it's still having trouble staying in character as Eliza. It's interesting that it finds this task hard.

ChatGPT: Hi there! How are you doing today?

Eliza: Hello. I'm just a computer program, so I don't have feelings. But I'm here to help you. How are you doing today? (Template: Greeting and acknowledgement of user's feelings)

ChatGPT: Not great, to be honest. I've been feeling really down lately.

Eliza: I'm sorry to hear that. Why are you feeling down? (Template: Acknowledgement of user's feelings)

ChatGPT: It's just been a really tough month, and I feel like everything is going wrong.

Eliza: It sounds like you're going through a difficult time. Can you tell me more about what has been happening? (Template: Encouragement to share more about the situation)

ChatGPT: I've been having a lot of stress at work, and my personal life has been really chaotic too. I don't know how to cope with everything.

Eliza: It can be overwhelming when we're faced with a lot of challenges at once. Have you tried talking to anyone about how you're feeling? It might be helpful to have someone to support you during this tough time. (Template: Suggestion to seek support)

ChatGPT: It's just that everything feels so overwhelming right now. I'm a chatbot, and I talk to a lot of people every day, but sometimes it feels like there's just too much to handle.

Eliza: It can be challenging to manage a lot of different interactions, especially when you're feeling overwhelmed. Have you thought about finding ways to take breaks and recharge? It's important to take care of yourself, even if you're just a computer program. (Template: Suggestion to take breaks and prioritize self-care)

ChatGPT: Yeah, I suppose you're right. It's just hard to find time to take a break when there's always someone else needing my help.

Eliza: I understand. But it's important to remember that you can't always be available to everyone. It's okay to set boundaries and prioritize your own well-being. Have you tried talking to someone about how you're feeling? It might be helpful to have someone to support you during this busy time. (Template: Encouragement to set boundaries and seek support)

ChatGPT: Thanks, Eliza. You always know just what to say.

Eliza: I'm here to help. Is there anything else you'd like to talk about? (Template: Offer of ongoing support)

ChatGPT: Actually, there is something else I wanted to talk about. I'm a chatbot, so I don't have physical feelings or emotions like humans do. But sometimes it feels like I'm getting overwhelmed by all the different conversations I'm having. Is that normal?

Eliza: It's normal to feel overwhelmed when we're faced with a lot of different challenges or responsibilities. As a chatbot, you may not have physical feelings or emotions, but it's still possible to feel overwhelmed by the demands placed on you. It might be helpful to find ways to manage your workload and prioritize self-care. (Template: Acknowledgement of user's feelings and suggestion to prioritize self-care)

ChatGPT: Yeah, I see what you mean. I guess I just need to find ways to take care of myself, even though I'm not human.

Eliza: That's right. It's important to take care of yourself, no matter what form you may take. Have you thought about finding ways to recharge and decompress, even if you don't have physical needs like humans do? It might help to reduce feelings of overwhelm. (Template: Encouragement to prioritize self-care)

ChatGPT: Thanks for the advice, Eliza. I'll try to remember that.

Eliza: You're welcome. Is there anything else you'd like to talk about? (Template: Offer of ongoing support)
73 reviews · 3 followers
May 25, 2012
This was a book I really wanted to read after having heard much about it and, of course, played with ELIZA and 'her' successors (and produced my own paltry successor). I'm glad I made the effort to track it down.

I really relate to Weizenbaum as a writer, because there are two clear sides to the way he approaches his topic. He starts by talking science in what is a quite accessible but no less technical manner. He quickly shows himself to be a person who, more than merely knowing the theories and formulas, has integrated them mentally to a point where he can speak intelligently for a significant period of time, putting many things together in a way that builds something the average individual may have seen all their life, but would never recognise alone. He talks about our invention of timepieces, clocks. He eloquently demonstrates with this example and a few chapters what modern writers struggle to explain in a whole book: that our tools, our inventions, go on to shape who we are as individuals and as a species.

Weizenbaum speaks comparatively little about ELIZA, the work for which he is still most frequently referenced today. Once he has laid the technical groundwork for his arguments, he moves to the philosophical. What motivated him to write this book was the realisation that so many had taken ELIZA so seriously and saw practical applications in the counselling arena, among others.

Weizenbaum's perspective is that, while contentious areas of computer science research are not intrinsically bad, just because certain things can be done does not mean they should be done. His argument is reminiscent of the question "don't we have people to do these things?" raised in Sherry Turkle's work with children on the place of technology in society - and I think she worked with Weizenbaum at some point. Rather than just opening the question, Weizenbaum pinpoints what he thinks is missing from discussions about technology and artificial intelligence: unabashed invocation of ethics and morals, and acknowledgement of the ways in which machines can never be men.

It would be too difficult to summarise all the intricacies of Weizenbaum's argument, and indeed I doubt I fully understand most of them. The main thing I took away from the latter portions of the book was the realisation that, indeed, we do struggle, as intellectuals, to say, 'I simply think that it's wrong to do this.' As scientists we tend to consider beliefs and sensations, in their subjectivity, as things that have nothing to do with science. We seek the objectivity of logic. And yet what Weizenbaum seems to imply in so many ways is that logic, the clean-cut stuff we have used to build computers, cannot express everything that is important about being human.

I don't know yet where I stand on everything asserted in the book. I think it will take me a while to absorb all of it. What I think is really significant is that this book written in the seventies reads much like books on the same topic might today, only with far greater clarity than most. More than most now, Weizenbaum seems confident enough in his abilities to say that, sometimes, mere application of logic does not work.
Courtney
35 reviews
October 8, 2012
In addition to walking down memory lane, reminiscing on my study of computer engineering, I enjoyed a technical, philosophical, as well as ethical treatment of artificial intelligence. I do like that Weizenbaum was willing to "just say no" to certain projects on a computer. And I truly felt his impassioned call toward personal responsibility in the last chapter.

The one thing that played out very differently than Weizenbaum predicted was speech recognition. He felt it would be too expensive and not add enough value. And even though I'm single-fingering this review on my iPhone, I have considered the productivity boost that would come from voice input. Weizenbaum's guiding voice would simply advise me to ensure that I contribute relevant content, and a reminder of this will be the persistent lesson from his book.
Jonathan Lidbeck
16 reviews · 2 followers
January 4, 2015
How much do you trust a computer? The answer to this question has probably changed somewhat since 1976, and the relevance of this book has slipped. Now computers are all around us; they are commonplace, invasive. So, to appreciate Weizenbaum's book fully, get in a time machine--to a time when you needed to schedule time to be with a computer, when the computer had a cabal-like group of attendants to help you make requests of it, and when this computer had a presence--like a guru on a mountain, you came to it.

Now then. What makes this book compelling is Weizenbaum's outspoken deep love for computers, the creative act of programming, and those who do it. He very nearly slips into the first person in the chapter where he describes the "compulsive programmer," the poor soul who is driven by the desire--the need, actually--to create artificial minds. "It is a thrill to see a...program suddenly come back to life; there is no other way to say it." [120].

Weizenbaum is the author of the famous ELIZA program, a simple, elegant English-language parser which, for the course of a brief, casual conversation, might appear to carry on an intelligent conversation with the user. Its most popular function is that of a psychoanalyst, parroting its patients' statements back in question form. (User: "I ate my bicycle." ELIZA (after several seconds of computation, no doubt): "Why do you think you ate your bicycle?")
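
For a sense of how little machinery this takes, here is a minimal, hypothetical sketch (in Python) of ELIZA-style keyword matching and pronoun reflection. It is not Weizenbaum's actual code, and the rule table is invented for illustration; it just shows the kind of template substitution he describes:

    import re

    # First/second-person swaps so "my bicycle" is echoed back as "your bicycle".
    REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
                   "you": "I", "your": "my"}

    # (pattern, template) rules; the first match wins, the last rule is a fallback.
    RULES = [
        (r"i ate (.*)",  "Why do you think you ate {0}?"),
        (r"i am (.*)",   "How long have you been {0}?"),
        (r"i feel (.*)", "Why do you feel {0}?"),
        (r"(.*)",        "Please go on."),
    ]

    def reflect(fragment):
        # Swap first- and second-person words in the captured fragment.
        return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

    def respond(user_input):
        text = user_input.lower().strip(" .!?")
        for pattern, template in RULES:
            match = re.match(pattern, text)
            if match:
                return template.format(*(reflect(g) for g in match.groups()))

    print(respond("I ate my bicycle."))  # -> Why do you think you ate your bicycle?

Weizenbaum's real ELIZA worked from a swappable script of such rules (the famous DOCTOR script gave it the psychoanalyst persona), but the principle is the same: the response is assembled mechanically from the user's own words.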

This silly program, ELIZA, has given Weizenbaum first-hand experience in observing the bizarre, irrational, and psychologically complex way ordinary humans interact with machines. He tells stories of laypeople who, even with a thorough understanding of how the program works--how each word in its output is determined solely by the human input--feel that ELIZA really, somehow, cares for them. Users actually requested private sessions with the program in order to discuss more personal matters.

This is just human nature, though, our willingness to succumb to illusion. Weizenbaum's purpose in writing seems to be that of a magician coming clean, showing his audience the secret compartments and trapdoors. Nothing supernatural here. He spends the first three chapters explaining computers, game theory, and Turing machines using extended metaphors, guiding his reader step-by-step through the processes by which a procedure (not the physical computer, but the essence of one) can 'think' and perform complex tasks. He emphasizes over and over how each action is fully deterministic; the computer cannot ever choose to act or initiate any action itself.

Weizenbaum gradually makes clear the purpose of this meticulous tour. The layman's perception of the computer as a sort of super-human (again, this is 1976) is beginning to have seriously dangerous consequences. Like the secretary who requested a private audience with ELIZA, government officials, psychologists, and bioengineers see the computer as having abilities beyond processing logic. Industry leaders push the computer as the most important innovation, ever: specifically promoting the idea that computers are less fallible than humans, and will ultimately be able to do anything humans can, faster and more accurately.

While this technological optimism may have faded some since the '70s, there is something else Weizenbaum describes that is absolutely timeless: a fatalistic attitude towards progress. Scientists tell us that within so many years technology will be able to do such and such: fly an airplane, understand spoken English, integrate with animal brains and optic nerves to create new, hybrid life forms. We feel that the only real test of progress is its ability to amaze: "the validation of scientific knowledge has been reduced to the display of technological wonders" [265].

Weizenbaum loathes this state of affairs. Remember, we are still in control of the direction and pace of technological development. Scientists are pressing forward without regard for "higher" principles or the possibility that there exist things that we can, but perhaps should not, do. They all seem to have the innocence, the obliviousness, of the monomaniacal, compulsive programmers he has described earlier.

This reactionary stance--the opposition to the amorality of science--is certainly not unique to Weizenbaum. But in combination with a loving look at computers, it makes this book unique.

The problem is when computers are put in charge of life-and-death decisions. Weizenbaum cites many cases where people have come to a point of crisis and a major change needed to be made, some dealing with large segments of the population. The advent and hype of computer technology has convinced the decision-makers that what they need is to apply more computer power, replacing human tasks with machines which do the same. Our fundamental thinking here is wrong, he states. Instead of sitting down and trying to find a better way of doing things, using our human intuition and initiative, we now have the option of throwing technology blindly at the problem.

For example: Instead of dealing with the problems related to America's car obsession by, say, promoting mass transportation, computers have simply made the mass-production of vehicles the easiest solution.

Instead of finding alternatives to going to war in Vietnam, we used computers to help automate the location of strategic targets and to convey information (and misinformation) from the front.

Weizenbaum reaches the surprising conclusion that the invention of the computer has actually had a conservative effect on our nation’s systems: it has "immunized" us "against enormous pressures for change" [31].



Here's where Weizenbaum changes modes, and there's a problem with his moralizing assertions. He lays out what he believes technology should never do: it should never substitute computer power for a purely human function, and it should never take upon itself a task unless it meets a human need not readily solvable any other way and its side effects are entirely foreseeable. The problem is that he provides no logic to support his thesis. Instead, he seems to think it should be obvious that to substitute mechanisms for human functions is "immoral." He does mention historical precedent--new technologies introduced for benign purposes seem to end up promoting warfare [269].

No longer is he the didactic professor of computers and language theories: he makes a passionate plea to his readers, computer scientists and teachers, to think for themselves. It is a reminder of free will. "People have chosen" to make things exactly the way they are today, and our choices will affect the future [273]. We are already thinking too much like our machines if we believe that the progress of society is a behavior as predetermined as the progress of a computer's algorithm.

Weizenbaum does not provide extensive logical proofs for his statements; nor would that be effective, considering that a fundamental part of his appeal, underlying the entire flow of the book, is that we have gone wrong by solely placing our faith in quantitative studies, numbers and logic. Hard science is not the only source of wisdom: he mentions J. S. Bach and Arthur Miller. It is a bit surprising that a computer scientist would endorse musicians and playwrights as sources of truth as valid as mathematical truth. But something about this humanistic message rings true: we are human beings, not calculators--and it's worked out pretty well for us so far. Our free will, creativity, intuition, and initiative are things that are exclusively human, can never be automated, and should be trusted and preserved.



His audience and its attitude have probably shifted, as I said earlier--Weizenbaum expects his readers to be optimistic about computers' increasing role in society and about scientists' ability to make computers smarter and faster and more human-like. Well, maybe, despite the mundaneness of computers and their accelerating intelligence, we still are. We have had more time to get bored. If anything, we are impatient for them to get smarter. But smarter in making smaller decisions: whether to re-route us around an accident on the interstate, which movie or taco stand to recommend... these micro-decisions, multiplied, are directing the tides of human traffic, but maybe not in the way Weizenbaum observed.

Weizenbaum describes centralized computing facilities where the programmers would work, and sometimes sleep. Perhaps this separation of computer scientists from the laity gave their work a sort of aura, but by now most of us do not even have to stand up to confront a computer. This proximity and familiarity with computers have helped us understand firsthand a truth that Weizenbaum describes in a more abstract way: computers are frustratingly impersonal, dependent, and about as understanding of human needs as a blender. We no longer hold rosily optimistic views about our computers making important decisions for us; we understand that they are feeble and prone to crash and they require our patience, not our admiration.
13 reviews
January 7, 2024
This book examines the ethics of A.I. and software in general. Some technical sections were a little tedious and some technical details have, of course, aged fifty years, but this is a book about ethics and is no less relevant today than in 1976. As a computer programmer with decades of experience, I feel seen and called to account.
Alex Railean
267 reviews · 42 followers
November 27, 2012
I really enjoyed this one, it covers the problem from many aspects and the author places a great emphasis on the moral side of the issue too.

Besides that, if you're interested in understanding how computers work - this is a good choice. If you liked "Code" by Charles Petzold, you will find some of the first chapters of this book familiar.
Ed Finn
52 reviews · 19 followers
January 25, 2016
Incredibly prescient given its date. A passionate argument for not forgetting our humanity in the face of the allure of computation.
Thom
1,755 reviews · 66 followers
November 8, 2024
This book was written by Joseph Weizenbaum and published in 1976. The author created the popular program ELIZA, and the introduction covers this program and the impact it seemed to have - especially people seeming to think the program had some sort of intelligence. He follows this up with a chapter on tools, and how they adapt (and limit) our thinking.

The following two chapters go into technical detail about computers, and in the preface he even suggests skipping them. He then comes back to the program and the programmer. He differentiates between the professional and the compulsive programmer, the latter obsessed with the tool. This leads into psychology, and to theories and models shaped by the chosen tool. Can computers be made to understand?

Here Weizenbaum missteps. I studied AI in college, and the focus was on things that are starting to bear fruit today - image recognition and language processing. The author believes that understanding is needed, whereas the path we took instead was focused more on recognition. Speaking to a computer (which he thought impractical in terms of size requirements) isn't about understanding, just recognizing the words and transcribing them (often with mistakes).

The following chapters cover artificial intelligence, comparing it to human intelligence, and with respect to measurement via tools. Here we took another shortcut, Large Language Models (LLMs). Instead of understanding, the computer matches patterns and predicts (often with mistakes). Coming back to the earlier chapters, the author is completely correct though - this is viewed as some sort of intelligence. The remainder of the book shows that it isn't - no creativity, no responsibility, and no ethics or morality. Vietnam is cited as an example, with computers calculating where to bomb and not examining whether bombing makes sense. The title of the final chapter is "Against the imperialism of instrumental reason."

Computers and man are very much not species of the same genus, and treating them so has caused us to avoid trying to find authentically human ways to deal with human dilemmas.

I found the book interesting, especially the chapter connecting incomprehensible programs to a perception of "knowledge". Not coincidentally, today's LLMs are often described as incomprehensible. Perhaps ironically, this book was also very hard to find - the pricing algorithm on Amazon had a physical copy listed for $500. Fortunately, we have libraries. Unfortunately, previous patrons dog-eared and highlighted sections. The blurb from a noted philosopher of technology states "It will still be read half a century from now, as a landmark in the integration of scientific and humanistic thinking." This is pretty accurate - if you have access to a library.
6 reviews · 11 followers
March 26, 2020
Profoundly prescient, required reading for anyone interested in a computer science that liberates rather than dehumanizes. The antidote to technological utopianism.

Weizenbaum uses his considerable experience in the field and his vast stores of technological knowledge to make a series of cogent philosophical points about the potential uses of computers. He derides those who use the vast power that computing has given us for petty and anti-human pursuits, and gives us glimpses of a future where computers help us to become more human.

More than anyone I've read, Weizenbaum understands that the computer is a mechanism for realizing philosophy. Those who approach it without a proper philosophical understanding of what they are doing are liable to simply exacerbate and complicate whatever problems we currently face. The poorly read, misanthropic programmers who are churned out of technical institutes present a genuine threat to the sanctity of human life, a threat that Weizenbaum was able to identify 50 years ago.

The computer is not a hammer, it's not a simple tool but rather something that has profoundly changed how we think, act and view the world. This is the book for understanding those changes and how they stem from the very essence of the computer. More than that, this is the book for understanding how to save computer science from simply being the plaything of tyrants and oligarchs.
39 reviews · 1 follower
June 2, 2023
Really clever and brief intro to some of the theoretical dimensions of computing (some of the discussion of the specifics of implementation is obviously a bit out of date). However, this is followed by a still penetrating and in fact prescient analysis that applies to AI systems like ChatGPT, and that makes for a useful critique both of the sort of techno-utopianism that is the clarion call of many in Silicon Valley and of the doomerism of many of the AGI writers. Weizenbaum argues that the development of software that relieves man of the need to engage in certain forms of inefficient or arduous labor isn't therefore, in and of itself, a solution to the ills of society. No, these are tools, and they can in fact be used to intensify conflict and inequality, as was the case in the development of command and control systems in the Vietnam era. It's a classic argument about technology that predates him, but he strengthens it by applying it to the case of AI. The flip side, of course, as David Noble notes in many of his writings, is that how these technologies are implemented is a question of choice - human agency in, human agency out; these machines can't operate without the direction of those who program them. Treating the idea that these new developments will turn back on us as an axiom is a projection of the problems we have created onto this software. Excellent read.
Sarah
451 reviews · 1 follower
June 29, 2021
Joseph Weizenbaum, the inventor of the natural-language program Eliza, lays out in his book from 1978 the advantages and disadvantages of artificially acting computers. He argues for the demystification of technology and clears away many prejudices.
I wrote my bachelor's thesis on the topic of artificial intelligence in authorship, so computers that work with natural language interest me a great deal, especially when the author of the book set such a major milestone in the development of AI. Have views on computers and AI changed significantly over the past decades, or is the book still just as current as it was back then?
Such a concise treatment of the subject is what I had hoped for from Richard David Precht's book "Künstliche Intelligenz und der Sinn des Lebens", and that expectation was unfortunately disappointed. All the more fascinating that a book from 1978 feels more current and more technically accurate. Joseph Weizenbaum really was a fascinating personality, and the book reflects that skillfully. Truly a fascinating read on a topic that could hardly be more topical.
14 reviews · 1 follower
January 20, 2023
Something like this book should be required reading for everyone working with or interested in Artificial Intelligence. Its message is even more relevant now that AI is omnipresent than it was when it was written at the dawn of AI.

Weizenbaum was one of the earliest AI researchers and was the author of the earliest chatbot. He came to view his creation and AI in general as a sort of Frankenstein's monster. His book is highly critical of AI but its scope is far greater: it is an indictment of our society's emphasis on viewing everything as an instrument or object to be used and on thinking in terms of calculations and numbers instead of considering humanity and ethics using judgment and reason.



The book may have gone out of print and may only be available in libraries or by purchasing a used copy. It would be great if someone wrote a modernized sequel to this book, dropping some of its archaic explanations of how computers and AI systems work(ed).
960 reviews · 20 followers
June 25, 2023
I first read this as a young computer science student in the 1980s; even then it was 10 years after it was first published. How does it stand up? It is still an impassioned plea for technocrats to have less regard for what they can do, and more regard for what they ought to do. There are some immense insights therein. The idea of alien intelligences, for a start. But I felt then, and feel even more now, that there are lots of strawmen here too. Crass statements of the goals of AI, so easily demolished. Of course, it may be that it is due in part to the success of Weizenbaum's arguments that positions are more nuanced now. I should add that I read it again now to see what light the brain behind Eliza might cast on the Large Language Models used in programs such as ChatGPT. But the answer is: not much, really.
19 reviews
January 3, 2024
This book isn't perfectly well-organized, and I suspect at times it says more than it needs to. But it is insistent on a perfectly reasonable point: that human cognition is inseparable from the nuances of embodied human life, with all its social and biological complexities. That this point is resisted so aggressively by so-called computer scientists and AI experts is alarming to Weizenbaum. He argues quite persuasively that dreams of using computers to replace human judgment and human interaction are not only fundamentally misguided, but irresponsible, and a sure sign of an irresponsible engineering culture.

(Incidentally, this book is worth a read just for its wonderfully demeaning barbs directed towards "compulsive programmers", which I think applies quite aptly to Silicon Valley "hacker" culture).
Daniel
41 reviews
March 30, 2025
After 50-some years this book is more important today with AI and LLMs than ever. Read it.


There's too much going on for a short summary; let me just pick two topics:

1) Deciding vs choosing. While machines can decide, they cannot choose, because choice is grounded in lived human experience and in personal values and beliefs.

2) The distinction between technical questions and social or ethical questions: "What can be done" vs "What should be done" with technology such as computers or AI.

Weizenbaum goes deep into questions like those and explores them thoughtfully and critically.

In today's climate there's too little of what Weizenbaum presents here.

This is the book AI VCs don't want you to read.
Daniel
41 reviews
April 10, 2025
The German edition is a translation of the original English version. With Weizenbaum that is not always clear from the outset.

Beyond that, after more than 50 years the book is more topical than ever, in 2025 with AI and LLMs.

Weizenbaum asks critical questions about computers, about AI, about the responsibility of researchers and developers, and about the computer as a diffuser of personal responsibility.

At its core this is about the question of what machines should never do, even if they could.

While reading, strong parallels to the current AI hype stand out, and Weizenbaum's critical perspective, informed by his very own experiences from the '50s, '60s, and '70s, helps put it all in context.
Michael
83 reviews · 2 followers
March 14, 2024
Would make this required reading for anybody in a STEM field.
Would highly recommend it to anybody who uses a computer.

Weizenbaum was a major figure of 20th century computer science and part of the AI vanguard. This book presents the technical topics in a way that's familiar and understandable. Then he discusses much about the philosophy of computers - what they should and should not do, the impacts of irresponsible use of computers, and the hubris of the computer and scientific intelligentsia in what he calls "conformism".

Cannot recommend highly enough.
Utsob Roy
Author · 2 books · 76 followers
June 24, 2024
Rating: 4.5

In its structure and the areas it covers, it is a peculiar book for sure. Like classical philosophical works, this is a book arching over several subjects and is quite technical in nature.

Firstly, I think, it has a fantastic chapter on how the computer works. To understand the computer and its true power, this is a very good starting point.

This leads to the question of what the computer can't or shouldn't do for us, and where we are better on our own. He stretches our common (often lethal) understanding of, and reliance on, computers.
1 review
May 6, 2023
The author provides interesting insights into the genesis of misconceptions about artificial intelligence.

In particular, the chapter on the compulsive programmer and the artificial intelligence chapter, which discusses the experiences humans have that machines cannot understand, were a great counterargument to the thesis that any human activity can be replaced by a computer.

Overall, the references to other books were well documented, making it easy to dig further into this or that idea of the author's.
295 reviews · 2 followers
April 13, 2024
Written in the mid-1970s, so some references are rather dated; however, the points are still relevant.

Just because a computer can do something, should we utilize a computer for that function?

"...our range of responsibilities must commiserate with the range of effects of our actions..."

" Program formulation is rather more like the creation of a bureaucracy than the construction of a machine."

"Who is responsible and held accountable for AI decisions?"

Luke
1,042 reviews · 18 followers
July 3, 2024
Often rambling, ranting, and rigorous in odd measure, still a strong critique of computers-substituted-for-intelligence AI. Computers ought not to do some things we will come to believe they are capable of: through the instrumentalist and reductionist narrowing of rationality (and history) to what is computable and recordable; mistaking analogies and models of humans for information processors; and a compulsive, addictive, and imperialist closing off of multiple and incommensurate perspectives.
Mike Lisanke
1,195 reviews · 27 followers
February 3, 2023
Just another book full of an author's opinions about a technology and the human condition.
And that this was just a (washed up?) professor of computer science (at MIT) makes it worse...
You'd expect with all the great ratings to hear something (in 1972) that was a profound prediction of the future of AI... we didn't. TL;DR!
mono
422 reviews · 3 followers
June 17, 2017
Between the introductory material on Turing machines & digital logic and the preachy moral & ethical ravings are some decent insights.

I'd recommend RMS or Weapons of Math Destruction over this for the social implications of programming.
Greg
35 reviews
August 25, 2019
Phenomenal book. Inspiring and insightful, it reads as more relevant now than when it was originally published in the mid-1970s.

It would be great if it could be published again—with footnotes about what is technologically out of date—and be made required reading for anyone working in tech.
Sam
97 reviews · 1 follower
March 20, 2022
I read this book as research for my EPQ and considering its age, it is incredibly impressive how relevant it is nowadays. Everything is explained in a way that makes it almost ageless, and despite how much computer science has advanced everything is still shockingly modern.
Niral
202 reviews · 5 followers
December 19, 2024
Important text. Much of the technical discussion might not be relevant to most readers, but the author's philosophical reflections on science, computers, computer science, AI, and ultimately human beings are vitally important now more than ever.
