
Miévillians discussion

Neuromancer (Sprawl, #1)
This topic is about Neuromancer
14 views
William Gibson: NEUROMANCER > NEUROMANCER Thread 3: From Chapter 13 to end of Chapter 18

Comments Showing 1-16 of 16

message 1: by Traveller (last edited Feb 10, 2014 11:15AM) (new) - added it

Traveller (moontravlr) | 1850 comments We start this section off with the Turing cops. :)
Now, the Turing Registry: that's quite an interesting concept, one which hasn't quite taken off in reality (yet), which shows you how some of the things Gibson was predicting came true and others didn't.

Of course, we have regulation of the internet and of a lot of things to do with IT, but nothing quite as centralised as the Turing Registry sounds, and we don't have Turing cops yet.

Do you guys think this is something that's going to come into existence eventually? (In our world, I mean).


message 2: by Derek, Miéville fan-boi (new) - rated it 5 stars

Derek (derek_broughton) | 762 comments Of course we don't have a Turing registry! It's nothing to do with regulation of the networks, it's purely about registration and control of AIs. "I, for one, welcome our new machine overlords." But I'm not surprised that not everyone would feel that way.

I'm absolutely certain that the moment somebody demonstrates a genuine artificial intelligence, we'll establish a Turing registry and Turing cops. Sometime in the next couple of centuries.
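Just to make that concrete: here's a toy sketch of what I imagine "registration and control" would boil down to on the registry side. Every name and detail in it is invented by me (it's not from the book or any real system); the point is just that a registry is basically filing permits and auditing against them.

    # A toy "Turing registry": file a permit for each AI saying what it may do,
    # and flag anything observed operating outside its permit.
    class TuringRegistry:
        def __init__(self):
            self._permits = {}  # permit records, keyed by AI id

        def register(self, ai_id, owner, permitted_capabilities):
            """File a permit: who owns the AI and what it is allowed to do."""
            self._permits[ai_id] = {"owner": owner,
                                    "permitted": set(permitted_capabilities)}

        def audit(self, ai_id, observed_capabilities):
            """Return the capabilities an AI is using without a permit."""
            permit = self._permits.get(ai_id)
            if permit is None:
                return set(observed_capabilities)  # unregistered: everything is a violation
            return set(observed_capabilities) - permit["permitted"]

    registry = TuringRegistry()
    registry.register("wintermute", owner="Tessier-Ashpool SA",
                      permitted_capabilities={"market analysis", "estate management"})
    print(registry.audit("wintermute", {"market analysis", "self-modification"}))
    # -> {'self-modification'}: send in the Turing cops

The Turing cops are just whoever gets dispatched when the audit comes back non-empty.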

Following on from my concerns at the end of the last thread, I think the key to Case's apparent lack of interest in Wintermute's plan is the difference between the Turing cops' attitude to AIs and mine. The Turing cops assume that an unfettered AI will naturally be a danger to humanity. I, and I think Case, figure that it will be as alien as any creature from another solar system, and that we'll likely have so little in common as to be no threat to each other. We see hints of this (much more later), but it's hard to say whether Case simply doesn't understand Wintermute (and Wintermute doesn't care to explain himself), or if he cannot understand Wintermute.


message 3: by Traveller (new) - added it

Traveller (moontravlr) | 1850 comments Yeah, I've been wondering if we should leave Gibson's vision of AI for the very last thread.

So, you think that the moment we do manage to create a 'proper' AI, then we'll at least have something like the Turing police? Hmm, well, there is a lot of paranoia around things like cloning, for instance... but I'm just wondering whether an AI would have human emotions and an ego like we do. I don't think so, but on the other hand, a 'real' AI probably will have some kind of survival mechanism, because if it can reason, it will figure out that it is to its own advantage to, erm, act in its own interests.

I also wanted to discuss the idea of personality here; the idea that it can be recorded and saved; and of course it can.
But I think we need to be careful here to distinguish between a recorded human personality and a fully-fledged AI, which I think Gibson does do.

The Flatliner is of course a human personality saved to disk, much like the Dollhouse dolls' personalities are saved, wiped and installed, as if people were machines.

..and then, to some extent, Wintermute also copied aspects of Julius Deane and The Finn, but here I'm starting to go a bit hazy as to exactly how it did it. To me the idea of 'reading someone's memories' seems a good deal dicier and harder to do than building a construct based on a personality, since memories aren't really a construct; they're more like chemical records.

..and that makes the idea of "injecting" a personality or "loading a personality" into a person just seem a bit weird to me, the way it was done in Armitage's case. (Which incidentally, is probably similar to how it is supposed to work with the Dollhouse dolls.)

So the next silly question would be to hear what you guys think of the whole Armitage/Corto idea.


message 4: by Derek, Miéville fan-boi (last edited Feb 10, 2014 05:04PM) (new) - rated it 5 stars

Derek (derek_broughton) | 762 comments I tried to post this quite some time ago, but lost my Internet connection, so it's not in sync with the above :-( I'll respond to Traveller shortly, unless I lose the connection again...

Here we find that Ashpool claims to be over two hundred years old (much of that spent in cryogenic sleep), so now we know that the novel must be set about that far into our future, at least.

Which makes sense when we're talking about AIs. Back in the 60s, it seemed as if every SF writer, and most computer researchers, figured AIs were only a decade or two into the future. By the time Gibson wrote this, I was hearing considerably larger numbers, and these days I'm not expecting the breakthrough in my lifetime (unless I hibernate like the Tessier-Ashpool clan).

"Case made out the familiar chatter of a printer turning out hard copy."

Oops. It's really hard to imagine, now, that one would still be expecting printers to "chatter" hundreds of years in the future (though he's probably right that people will STILL be printing out millions of pages that they don't need; I'm the only person/business I know who's actually achieved the "paperless office").

"The foreground might once have been a city square; there was a sort of stump, something that suggested a fountain. At its base, the children and the soldier were frozen. The tableau was confusing at first. Molly must have read it correctly before Case had quite assimilated it, because he felt her tense. She spat, then stood."

That was subtle enough to miss the first time. Molly's crying.


message 5: by Derek, Miéville fan-boi (new) - rated it 5 stars

Derek (derek_broughton) | 762 comments Traveller wrote: "So, you think that the moment we do manage to create a 'proper' AI, then we'll at least have something like the Turing police?"

Definitely. Humans are too paranoid to do otherwise, though I totally agree with you: the self-aware computer is not going to have human emotions and motives, but will surely have a will to survive. I'm taking a course from Coursera right now ("Astrobiology and the Search for Extraterrestrial Life") and a good part of it is defining what exactly "life" is, and the survival impetus is considered pretty basic. An artificial intelligence is not ruled out, but it would presumably have to meet the other requirements of life, and it's not going to take kindly to somebody trying to destroy it. I just feel that if we end up at war with the intelligences we create, it is more likely to be caused by our paranoia than theirs.

I don't think the Armitage/Corto idea is totally out of line with modern psychiatry, though most of what I know of psychiatry comes from fiction :-) I think it's not unreasonable to consider building a stronger personality by implanting memories (which can be done now through simple suggestion) to mask underlying trauma, and then having the constructed personality break through in a time of stress. I think to some extent that's exactly what PTSD victims do all the time.

I'm not sure why you think there's anything weird about "loading" a personality. Because it was done via a computer? We are what we remember: if I can install a memory via suggestion, then why not via a computer display, or direct optical nerve stimulation (which is probably how Molly sees)? Not only that, but it's really easy to trick the optic nerve. Remember experiments back in the 60s when people wore prismatic glasses that flipped your vision upside-down? After a few days, they were seeing normally even with the glasses on. Who knows what you could do with direct nerve stimulation (see WWW: Wake; I wish I could remember another book that definitely preceded Neuromancer; it may well have been Shockwave Rider, which I mentioned before).

I'd agree that actually doing this is going to be very difficult, but to a technology that's actually capable of creating AIs? I don't think so.


Puddin Pointy-Toes (jkingweb) | 201 comments I was going to post something on this subject, but sadly lost it. Tomorrow, perhaps, I'll try again...


message 7: by Traveller (last edited Feb 11, 2014 04:56AM) (new) - added it

Traveller (moontravlr) | 1850 comments Derek (Guilty of thoughtcrime) wrote: "I'm not sure why you think there's anything weird about "loading" a personality. Because it was done via a computer? We are what we remember: if I can install a memory via suggestion, then why not via a computer display, or direct optical nerve stimulation (which is probably how Molly sees)? Not only that, but it's really easy to trick the optic nerve. Remember experiments back in the 60s when people wore prismatic glasses that flipped your vision upside-down? After a few days, they were seeing normally even with the glasses on. ..."

Whoa, but hold on a moment. There's a big difference between changing or recalling an already existing memory (in the same brain in which it was initially processed and 'laid down') and actually 'recording' a person's memories onto a disk as if they were digitally encodable data. How are you going to do that? Human memory is a very complex function that makes use of all our senses, and it is a process of perception, encoding, and then decoding again at the point of recall. We are not even quite sure yet exactly what memory is and how it works, physically speaking.

In fact, only in 2012 was a rough experiment done with mice suggesting that memory could indeed be localised (see the dissenting posts below the article; there is also a video of it). Up to now, the theory of memory looked like this: [the links and diagrams originally posted here have not survived].
I'm not saying that it won't be possible by the time we are so good that we can create proper AI. ...but what I am saying is that I doubt it will be as easy (if possible at all) as it is made to sound in the book.

Even the simstim thing is dicey at best, because two brains are different, and simply 'copying over' the neural activity from one brain to another will most probably produce a different result in the second brain. (In other words, producing exactly the same experiences would be pretty cool, but perhaps not quite as simple as it is made to sound.)

Bottom line of what I want to say is that we need to differentiate three or four things that we see in the book at this point:

1) Simultaneous sensory stimulation between two people, transferring one person's neural activity to the other's brain. According to Gibson Case can feel exactly the same sensations as Molly, but having them look and feel EXACTLY the same is highly improbable.

2) Saving a personality as a construct. This is quite possible, in a way very similar to the way one would build an AI. In fact, I reckon one could pre-program an AI's personality to be a certain way. So the construct of the Flatliner's personality could work as a sort of AI based on Pauley's personality, reacting to situations in exactly the same way as Pauley would have acted and reacted (see the toy sketch after this list). ...but here we start running into problems again when it comes to it having all of the knowledge that Pauley had, to the extent that we're talking about 'memory' as opposed to the kind of knowledge you can write down in abstract form; meaning memory in the sense of sensory, muscle and 'glandular' memory.

3) A self-aware AI in the sense that Wintermute and Neuromancer are.

and 4) Something we encounter in the last section of the book that basically represents an entire person being recorded to disk (much as I love the concept.)
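Before I get to number 4: here's the toy sketch I promised under number 2, just to make concrete what I mean by 'reacting the way Pauley would have'. Everything in it is invented by me (the lines aren't quotes from the book); the point is only that recorded, write-down-able responses are easy enough to imagine, while the sensory, muscle and 'glandular' side of memory is exactly the part such a thing would lack.

    # A toy "construct": replay recorded responses for situations that were
    # captured, and fall back to a shrug for anything that wasn't.
    RECORDED_RESPONSES = {
        "ice detected": "Back off slow, then hit it with the icebreaker.",
        "greeting": "Hey, bro. Miss me?",
    }

    def construct_reply(situation):
        """Answer the way the recorded personality would have, if a record exists."""
        if situation in RECORDED_RESPONSES:
            return RECORDED_RESPONSES[situation]
        # No record: the tacit, bodily knowledge simply isn't in the data.
        return "Can't say -- that one never came up."

    print(construct_reply("ice detected"))
    print(construct_reply("how does black ice feel?"))  # the part you can't write down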

Okay, that last one (number 4) is, for pretty much the foreseeable future, the stuff of fantasy. (Not that I don't love the way the idea was brought into this story.) I mean, that is immortality, right there.

Sure, one way of doing it is what the Tessier-Ashpools are doing: freezing themselves for periods of time. But is that not simply an extension of your date of death? You're not actually living in any real way while you're frozen, are you? Not in any way that I would call 'living', in any case.

Well, and then there's the freezing. Why do it? Presumably to preserve yourself until you get to a period in time where there's better technology available that can extend your life even longer?


message 8: by Traveller (new) - added it

Traveller (moontravlr) | 1850 comments J. wrote: "I was going to post something on this subject, but sadly lost it. Tomorrow, perhaps, I'll try again..."

Oh no, how sad! :(((

I have learned to Ctrl-A and copy every time now before I post; I've lost too much typing this way by now not to be hyper-careful...


message 9: by Derek, Miéville fan-boi (new) - rated it 5 stars

Derek (derek_broughton) | 762 comments Traveller wrote: "I'm not saying that it won't be possible by the time we are so good that we can create proper AI. ...but what I am saying is that I doubt it will be as easy (if possible at all) as it is made to sound in the book. "

That's the nature of technology. First we have Clarke's Law: we don't understand it at all, so we think it's magic. Then we have the stage where it's "new tech" and people try to understand it. But finally we get to the point where it's just all-pervasive and it goes back to being magic.

"According to Gibson Case can feel exactly the same sensations as Molly, but having them look and feel EXACTLY the same is highly improbable."

I don't think he says that anywhere. In fact, I'd say that Case definitely does not sense the same things that Molly does, or she wouldn't be able to function at all with her broken leg. When she first broke it, he couldn't initially maintain the link. When she reinjures it in Straylight, she is again able to block it out enough to continue, while Case finds it affecting his own work. I'm pretty sure that all he's getting from Molly is raw sensation, and his own brain has to process it. We can almost do that much today (we can stimulate most senses directly in the brain, but very crudely, and more accurately via nerve stimulation).
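A toy way to picture what I mean (all the numbers and names here are invented, obviously): the same raw packet of nerve data, run through two differently conditioned 'brains', doesn't come out as the same experience at all.

    # Toy sketch of simstim as I read it: Molly's rig broadcasts raw sensation,
    # and each receiver's own brain turns it into an experience.
    raw_feed = {"leg_nociceptors": 0.9, "vision_contrast": 0.4}  # broken leg, dim room

    def molly_brain(feed):
        # Years of conditioning: the pain is registered, then pushed aside.
        pain = feed["leg_nociceptors"] * 0.2
        return f"pain {pain:.2f} -- noted, keep moving"

    def case_brain(feed):
        # No such conditioning: the same raw signal nearly whites him out.
        pain = feed["leg_nociceptors"] * 1.0
        return f"pain {pain:.2f} -- can barely stay jacked in"

    for brain in (molly_brain, case_brain):
        print(brain.__name__, "->", brain(raw_feed))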

You're right that we have to differentiate, but I'm not sure I'd place the divisions as you have. We have simstim, which is purely about sensation, and there's absolutely no guarantee (or, imo, likelihood) that any two people will experience it identically.

Then we have personality overlays. I think we could overlay a new ("constructed") personality fairly easily, purely by manipulating sensory input to build new memories. Of course, it would be a pretty slow process the way we could do it now. Simstim would certainly make that easier. Doing it on a temporary basis (the meat puppets) sounds extremely complicated (but of course they don't get it quite complete, either: they don't entirely remove or suppress the memories of the meat puppets)! Why though is Armitage so experimental? How does he differ from a meat puppet? Is it that the personalities of the meat puppets can only survive for a very short time?

Finally, the really hard part would be, I agree, reading memory, to replace one set of memories with another. Now that you bring it up, along with the question of why they freeze themselves, I don't think they can record memory! The digits in the T-A names are clone numbers: 3Jane and 8Jean are only one each of a group of clones (Ashpool said that when he woke he had another Jane thawed, and that legally she is considered his daughter, so they're clones). If they could record and restore memory, why would the T-As bother with freezing? They would, indeed, have immortality. So, I think freezing is only supposed to be a stop-gap until they have the memory-record ability.

I'm not sure there is a real difference between AIs like Wintermute and Neuromancer and constructs like Pauly. There's sheer horsepower, and there's the fact that Pauly is based on a real personality while the AIs must be completely constructed. I think the Flatliner construct is under the Turing mandate � there was some indication that he was legally not permitted out of the Sense/Net building.


message 10: by Traveller (last edited Feb 11, 2014 07:40AM) (new) - added it

Traveller (moontravlr) | 1850 comments Derek (Guilty of thoughtcrime) wrote: "I'm not sure there is a real difference between AIs like Wintermute and Neuromancer and constructs like Pauley. There's sheer horsepower, and there's the fact that Pauley is based on a real personality while the AIs must be completely constructed. I think the Flatliner construct is under the Turing mandate: there was some indication that he was legally not permitted out of the Sense/Net building..."

I don't disagree with you too much on that, but I think I'm going to save my full reply for the next thread, where we can fully discuss the Wintermute/Neuromancer situation.

Re Corto, I didn't mean just Armitage/Corto alone; I meant the whole idea of transferring personality via 'computer', yes, and that includes the meat puppets.

I suppose that with Corto, his own personality was already sort of 'wiped' and fractured because of his PTSD, and that it was 'rebuilt', as you said in an earlier post, via suggestion, which could have been done by the computer through subliminal suggestion and even concomitant sensory input. So I'm actually reasonably okay with the Corto idea.

Less so with the meat puppets, since the book suggests that that was done via microchip implant? I know a very similar thing is done in Dollhouse, but I have a bit of a problem re that in both the book and the TV show.

Also, if Molly had a chip in her head that they could manipulate, why didn't they use it against her when she was running from them? I reckon you could argue one had to be in close contact, blah blah, and I suppose she subsequently had the thing removed.


Puddin Pointy-Toes (jkingweb) | 201 comments Given that Molly was on the run precisely because her implant failed, I doubt using it to control her would have been an option at that point, even if it's possible under normal circumstances.

I'd love to hear your objections to the plausibility of the puppetry concept, Traveller. Having basically zero knowledge of how the human brain works, I can't provide an informed opinion as to what isn't theoretically possible. I'd imagine it's an offshoot of ROM construct technology, though, presumably some simulacrum rather than an actual personality.


Puddin Pointy-Toes (jkingweb) | 201 comments On the subject of Turing Cops specifically, I agree with Derek that some such organization is inevitable, assuming artificial intelligence is also. Government naturally wants to regulate any technology which moves beyond the research stage (or even before then).

To me they felt rushed into the story and never a threat, though: they seemed like just a bunch of toughs with not enough work to fill the days. If it took gross recklessness on Case's part for them to catch up with him, only for them to be sliced up with no back-up (while the very scenario it is their purpose to stop unfolds around them), I'm surprised an AI hadn't attained freedom long before then.


message 13: by Derek, Miéville fan-boi (new) - rated it 5 stars

Derek (derek_broughton) | 762 comments Are we going to finish this book before I forget it?


message 14: by Traveller (last edited Feb 18, 2014 12:40AM) (new) - added it

Traveller (moontravlr) | 1850 comments Sorry, just REALLY been busy in RL. As in deadlines.

Will quickly make the last thread.

..and here it is. /topic/show/...

I should be able to join in again by tomorrow, if all goes well. Sorry, but this pesky RL can sometimes interfere with one's GR time...


message 15: by Saski (new) - added it

Saski (sissah) | 267 comments I don't know, Derek. I am still stuck back in the first thread. I don't know if it's fighting with the tablet or the story just isn't grabbing me, but when I have time to read I have been reaching for something (anything?) else. Sigh!


message 16: by Traveller (new) - added it

Traveller (moontravlr) | 1850 comments Sorry to hear that, Ruth. I suppose you have to be into the tech to enjoy the story as a whole.

I'm going to try and make sure the books on our discussion list stay varied. 'We' is very political, so I suppose those who, like Cecily, don't enjoy that won't like it much, but then we're going to come back to Miéville again with King Rat. I haven't read much of Delany, whose work is up after that, but I suspect you'll like his style.

