Friday, May 01, 2015

Was Dr. Pulaski racist?


i) Recently I ran across a defense of Dr. Pulaski, the short-lived character in TNG. This post really isn't about Pulaski, but since that's the springboard, I'll make a few observations before moving on to the main point.
Apparently, a lot of Trekkies hated the character and waged a write-in campaign to get her axed. They succeeded.
I myself don't share their antipathy towards the character, though I think the characterization was somewhat heavy-handed. The way the screenwriters depicted her was rather forced; she was basically a type.
She was, however, a good foil for Picard. Tough, independent. A strong female character. 
She was replaced by Beverly Crusher, who was more likable in the sense of being more classically feminine in appearance and demeanor. However, Beverly was bland and essentially decorative. Like Counselor Troi, she was basically eye-candy.
ii) Now to the main point. Apparently, many Trekkies hated Pulaski because she belittled Data. What caught my eye is how they put it. They describe Pulaski as "racist." On the face of it, that's an incongruous way of characterizing her view of an android. It reflects the intellectual poverty of the general culture. A lack of conceptual resources. That's the only category they know to reach for.
iii) Evidently, they sense an analogy between racism and denying that an android is a real person or "sentient being." But is that an accurate definition of racism?
Take the Antebellum law against teaching black slaves how to read. That's racist, but is it predicated on the assumption that blacks weren't real people or sentient? 
To the contrary, it presumes that, given a chance, blacks could learn to read just as well as whites–which was threatening to the Antebellum caste system.
Or take the Final Solution. Did the Nazis deny that Jews were real people or sentient? If anything, the Nazi antipathy towards Jews reflects a resentful envy of Jewish intellectual and cultural accomplishments.
So the implicit or intuitive analogy is flawed. 
iv) But there's a deeper issue. There's the classic question of whether artificial intelligence is possible. But there is, if anything, the more interesting question of whether artificial intelligence is detectable. That is to say, even if a computer crossed that threshold, could we tell if it was truly intelligent? That's a legitimate and difficult philosophical question. Not at all "racist," or analogous to racism.
v) There is, of course, the famous Turing test. But that's controversial.
The question is whether you can tell, by its behavior, whether a computer is actually intelligent, or simply mimicking human intelligence. A clever simulation: clever not in itself, but cleverly staged.
Does the computer have consciousness? Does it have a first-person viewpoint?
Even if it did, an outside observer isn't privy to that experience. Since the outside observer isn't a computer, he doesn't know what it's like to be a computer. That's directly inaccessible. So he has no basis of comparison. He can't compare and contrast his experience with the experience (assuming it has any) of a computer. 
So the question is whether that's inferable from computer behavior. And that's tricky because computers are extensions of human programmers. They are designed to approximate human problem-solving skills. So is that really coming from the computer, or is that ultimately coming from the programmer? 
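To make that point concrete, here's a deliberately crude sketch (in Python, and purely hypothetical–the canned replies are my own invention, not anything from the show or from actual AI research) of a program whose conversational "behavior" is nothing but a lookup table supplied by its programmer. Judged by output alone, an observer can't tell scripted mimicry from genuine understanding; whatever cleverness appears traces back to whoever wrote the table.

```python
# A toy "conversation partner": every reply was scripted in advance.
# Nothing here is conscious; the apparent cleverness comes from
# the programmer who filled in the table.
CANNED_REPLIES = {
    "how are you?": "Fine, thanks. How are you?",
    "are you conscious?": "Of course I am. Aren't you?",
}

def reply(prompt: str) -> str:
    """Return a scripted answer, or a stock evasion for anything unscripted."""
    return CANNED_REPLIES.get(
        prompt.strip().lower(),
        "That's an interesting question. Tell me more.",
    )

if __name__ == "__main__":
    for question in ("How are you?", "Are you conscious?", "What is it like to be you?"):
        print(f"Q: {question}")
        print(f"A: {reply(question)}\n")
```

The point isn't that real AI systems are this simple; it's that behavioral output by itself underdetermines whether anything is going on "inside."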
vi) Some people might object that this is just a special case of the problem of other minds. But the analogy is equivocal at the crucial point of comparison: since I know, from direct experience, what it's like to be human, it's reasonable for me to interpret the behavior of other humans along the same lines. But it's precisely because a computer isn't human that the parallel breaks down. Whether there's anything it's like to be a computer isn't something we know in advance.
vii) Of course, the question of artificial intelligence is bound up with the nature of human intelligence. Since a physicalist regards human reason as the product of physical interactions (the brain), the presumption is that, at least in principle, it ought to be possible for a sufficiently sophisticated machine to duplicate (or surpass) human intelligence. 
If, however, the mind/body relation involves the mind (or soul) using the brain, then the AI research program is doomed to fail, although it may generate useful spinoff applications. 
viii) The character of Data was plausibly intelligent because that's fiction. The character was written by human screenwriters. They made him a sympathetic character. And he was played by a human actor. But that's not a real test of AI. Some Trekkies are so invested in a fictional character that they forget this isn't real–or even realistic. They've been manipulated by the actor and screenwriter. 

6 comments:

  1. I've seen people claim that any sufficiently advanced AI should have "rights" and be treated as human, but I think that is fallacious for the following reason: whatever traits, behavior, etc. the AI exhibits, we can give a complete account of it without invoking consciousness. Therefore it violates parsimony to posit that it is in fact conscious.
    The appearance is due to the skill of the engineer, and any theory that goes beyond that strikes me almost as a kind of mysticism.

    1. whatever traits, behavior, etc. the AI exhibits, we can give a complete account of it without invoking consciousness.

      To be fair, dull boy that I am, people have told me they can give a complete account of me without invoking consciousness. ;-)

  2. I thought Pulaski was okay. Beverly Crusher was a strong female type in her own right. I think the principles Dr. Pulaski was willing to stand up for were better in general than Dr. Crusher's. If anything, that demonstrates a strength in the writing staff. I think Picard needed Crusher more than he needed Pulaski (except for that time when he needed a heart transplant). The relational dynamic was simply more fruitful.

    That said, leaving the spiritual aspect aside, the human brain is far more sophisticated than the processing of binary logical calculus. The function of the interplay between neurotransmitters and electrical impulses in the various parts of the brain (like the hippocampus, thalamus, amygdala, etc.) cannot be simply programmed into a computer.

    Beyond this, we have some problems:
    1. A computer can be programmed to imitate living responses to stimuli, but without copying the Creator's design it can never develop the kind of meaningfully dynamic relationship that is inherent in the human condition.
    2. These functions may be artificially reproducible, but they are nothing without a platform. That is to say, we can start with machine language and program a set of higher-order rules that define an environment within which to attempt to reproduce the function. But that's a mere substitute for the platform that supports our minds, a platform that isn't based on physical existence.
    3. This spiritual platform is absolutely necessary for moral conviction. You can program a set of morally based principles into the platform, but you have to recognize that if you don't do that, there will be nothing to indicate to an artificial intelligence what is right and what is wrong. So why would we think to put those into an artificial intelligence if we think that there is some "natural morality" that we all subscribe to? Interestingly, Data was based on Asimov's Robot stories. Even Asimov, famous atheist that he was, recognized that you had to program the Three Laws of Robotics into the positronic brains of his robots to give them enough of a sense of morality to function effectively.

    1. I thought Ro Laren was probably the strongest female character in TNG. The effectiveness of the character depends in large part on the actress. In this case, they chose the right actress for the part. She was also the first choice to butt heads with Sisko in DS9, but the actress turned it down, so the role went to Nana Visitor–whom I never cared for.

      In a predictable example of political correctness gone haywire, the original security chief in TNG was a female character. But Denise Crosby was wholly unconvincing in the tough-girl role.

    2. Agreed on Denise Crosby. She was even weak as a Romulan.

      I liked Ro. I thought her toughness was more a cover for her anger. As her character matured, she softened a bit in a respectable kind of way.

      My favorite female character was Guinan, for no other reason than she was an enigmatic and tactful old soul. This is the way I imagine Whoopi Goldberg sees herself. I wouldn't agree with her, but she ended up creating a likable character who was the only person who could tell Picard what to do without much of a fight. There's a certain strength there that just blows all kinds of paradigms, and I'm glad the writers didn't abuse that. It could easily have turned into the Whoopi Goldberg show.
