i) Recently I ran across a defense of Dr. Pulaski, the short-lived character in TNG. This post really isn't about Pulaski, but since that's the springboard, I'll make a few observations before moving on to the main point.
Apparently, a lot of Trekkies hated the character and waged a write-in campaign to get her axed. They succeeded.
I myself don't share their antipathy toward the character, though the characterization was somewhat heavy-handed. The way the screenwriters depicted her felt forced; she was basically a type.
She was, however, a good foil for Picard. Tough, independent. A strong female character.
She was replaced by Beverly Crusher, who was more likable in the sense of being more classically feminine in appearance and demeanor. However, Beverly was bland and essentially decorative. Like Counselor Troi, she was basically eye-candy.
ii) Now to the main point. Apparently, many Trekkies hated Pulaski because she belittled Data. What caught my eye is how they put it. They describe Pulaski as "racist." On the face of it, that's an incongruous way of characterizing her view of an android. It reflects the intellectual poverty of the general culture. A lack of conceptual resources. That's the only category they know to reach for.
iii) Evidently, they sense an analogy between racism and denying that an android is a real person or "sentient being." But is that an accurate definition of racism?
Take the Antebellum law against teaching black slaves how to read. That's racist, but is it predicated on the assumption that blacks weren't real people or sentient?
To the contrary, it presumes that given a chance, blacks could learn to read just as well as whites–which was threatening to the Antebellum caste-system.
Or take the Final Solution. Did the Nazis deny that Jews were real people or sentient? If anything, the Nazi antipathy towards Jews reflects a resentful envy for Jewish intellectual and cultural accomplishments.
So the implicit or intuitive analogy is flawed.
iv) But there's a deeper issue. There's the classic question of whether artificial intelligence is possible. But there is, if anything, the more interesting question of whether artificial intelligence is detectable. That is to say, even if a computer crossed that threshold, could we tell that it was truly intelligent? That's a legitimate and difficult philosophical question. Not at all "racist," or analogous to racism.
v) There is, of course, the famous Turing test. But that's controversial.
The question is whether you can tell, by its behavior, whether a computer is actually intelligent or simply mimicking human intelligence. A clever simulation: clever not in itself, but cleverly staged.
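The "clever simulation" worry can be made concrete with a toy example. The following is a minimal, hypothetical ELIZA-style responder (the rules and replies are invented for illustration): it produces passable conversational behavior purely by keyword matching, with nothing that could be called understanding behind it.

```python
# Minimal ELIZA-style responder: canned replies triggered by keywords.
# It yields plausible conversational behavior with no comprehension at all,
# illustrating how behavior alone can be a staged simulation.

RULES = [
    ("mother", "Tell me more about your family."),
    ("feel", "Why do you feel that way?"),
    ("you", "We were discussing you, not me."),
]

def respond(utterance: str) -> str:
    lowered = utterance.lower()
    for keyword, reply in RULES:  # first matching keyword wins
        if keyword in lowered:
            return reply
    return "Please go on."  # default deflection when nothing matches

print(respond("I feel anxious today."))  # -> "Why do you feel that way?"
print(respond("My mother called me."))   # -> "Tell me more about your family."
```

The program "converses" without any first-person viewpoint, which is exactly why behavioral tests alone are a shaky basis for attributing intelligence.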
Does the computer have consciousness? Does it have a first-person viewpoint?
Even if it did, an outside observer isn't privy to that experience. Since the outside observer isn't a computer, he doesn't know what it's like to be a computer. That's directly inaccessible. So he has no basis of comparison. He can't compare and contrast his experience with the experience (assuming it has any) of a computer.
So the question is whether that's inferable from computer behavior. And that's tricky because computers are extensions of human programmers. They are designed to approximate human problem-solving skills. So is that really coming from the computer, or is that ultimately coming from the programmer?
vi) Some people might object that this is just a special case of the problem of other minds. But the analogy is equivocal at the crucial point of comparison: since I know, from direct experience, what it's like to be human, it's reasonable for me to interpret the behavior of other humans along the same lines. But it's precisely because a computer isn't human that the parallel breaks down. That's not something we know in advance.
vii) Of course, the question of artificial intelligence is bound up with the nature of human intelligence. Since a physicalist regards human reason as the product of physical interactions (the brain), the presumption is that, at least in principle, it ought to be possible for a sufficiently sophisticated machine to duplicate (or surpass) human intelligence.
If, however, the mind/body relation involves the mind (or soul) using the brain, then the AI research program is doomed to fail, although it may generate useful spinoff applications.
viii) The character of Data was plausibly intelligent because that's fiction. The character was written by human screenwriters. They made him a sympathetic character. And he was played by a human actor. But that's not a real test of AI. Some Trekkies are so invested in a fictional character that they forget this isn't real–or even realistic. They've been manipulated by the actor and screenwriter.