Henry/Robert/Arminibot 3000 Serial Number 777666 is questioning Reformed doctrine once more. While it is obvious to anyone who has studied the issues that H/R/A has not, I thought it might be a beneficial exercise for those concerned if we took the robot motif up once again and pondered a thought experiment.
Is it possible to give a robot free will? To make, as it were, an android?
The question is important because it helps us to define how exactly a choice is made. Currently, computers can be programmed to make “choices” by assigning a weight-value to different options. From there, a risk/reward calculation can be made, and the computer can pick whichever option offers the greatest potential reward for the least risk. This is ultimately how computers play chess. They analyze a multitude of possible moves and rank them according to which is most likely to lead to a win.
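That kind of weighted “choice” can be sketched in a few lines. The move names and the weights below are purely illustrative, not taken from any real engine:

```python
# A toy "chess engine" choice: every option gets a programmed-in score,
# and the machine simply returns the option whose score is highest.
# The moves and weights here are invented for illustration.

def pick_move(scored_moves):
    """Return the move whose programmed weight is highest."""
    return max(scored_moves, key=scored_moves.get)

opening_weights = {
    "pawn to e5": 0.9,   # standard, strong opening
    "pawn to a5": 0.1,   # horrible opening
}

print(pick_move(opening_weights))  # always "pawn to e5"
```

Notice that every number in that table comes from the programmer; the machine contributes nothing but the comparison.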
But obviously this “choice” isn’t a free choice. It relies upon a set of initial factors, such as the hardware used to create the computer. (If a chipset is flawed, the calculations will be flawed and the computer will make erroneous choices.) Further, the software has to be written so that the computer is able to assign a weight to various chess positions. A computer is not “born” knowing that pawn to a5 is a horrible opening move. That knowledge has to be programmed in, along with the values of the pieces and squares on the board. Further, the specific levels of risk that are acceptable must also be programmed in. These are not laws of nature. They are dependent upon the programmer.
Naturally, one can then test the computer by simulating many games until the best moves are found. Testing against human opponents can hone the computer’s skills still further. Eventually, you have Deep Blue beating Garry Kasparov.
But this brings up an interesting problem for the libertarian, especially as defined by H/R/A. H/R/A believes that a choice cannot be free unless it is possible to choose a different option. But let us present a computer with two options for an opening move. Either the computer can pick pawn to e5, or it can pick pawn to a5. Given the programming in place, it is impossible for the computer to actually pick pawn to a5 because of how horrible that opening move is compared to the standard pawn to e5 approach.
Now ask Kasparov to make the same decision. Given Kasparov’s knowledge of chess, it is equally impossible for Kasparov to make the move pawn to a5 instead of pawn to e5. Yet we would not say that Kasparov is acting against his free will were he to always play pawn to e5 instead of pawn to a5. We would say he is making the smart move. He would be an idiot to make the other choice.
H/R/A might respond by saying that Kasparov could choose to behave stupidly, if that’s what Kasparov wanted to do, but Kasparov doesn’t want to act stupidly, so he will limit his selection to the smart move each time. This, however, changes H/R/A’s position! What first defined free will as the ability to do otherwise has become simply doing that which one wants to do.
But this secondary definition of free will is actually the very definition that Calvinists hold to. People always do that which they want to do, and the unregenerate always wants to disobey God. Under this definition of free will, Calvinists fully support free will. As such, moving to this explanation doesn’t help H/R/A at all. In fact, it forces an immediate checkmate against his viewpoint.
Before abandoning this illustration completely, let us take up another thought experiment. I own Chessmaster 10, and the lowest AI opponent you can face is a chimpanzee that makes completely random moves. There is no attempt to weigh which move is better. The computer compiles a list of all possible legal moves and randomly chooses one of them.
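That random mover is even simpler to sketch than the weighted one. The move list here is a hypothetical stand-in for whatever legal-move generator the real program uses:

```python
import random

def random_mover(legal_moves, rng=random):
    """Pick uniformly from the legal moves, with no weighing at all."""
    return rng.choice(legal_moves)

# Hypothetical list standing in for a real legal-move generator.
legal = ["pawn to e5", "pawn to a5", "knight to f6", "knight to a6"]
print(random_mover(legal))
```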
Is this random choice any freer than the choice a computer makes by weighing a list? The answer to that question is a resounding no. Once again, the computer chooses based on hardware structures and software limitations. Computers are not really random: they use pseudorandom number generators whose output is strictly determined by a seed value. They mimic random events, but in reality they are not random at all. (Each time you reuse the same seed, you get the same sequence of results. To avoid this as much as possible, most programs seed the generator from the date and time. Since it is practically impossible for a person to start a program at the exact same millisecond twice, even by resetting the clock, the output always appears random to us.) So, even engaging in random “choices” is not really random for a computer. Suppose a computer randomly picks the move pawn to a5. That pick is determined by the hardware working together with the software such that, at the exact moment the program is run, it will always pick pawn to a5. There is never a time when it will not pick pawn to a5 under those circumstances.
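The point about seeds is easy to demonstrate: reseed a pseudorandom generator with the same value and it replays exactly the same “random” game. The move list is again illustrative:

```python
import random
import time

def game_of_moves(seed, n=5):
    """Generate n 'random' moves from a generator seeded with `seed`."""
    rng = random.Random(seed)
    legal = ["pawn to e5", "pawn to a5", "knight to f6", "knight to a6"]
    return [rng.choice(legal) for _ in range(n)]

# Same seed, same sequence: the "randomness" is fully determined.
assert game_of_moves(42) == game_of_moves(42)

# Seeding from the clock is what makes separate runs look different.
clock_seeded = game_of_moves(time.time())
```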
But there is a way to get truly random data (assuming one doesn’t have access to the omniscient mind of God). You could hook the computer up to a piece of radioactive matter. Since the decay of individual particles occurs at a completely random, impossible-to-predict rate, you could build a computer that uses those decay events to make decisions about chess moves.
But is this any freer? Again, the answer is a resounding no! After all, there is nothing in the radioactive decay itself that says, “If this particle decays now, choose pawn to a5.” The ability to translate a truly random event into a choice still depends on software that defines what each selection will be. And we haven’t even addressed the elephant in the room: the fact that these random choices are still determined by the radioactive decay!
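Even with a true entropy source, the translation from raw randomness to a move is pure software. In this sketch the entropy source is simulated (`os.urandom` stands in for a decay detector, which I obviously can’t include here), and the mapping rule is the crucial, designer-supplied part:

```python
import os

# Simulated entropy source. A real one might be a radioactive-decay
# detector, but the raw bytes have no meaning until software maps them.
def random_byte():
    return os.urandom(1)[0]  # an integer 0..255

legal = ["pawn to e5", "pawn to a5", "knight to f6", "knight to a6"]

def entropy_move(legal_moves):
    """Translate a raw random byte into a chess move.
    The translation rule itself is fixed by the programmer."""
    return legal_moves[random_byte() % len(legal_moves)]

print(entropy_move(legal))
```

Swap in a genuinely random source and nothing changes philosophically: the designer still wrote the line that decides which byte means which move.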
Suppose, however, we were able to surmount those obstacles and create a computer that could make choices that were not based on its hardware or software. It could play a truly random game of chess.
Does anyone think the computer would win the chess game? Of course not. Does anyone think that a computer making choices without reference to its designer’s hardware specs or software instructions would make good choices? Of course not.
Why, then, do Arminians insist that people must be able to make choices without regard to our hardware (brains) or our software (our nature)? How is it possible for our choices to be good ones if we are able to ignore the hardware specs and the software limitations? How is it possible for us to make any decisions at all outside of the governing physical and spiritual specs that we have?