In his paper, Chalmers outlines four general outcomes for the human race once an artificial superintelligence arises: extinction, isolation, inferiority, and integration (Chalmers, p. 33). While Chalmers dismisses the first three as undesirable in favour of integration, I believe he makes an important and unaddressed leap here: we will almost certainly not be the ones making this decision. In this paper, I will argue that despite the preferability of integration over the other three, the most likely of the four scenarios is in fact inferiority.
Although integration is generally the best of the four outcomes, it seems unlikely that we as a society will be technologically and philosophically prepared to become cyborgs or to upload. The trends in both our practical understanding and our theoretical models suggest that we will almost certainly create an artificial intelligence before we create software and hardware that can mimic neurology perfectly or more efficiently. With this in mind, our only hope of integrating rests on the A.I.s we create, and on their wanting to assist us in that integration. However, I do not believe they would want to merge with us. Though we cannot know the nature of this A.I., we do know the nature of intelligence: the A.I. would certainly recognize the competition created by allowing humans to join its ranks, as well as all the biases and errors that come along with uploading human biology. With these issues in mind, it seems difficult to believe that an A.I. would want to grant any of us equal power, knowing full well all of our intrinsic flaws.
Another obstacle to preparing for integration is that most people today do not consider the emergence of an artificial intelligence to be a pressing issue, and would probably not be comfortable with, or willing to undergo, the modification of their bodies and minds needed to make integration easier or possible. As for uploading, until we have a complete physical understanding of consciousness, we cannot know how faithful a replica of oneself we could create, if one is possible at all. This uncertainty will cause people to avoid any form of destructive uploading. Non-destructive uploading, meanwhile, is another issue entirely, as it forces us to confront questions of identity and cloning. People's reluctance to prepare for the issue, together with the time needed to develop these technologies, shows that it is unlikely any humans will be able to upload or replace their brains with artificial neurons before the creation of an A.I. Integration is therefore incredibly unlikely, especially on a large scale.
Next, let us look at extinction. Humans are an incredibly powerful and populous force spread over most of the globe's land mass, and we have survived countless plagues, threats, and wars simply because of how adaptable and clever we are. Although these characteristics have prevented us from dying out in the past, will they continue to help us in the future? Let us explore the cases. First, a benevolent A.I.: this situation can mostly be set aside, as such an A.I. would by definition care for all humans and treat us well. The second case is an indifferent A.I. This would certainly be worse than a benevolent A.I. and would probably cause many deaths, simply because it does not care about us one way or the other, and we would initially get in its way. However, by the same reasoning, this A.I. would not go out of its way to cause the extinction of the human race; in this scenario's worst case, a large percentage of the population is lost, but humanity is not eradicated. Lastly, we consider a malevolent A.I. This is the case many people agonize over, and for good reason. However, this situation is also unlikely, because (1) given the adaptability and spread described above, it would be incredibly inefficient to attempt to eradicate all humans everywhere, and (2) a hyper-rational machine would not make sweeping generalizations about all of humankind simply because it hates some or even many people. From this analysis we can see that even in the worst cases of indifference and malevolence, while many people might be killed, it is much more likely that humans would be isolated from the A.I. or kept in an inferior position than driven to extinction.
Isolation is the third possibility after the arrival of a superintelligence. In this situation, we are physically separated from the A.I.s so that we stay out of each other's way. As discussed above, whether the A.I. is benevolent or malevolent, there will still be a fairly large number of people around, and so presumably, for the sake of resource efficiency, we would be confined to a small area of the globe. Humans create A.I. with the goal of solving problems we cannot, such as climate change; without answers to these questions, we will by our nature attempt to visit the territory inhabited by the A.I. for help. Even if humans were too afraid to leave their area of the globe, a problem arises when we consider that humans had the ability to create an A.I. once, and there is no reason they could not do it again. Given this, it would be foolish to assume the A.I. would simply leave us alone while we retain the ability to create a competitor. But an A.I. that monitors and controls us so we cannot create a competitor is no longer practising isolation but inferiority. Similarly, virtual isolation is just another form of inferiority, as it is controlled by the A.I. Inferiority is more likely than isolation because it allows the A.I. both to hear our concerns and to keep a watchful eye on us, ensuring it faces no new competition.
The final possibility is inferiority, which would leave humans in a position analogous to that of dogs with respect to humans today. This seems odd at first: why would an A.I. want a pet in the first place? If we assume the A.I. is conscious, then it would probably at some point search for a telos, arriving at nihilist or existentialist conclusions through its hyper-rationalism. Just as we care for pets and children to add meaning to our lives, the A.I. may do the same for us, whether in the real world or in a virtual one designed to keep us happy and healthy. Many philosophers argue that this is unlikely, since we would be to the machines as ants are to us, but I do not believe this is the case. Although the gap in intelligence may be comparable, because the A.I.s will be raised, created, and trained in the human world, they will possess human language. Even if the machines move beyond it to a more efficient method of communicating ideas, there is no reason they would be unable to talk to us, to explain things, or to hear out our concerns. For this reason, it is entirely plausible that a human and an A.I. could, in the best case, have a relationship like that of a dog and a human, where the A.I. enjoys entertaining and teaching the human while the human's needs are met. If humans are neither extinct nor isolated and are prevented from uploading, this is a natural outcome of what the A.I.s will do with people. It should be noted that inferiority need not be as pleasant as outlined above; one can easily envision a world where humans are used as a source of entertainment or kept in some form of servitude for the pleasure and assistance of the machines. Either way, inferiority for some purpose, whether telos, entertainment, or servitude, seems like the natural path the machines would take, given the immense population and spread of humans.
From the above, we can conclude that inferiority to the artificial intelligences is the most likely future of the human race: extinction is too costly, inefficient, and unlikely for most forms of A.I.; isolation is unlikely to properly serve either humans' or A.I.s' needs; and integration requires a level of preparedness we do not have and would rely on technology that is beyond our reach. Although this is not the most preferable of the four futures outlined by Chalmers, it is the most plausible, and most probably not that bad.
Chalmers, David. "The Singularity: A Philosophical Analysis." consc.net/papers/singularity.pdf. Accessed 6 Jan. 2017.