I do think there will be some form of a singularity, but I also think that exactly what it will look like cannot be accurately predicted, because there are too many possible variations. Movies often portray a post-singularity world as a dystopia for humans, but movies are made to be dramatic and entertaining, so they are biased toward negative predictions. If the singularity erases the distinction between humans and machines, then any rivalry between the two becomes obsolete.
In the Wikipedia article, Jaron Lanier makes a good point that embracing the singularity would be "bad politics". Just because robots could become capable of performing all human jobs does not mean things will move in that direction. If that would ultimately lead to mass unemployment and economic collapse, which politicians would push that agenda? There is a point at which replacing human workers with artificial intelligence would lead to diminishing returns for corporations. There needs to be a lower class or middle class to support the economy. If nobody has any money, who will pay for all the work being done by robots? On the other hand, if robots began to earn wages, a large new market would be created.
As far as when the singularity would occur, I tend to believe the expert predictions, which according to the article have a mean of 2040 but a range of 5 to 100 years. I would not be surprised if it occurred in my lifetime, but advancements often run into unforeseen hurdles along the way, so it also wouldn't surprise me if it doesn't occur during my lifetime.
As the article suggests, what the future will look like post-singularity may be incomprehensible to current human intelligence. A much greater intelligence may have vastly different goals and desires than humans. As some science fiction suggests, a superintelligence might want to wipe out the human race if it sees us as destructive to the planet. At the same time, a superintelligence that can re-engineer itself could quickly become capable of surviving in environments far more hostile than humans can tolerate, which means our concerns for the planet may be of no concern to it. Its energy sources would not come from food, and hazards to us, like radiation, may have no effect on its body.
Bodies do not even necessarily have to exist. All intelligence could be converted to a digital format. We may choose to put ourselves in the matrix. If that happens, it raises questions about how we would maintain contact with the world outside our digital realm and what reality really is. We may already be in a digital world, so putting ourselves into another one just starts to get Inception-y.