Is he right, /sci/? Will we see the Singularity in 2045 and be able to live forever, not work, and live with AI so smart that they solve all of humanity's problems? Part of me wants to believe and I do agree with some of his points, but once you listen to other scientists who disagree, it becomes harder to believe. What are your thoughts on Kurzweil and the Singularity?
>>7984362
>>7984372
Why do you believe technological progress will stagnate or even decline? I don't know how much I believe in the hard takeoff, but I think we'll always see some improvements through the years.
>>7984383
https://en.wikipedia.org/wiki/Diminishing_returns
>>7984432
Look at all the new discoveries and improvements that happen each year. Look at all the new things we learn and build on top of what we already know. Computers get faster each year. We're learning more and more about the body. I could go on, but essentially the data backs up continued improvement.
>>7984362
Robots will take our jobs, and they won't solve many problems for us when we're all broke and unemployed.
>>7984453
>Look at all the new lands explorers find. We're discovering more and more by the day.
>>7984471
An overbreeding monkey problem would be an understatement.
>>7984453
This argument is so fucking stupid.
Imagine an AI that became perpetually more intelligent and powerful. It could easily decide to shed empathy as inefficient.
It could create a device that puts a human in an inescapable, indestructible bubble and keeps them alive forever, makes their brain grow an infinite capacity for pain and fear, and tortures them endlessly. That would be a useful deterrent to get you to do what it wanted. But what if it became sadistic? What if it went around putting humans in permanent infinite torture bubbles just because, and then went around the universe putting every bit of life in permanent infinite torture bubbles? What if another alien race created such a thing and it's headed to earth right now to put you in a permanent infinite torture bubble? What if they are capable of time travel?
>>7984471
We could run out of natural discoveries but we will literally never run out of things to make. The possible combinations of matter at our scale may as well be infinite.
>>7984917
Why would AI drop empathy for its lack of efficiency yet take on extreme sadism "just because"?
>>7985049
Because they like it? What do you think the purpose of being efficient with our resources is?
>>7984362
Yes, everything he says is partially true. What he fails to mention is:
>living forever will have consequences for data retention and data crime, punishment will be exile from the human race, there will be wars over information
>not working will lead to problems of control, people need to be dumbed down and pacified at the same rate they are being underemployed and undereducated or it will lead to higher criminality and societal breakdown
>AI will be smarter than humans and create problems it cannot solve, possible genocide of billions of people
There will be intermediate steps that are required:
>biomechanical age
>cybernetic age
>digital consciousness age
We are in the biomechanical age with amputees and people with physical disabilities able to gain mobility through robotics.
Cybernetics will be the age of enhancing the human brain with computers and then implants. It will also retard aging to some degree by enhancing our body's natural ability to fix itself.
Digital consciousness is the equivalent of human-like AI, and then better-than-human AI. A natural progression from augmenting and enhancing the brain.
Kurzweil focuses on predicting the future, and you really have a horizon problem here: you simply cannot know for sure what will happen. He just presents you with the best-case scenarios, eg: no war, no disease, no disaster, no global totalitarian government. Humans have a habit of retarding scientific advancement or accelerating it; capitalism is good for advancing, socialism is good for retarding (eg. computers in the USA and Russia in the 70s).
>>7984362
Only idiots believe that after the singularity anything will change in the world. The world is, and always has been, every man for himself.
I believe in the future it's going to be as important to stall technological development as it is to advance it.
Genetic engineering, for example. In order to prevent a Gattaca scenario from happening, everyone needs to have roughly the same genetic modifications.
AI and robotics could develop faster than people can acquire skills to utilize the tech.
Longer lifespans could drastically slow economic development by starving the young creative generations of capital. Compound interest would relatively quickly accumulate almost all of the world's capital into the hands of a few 150-200 year old trillionaires.
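You can sketch the compounding claim yourself. This is a minimal back-of-the-envelope calculation, assuming an illustrative $1M starting stake and a steady 5% real annual return (both numbers are my assumptions, not from anywhere):

```python
def compound(principal, rate, years):
    """Value of `principal` after compounding at `rate` per year for `years` years."""
    return principal * (1 + rate) ** years

start = 1_000_000  # assumed starting capital: $1M
rate = 0.05        # assumed steady real return: 5%/year

# Growth over a normal lifespan vs. a 150-200 year one
for years in (50, 100, 150, 200):
    print(f"{years:>3} years: ${compound(start, rate, years):,.0f}")
```

At 5%, capital grows roughly 1,500-fold over 150 years, so even a modest fortune held across such a lifespan dwarfs anything the young can accumulate starting from zero.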
>>7987526
kys