As people gain knowledge about AGI, there appear to be several levels one typically goes through:
- Ignorance – Unaware of the possibility of computer intelligence
- Aware – Aware of the field of AGI
- Disbelief – Conviction that computers cannot do what humans can do (intelligence, consciousness, emotion, creativity, etc.).
- Belief – Conviction that computers will one day do all of the above and more.
- Concerned – Seriously concerned about the existential risks to humanity.
- Transhumanist – Those who believe the end of humanity is a good thing.
This is not to say most people go through all levels; certainly plenty of people will remain at level 1, 2, or 3. Personally, having thought about and followed AGI for decades, I have essentially no doubt that computers will one day do all of the above. And having followed the risk issues and run my own thought experiments, I have moved from level 4 to level 5. Noted people at level 5 include Nick Bostrom, Stephen Hawking, Elon Musk, and many others. Those at level 6 believe that the end of humanity is a natural progression of evolution and welcome it. In my view, it may be an unavoidable evolution, but certainly not one we want to accelerate. Some at level 6 believe that humans will merge with the AGIs in a desirable way. I feel that may occur initially, but in the longer run there is too much that can go wrong.