Superintelligence - Highly Superhuman
Definition: An AI system that far surpasses human capabilities across all domains, outperforming collective human expertise. This outperformance could be in generality, quality, speed, and/or other measures.
Examples
[No current examples - theoretical]
Timing
No consensus; estimates range from 2 to 25 years
Implications of Profit-Focused AI
Not safe without guardrails. We do not yet know how to control such a system. Significant risks could arise in areas including the military, health, privacy, cybersecurity, and finance. Financial incentives would favor replacing workers at scale, making labor disruption likely.
Implications of People-Centered AI
Safer: AI would be required to be developed so that people can control it. It would augment workers and help create jobs for all, while remaining very profitable for developers and companies.
