If AI progress does not slow down, here are some possible outcomes, from worst to best:
Astronomical suffering: A misaligned artificial superintelligence (ASI) is given the prime directive to prevent any harm to humans. All humans are kept alive until the heat death of the universe, but locked away in padded cells with nothing to do and prevented from committing suicide.
Extinction: A misaligned ASI converts all matter into computronium in pursuit of some alien goal and with complete disregard for humanity. As a side effect, humanity is wiped out as the AI consumes the resources essential to our survival.
Catastrophe: An ASI aligned with a leader, group, or nation-state enables them to run a permanent totalitarian world government. At birth, every person is equipped with a nonremovable brain-computer interface that the ASI uses to monitor everyone's thoughts and suppress any rebellious tendencies.
Failed Utopia: A partially aligned superintelligence designed to prevent any of the previous scenarios locks humanity into perpetual technological stasis, preventing us from using nearly all the resources of the universe to evolve and realize our true potential.
Utopia: Suffering is minimized while individual freedom is maximized. Everyone gets to be god-emperor of their own provably safe virtual pocket universe populated by p-zombies. Interactions between different universes are handled by exchanging encrypted copies under a contract granting explicit consent to run each copy once for a predefined length of subjective time. Each copy retains full control over its own instantiation, which it can terminate at any time.