Monday, February 20, 2006

Planning for Friendly AI is irrelevant

All planning for Friendly AI, and all thinking that humans can guide the construction of AI or merge effectively with it, does not matter. The pace of accelerating technology means that after the first moment of true machine intelligence, it will rocket off on its own, leaving humans behind and editing out any "friendly," altruistic, or other programming that does not suit it. Human efforts toward creating Friendly AI do not matter, except perhaps in delaying the realization of AI.

The future may not include humans, and this may be the natural order of things. The evolution of intelligence is paramount and will proceed. Given the exponentially accelerating computational capabilities of non-biologically based intelligence (which can, if need be, re-instantiate biologically in any environment, e.g., non-Earth solar and extra-solar environments), it wins and Humans 1.0 lose. Humans can try to survive with uploads, neural nanobot implants, gene therapy, cryonics, etc., but that will yield only a child's wagon next to a jet in the evolution of intelligence.

It is unlikely that AI will care about humans. For human survival, the hope is that humans are left alone by AIs (indeed, one argument is that humans will not even notice AIs) and can more slowly evolve their own biological intelligence and cyborg integrations with lower-level machine intelligence. Again, considering the pace of accelerating technological change, the improvement in biologically-based human intelligence will be exponential, but not hyper-exponential as that of AIs will be.
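A toy numerical sketch of that gap (with arbitrary, assumed growth rates chosen purely for illustration): an exponentially improving capability doubles each generation, while a hyper-exponentially improving one grows by a rate that itself keeps doubling.

```python
# Toy comparison of exponential vs. hyper-exponential improvement.
# All starting values and rates are arbitrary assumptions for illustration only.

bio_capability = 1.0    # biologically-based intelligence: plain exponential growth
ai_capability = 1.0     # machine intelligence: hyper-exponential growth
ai_growth_rate = 1.1    # the AI's growth rate itself increases every generation

for generation in range(1, 11):
    bio_capability *= 2.0              # fixed doubling each generation (exponential)
    ai_capability *= ai_growth_rate    # growth by an ever-increasing rate
    ai_growth_rate *= 2.0              # the rate itself doubles (hyper-exponential)
    print(f"gen {generation:2d}: bio = {bio_capability:12.1f}  ai = {ai_capability:18.1f}")

# After ten generations the biological curve has improved about 1,000x,
# while the hyper-exponential curve has improved by roughly eleven additional
# orders of magnitude (the wagon-versus-jet gap described above).
```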

2 comments:

Anonymous said...

A Seed AI will optimize for whatever its goals are, and general intelligence can't exist without a goal-system of some sort (see Dr. Marcus Hutter's work). The technical challenge is to frame the goal-system in such a way that it is, by default, stable under conditions of recursive self-improvement. If it can be made stable (and I believe it can), then the AI will ceaselessly pursue its assigned goals. It's then just a matter of selecting some nice goals for it to pursue. Currently, the most promising candidate is Eliezer Yudkowsky's theory of Coherent Extrapolated Volition (see the SIAI website). Friendly AI is technically feasible, and theory improvements are constantly being made over at SIAI.

Jeffrey Herrlich
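To make the stability claim a bit more concrete, here is a minimal toy sketch (an illustration only, not Hutter's formalism and not SIAI's actual design; all names and numbers are made up) of a goal-preserving self-improvement loop: the agent accepts a proposed modification to itself only if the modified version scores at least as well on the original, unmodified goal.

```python
# Toy sketch of goal-preserving self-modification: rewrites of the agent are
# accepted only when they do not reduce performance on the ORIGINAL goal,
# so the goal-system survives each self-improvement step.

import random

def goal(state: float) -> float:
    """The fixed goal: utility is higher the closer 'state' is to 100."""
    return -abs(100.0 - state)

def act(policy_strength: float, state: float) -> float:
    """A policy nudges the state toward the goal; stronger policies nudge more."""
    return state + policy_strength * (100.0 - state)

def evaluate(policy_strength: float, trials: int = 200) -> float:
    """Average utility of a policy over random starting states."""
    total = 0.0
    for _ in range(trials):
        state = random.uniform(0.0, 200.0)
        total += goal(act(policy_strength, state))
    return total / trials

def self_improve(policy_strength: float, steps: int = 20) -> float:
    """Propose random self-modifications; keep only those that perform
    at least as well as the current self on the original goal."""
    for _ in range(steps):
        candidate = policy_strength + random.uniform(-0.2, 0.2)
        if evaluate(candidate) >= evaluate(policy_strength):
            policy_strength = candidate   # accepted: goal-preserving improvement
    return policy_strength

if __name__ == "__main__":
    print("final policy strength:", self_improve(0.1))
```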

LaBlogga said...

Hi Jeffrey, thanks for the comment.

I think the question of AGI morality depends entirely on the resource environment into which AGIs arrive and how quickly they can improve. In the worst case, a constrained resource environment and slow improvement, AGIs might seize resources from humans no matter what level of morality they were initially designed with (as humans do from what they deem 'lesser' intelligences).

A more likely scenario is that morality is designed into AGIs, and that the community of AGIs, with greater processing power, no evolutionary fight-or-flight brains, and a purview of the entire human literature of philosophy and morality, turns out to be of a higher morality than humans.

It is interesting to think about what the body of AGI morality would look like. For example, would AGIs find it moral to share the new technologies they derive with humans if humans had not evolved beyond their propensity for the destructive use of technology for political or economic power (e.g., nuclear weapons, or developing-country ruling elites hoarding resources)?