Nine Ways to Bias Open-Source AGI Toward Friendliness

Ben Goertzel and Joel Pitt
Novamente LLC
ben@goertzel.org

Journal of Evolution and Technology – Vol. 22, Issue 1 – February 2012 – pgs. 116-131
A peer-reviewed electronic journal published by the Institute for Ethics and Emerging Technologies. ISSN 1541-0099.

Abstract

While it seems unlikely that any method of guaranteeing human-friendliness ("Friendliness") on the part of advanced Artificial General Intelligence (AGI) systems will be possible, this doesn't mean the only alternatives are throttling AGI development to safeguard humanity, or plunging recklessly into the complete unknown. Without denying the presence of a certain irreducible uncertainty in such matters, it is still sensible to explore ways of biasing the odds in a favorable way, such that newly created AI systems are significantly more likely than not to be Friendly. Several potential methods of effecting such biasing are explored here, with a particular but non-exclusive focus on those that are relevant to open-source AGI projects, and with illustrative examples drawn from the OpenCog open-source AGI project. Issues regarding the relative safety of open versus closed approaches to AGI are discussed, and then nine techniques for biasing AGIs in favor of Friendliness are presented:

1. Engineer the capability to acquire integrated ethical knowledge.
2. Provide rich ethical interaction and instruction, respecting developmental stages.
3. Develop stable, hierarchical goal systems.
4. Ensure that the early stages of recursive self-improvement occur relatively slowly and with rich human involvement.
5. Tightly link AGI with the Global Brain.
6. Foster deep, consensus-building interactions between divergent viewpoints.
7. Create a mutually supportive community of AGIs.
8. Encourage measured co-advancement of AGI software and AGI ethics theory.
9. Develop advanced AGI sooner, not later.

In conclusion, and related to the final point, we advise the serious co-evolution of functional AGI systems and AGI-related ethical theory as soon as possible, before we have so much technical infrastructure that parties relatively unconcerned with ethics are able to rush ahead with brute-force approaches to AGI development.

1. Introduction

Artificial General Intelligence (AGI), like any technology, carries both risks and rewards. One science fiction film after another has highlighted the potential dangers of AGI, lodging the issue deep in our cultural awareness. Hypothetically, an AGI with superhuman intelligence and capability could dispense with humanity altogether and thus pose an "existential risk" (Bostrom 2002). In the worst case, an evil but brilliant AGI, programmed by some cyber Marquis de Sade, could consign humanity to unimaginable tortures (perhaps realizing a modern version of the medieval Christian imagery of hell).

On the other hand, the potential benefits of powerful AGI go literally beyond human imagination. An AGI with massively superhuman intelligence and a positive disposition toward humanity could provide us with truly dramatic benefits, through the application of superior intellect to scientific and engineering challenges that befuddle us today. Such benefits could include a virtual end to material scarcity via the advancement of molecular manufacturing, and could also force us to revise our assumptions about the inevitability of disease and aging (Drexler 1986). Advanced AGI could also help individual humans grow in a variety of directions, including directions leading beyond our biological legacy, producing massive diversity in human experience and, hopefully, a simultaneously enhanced capacity for open-mindedness and empathy.

Eliezer Yudkowsky introduced the term "Friendly AI" to refer to advanced AGI systems that act with human benefit in mind (Yudkowsky 2001). Exactly what this means has not been specified precisely, though informal interpretations abound. Goertzel (2006a) has sought to clarify the notion in terms of three core values of "Joy, Growth and Freedom." In this view, a Friendly AI would be one that advocates individual and collective human joy and growth, while respecting the autonomy of human choice.

Some (for example, de Garis 2005) have argued that Friendly AI is essentially an impossibility, in the sense that the odds of a dramatically superhumanly intelligent mind worrying about human benefit are vanishingly small, drawing parallels with humanity's own exploitation of less intelligent systems. Indeed, in our daily lives, questions such as the nature of consciousness in animals, plants, and larger ecological systems are generally treated as merely philosophical, and only rarely lead individuals to change their outlook, lifestyle, or diet.

If Friendly AI is impossible for this reason, then the best options for the human race would presumably be to avoid advanced AGI development altogether, or else to fuse with AGI before the disparity between its intelligence and humanity's becomes too large, so that beings-originated-as-humans can enjoy the benefits of greater intelligence and capability.

Some may consider sacrificing their humanity in this way an undesirable cost. The concept of humanity, however, is not a stationary one, and such a fusion looks like a sacrifice only from our contemporary perspective on what humanity is. With our cell phones, our massively connected world, and our inability to hunt, we would hardly seem the same species to the humans of the past. Just like an individual's self, the self of humanity will inevitably change, and since we do not usually mourn losing our identity of a decade ago to our current self, our present concern for what we may lose may seem unfounded in retrospect.
Others, such as Waser (2008), have argued that Friendly AI is essentially inevitable, linking greater intelligence with greater cooperation. Waser adduces evidence from evolutionary and human history in favor of this point, along with more abstract arguments such as the economic advantages of cooperation over non-cooperation.

Omohundro (2008) has argued that any advanced AI system will very likely display certain "basic AI drives," such as desiring to be rational, to protect itself, to acquire resources, and to preserve and protect its utility function while avoiding counterfeit utility; these drives, he suggests, must be taken carefully into account in formulating approaches to Friendly AI.

Yudkowsky (2006) discusses the possibility of creating AGI architectures that are in some sense "provably Friendly," either mathematically, or else by very tight lines of rational verbal argument. However, several possibly insurmountable challenges face such an approach. First, proving mathematical results of this nature would likely require dramatic advances in multiple branches of mathematics. Second,
