From: Matt Arnold <matt.mattarn@gmail.com>
To: lojban-list@lojban.org
Date: Sat, 30 Jan 2010 15:11:39 -0500
Subject: [lojban] Re: An open source AI research & developing project uses lojban

Luke,

I follow Dresden Codak as well. I really like that particular strip.

Of course, systems that perform non-linguistic tasks, with no socialization, need no more than an incredibly simple system of motivation: "You're programmed to pilot this vehicle to wherever the passenger says." It is absurd to think such a system has any reason to take on any other task, such as overthrowing the human race. It has no motivation to do so.

But Super-User is talking about a human-equivalent AI, which means deliberately creating a motivational system complex enough to want to do _anything_. With human-equivalent AI, we face a question of our own motivation. What's the point of creating a new person with human rights? What is to be gained from that? If it is made to be like us, with our own instinct for self-improvement, and is allowed to self-improve until it becomes smarter than us, why should it use that power to act in our best interests? How are we better off creating someone who has an advantage over us? We have a bad enough class system as it is.

Suppose we carefully limit the human-equivalent AI to stay at the level of an imbecile human. Ethically, you wouldn't do that to a human.
Do we then have an obligation to keep the computer running forever? Is shutting off the computer an act of murder? It's best to decide what one thinks of these issues before, not after.

-Eppcott

On Thu, Jan 28, 2010 at 3:10 PM, Luke Bergen wrote:
> Well, there are many methods, all more questionable than the last.  I for
> one am in favor of the "ball and chains" method.
> That's an excellent point codrus.  I know you were joking Matt but it's
> always funny how many people see "i Robot" and suddenly believe that all AI
> research is bad and will lead mankind to his doom.  I think Hollywood has
> been a leading cause in us all having an over-extended fear of the un-known
> (especially sci-fi related unknown).  Kind of reminds me
> of http://dresdencodak.com/2009/09/22/caveman-science-fiction/
>
> On Thu, Jan 28, 2010 at 3:00 PM, chris kerr wrote:
>>
>> How do you make sure your children won't kill you?
>>
>> codrus
>>
>> On Wed, Jan 27, 2010 at 12:13 PM, Matt Arnold wrote:
>>>
>>> That having been said, it raises another can of worms about how to
>>> make sure the AI won't try to kill us.