From: "Jon \"Top Hat\" Jones" <eyeonus@gmail.com>
To: lojban-list@lojban.org
Date: Thu, 28 Jan 2010 15:44:09 -0600
Subject: Re: [lojban] Re: An open source AI research & developing project uses lojban

It doesn't help that the movie suggested that the Three Laws of Robotics would lead to the end of humanity, and that only a robot which did NOT have to follow said laws could stop it. Because it had a friggin' heart.

Being someone who has read (nearly?) all of Asimov's books on the subject of robots, the Three Laws, and the various ways of messing with them, it is painfully obvious that the movie has that exactly backwards.

Assuming a way could be found to hardwire the Three Laws *as stated* into an AI, that would successfully keep our children from killing us.

On Thu, Jan 28, 2010 at 2:10 PM, Luke Bergen wrote:

> Well, there are many methods, each more questionable than the last. I for
> one am in favor of the "ball and chain" method.
>
> That's an excellent point, codrus. I know you were joking, Matt, but it's
> always funny how many people see "I, Robot" and suddenly believe that all AI
> research is bad and will lead mankind to its doom. I think Hollywood has
> been a leading cause of us all having an overblown fear of the unknown
> (especially the sci-fi-related unknown). Kind of reminds me of
> http://dresdencodak.com/2009/09/22/caveman-science-fiction/
>
>
> On Thu, Jan 28, 2010 at 3:00 PM, chris kerr wrote:
>
>> How do you make sure your children won't kill you?
>>
>> codrus
>>
>>
>> On Wed, Jan 27, 2010 at 12:13 PM, Matt Arnold wrote:
>>
>>> That having been said, it raises another can of worms about how to
>>> make sure the AI won't try to kill us.
>>>
>

--
mu'o mi'e .aionys.
.i.a'o.e'e ko klama le bende pe denpa bu