
Re: [lojban] Re: An open source AI research & developing project uses lojban



It doesn't help that the movie suggested that the Three Laws of Robotics would lead to the end of humanity, and that only a robot which did NOT have to follow said laws could stop it. Because it had a friggin' heart.

As someone who has read (nearly?) all of Asimov's books on the subject of robots, the Three Laws, and the messing with the latter, I find it painfully obvious that the movie has that exactly backwards.

Assuming that a way could be found to hardwire the Three Laws as stated into an AI, that would successfully keep our children from killing us.

On Thu, Jan 28, 2010 at 2:10 PM, Luke Bergen <lukeabergen@gmail.com> wrote:
Well, there are many methods, each more questionable than the last.  I for one am in favor of the "ball and chains" method.

That's an excellent point, codrus.  I know you were joking, Matt, but it's always funny how many people see "I, Robot" and suddenly believe that all AI research is bad and will lead mankind to its doom.  I think Hollywood has been a leading cause of our over-extended fear of the unknown (especially the sci-fi-related unknown).  Kind of reminds me of http://dresdencodak.com/2009/09/22/caveman-science-fiction/


On Thu, Jan 28, 2010 at 3:00 PM, chris kerr <letsclimbhigher@gmail.com> wrote:
How do you make sure your children won't kill you?

codrus


On Wed, Jan 27, 2010 at 12:13 PM, Matt Arnold <matt.mattarn@gmail.com> wrote:


That having been said, it opens another can of worms: how to
make sure the AI won't try to kill us.




--
mu'o mi'e .aionys.

.i.a'o.e'e ko klama le bende pe denpa bu