Re: [lojban] Re: An open source AI research & developing project uses lojban



Luke,

I follow Dresden Codak as well. I really like that particular strip.

Of course, systems that perform non-linguistic tasks, with no
socialization, need no more than an incredibly simple system of
motivation. "You're programmed to pilot this vehicle to wherever the
passenger says." It is absurd to think such a system has any reason to
do any other task, such as overthrowing the human race. It has no
motivation to do so. But Super-User is talking about a
human-equivalent AI, which means deliberately creating a motivational
system complex enough to want to do _anything_.

With human-equivalent AI, we face a question of our own motivation.
What's the point in creating a new person with human rights? What is
to be gained from that? If it is made to be like us, with our own
instinct for self-improvement, and is allowed to self-improve to
become smarter than us, why should it use that power to act in our
best interests? How are we better off creating someone who has an
advantage over us? We have a bad enough class system as it is.

Suppose we carefully limit the human-equivalent AI to stay at the
level of an imbecile human. Ethically, you wouldn't do that to a
human. Do we then have an obligation to keep the computer running
forever? Is shutting off the computer an act of murder? It's best to
decide what one thinks of these issues before building such a system,
not after.

-Eppcott


On Thu, Jan 28, 2010 at 3:10 PM, Luke Bergen <lukeabergen@gmail.com> wrote:
> Well, there are many methods, each more questionable than the last.  I for
> one am in favor of the "ball and chain" method.
> That's an excellent point, codrus.  I know you were joking, Matt, but it's
> always funny how many people see "I, Robot" and suddenly believe that all AI
> research is bad and will lead mankind to its doom.  I think Hollywood has
> been a leading cause of us all having an overblown fear of the unknown
> (especially the sci-fi-related unknown).  Kind of reminds me
> of http://dresdencodak.com/2009/09/22/caveman-science-fiction/
>
> On Thu, Jan 28, 2010 at 3:00 PM, chris kerr <letsclimbhigher@gmail.com>
> wrote:
>>
>> How do you make sure your children won't kill you?
>>
>> codrus
>>
>> On Wed, Jan 27, 2010 at 12:13 PM, Matt Arnold <matt.mattarn@gmail.com>
>> wrote:
>>>
>>> That having been said, it raises another can of worms about how to
>>> make sure the AI won't try to kill us.
>
>