... here comes BladeStop!
https://www.youtube.com/embed/NiRegdech_E?autoplay=1
http://www.bladestop.com
"BladeStop⢠improves band saw safety, with the ability to stop a
bandsaw blade within a fraction of a second from when contact is made
with the operator, reducing serious band saw blade injuries.
BladeStop⢠mechanically stops the bandsaw blade when the control unit
determines a person has come in contact with the blade -- stopping
the blade operation within 9 milliseconds of sensing a person's
finger or hand!"
On 11/23/2017 1:14 AM, OFWW wrote:
> On Wed, 22 Nov 2017 18:12:06 -0600, Leon <lcb11211@swbelldotnet>
> wrote:
>
>> On 11/22/2017 1:17 PM, OFWW wrote:
>>> On Wed, 22 Nov 2017 12:45:11 -0600, Leon <lcb11211@swbelldotnet>
>>> wrote:
>>>
>>>> On 11/22/2017 8:45 AM, Leon wrote:
>>>>> On 11/22/2017 6:52 AM, DerbyDad03 wrote:
>>>>>> On Tuesday, November 21, 2017 at 10:04:43 AM UTC-5, Spalted Walt wrote:
>>>>>>> [email protected] wrote:
>>>>>>>
>>>>>>>> I have to say, I am sorry to see that.
>>>>>>>
>>>>>>> technophobia [tek-nuh-foh-bee-uh]
>>>>>>> noun -- abnormal fear of or anxiety about the effects of advanced
>>>>>>> technology.
>>>>>>>
>>>>>>> https://www.youtube.com/embed/NzEeJca_YaQ?autoplay=1&autohide=1&showinfo=0&iv_load_policy=3&rel=0
>>>>>>>
>>>>>>
>>>>>> I'm not sure how this will work out on usenet, but I'm going to present
>>>>>> a scenario and ask for an answer. After some amount of time, maybe 48
>>>>>> hours,
>>>>>> since tomorrow is Thanksgiving, I'll expand on that scenario and ask for
>>>>>> another answer.
>>>>>>
>>>>>> Trust me, this will eventually lead back to technology, AI and most
>>>>>> certainly, people.
>>>>>>
>>>>>> In the following scenario you must assume that all options have been
>>>>>> considered and narrowed down to only 2. Please just accept that the
>>>>>> situation is as stated and that you only have 2 choices. If we get into
>>>>>> "Well, in a real life situation, you'd have to factor in this, that and
>>>>>> the other thing" we'll never get through this exercise.
>>>>>>
>>>>>> Here goes:
>>>>>>
>>>>>> 5 workers are standing on the railroad tracks. A train is heading in
>>>>>> their
>>>>>> direction. They have no escape route. If the train continues down the
>>>>>> tracks,
>>>>>> it will most assuredly kill them all.
>>>>>>
>>>>>> You are standing next to the lever that will switch the train to another
>>>>>> track before it reaches the workers. On the other track is a lone worker,
>>>>>> also with no escape route.
>>>>>>
>>>>>> You have 2, and only 2, options. If you do nothing, all 5 workers will
>>>>>> be killed. If you pull the lever, only 1 worker will be killed.
>>>>>>
>>>>>> Which option do you choose?
>>>>>>
>>>>>
>>>>> Pull the lever. Choosing to do nothing is the choice to kill 5.
>>>>
>>>> Well I have mentioned this before, and it goes back to comments I have
>>>> made in the past about decision making. It seems the majority here use
>>>> emotional over rational thinking to come up with a decision.
>>>>
>>>> It was said you only have two choices and who these people are or might
>>>> be is not a consideration. You can't make a rational decision with
>>>> what-if's. You only have two options, kill 5 or kill 1. Rational for
>>>> me says save 5, for the rest of you that are bringing in scenarios past
>>>> what should be considered will waste too much time and you end up with a
>>>> kill before you decide what to do.
>>>
>>> Rational thinking would state that trains run on a schedule, the
>>> switch would be locked, and for better or worse the five were not
>>> supposed to be there in the first place.
>>
>> No, you are adding "what-if's" to the given restraints. This is easy, you
>> either choose to move the switch or not. There is no other situation to
>> consider.
>>
>>>
>>> So how can I make a decision more rational than the scheduler, even if
>>> I had the key to the lock.
>>>
>>
>> Again you are adding what-if's.
>
> I understand what you are saying, but I would consider them inherent
> to the scenario.
>
LOL. Yeah well blame Derby for leaving out details to consider. ;~)
On Tuesday, November 21, 2017 at 10:04:43 AM UTC-5, Spalted Walt wrote:
> [email protected] wrote:
>
> > I have to say, I am sorry to see that.
>
> technophobia [tek-nuh-foh-bee-uh]
> noun -- abnormal fear of or anxiety about the effects of advanced technology.
>
> https://www.youtube.com/embed/NzEeJca_YaQ?autoplay=1&autohide=1&showinfo=0&iv_load_policy=3&rel=0
I'm not sure how this will work out on usenet, but I'm going to present
a scenario and ask for an answer. After some amount of time, maybe 48 hours,
since tomorrow is Thanksgiving, I'll expand on that scenario and ask for
another answer.
Trust me, this will eventually lead back to technology, AI and most
certainly, people.
In the following scenario you must assume that all options have been
considered and narrowed down to only 2. Please just accept that the
situation is as stated and that you only have 2 choices. If we get into
"Well, in a real life situation, you'd have to factor in this, that and
the other thing" we'll never get through this exercise.
Here goes:
5 workers are standing on the railroad tracks. A train is heading in their
direction. They have no escape route. If the train continues down the tracks,
it will most assuredly kill them all.
You are standing next to the lever that will switch the train to another
track before it reaches the workers. On the other track is a lone worker,
also with no escape route.
You have 2, and only 2, options. If you do nothing, all 5 workers will
be killed. If you pull the lever, only 1 worker will be killed.
Which option do you choose?
On Thu, 23 Nov 2017 18:44:05 -0800, OFWW <[email protected]>
wrote:
>On Thu, 23 Nov 2017 11:53:47 -0800 (PST), DerbyDad03
><[email protected]> wrote:
>
>>On Thursday, November 23, 2017 at 11:40:13 AM UTC-5, OFWW wrote:
>>> On Wed, 22 Nov 2017 12:36:05 -0800 (PST), DerbyDad03
>>> <[email protected]> wrote:
>>>
>>> >On Wednesday, November 22, 2017 at 1:51:05 PM UTC-5, Ed Pawlowski wrote:
>>> >> On 11/22/2017 1:20 PM, DerbyDad03 wrote:
>>> >>
>>> >> >
>>> >> > Oh, well, no sense in waiting...
>>> >> >
>>> >> > 2nd scenario:
>>> >> >
>>> >> > 5 workers are standing on the railroad tracks. A train is heading in their
>>> >> > direction. They have no escape route. If the train continues down the tracks,
>>> >> > it will most assuredly kill them all.
>>> >> >
>>> >> > You are standing on a bridge overlooking the tracks. Next to you is a fairly
>>> >> > large person. We'll save you some trouble and let that person be a stranger.
>>> >> >
>>> >> > You have 2, and only 2, options. If you do nothing, all 5 workers will
>>> >> > be killed. If you push the stranger off the bridge, the train will kill
>>> >> > him but be stopped before the 5 workers are killed. (Don't question the
>>> >> > physics, just accept the outcome.)
>>> >> >
>>> >> > Which option do you choose?
>>> >> >
>>> >>
>>> >> I don't know. It was easy to pull the switch as there was a bit of
>>> >> disconnect there. Now it is up close and you are doing the pushing.
>>> >> One alternative is to jump yourself, but I'd not do that. Don't think I
>>> >> could push the guy either.
>>> >>
>>> >
>>> >And therein lies the rub. The "disconnected" part.
>>> >
>>> >Now, as promised, let's bring this back to technology, AI and most
>>> >certainly, people. Let's talk specifically about autonomous vehicles,
>>> >but please avoid the rabbit hole and realize that the concept applies
>>> >to just about anywhere AI is used and people are involved. Autonomous
>>> >vehicles (AV) are just one example.
>>> >
>>> >Imagine it's X years from now and AV's are fairly common. Imagine that an AV
>>> >is traveling down the road, with its AI in complete control of the vehicle.
>>> >The driver is using one hand to get a cup of coffee from the built-in Keurig
>>> >machine and choosing a Pandora station with the other. He is completely
>>> >oblivious to what's happening outside of his vehicle.
>>> >
>>> >Now imagine that a 4 year old runs out into the road. The AI uses all of the
>>> >data at its disposal (speed, distance, weather conditions, tire pressure,
>>> >etc.) and decides that it will not be able to stop in time. It checks the
>>> >input from its 360° cameras. Can't go right because of the line of parked
>>> >cars. They won't slow the vehicle enough to avoid hitting the kid. Using
>>> >facial recognition the AI determines that the mini-van on the left contains
>>> >5 elderly people. If the AV swerves left, it will push the mini-van into
>>> >oncoming traffic, directly into the path of an 18 wheeler. The AI communicates
>>> >with the 18 wheeler's AI who responds and says "I have no place to go. If
>>> >you push the van into my lane, I'm taking out a bunch of Grandmas and
>>> >Grandpas."
>>> >
>>> >Now the AI has to make basically the same decision as in my first scenario:
>>> >Kill 1 or kill 5. For the AI, it's as easy as it was for us, right?
>>> >
>>> >"Bye Bye, kid. You should have stayed on the sidewalk."
>>> >
>>> >No emotion, right? Right, not once the AI is programmed, not once the initial
>>> >AI rules have been written, not once the facial recognition database has
>>> >been built. The question is who wrote those rules? Who decided it's OK to
>>> >kill a young kid to save the lives of 5 rickety old folks? Oh wait, maybe
>>> >it's better to save the kid and let the old folks die. They've had a full
>>> >life. Who wrote that rule? In other words, someone(s) have to decide whose
>>> >life is worth more than another's. They are essentially standing on a bridge
>>> >deciding whether to push the guy or not. They have to write the rule. They
>>> >are either going to kill the kid or push the car into the other lane.
>>> >
>>> >I, for one, don't think that I want to be sitting around that table. Having
>>> >to make the decisions would be one thing. Having to sit next to the person
>>> >that would push the guy off the bridge with a gleam in his eye would be a
>>> >totally different story.
>>>
>>> I reconsidered my thoughts on this one as well.
>>>
>>> The AV should do as it was designed to do, to the best of its
>>> capabilities. Staying in the lane when there is no option to swerve
>>> safely.
>>>
>>> There is already a legal reason for that, that being that the swerving
>>> driver assumes all the damages that incur from his action, including
>>> manslaughter.
>>
>>So in the following brake failure scenario, if the AV stays in lane and
>>kills the four "highly rated" pedestrians there are no charges, but if
>>it changes lanes and takes out the 4 slugs, jail time may ensue.
>>
>>http://static6.businessinsider.com/image/58653ba0ee14b61b008b5aea-800
>>
>>Interesting.
>
>Yes, and I've been warned that by my taking evasive action I could
>cause someone else to respond likewise and that I would be held
>accountable for what happened.
I find the assumption that a fatality involving a robot car would lead
to someone being jailed to be amusing. The people who assert this
never identify the statute under which someone would be jailed or who,
precisely, this someone might be. They seem to assume that because a
human driving a car could be jailed for vehicular homicide or criminal
negligence or some such, it is automatic that someone else would be
jailed for the same offense--the trouble is that the car is legally an
inanimate object and we don't put inanimate objects in jail. So it
gets down to proving that the occupant is negligent, which is a hard
sell given that the government allowed the car to be licensed with the
understanding that it would not be controlled by the occupant, or
proving that the engineering team responsible for developing it was
negligent, which given that they can show the logic the thing used and
no doubt provide legal justification for the decision it made, will be
another tall order. So who goes to jail?
On Sat, 25 Nov 2017 12:45:15 -0500, J. Clarke
<[email protected]> wrote:
>How is any of this relevant to criminal offenses regarding autonomous
>vehicles?
With thread drift the whole thing changes, and you still have not had your
question answered. Oh well.
On Wed, 22 Nov 2017 08:45:53 -0500, "John Grossbohlin"
<[email protected]> wrote:
>"DerbyDad03" wrote in message
>news:[email protected]...
>
>>Here goes:
>
>>5 workers are standing on the railroad tracks. A train is heading in their
>>direction. They have no escape route. If the train continues down the
>>tracks,
>>it will most assuredly kill them all.
>
>>You are standing next to the lever that will switch the train to another
>>track before it reaches the workers. On the other track is a lone worker,
>>also with no escape route.
>
>>You have 2, and only 2, options. If you do nothing, all 5 workers will
>>be killed. If you pull the lever, only 1 worker will be killed.
>
>>Which option do you choose?
>
>As my school bus driver explained nearly 50 years ago, the lesser of evils
>in this case would be to kill the lone worker... In the case of the bus, it
>would be to run over a kid on the side of the road rather than have a
>head-on collision with a large truck.
However, you have now participated in a murder. How do you look in
orange?
>While troubling as a kid it made sense then and it still makes sense...
>
>Just to throw this in: In real life a speeding train suddenly and
>unknowingly switching tracks is not a good thing... hundreds could be killed
>or injured if it were a passenger train!
We're not talking about reality, here. ;-)
On Wed, 22 Nov 2017 04:52:04 -0800 (PST), DerbyDad03
<[email protected]> wrote:
>On Tuesday, November 21, 2017 at 10:04:43 AM UTC-5, Spalted Walt wrote:
>> [email protected] wrote:
>>
>> > I have to say, I am sorry to see that.
>>
>> technophobia [tek-nuh-foh-bee-uh]
>> noun -- abnormal fear of or anxiety about the effects of advanced technology.
>>
>> https://www.youtube.com/embed/NzEeJca_YaQ?autoplay=1&autohide=1&showinfo=0&iv_load_policy=3&rel=0
>
>I'm not sure how this will work out on usenet, but I'm going to present
>a scenario and ask for an answer. After some amount of time, maybe 48 hours,
>since tomorrow is Thanksgiving, I'll expand on that scenario and ask for
>another answer.
>
>Trust me, this will eventually lead back to technology, AI and most
>certainly, people.
>
>In the following scenario you must assume that all options have been
>considered and narrowed down to only 2. Please just accept that the
>situation is as stated and that you only have 2 choices. If we get into
>"Well, in a real life situation, you'd have to factor in this, that and
>the other thing" we'll never get through this exercise.
>
>Here goes:
>
>5 workers are standing on the railroad tracks. A train is heading in their
>direction. They have no escape route. If the train continues down the tracks,
>it will most assuredly kill them all.
>
>You are standing next to the lever that will switch the train to another
>track before it reaches the workers. On the other track is a lone worker,
>also with no escape route.
>
>You have 2, and only 2, options. If you do nothing, all 5 workers will
>be killed. If you pull the lever, only 1 worker will be killed.
>
>Which option do you choose?
The problem with this is that if I am the one pulling the switch I can
see more than what is being presented.
If all the workers are wearing prison uniforms and busy working, do I
pull the switch to kill the one who has ten kids versus the 5 who have
none?
Or if I see that the one alone never left the detail and the other five
are escapees, then I leave it as is. I'm the one with the shotgun. :)
However, based on just your statement alone, I would leave the
switch alone: it is locked, so I couldn't change it anyhow, and the
five are working where they should not be, as the train always runs on
schedule, so the five are not supposed to be there in the first place.
DerbyDad03 <[email protected]> wrote:
> On Wednesday, November 22, 2017 at 1:51:05 PM UTC-5, Ed Pawlowski wrote:
> > On 11/22/2017 1:20 PM, DerbyDad03 wrote:
> >
> > >
> > > Oh, well, no sense in waiting...
> > >
> > > 2nd scenario:
> > >
> > > 5 workers are standing on the railroad tracks. A train is heading in their
> > > direction. They have no escape route. If the train continues down the tracks,
> > > it will most assuredly kill them all.
> > >
> > > You are standing on a bridge overlooking the tracks. Next to you is a fairly
> > > large person. We'll save you some trouble and let that person be a stranger.
> > >
> > > You have 2, and only 2, options. If you do nothing, all 5 workers will
> > > be killed. If you push the stranger off the bridge, the train will kill
> > > him but be stopped before the 5 workers are killed. (Don't question the
> > > physics, just accept the outcome.)
> > >
> > > Which option do you choose?
> > >
> >
> > I don't know. It was easy to pull the switch as there was a bit of
> > disconnect there. Now it is up close and you are doing the pushing.
> > One alternative is to jump yourself, but I'd not do that. Don't think I
> > could push the guy either.
> >
>
> And therein lies the rub. The "disconnected" part.
>
> Now, as promised, let's bring this back to technology, AI and most
> certainly, people. Let's talk specifically about autonomous vehicles,
> but please avoid the rabbit hole and realize that the concept applies
> to just about anywhere AI is used and people are involved. Autonomous
> vehicles (AV) are just one example.
>
> Imagine it's X years from now and AV's are fairly common. Imagine that an AV
> is traveling down the road, with its AI in complete control of the vehicle.
> The driver is using one hand to get a cup of coffee from the built-in Keurig
> machine and choosing a Pandora station with the other. He is completely
> oblivious to what's happening outside of his vehicle.
>
> Now imagine that a 4 year old runs out into the road. The AI uses all of the
> data at its disposal (speed, distance, weather conditions, tire pressure,
> etc.) and decides that it will not be able to stop in time. It checks the
> input from its 360° cameras. Can't go right because of the line of parked
> cars. They won't slow the vehicle enough to avoid hitting the kid. Using
> facial recognition the AI determines that the mini-van on the left contains
> 5 elderly people. If the AV swerves left, it will push the mini-van into
> oncoming traffic, directly into the path of an 18 wheeler. The AI communicates
> with the 18 wheeler's AI who responds and says "I have no place to go. If
> you push the van into my lane, I'm taking out a bunch of Grandmas and
> Grandpas."
>
> Now the AI has to make basically the same decision as in my first scenario:
> Kill 1 or kill 5. For the AI, it's as easy as it was for us, right?
>
> "Bye Bye, kid. You should have stayed on the sidewalk."
>
> No emotion, right? Right, not once the AI is programmed, not once the initial
> AI rules have been written, not once the facial recognition database has
> been built. The question is who wrote those rules? Who decided it's OK to
> kill a young kid to save the lives of 5 rickety old folks? Oh wait, maybe
> it's better to save the kid and let the old folks die. They've had a full
> life. Who wrote that rule? In other words, someone(s) have to decide whose
> life is worth more than another's. They are essentially standing on a bridge
> deciding whether to push the guy or not. They have to write the rule. They
> are either going to kill the kid or push the car into the other lane.
>
> I, for one, don't think that I want to be sitting around that table. Having
> to make the decisions would be one thing. Having to sit next to the person
> that would push the guy off the bridge with a gleam in his eye would be a
> totally different story.
USATODAY: Self-driving cars programmed to decide who dies in a crash
https://www.usatoday.com/story/money/cars/2017/11/23/self-driving-cars-programmed-decide-who-dies-crash/891493001/
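That article makes the thread's point concrete: before any crash, someone
has to write the rule down. Here is a hedged sketch of what such a rule
reduces to -- the class names, weights, and options are invented for
illustration, not anyone's actual AV software:

from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    casualties: list   # e.g. ["child"] or ["elderly"] * 5

# Someone around the table has to pick these numbers. That choice, not
# the code below, is the hard part.
LIFE_WEIGHT = {"child": 1.0, "adult": 1.0, "elderly": 1.0}

def cost(m):
    # Weighted casualty count for one possible maneuver.
    return sum(LIFE_WEIGHT[c] for c in m.casualties)

def decide(options):
    # The "pull the lever" logic: minimize weighted casualties.
    return min(options, key=cost)

options = [
    Maneuver("stay in lane", ["child"]),
    Maneuver("swerve left", ["elderly"] * 5),
]
print(decide(options).name)   # "stay in lane" with equal weights

Change any weight and the car "decides" differently; the emotion is not
removed, just moved upstream to whoever edits the table.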
Electric Comet <[email protected]> wrote:
> why do sawstop have to move over do they have a bandsaw product too
YBTJ https://www.youtube.com/watch?v=W3PLwNccpXU
Leon <lcb11211@swbelldotnet> wrote in
news:[email protected]:
> ;~) BUT that was not one of the options. You have 2, and only 2,
> options
There's always the third option... Probably the only good part of that
movie:
The only winning move is not to play.
Puckdropper
--
http://www.puckdroppersplace.us/rec.woodworking
A mini archive of some of rec.woodworking's best and worst!
On Wednesday, November 22, 2017 at 10:32:54 AM UTC-5, Ed Pawlowski wrote:
> On 11/22/2017 7:52 AM, DerbyDad03 wrote:
> > On Tuesday, November 21, 2017 at 10:04:43 AM UTC-5, Spalted Walt wrote:
> >> [email protected] wrote:
> >>
> >>> I have to say, I am sorry to see that.
> >>
> >> technophobia [tek-nuh-foh-bee-uh]
> >> noun -- abnormal fear of or anxiety about the effects of advanced technology.
> >>
> >> https://www.youtube.com/embed/NzEeJca_YaQ?autoplay=1&autohide=1&showinfo=0&iv_load_policy=3&rel=0
> >
> > I'm not sure how this will work out on usenet, but I'm going to present
> > a scenario and ask for an answer. After some amount of time, maybe 48 hours,
> > since tomorrow is Thanksgiving, I'll expand on that scenario and ask for
> > another answer.
> >
> > Trust me, this will eventually lead back to technology, AI and most
> > certainly, people.
> >
> > In the following scenario you must assume that all options have been
> > considered and narrowed down to only 2. Please just accept that the
> > situation is as stated and that you only have 2 choices. If we get into
> > "Well, in a real life situation, you'd have to factor in this, that and
> > the other thing" we'll never get through this exercise.
> >
> > Here goes:
> >
> > 5 workers are standing on the railroad tracks. A train is heading in their
> > direction. They have no escape route. If the train continues down the tracks,
> > it will most assuredly kill them all.
> >
> > You are standing next to the lever that will switch the train to another
> > track before it reaches the workers. On the other track is a lone worker,
> > also with no escape route.
> >
> > You have 2, and only 2, options. If you do nothing, all 5 workers will
> > be killed. If you pull the lever, only 1 worker will be killed.
> >
> > Which option do you choose?
> >
>
> The short answer is to pull the switch and save as many lives as possible.
>
> The long answer, it depends. Would you make that same decision if the
> lone person was a family member? If the lone person was you? Five old
> people or one child? Of course, AI would take all the emotions out of
> the decision making. I think that is what you may be getting at.
AI will not take *all* of the emotion out of it. More on that later.
On Friday, November 24, 2017 at 10:10:01 AM UTC-5, Ed Pawlowski wrote:
> On 11/24/2017 12:37 AM, J. Clarke wrote:
>
> >>>
> >>> I find the assumption that a fatality involving a robot car would lead
> >>> to someone being jailed to be amusing. The people who assert this
> >>> never identify the statute under which someone would be jailed or who,
> >>> precisely, this someone might be. They seem to assume that because a
> >>> human driving a car could be jailed for vehicular homicide or criminal
> >>> negligence or some such, it is automatic that someone else would be
> >>> jailed for the same offense--the trouble is that the car is legally an
> >>> inanimate object and we don't put inanimate objects in jail.
> >>>
> >>
>
> They can impound your car in a drug bust. Maybe they will impound your
> car for the offense. We'll build special long term impound lots for
> serious offenses, just disconnect the battery for lesser ones.
>
>
> >> You've taken it to the next level, into the real word scenario and out
> >> of the programming stage.
> >>
> >> Personally I would assume that anything designed would have to
> >> co-exist with real world laws and responsibilities. Even the final
> >> owner could be held responsible. See the laws regarding experimental
> >> aircraft, hang gliders, etc.
> >
> > Experimental aircraft and hang gliders are controlled by a human. If
> > they are involved in a fatal accident, the operator gets scrutinized.
> > An autonomous car is not under human control, it is its own operator,
> > the occupant is a passenger.
>
> The programmer will be jailed. Or maybe they will stick a pin in a
> Voodoo doll to punish him.
>
>
> >
> > We don't have "real world law" governing fatalities involving
> > autonomous vehicles. The engineering would, initially (I hope) be
> > based on existing case law involving human drivers and what the courts
> > held that they should or should not have done in particular
> > situations. But there won't be any actual law until either the
> > legislatures write statutes or the courts issue rulings, and the
> > latter won't happen until there are such vehicles in service in
> > sufficient quantity to generate cases.
>
> The sensible thing would be to gather the most brilliant minds of the TV
> ambulance chasing lawyers and let them come up with guidelines for
> liability. Can you think of anything more fair than that?
Sure. Build a random number generator into the AI. The AI simply uses the
random number to decide who to take out at the time of the incident.
"Step right up, spin the wheel, take your chances."
It'll all be "hit or miss" so to speak.
On 11/24/2017 12:37 AM, J. Clarke wrote:
>>>
>>> I find the assumption that a fatality involving a robot car would lead
>>> to someone being jailed to be amusing. The people who assert this
>>> never identify the statute under which someone would be jailed or who,
>>> precisely, this someone might be. They seem to assume that because a
>>> human driving a car could be jailed for vehicular homicide or criminal
>>> negligence or some such, it is automatic that someone else would be
>>> jailed for the same offense--the trouble is that the car is legally an
>>> inanimate object and we don't put inanimate objects in jail.
>>>
>>
They can impound your car in a drug bust. Maybe they will impound your
car for the offense. We'll build special long term impound lots for
serious offenses, just disconnect the battery for lesser ones.
>> You've taken it to the next level, into the real word scenario and out
>> of the programming stage.
>>
>> Personally I would assume that anything designed would have to
>> co-exist with real world laws and responsibilities. Even the final
>> owner could be held responsible. See the laws regarding experimental
>> aircraft, hang gliders, etc.
>
> Experimental aircraft and hang gliders are controlled by a human. If
> they are involved in a fatal accident, the operator gets scrutinized.
> An autonomous car is not under human control, it is its own operator,
> the occupant is a passenger.
The programmer will be jailed. Or maybe they will stick a pin in a
Voodoo doll to punish him.
>
> We don't have "real world law" governing fatalities involving
> autonomous vehicles. The engineering would, initially (I hope) be
> based on existing case law involving human drivers and what the courts
> held that they should or should not have done in particular
> situations. But there won't be any actual law until either the
> legislatures write statutes or the courts issue rulings, and the
> latter won't happen until there are such vehicles in service in
> sufficient quantity to generate cases.
The sensible thing would be to gather the most brilliant minds of the TV
ambulance chasing lawyers and let them come up with guidelines for
liability. Can you think of anything more fair than that?
On Thu, 23 Nov 2017 20:52:09 -0800, OFWW <[email protected]>
wrote:
>On Thu, 23 Nov 2017 23:10:05 -0500, J. Clarke
><[email protected]> wrote:
>
>>On Thu, 23 Nov 2017 18:44:05 -0800, OFWW <[email protected]>
>>wrote:
>>
>>>On Thu, 23 Nov 2017 11:53:47 -0800 (PST), DerbyDad03
>>><[email protected]> wrote:
>>>
>>>>On Thursday, November 23, 2017 at 11:40:13 AM UTC-5, OFWW wrote:
>>>>> On Wed, 22 Nov 2017 12:36:05 -0800 (PST), DerbyDad03
>>>>> <[email protected]> wrote:
>>>>>
>>>>> >On Wednesday, November 22, 2017 at 1:51:05 PM UTC-5, Ed Pawlowski wrote:
>>>>> >> On 11/22/2017 1:20 PM, DerbyDad03 wrote:
>>>>> >>
>>>>> >> >
>>>>> >> > Oh, well, no sense in waiting...
>>>>> >> >
>>>>> >> > 2nd scenario:
>>>>> >> >
>>>>> >> > 5 workers are standing on the railroad tracks. A train is heading in their
>>>>> >> > direction. They have no escape route. If the train continues down the tracks,
>>>>> >> > it will most assuredly kill them all.
>>>>> >> >
>>>>> >> > You are standing on a bridge overlooking the tracks. Next to you is a fairly
>>>>> >> > large person. We'll save you some trouble and let that person be a stranger.
>>>>> >> >
>>>>> >> > You have 2, and only 2, options. If you do nothing, all 5 workers will
>>>>> >> > be killed. If you push the stranger off the bridge, the train will kill
>>>>> >> > him but be stopped before the 5 workers are killed. (Don't question the
>>>>> >> > physics, just accept the outcome.)
>>>>> >> >
>>>>> >> > Which option do you choose?
>>>>> >> >
>>>>> >>
>>>>> >> I don't know. It was easy to pull the switch as there was a bit of
>>>>> >> disconnect there. Now it is up close and you are doing the pushing.
>>>>> >> One alternative is to jump yourself, but I'd not do that. Don't think I
>>>>> >> could push the guy either.
>>>>> >>
>>>>> >
>>>>> >And therein lies the rub. The "disconnected" part.
>>>>> >
>>>>> >Now, as promised, let's bring this back to technology, AI and most
>>>>> >certainly, people. Let's talk specifically about autonomous vehicles,
>>>>> >but please avoid the rabbit hole and realize that the concept applies
>>>>> >to just about anywhere AI is used and people are involved. Autonomous
>>>>> >vehicles (AV) are just one example.
>>>>> >
>>>>> >Imagine it's X years from now and AV's are fairly common. Imagine that an AV
>>>>> >is traveling down the road, with its AI in complete control of the vehicle.
>>>>> >The driver is using one hand to get a cup of coffee from the built-in Keurig
>>>>> >machine and choosing a Pandora station with the other. He is completely
>>>>> >oblivious to what's happening outside of his vehicle.
>>>>> >
>>>>> >Now imagine that a 4 year old runs out into the road. The AI uses all of the
>>>>> >data at its disposal (speed, distance, weather conditions, tire pressure,
>>>>> >etc.) and decides that it will not be able to stop in time. It checks the
>>>>> >input from its 360° cameras. Can't go right because of the line of parked
>>>>> >cars. They won't slow the vehicle enough to avoid hitting the kid. Using
>>>>> >facial recognition the AI determines that the mini-van on the left contains
>>>>> >5 elderly people. If the AV swerves left, it will push the mini-van into
>>>>> >oncoming traffic, directly into the path of an 18 wheeler. The AI communicates
>>>>> >with the 18 wheeler's AI who responds and says "I have no place to go. If
>>>>> >you push the van into my lane, I'm taking out a bunch of Grandmas and
>>>>> >Grandpas."
>>>>> >
>>>>> >Now the AI has to make basically the same decision as in my first scenario:
>>>>> >Kill 1 or kill 5. For the AI, it's as easy as it was for us, right?
>>>>> >
>>>>> >"Bye Bye, kid. You should have stayed on the sidewalk."
>>>>> >
>>>>> >No emotion, right? Right, not once the AI is programmed, not once the initial
>>>>> >AI rules have been written, not once the facial recognition database has
>>>>> >been built. The question is who wrote those rules? Who decided it's OK to
>>>>> >kill a young kid to save the lives of 5 rickety old folks? Oh wait, maybe
>>>>> >it's better to save the kid and let the old folks die. They've had a full
>>>>> >life. Who wrote that rule? In other words, someone(s) have to decide whose
>>>>> >life is worth more than another's. They are essentially standing on a bridge
>>>>> >deciding whether to push the guy or not. They have to write the rule. They
>>>>> >are either going to kill the kid or push the car into the other lane.
>>>>> >
>>>>> >I, for one, don't think that I want to be sitting around that table. Having
>>>>> >to make the decisions would be one thing. Having to sit next to the person
>>>>> >that would push the guy off the bridge with a gleam in his eye would be a
>>>>> >totally different story.
>>>>>
>>>>> I reconsidered my thoughts on this one as well.
>>>>>
>>>>> The AV should do as it was designed to do, to the best of its
>>>>> capabilities. Staying in the lane when there is no option to swerve
>>>>> safely.
>>>>>
>>>>> There is already a legal reason for that, that being that the swerving
>>>>> driver assumes all the damages that incur from his action, including
>>>>> manslaughter.
>>>>
>>>>So in the following brake failure scenario, if the AV stays in lane and
>>>>kills the four "highly rated" pedestrians there are no charges, but if
>>>>it changes lanes and takes out the 4 slugs, jail time may ensue.
>>>>
>>>>http://static6.businessinsider.com/image/58653ba0ee14b61b008b5aea-800
>>>>
>>>>Interesting.
>>>
>>>Yes, and I've been warned that by my taking evasive action I could
>>>cause someone else to respond likewise and that I would be held
>>>accountable for what happened.
>>
>>I find the assumption that a fatality involving a robot car would lead
>>to someone being jailed to be amusing. The people who assert this
>>never identify the statute under which someone would be jailed or who,
>>precisely, this someone might be. They seem to assume that because a
>>human driving a car could be jailed for vehicular homicide or criminal
>>negligence or some such, it is automatic that someone else would be
>>jailed for the same offense--the trouble is that the car is legally an
>>inanimate object and we don't put inanimate objects in jail. So it
>>gets down to proving that the occupant is negligent, which is a hard
>>sell given that the government allowed the car to be licensed with the
>>understanding that it would not be controlled by the occupant, or
>>proving that the engineering team responsible for developing it was
>>negligent, which given that they can show the logic the thing used and
>>no doubt provide legal justification for the decision it made, will be
>>another tall order. So who goes to jail?
>>
>
>You've taken it to the next level, into the real word scenario and out
>of the programming stage.
>
>Personally I would assume that anything designed would have to
>co-exist with real world laws and responsibilities. Even the final
>owner could be held responsible. See the laws regarding experimental
>aircraft, hang gliders, etc.
Experimental aircraft and hang gliders are controlled by a human. If
they are involved in a fatal accident, the operator gets scrutinized.
An autonomous car is not under human control, it is its own operator,
the occupant is a passenger.
We don't have "real world law" governing fatalities involving
autonomous vehicles. The engineering would, initially (I hope) be
based on existing case law involving human drivers and what the courts
held that they should or should not have done in particular
situations. But there won't be any actual law until either the
legislatures write statutes or the courts issue rulings, and the
latter won't happen until there are such vehicles in service in
sufficient quantity to generate cases.
Rather than hang gliders and homebuilts, consider a Globalhawk that
hits an airliner. Assuming no negligence on the part of the airliner
crew, who do you go after? Do you go after the Air Force, Northrop
Grumman, Raytheon, or somebody else? And of what are they likely to
be found guilty?
>But we should be sticking to this hypothetical example given us.
It was suggested that someone would go to jail. I still want to know
who and what crime they committed.
"DerbyDad03" wrote in message
news:[email protected]...
>Here goes:
>5 workers are standing on the railroad tracks. A train is heading in their
>direction. They have no escape route. If the train continues down the
>tracks,
>it will most assuredly kill them all.
>You are standing next to the lever that will switch the train to another
>track before it reaches the workers. On the other track is a lone worker,
>also with no escape route.
>You have 2, and only 2, options. If you do nothing, all 5 workers will
>be killed. If you pull the lever, only 1 worker will be killed.
>Which option do you choose?
As my school bus driver explained nearly 50 years ago, the lesser of evils
in this case would be to kill the lone worker... In the case of the bus, it
would be to run over a kid on the side of the road rather than have a
head-on collision with a large truck.
While troubling as a kid it made sense then and it still makes sense...
Just to throw this in: In real life a speeding train suddenly and
unknowingly switching tracks is not a good thing... hundreds could be killed
or injured if it were a passenger train!
On Wed, 22 Nov 2017 08:08:26 -0800 (PST), DerbyDad03
<[email protected]> wrote:
>On Wednesday, November 22, 2017 at 10:32:54 AM UTC-5, Ed Pawlowski wrote:
>> On 11/22/2017 7:52 AM, DerbyDad03 wrote:
>> > On Tuesday, November 21, 2017 at 10:04:43 AM UTC-5, Spalted Walt wrote:
>> >> [email protected] wrote:
>> >>
>> >>> I have to say, I am sorry to see that.
>> >>
>> >> technophobia [tek-nuh-foh-bee-uh]
>> >> noun -- abnormal fear of or anxiety about the effects of advanced technology.
>> >>
>> >> https://www.youtube.com/embed/NzEeJca_YaQ?autoplay=1&autohide=1&showinfo=0&iv_load_policy=3&rel=0
>> >
>> > I'm not sure how this will work out on usenet, but I'm going to present
>> > a scenario and ask for an answer. After some amount of time, maybe 48 hours,
>> > since tomorrow is Thanksgiving, I'll expand on that scenario and ask for
>> > another answer.
>> >
>> > Trust me, this will eventually lead back to technology, AI and most
>> > certainly, people.
>> >
>> > In the following scenario you must assume that all options have been
>> > considered and narrowed down to only 2. Please just accept that the
>> > situation is as stated and that you only have 2 choices. If we get into
>> > "Well, in a real life situation, you'd have to factor in this, that and
>> > the other thing" we'll never get through this exercise.
>> >
>> > Here goes:
>> >
>> > 5 workers are standing on the railroad tracks. A train is heading in their
>> > direction. They have no escape route. If the train continues down the tracks,
>> > it will most assuredly kill them all.
>> >
>> > You are standing next to the lever that will switch the train to another
>> > track before it reaches the workers. On the other track is a lone worker,
>> > also with no escape route.
>> >
>> > You have 2, and only 2, options. If you do nothing, all 5 workers will
>> > be killed. If you pull the lever, only 1 worker will be killed.
>> >
>> > Which option do you choose?
>> >
>>
>> The short answer is to pull the switch and save as many lives as possible.
>>
>> The long answer, it depends. Would you make that same decision if the
>> lone person was a family member? If the lone person was you? Five old
>> people or one child? Of course, AI would take all the emotions out of
>> the decision making. I think that is what you may be getting at.
>
>AI will not take *all* of the emotion out of it. More on that later.
Right. If it did, it wouldn't be "AI".
On Tuesday, November 21, 2017 at 2:04:21 AM UTC-5, [email protected] wrote:
> I have to say, I am sorry to see that.
>
> It means that all over the internet, in a high concentration here, and at
> the old men's table at Woodcraft the teeth gnashing will start.
>
> Screams of civil rights violations, chest thumping of those declaring that
> their generation had no guards or safety devices and they were fine, the
> paranoids buying saws now before the nanny state Commie/weenies make safety
> some kind of bullshit issue... all of it.
>
> Ready for the first 250 thread here for a long, long time. Nothing like
> getting a good bitch on to fire one up, though.
>
> Robert
I think it's a great idea.
There you go, I just canceled out your bitch and saved us 248 posts.
You're welcome. ;-)
On Wed, 22 Nov 2017 19:47:39 -0800 (PST), DerbyDad03
<[email protected]> wrote:
>On Wednesday, November 22, 2017 at 7:12:18 PM UTC-5, Leon wrote:
>> On 11/22/2017 1:17 PM, OFWW wrote:
>> > On Wed, 22 Nov 2017 12:45:11 -0600, Leon <lcb11211@swbelldotnet>
>> > wrote:
>> >
>> >> On 11/22/2017 8:45 AM, Leon wrote:
>> >>> On 11/22/2017 6:52 AM, DerbyDad03 wrote:
>> >>>> On Tuesday, November 21, 2017 at 10:04:43 AM UTC-5, Spalted Walt wrote:
>> >>>>> [email protected] wrote:
>> >>>>>
>> >>>>>> I have to say, I am sorry to see that.
>> >>>>>
>> >>>>> technophobia [tek-nuh-foh-bee-uh]
>> >>>>> noun -- abnormal fear of or anxiety about the effects of advanced
>> >>>>> technology.
>> >>>>>
>> >>>>> https://www.youtube.com/embed/NzEeJca_YaQ?autoplay=1&autohide=1&showinfo=0&iv_load_policy=3&rel=0
>> >>>>>
>> >>>>
>> >>>> I'm not sure how this will work out on usenet, but I'm going to present
>> >>>> a scenario and ask for an answer. After some amount of time, maybe 48
>> >>>> hours,
>> >>>> since tomorrow is Thanksgiving, I'll expand on that scenario and ask for
>> >>>> another answer.
>> >>>>
>> >>>> Trust me, this will eventually lead back to technology, AI and most
>> >>>> certainly, people.
>> >>>>
>> >>>> In the following scenario you must assume that all options have been
>> >>>> considered and narrowed down to only 2. Please just accept that the
>> >>>> situation is as stated and that you only have 2 choices. If we get into
>> >>>> "Well, in a real life situation, you'd have to factor in this, that and
>> >>>> the other thing" we'll never get through this exercise.
>> >>>>
>> >>>> Here goes:
>> >>>>
>> >>>> 5 workers are standing on the railroad tracks. A train is heading in
>> >>>> their
>> >>>> direction. They have no escape route. If the train continues down the
>> >>>> tracks,
>> >>>> it will most assuredly kill them all.
>> >>>>
>> >>>> You are standing next to the lever that will switch the train to another
>> >>>> track before it reaches the workers. On the other track is a lone worker,
>> >>>> also with no escape route.
>> >>>>
>> >>>> You have 2, and only 2, options. If you do nothing, all 5 workers will
>> >>>> be killed. If you pull the lever, only 1 worker will be killed.
>> >>>>
>> >>>> Which option do you choose?
>> >>>>
>> >>>
>> >>> Pull the lever. Choosing to do nothing is the choice to kill 5.
>> >>
>> >> Well I have mentioned this before, and it goes back to comments I have
>> >> made in the past about decision making. It seems the majority here use
>> >> emotional over rational thinking to come up with a decision.
>> >>
>> >> It was said you only have two choices and who these people are or might
>> >> be is not a consideration. You can't make a rational decision with
>> >> what-if's. You only have two options, kill 5 or kill 1. Rational for
>> >> me says save 5, for the rest of you that are bringing in scenarios past
>> >> what should be considered will waste too much time and you end up with a
>> >> kill before you decide what to do.
>> >
>> > Rational thinking would state that trains run on a schedule, the
>> > switch would be locked, and for better or worse the five were not
>> > supposed to be there in the first place.
>>
>> No, you are adding "what-if's" to the given restraints. This is easy, you
>> either choose to move the switch or not. There is no other situation to
>> consider.
>>
>
>I tried, I really tried:
>
>"Please just accept that the situation is as stated and that you only have
>2 choices. If we get into "Well, in a real life situation, you'd have to
>factor in this, that and the other thing" we'll never get through this
>exercise."
>
Snip>
Ok, then I opt to let 'er fly and not interfere, since morals or values
cannot be a part of the scenario without it being a "what if".
On Thursday, November 23, 2017 at 10:21:38 AM UTC-5, Leon wrote:
> On 11/23/2017 1:14 AM, OFWW wrote:
> > On Wed, 22 Nov 2017 18:12:06 -0600, Leon <lcb11211@swbelldotnet>
> > wrote:
> >
> >> On 11/22/2017 1:17 PM, OFWW wrote:
> >>> On Wed, 22 Nov 2017 12:45:11 -0600, Leon <lcb11211@swbelldotnet>
> >>> wrote:
> >>>
> >>>> On 11/22/2017 8:45 AM, Leon wrote:
> >>>>> On 11/22/2017 6:52 AM, DerbyDad03 wrote:
> >>>>>> On Tuesday, November 21, 2017 at 10:04:43 AM UTC-5, Spalted Walt wrote:
> >>>>>>> [email protected] wrote:
> >>>>>>>
> >>>>>>>> I have to say, I am sorry to see that.
> >>>>>>>
> >>>>>>> technophobia [tek-nuh-foh-bee-uh]
> >>>>>>> noun -- abnormal fear of or anxiety about the effects of advanced
> >>>>>>> technology.
> >>>>>>>
> >>>>>>> https://www.youtube.com/embed/NzEeJca_YaQ?autoplay=1&autohide=1&showinfo=0&iv_load_policy=3&rel=0
> >>>>>>>
> >>>>>>
> >>>>>> I'm not sure how this will work out on usenet, but I'm going to present
> >>>>>> a scenario and ask for an answer. After some amount of time, maybe 48
> >>>>>> hours,
> >>>>>> since tomorrow is Thanksgiving, I'll expand on that scenario and ask for
> >>>>>> another answer.
> >>>>>>
> >>>>>> Trust me, this will eventually lead back to technology, AI and most
> >>>>>> certainly, people.
> >>>>>>
> >>>>>> In the following scenario you must assume that all options have been
> >>>>>> considered and narrowed down to only 2. Please just accept that the
> >>>>>> situation is as stated and that you only have 2 choices. If we get into
> >>>>>> "Well, in a real life situation, you'd have to factor in this, that and
> >>>>>> the other thing" we'll never get through this exercise.
> >>>>>>
> >>>>>> Here goes:
> >>>>>>
> >>>>>> 5 workers are standing on the railroad tracks. A train is heading in
> >>>>>> their
> >>>>>> direction. They have no escape route. If the train continues down the
> >>>>>> tracks,
> >>>>>> it will most assuredly kill them all.
> >>>>>>
> >>>>>> You are standing next to the lever that will switch the train to another
> >>>>>> track before it reaches the workers. On the other track is a lone worker,
> >>>>>> also with no escape route.
> >>>>>>
> >>>>>> You have 2, and only 2, options. If you do nothing, all 5 workers will
> >>>>>> be killed. If you pull the lever, only 1 worker will be killed.
> >>>>>>
> >>>>>> Which option do you choose?
> >>>>>>
> >>>>>
> >>>>> Pull the lever. Choosing to do nothing is the choice to kill 5.
> >>>>
> >>>> Well I have mentioned this before, and it goes back to comments I have
> >>>> made in the past about decision making. It seems the majority here use
> >>>> emotional over rational thinking to come up with a decision.
> >>>>
> >>>> It was said you only have two choices and who these people are or might
> >>>> be is not a consideration. You can't make a rational decision with
> >>>> what-if's. You only have two options, kill 5 or kill 1. Rational for
> >>>> me says save 5, for the rest of you that are bringing in scenarios past
> >>>> what should be considered will waste too much time and you end up with a
> >>>> kill before you decide what to do.
> >>>
> >>> Rational thinking would state that trains run on a schedule, the
> >>> switch would be locked, and for better or worse the five were not
> >>> supposed to be there in the first place.
> >>
> >> No, you are adding "what-if's" to the given restraints. This is easy, you
> >> either choose to move the switch or not. There is no other situation to
> >> consider.
> >>
> >>>
> >>> So how can I make a decision more rational than the scheduler, even if
> >>> I had the key to the lock.
> >>>
> >>
> >> Again you are adding what-if's.
> >
> > I understand what you are saying, but I would consider them inherent
> > to the scenario.
> >
>
> LOL. Yeah well blame Derby for leaving out details to consider. ;~)
The train schedule, labor contract and key access process was not available
at the time of my posting. Sorry.
On 11/22/2017 1:17 PM, OFWW wrote:
> On Wed, 22 Nov 2017 12:45:11 -0600, Leon <lcb11211@swbelldotnet>
> wrote:
>
>> On 11/22/2017 8:45 AM, Leon wrote:
>>> On 11/22/2017 6:52 AM, DerbyDad03 wrote:
>>>> On Tuesday, November 21, 2017 at 10:04:43 AM UTC-5, Spalted Walt wrote:
>>>>> [email protected] wrote:
>>>>>
>>>>>> I have to say, I am sorry to see that.
>>>>>
>>>>> technophobia [tek-nuh-foh-bee-uh]
>>>>> noun -- abnormal fear of or anxiety about the effects of advanced
>>>>> technology.
>>>>>
>>>>> https://www.youtube.com/embed/NzEeJca_YaQ?autoplay=1&autohide=1&showinfo=0&iv_load_policy=3&rel=0
>>>>>
>>>>
>>>> I'm not sure how this will work out on usenet, but I'm going to present
>>>> a scenario and ask for an answer. After some amount of time, maybe 48
>>>> hours,
>>>> since tomorrow is Thanksgiving, I'll expand on that scenario and ask for
>>>> another answer.
>>>>
>>>> Trust me, this will eventually lead back to technology, AI and most
>>>> certainly, people.
>>>>
>>>> In the following scenario you must assume that all options have been
>>>> considered and narrowed down to only 2. Please just accept that the
>>>> situation is as stated and that you only have 2 choices. If we get into
>>>> "Well, in a real life situation, you'd have to factor in this, that and
>>>> the other thing" we'll never get through this exercise.
>>>>
>>>> Here goes:
>>>>
>>>> 5 workers are standing on the railroad tracks. A train is heading in
>>>> their
>>>> direction. They have no escape route. If the train continues down the
>>>> tracks,
>>>> it will most assuredly kill them all.
>>>>
>>>> You are standing next to the lever that will switch the train to another
>>>> track before it reaches the workers. On the other track is a lone worker,
>>>> also with no escape route.
>>>>
>>>> You have 2, and only 2, options. If you do nothing, all 5 workers will
>>>> be killed. If you pull the lever, only 1 worker will be killed.
>>>>
>>>> Which option do you choose?
>>>>
>>>
>>> Pull the lever. Choosing to do nothing is the choice to kill 5.
>>
>> Well I have mentioned this before, and it goes back to comments I have
>> made in the past about decision making. It seems the majority here use
>> emotional over rational thinking to come up with a decision.
>>
>> It was said you only have two choices and who these people are or might
>> be is not a consideration. You can't make a rational decision with
>> what-if's. You only have two options, kill 5 or kill 1. Rational for
>> me says save 5, for the rest of you that are bringing in scenarios past
>> what should be considered will waste too much time and you end up with a
>> kill before you decide what to do.
>
> Rational thinking would state that trains run on a schedule, the
> switch would be locked, and for better or worse the five were not
> supposed to be there in the first place.
No, you are adding "what-if's" to the given restraints. This is easy, you
either choose to move the switch or not. There is no other situation to
consider.
>
> So how can I make a decision more rational than the scheduler, even if
> I had the key to the lock.
>
Again you are adding what-if's.
On 11/22/2017 7:52 AM, DerbyDad03 wrote:
> On Tuesday, November 21, 2017 at 10:04:43 AM UTC-5, Spalted Walt wrote:
>> [email protected] wrote:
>>
>>> I have to say, I am sorry to see that.
>>
>> technophobia [tek-nuh-foh-bee-uh]
>> noun -- abnormal fear of or anxiety about the effects of advanced technology.
>>
>> https://www.youtube.com/embed/NzEeJca_YaQ?autoplay=1&autohide=1&showinfo=0&iv_load_policy=3&rel=0
>
> I'm not sure how this will work out on usenet, but I'm going to present
> a scenario and ask for an answer. After some amount of time, maybe 48 hours,
> since tomorrow is Thanksgiving, I'll expand on that scenario and ask for
> another answer.
>
> Trust me, this will eventually lead back to technology, AI and most
> certainly, people.
>
> In the following scenario you must assume that all options have been
> considered and narrowed down to only 2. Please just accept that the
> situation is as stated and that you only have 2 choices. If we get into
> "Well, in a real life situation, you'd have to factor in this, that and
> the other thing" we'll never get through this exercise.
>
> Here goes:
>
> 5 workers are standing on the railroad tracks. A train is heading in their
> direction. They have no escape route. If the train continues down the tracks,
> it will most assuredly kill them all.
>
> You are standing next to the lever that will switch the train to another
> track before it reaches the workers. On the other track is a lone worker,
> also with no escape route.
>
> You have 2, and only 2, options. If you do nothing, all 5 workers will
> be killed. If you pull the lever, only 1 worker will be killed.
>
> Which option do you choose?
>
The short answer is to pull the switch and save as many lives as possible.
The long answer, it depends. Would you make that same decision if the
lone person was a family member? If the lone person was you? Five old
people or one child? Of course, AI would take all the emotions out of
the decision making. I think that is what you may be getting at.
On Thu, 23 Nov 2017 23:10:05 -0500, J. Clarke
<[email protected]> wrote:
>On Thu, 23 Nov 2017 18:44:05 -0800, OFWW <[email protected]>
>wrote:
>
>>On Thu, 23 Nov 2017 11:53:47 -0800 (PST), DerbyDad03
>><[email protected]> wrote:
>>
>>>On Thursday, November 23, 2017 at 11:40:13 AM UTC-5, OFWW wrote:
>>>> On Wed, 22 Nov 2017 12:36:05 -0800 (PST), DerbyDad03
>>>> <[email protected]> wrote:
>>>>
>>>> >On Wednesday, November 22, 2017 at 1:51:05 PM UTC-5, Ed Pawlowski wrote:
>>>> >> On 11/22/2017 1:20 PM, DerbyDad03 wrote:
>>>> >>
>>>> >> >
>>>> >> > Oh, well, no sense in waiting...
>>>> >> >
>>>> >> > 2nd scenario:
>>>> >> >
>>>> >> > 5 workers are standing on the railroad tracks. A train is heading in their
>>>> >> > direction. They have no escape route. If the train continues down the tracks,
>>>> >> > it will most assuredly kill them all.
>>>> >> >
>>>> >> > You are standing on a bridge overlooking the tracks. Next to you is a fairly
>>>> >> > large person. We'll save you some trouble and let that person be a stranger.
>>>> >> >
>>>> >> > You have 2, and only 2, options. If you do nothing, all 5 workers will
>>>> >> > be killed. If you push the stranger off the bridge, the train will kill
>>>> >> > him but be stopped before the 5 workers are killed. (Don't question the
>>>> >> > physics, just accept the outcome.)
>>>> >> >
>>>> >> > Which option do you choose?
>>>> >> >
>>>> >>
>>>> >> I don't know. It was easy to pull the switch as there was a bit of
>>>> >> disconnect there. Now it is up close and you are doing the pushing.
>>>> >> One alternative is to jump yourself, but I'd not do that. Don't think I
>>>> >> could push the guy either.
>>>> >>
>>>> >
>>>> >And therein lies the rub. The "disconnected" part.
>>>> >
>>>> >Now, as promised, let's bring this back to technology, AI and most
>>>> >certainly, people. Let's talk specifically about autonomous vehicles,
>>>> >but please avoid the rabbit hole and realize that the concept applies
>>>> >to just about anywhere AI is used and people are involved. Autonomous
>>>> >vehicles (AVs) are just one example.
>>>> >
>>>> >Imagine it's X years from now and AVs are fairly common. Imagine that an AV
>>>> >is traveling down the road, with its AI in complete control of the vehicle.
>>>> >The driver is using one hand to get a cup of coffee from the built-in Keurig
>>>> >machine and choosing a Pandora station with the other. He is completely
>>>> >oblivious to what's happening outside of his vehicle.
>>>> >
>>>> >Now imagine that a 4 year old runs out into the road. The AI uses all of the
>>>> >data at its disposal (speed, distance, weather conditions, tire pressure,
>>>> >etc.) and decides that it will not be able to stop in time. It checks the
>>>> >input from its 360° cameras. Can't go right because of the line of parked
>>>> >cars. They won't slow the vehicle enough to avoid hitting the kid. Using
>>>> >facial recognition the AI determines that the mini-van on the left contains
>>>> >5 elderly people. If the AV swerves left, it will push the mini-van into
>>>> >oncoming traffic, directly into the path of an 18-wheeler. The AI communicates
>>>> >with the 18-wheeler's AI, which responds and says "I have no place to go. If
>>>> >you push the van into my lane, I'm taking out a bunch of Grandmas and
>>>> >Grandpas."
>>>> >
>>>> >Now the AI has to make basically the same decision as in my first scenario:
>>>> >Kill 1 or kill 5. For the AI, it's as easy as it was for us, right?
>>>> >
>>>> >"Bye Bye, kid. You should have stayed on the sidewalk."
>>>> >
>>>> >No emotion, right? Right, not once the AI is programmed, not once the initial
>>>> >AI rules have been written, not once the facial recognition database has
>>>> >been built. The question is who wrote those rules? Who decided it's OK to
>>>> >kill a young kid to save the lives of 5 rickety old folks? Oh wait, maybe
>>>> >it's better to save the kid and let the old folks die. They've had a full
>>>> >life. Who wrote that rule? In other words, someone(s) have to decide whose
>>>> >life is worth more than another's. They are essentially standing on a bridge
>>>> >deciding whether to push the guy or not. They have to write the rule. They
>>>> >are either going to kill the kid or push the car into the other lane.
>>>> >
>>>> >I, for one, don't think that I want to be sitting around that table. Having
>>>> >to make the decisions would be one thing. Having to sit next to the person
>>>> >that would push the guy off the bridge with a gleam in his eye would be a
>>>> >totally different story.
>>>>
>>>> I reconsidered my thoughts on this one as well.
>>>>
>>>> The AV should do as it was designed to do, to the best of its
>>>> capabilities: staying in the lane when there is no option to swerve
>>>> safely.
>>>>
>>>> There is already a legal reason for that: the swerving driver assumes
>>>> all the damages that result from his action, including manslaughter.
>>>
>>>So in the following brake failure scenario, if the AV stays in lane and
>>>kills the four "highly rated" pedestrians there are no charges, but if
>>>it changes lanes and takes out the 4 slugs, jail time may ensue.
>>>
>>>http://static6.businessinsider.com/image/58653ba0ee14b61b008b5aea-800
>>>
>>>Interesting.
>>
>>Yes, and I've been warned that by my taking evasive action I could
>>cause someone else to respond likewise and that I would be held
>>accountable for what happened.
>
>I find the assumption that a fatality involving a robot car would lead
>to someone being jailed to be amusing. The people who assert this
>never identify the statute under which someone would be jailed or who,
>precisely, this someone might be. They seem to assume that because a
>human driving a car could be jailed for vehicular homicide or criminal
>negligence or some such, it is automatic that someone else would be
>jailed for the same offense--the trouble is that the car is legally an
>inanimate object and we don't put inanimate objects in jail. So it
>gets down to proving that the occupant is negligent, which is a hard
>sell given that the government allowed the car to be licensed with the
>understanding that it would not be controlled by the occupant, or
>proving that the engineering team responsible for developing it was
>negligent, which given that they can show the logic the thing used and
>no doubt provide legal justification for the decision it made, will be
>another tall order. So who goes to jail?
>
You've taken it to the next level, into the real-world scenario and out
of the programming stage.
Personally, I would assume that anything designed would have to
co-exist with real-world laws and responsibilities. Even the final
owner could be held responsible. See the laws regarding experimental
aircraft, hang gliders, etc.
But we should stick to the hypothetical example given to us.
On Thu, 23 Nov 2017 23:10:05 -0500, J. Clarke
<[email protected]> wrote:
>...snip...
>
>So who goes to jail?
>
The software developer who signed off on the failing module.
On 11/24/2017 9:20 PM, Doug Miller wrote:
> DerbyDad03 <[email protected]> wrote in news:1bb19287-aa33-4417-b009-[email protected]:
>
>> On Tuesday, November 21, 2017 at 10:04:43 AM UTC-5, Spalted Walt wrote:
>>> [email protected] wrote:
>>>
>>>> I have to say, I am sorry to see that.
>>>
>>> technophobia [tek-nuh-foh-bee-uh]
>>> noun -- abnormal fear of or anxiety about the effects of advanced technology.
>>>
>>> https://www.youtube.com/embed/NzEeJca_YaQ?autoplay=1&autohide=1&showinfo=0&iv_load_policy=3&rel=0
>>
>> I'm not sure how this will work out on usenet, but I'm going to present
>> a scenario and ask for an answer. After some amount of time, maybe 48 hours,
>> since tomorrow is Thanksgiving, I'll expand on that scenario and ask for
>> another answer.
>>
>> Trust me, this will eventually lead back to technology, AI and most
>> certainly, people.
>>
>> In the following scenario you must assume that all options have been
>> considered and narrowed down to only 2. Please just accept that the
>> situation is as stated and that you only have 2 choices. If we get into
>> "Well, in a real life situation, you'd have to factor in this, that and
>> the other thing" we'll never get through this exercise.
>>
>> Here goes:
>>
>> 5 workers are standing on the railroad tracks. A train is heading in their
>> direction. They have no escape route. If the train continues down the tracks,
>> it will most assuredly kill them all.
>>
>> You are standing next to the lever that will switch the train to another
>> track before it reaches the workers. On the other track is a lone worker,
>> also with no escape route.
>>
>> You have 2, and only 2, options. If you do nothing, all 5 workers will
>> be killed. If you pull the lever, only 1 worker will be killed.
>>
>> Which option do you choose?
>
> Neither one. This is a classic example of the logical fallacy "false choice", the assumption
> that the choices presented are the only ones available.
>
> I'd choose instead to yell "move your ass, there's a train coming!".
>
;~) BUT that was not one of the options. You have 2, and only 2, options.
On 11/22/2017 8:45 AM, Leon wrote:
> On 11/22/2017 6:52 AM, DerbyDad03 wrote:
>> On Tuesday, November 21, 2017 at 10:04:43 AM UTC-5, Spalted Walt wrote:
>>> [email protected] wrote:
>>>
>>>> I have to say, I am sorry to see that.
>>>
>>> technophobia [tek-nuh-foh-bee-uh]
>>> noun -- abnormal fear of or anxiety about the effects of advanced
>>> technology.
>>>
>>> https://www.youtube.com/embed/NzEeJca_YaQ?autoplay=1&autohide=1&showinfo=0&iv_load_policy=3&rel=0
>>>
>>
>> I'm not sure how this will work out on usenet, but I'm going to present
>> a scenario and ask for an answer. After some amount of time, maybe 48
>> hours,
>> since tomorrow is Thanksgiving, I'll expand on that scenario and ask for
>> another answer.
>>
>> Trust me, this will eventually lead back to technology, AI and most
>> certainly, people.
>>
>> In the following scenario you must assume that all options have been
>> considered and narrowed down to only 2. Please just accept that the
>> situation is as stated and that you only have 2 choices. If we get into
>> "Well, in a real life situation, you'd have to factor in this, that and
>> the other thing" we'll never get through this exercise.
>>
>> Here goes:
>>
>> 5 workers are standing on the railroad tracks. A train is heading in
>> their
>> direction. They have no escape route. If the train continues down the
>> tracks,
>> it will most assuredly kill them all.
>>
>> You are standing next to the lever that will switch the train to another
>> track before it reaches the workers. On the other track is a lone worker,
>> also with no escape route.
>>
>> You have 2, and only 2, options. If you do nothing, all 5 workers will
>> be killed. If you pull the lever, only 1 worker will be killed.
>>
>> Which option do you choose?
>>
>
> Pull the lever. Choosing to do nothing is the choice to kill 5.
Well, I have mentioned this before, and it goes back to comments I have
made in the past about decision making. It seems the majority here use
emotional over rational thinking to come up with a decision.

It was said that you only have two choices and that who these people are
or might be is not a consideration. You can't make a rational decision
with what-if's. You only have two options: kill 5 or kill 1. Rational
thinking, for me, says save 5. For the rest of you who keep bringing in
scenarios beyond what should be considered: you will waste too much time
and end up with a kill before you decide what to do.
On 11/22/2017 9:47 PM, DerbyDad03 wrote:
> On Wednesday, November 22, 2017 at 7:12:18 PM UTC-5, Leon wrote:
>> On 11/22/2017 1:17 PM, OFWW wrote:
>>> On Wed, 22 Nov 2017 12:45:11 -0600, Leon <lcb11211@swbelldotnet>
>>> wrote:
>>>
>>>> On 11/22/2017 8:45 AM, Leon wrote:
>>>>> On 11/22/2017 6:52 AM, DerbyDad03 wrote:
>>>>>> On Tuesday, November 21, 2017 at 10:04:43 AM UTC-5, Spalted Walt wrote:
>>>>>>> [email protected] wrote:
>>>>>>>
>>>>>>>> I have to say, I am sorry to see that.
>>>>>>>
>>>>>>> technophobia [tek-nuh-foh-bee-uh]
>>>>>>> noun -- abnormal fear of or anxiety about the effects of advanced
>>>>>>> technology.
>>>>>>>
>>>>>>> https://www.youtube.com/embed/NzEeJca_YaQ?autoplay=1&autohide=1&showinfo=0&iv_load_policy=3&rel=0
>>>>>>>
>>>>>>
>>>>>> I'm not sure how this will work out on usenet, but I'm going to present
>>>>>> a scenario and ask for an answer. After some amount of time, maybe 48
>>>>>> hours,
>>>>>> since tomorrow is Thanksgiving, I'll expand on that scenario and ask for
>>>>>> another answer.
>>>>>>
>>>>>> Trust me, this will eventually lead back to technology, AI and most
>>>>>> certainly, people.
>>>>>>
>>>>>> In the following scenario you must assume that all options have been
>>>>>> considered and narrowed down to only 2. Please just accept that the
>>>>>> situation is as stated and that you only have 2 choices. If we get into
>>>>>> "Well, in a real life situation, you'd have to factor in this, that and
>>>>>> the other thing" we'll never get through this exercise.
>>>>>>
>>>>>> Here goes:
>>>>>>
>>>>>> 5 workers are standing on the railroad tracks. A train is heading in
>>>>>> their
>>>>>> direction. They have no escape route. If the train continues down the
>>>>>> tracks,
>>>>>> it will most assuredly kill them all.
>>>>>>
>>>>>> You are standing next to the lever that will switch the train to another
>>>>>> track before it reaches the workers. On the other track is a lone worker,
>>>>>> also with no escape route.
>>>>>>
>>>>>> You have 2, and only 2, options. If you do nothing, all 5 workers will
>>>>>> be killed. If you pull the lever, only 1 worker will be killed.
>>>>>>
>>>>>> Which option do you choose?
>>>>>>
>>>>>
>>>>> Pull the lever. Choosing to do nothing is the choice to kill 5.
>>>>
>>>> Well, I have mentioned this before, and it goes back to comments I have
>>>> made in the past about decision making. It seems the majority here use
>>>> emotional over rational thinking to come up with a decision.
>>>>
>>>> It was said that you only have two choices and that who these people are
>>>> or might be is not a consideration. You can't make a rational decision
>>>> with what-if's. You only have two options: kill 5 or kill 1. Rational
>>>> thinking, for me, says save 5. For the rest of you who keep bringing in
>>>> scenarios beyond what should be considered: you will waste too much time
>>>> and end up with a kill before you decide what to do.
>>>
>>> Rational thinking would state that trains run on a schedule, the
>>> switch would be locked, and for better or worse the five were not
>>> supposed to be there in the first place.
>>
>> No, you are adding "what-if's" to the given restraints. This is easy: you
>> either choose to move the switch or not. There is no other situation to
>> consider.
>>
>
> I tried, I really tried:
>
> "Please just accept that the situation is as stated and that you only have
> 2 choices. If we get into "Well, in a real life situation, you'd have to
> factor in this, that and the other thing" we'll never get through this
> exercise."
Precisely!
On 11/21/2017 9:44 AM, notbob wrote:
> On 2017-11-21, Ed Pawlowski <[email protected]> wrote:
>
>> Just think of the lives of depressed people it will save.
>
> ????
>
> nb
>
It stops this sort of thing:
https://www.documentingreality.com/forum/f10/suicide-bandsaw-11688/
DerbyDad03 <[email protected]> wrote in news:1bb19287-aa33-4417-b009-[email protected]:
> On Tuesday, November 21, 2017 at 10:04:43 AM UTC-5, Spalted Walt wrote:
>> [email protected] wrote:
>>
>> > I have to say, I am sorry to see that.
>>
>> technophobia [tek-nuh-foh-bee-uh]
>> noun -- abnormal fear of or anxiety about the effects of advanced technology.
>>
>> https://www.youtube.com/embed/NzEeJca_YaQ?autoplay=1&autohide=1&showinfo=0&iv_load_policy=3&rel=0
>
> I'm not sure how this will work out on usenet, but I'm going to present
> a scenario and ask for an answer. After some amount of time, maybe 48 hours,
> since tomorrow is Thanksgiving, I'll expand on that scenario and ask for
> another answer.
>
> Trust me, this will eventually lead back to technology, AI and most
> certainly, people.
>
> In the following scenario you must assume that all options have been
> considered and narrowed down to only 2. Please just accept that the
> situation is as stated and that you only have 2 choices. If we get into
> "Well, in a real life situation, you'd have to factor in this, that and
> the other thing" we'll never get through this exercise.
>
> Here goes:
>
> 5 workers are standing on the railroad tracks. A train is heading in their
> direction. They have no escape route. If the train continues down the tracks,
> it will most assuredly kill them all.
>
> You are standing next to the lever that will switch the train to another
> track before it reaches the workers. On the other track is a lone worker,
> also with no escape route.
>
> You have 2, and only 2, options. If you do nothing, all 5 workers will
> be killed. If you pull the lever, only 1 worker will be killed.
>
> Which option do you choose?
Neither one. This is a classic example of the logical fallacy "false choice", the assumption
that the choices presented are the only ones available.
I'd choose instead to yell "move your ass, there's a train coming!".
On Wednesday, November 22, 2017 at 1:51:05 PM UTC-5, Ed Pawlowski wrote:
> On 11/22/2017 1:20 PM, DerbyDad03 wrote:
>
> >
> > Oh, well, no sense in waiting...
> >
> > 2nd scenario:
> >
> > 5 workers are standing on the railroad tracks. A train is heading in their
> > direction. They have no escape route. If the train continues down the tracks,
> > it will most assuredly kill them all.
> >
> > You are standing on a bridge overlooking the tracks. Next to you is a fairly
> > large person. We'll save you some trouble and let that person be a stranger.
> >
> > You have 2, and only 2, options. If you do nothing, all 5 workers will
> > be killed. If you push the stranger off the bridge, the train will kill
> > him but be stopped before the 5 workers are killed. (Don't question the
> > physics, just accept the outcome.)
> >
> > Which option do you choose?
> >
>
> I don't know. It was easy to pull the switch as there was a bit of
> disconnect there. Now it is up close and you are doing the pushing.
> One alternative is to jump yourself, but I'd not do that. Don't think I
> could push the guy either.
>

And therein lies the rub. The "disconnected" part.

Now, as promised, let's bring this back to technology, AI and most
certainly, people. Let's talk specifically about autonomous vehicles,
but please avoid the rabbit hole and realize that the concept applies
to just about anywhere AI is used and people are involved. Autonomous
vehicles (AVs) are just one example.

Imagine it's X years from now and AVs are fairly common. Imagine that an AV
is traveling down the road, with its AI in complete control of the vehicle.
The driver is using one hand to get a cup of coffee from the built-in Keurig
machine and choosing a Pandora station with the other. He is completely
oblivious to what's happening outside of his vehicle.

Now imagine that a 4 year old runs out into the road. The AI uses all of the
data at its disposal (speed, distance, weather conditions, tire pressure,
etc.) and decides that it will not be able to stop in time. It checks the
input from its 360° cameras. Can't go right because of the line of parked
cars. They won't slow the vehicle enough to avoid hitting the kid. Using
facial recognition the AI determines that the mini-van on the left contains
5 elderly people. If the AV swerves left, it will push the mini-van into
oncoming traffic, directly into the path of an 18-wheeler. The AI communicates
with the 18-wheeler's AI, which responds and says "I have no place to go. If
you push the van into my lane, I'm taking out a bunch of Grandmas and
Grandpas."

Now the AI has to make basically the same decision as in my first scenario:
kill 1 or kill 5. For the AI, it's as easy as it was for us, right?

"Bye Bye, kid. You should have stayed on the sidewalk."

No emotion, right? Right, not once the AI is programmed, not once the initial
AI rules have been written, not once the facial recognition database has
been built. The question is: who wrote those rules? Who decided it's OK to
kill a young kid to save the lives of 5 rickety old folks? Oh wait, maybe
it's better to save the kid and let the old folks die; they've had a full
life. Who wrote that rule? In other words, someone(s) have to decide whose
life is worth more than another's. They are essentially standing on a bridge
deciding whether to push the guy or not. They have to write the rule. They
are either going to kill the kid or push the car into the other lane.

I, for one, don't think that I want to be sitting around that table. Having
to make the decisions would be one thing. Having to sit next to the person
that would push the guy off the bridge with a gleam in his eye would be a
totally different story.
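The rule-writing problem the post describes can be made concrete. A hedged
sketch in Python; the maneuvers, the casualty estimates, and especially the
per-person weights are invented for illustration and are not anyone's
actual AV design:

# The scenario, reduced to the table someone would have to write.
MANEUVERS = {
    "stay in lane": {"casualties": [("child", 1)]},
    "swerve right": {"casualties": [("child", 1)]},   # parked cars won't stop it
    "swerve left":  {"casualties": [("elderly", 5)]}, # van pushed into the 18-wheeler
}

# The line everyone at the table has to sign off on:
WEIGHT = {"child": 1.0, "elderly": 1.0}  # value-of-life weights, a human choice

def pick_maneuver(maneuvers, weight):
    """Return the maneuver with the lowest weighted casualty cost."""
    def cost(name):
        return sum(weight[kind] * n for kind, n in maneuvers[name]["casualties"])
    return min(maneuvers, key=cost)

print(pick_maneuver(MANEUVERS, WEIGHT))  # stay in lane; ties break by listing order

With equal weights the car stays in its lane and the kid dies; nudge the
child's weight above 5 and it swerves left into the van instead. Whoever
sets those numbers is the person standing on the bridge.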
On Fri, 24 Nov 2017 11:58:06 -0600, Markem <[email protected]>
wrote:
>On Fri, 24 Nov 2017 00:53:07 -0500, J. Clarke
><[email protected]> wrote:
>
>>On Thu, 23 Nov 2017 23:46:52 -0600, Markem <[email protected]>
>>wrote:
>>
>>>On Fri, 24 Nov 2017 00:37:20 -0500, J. Clarke
>>><[email protected]> wrote:
>>>
>>>>It was suggested that someone would go to jail. I still want to know
>>>>who and what crime they committed.
>>>
>>>Damages would be a tort case,
>>
>>So why do you mention damages?
>>
>>> as to who and what crime that would be
>>>determined in court. Some DA looking for publicity would bring
>>>charges.
>>
>>What charges? To bring charges there must have been a chargeable
>>offense, which means that a plausible argument can be made that some
>>law was violated. So what law do you believe would have been
>>violated? Or do you just _like_ being laughed out of court?
>
>I am not looking for political office. Ever heard the saying that a DA
>can indict a ham sandwich?
But when was the last time a ham sandwich was imprisoned?
On Fri, 24 Nov 2017 18:09:51 -0800, OFWW <[email protected]>
wrote:
>On Fri, 24 Nov 2017 09:11:16 -0500, J. Clarke
><[email protected]> wrote:
>
>>>>>But we should be sticking to this hypothetical example given us.
>>>>
>>>>It was suggested that someone would go to jail. I still want to know
>>>>who and what crime they committed.
>>>
>>>The person who did not stay in their own lane, and ended up committing
>>>involuntary manslaughter.
>>
>>Are you arguing that an autonomous vehicle is a "person"? You
>>really don't seem to grasp the concept. Rather than a car with an
>>occupant, make it a car, say a robot taxicab, that is going somewhere
>>or other unoccupied.
>>
>
>Is not a "who" a person? And yes, I realize the optimum goal is for a
>stand-alone vehicle independent of owner or operator. The robotic taxicab
>is already in test mode.
>
>>>In the case you bring up, the AV can currently be overridden at
>>>any time by the occupant. There are already AV vehicles operating on
>>>the streets.
>>
>>In what case that I bring up?
>
>The case of the option for switching lanes, and your questioning of who
>can be at fault. I brought up the fact that experimental aircraft carry
>a lifetime liability going back to the original maker and designer.
>It was to answer just who was culpable.
Check your attributions. There are many people participating in this
discussion. I did not bring up that case.
>> Globalhawk doesn't _have_ an occupant.
>>(when people use words with which you are unfamiliar, you should at
>>least Google those words before opining). There are very few
>>autonomous vehicles and currently they are for the most part operated
>>with a safety driver, but that is not anybody's long-term plan. Google
>>already has at least one demonstrator with no steering wheel or pedals
>>and Uber is planning on using driverless cars in their ride sharing
>>service--ultimately those would also have no controls accessible to
>>the passenger.
>>
>
>There are a lot
For certain rather small values of "lot".
> of autonomous vehicles running around, it just depends
>on where you are; some have already been in real-world accidents,
Yes, mostly other vehicles hitting them. I believe that there has
been one Google car collision that was attributed to decision-making by
the software. I'm ignoring the Tesla incident because that is not
supposed to be a completely autonomous system.
>Uber was already testing vehicles but required a person in the car,
>just in case.
I believe it is the government requiring the person.
>And yes, I knew Globalhawks do not have an occupant resident in the
>vehicle, but they are all monitored.
What do you mean when you say "monitored"? A human has to detect that
there is a danger, turn off the robot, and take control. If the robot
does not know that there is a danger it is unlikely that the human
will have any more information than the robot does.
>As to vehicles, some have a safety
>driver and some do not. The Globalhawks have built-in sensory devices
>for alarming, etc., plus all the data from radar, satellites,
>and so on. The info on the full technology that they and the operators
>have is not disclosed. Plus, it is a secret who all is operating the
>vehicles, so the bottom line would be the government operating them.
So you're saying that the entire government would go to jail? Dream
on.
>But thank you for your comment on my knowledge and how to fix it. :)
>
>>>Regarding your "who's at fault" scenario, just look at the court cases
>>>against gun makers, as if guns kill people.
>>
>>I have not introduced a "who's at fault" scenario. I have asked what
>>law would be violated and who would be jailed. "At fault" decides who
>>pays damages, not who goes to jail. I am not discussing damages, I am
>>discussing JAIL. You do know what a jail is, do you not?
>>
>
>Sorry, my Internet connection is down and I cannot google it.
And yet you can post here.
>>>So can we now return to the question or, at the least, woodworking?
>>
>>You're the one who started feeding the troll.
>
>Sorry, I am not privy to the list, so I'll just make this my last post
>on the subject, but I will read your reply.
Hope springs eternal.
I have to say, I am sorry to see that.

It means that all over the internet, in a high concentration here, and at
the old men's table at Woodcraft the teeth gnashing will start.

Screams of civil rights violations, chest thumping of those declaring that
their generation had no guards or safety devices and they were fine, the
paranoids buying saws now before the nanny state Commie/weenies make safety
some kind of bullshit issue... all of it.

Ready for the first 250 thread here for a long, long time. Nothing like
getting a good bitch on to fire one up, though.

Robert
On Wed, 22 Nov 2017 21:06:38 +0000, Spalted Walt
<[email protected]> wrote:
>DerbyDad03 <[email protected]> wrote:
>
>> On Wednesday, November 22, 2017 at 1:51:05 PM UTC-5, Ed Pawlowski wrote:
>> > On 11/22/2017 1:20 PM, DerbyDad03 wrote:
>> >
>> > ...snip...
>> >
>> And therein lies the rub. The "disconnected" part.
>>
>> Now, as promised, let's bring this back to technology, AI and most
>> certainly, people. Let's talk specifically about autonomous vehicles,
>> but please avoid the rabbit hole and realize that the concept applies
>> to just about anywhere AI is used and people are involved. Autonomous
>> vehicles (AVs) are just one example.
>>
>> Imagine it's X years from now and AVs are fairly common. Imagine that an AV
>> is traveling down the road, with its AI in complete control of the vehicle.
>> The driver is using one hand to get a cup of coffee from the built-in Keurig
>> machine and choosing a Pandora station with the other. He is completely
>> oblivious to what's happening outside of his vehicle.
>>
>> Now imagine that a 4 year old runs out into the road. The AI uses all of the
>> data at its disposal (speed, distance, weather conditions, tire pressure,
>> etc.) and decides that it will not be able to stop in time. It checks the
>> input from its 360° cameras. Can't go right because of the line of parked
>> cars. They won't slow the vehicle enough to avoid hitting the kid. Using
>> facial recognition the AI determines that the mini-van on the left contains
>> 5 elderly people. If the AV swerves left, it will push the mini-van into
>> oncoming traffic, directly into the path of an 18-wheeler. The AI communicates
>> with the 18-wheeler's AI, which responds and says "I have no place to go. If
>> you push the van into my lane, I'm taking out a bunch of Grandmas and
>> Grandpas."
The problem with this scenario is that it assumes that the AI has only
human eyes for sensors. It sees the four year old on radar near the
side of the road, detects a possible hazard, and slows down before
arriving near the four year old.
>>
>> ...snip...
>
>https://pbs.twimg.com/media/Cp0D5oCWIAAxSUT.jpg
>
>LOL!
On Wednesday, November 22, 2017 at 6:38:28 PM UTC-5, J. Clarke wrote:
...snip...
> The problem with this scenario is that it assumes that the AI has only
> human eyes for sensors. It sees the four year old on radar near the
> side of the road, detects a possible hazard, and slows down before
> arriving near the four year old.

OK, have it your way.

"To truly guarantee a pedestrian's safety, an AV would have to slow to a
crawl any time a pedestrian is walking nearby on a sidewalk, in case the
pedestrian decided to throw themselves in front of the vehicle," Noah
Goodall, a scientist with the Virginia Transportation Research Council,
wrote by email.

http://www.businessinsider.com/self-driving-cars-already-deciding-who-to-kill-2016-12

...snip...
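Goodall's "slow to a crawl" point is just the stopping-distance formula run
backwards. A sketch, assuming hard braking of about 7 m/s^2 on dry pavement
and ignoring reaction time; both figures are assumptions, not anything from
the article:

from math import sqrt

MAX_BRAKE = 7.0  # m/s^2, hard braking on dry pavement (assumed)

def speed_cap(pedestrian_distance_m):
    """Max speed (m/s) from which the car can still stop in the given
    distance, from v^2 = 2 * a * d."""
    return sqrt(2 * MAX_BRAKE * pedestrian_distance_m)

print(round(speed_cap(3.0), 1))   # pedestrian 3 m away: 6.5 m/s, about 15 mph
print(round(speed_cap(50.0), 1))  # clear for 50 m: 26.5 m/s, about 59 mph

Guaranteeing safety against a pedestrian who might step off a sidewalk
3 m away really does mean crawling, which is exactly Goodall's point.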
On Wed, 22 Nov 2017 21:06:38 +0000, Spalted Walt
<[email protected]> wrote:
>DerbyDad03 <[email protected]> wrote:
>
>...snip...
>
>https://pbs.twimg.com/media/Cp0D5oCWIAAxSUT.jpg
ROTF
>LOL!
On Wednesday, November 22, 2017 at 6:38:28 PM UTC-5, J. Clarke wrote:
> On Wed, 22 Nov 2017 21:06:38 +0000, Spalted Walt
> <[email protected]> wrote:
>
> >...snip...
>
> The problem with this scenario is that it assumes that the AI has only
> human eyes for sensors. It sees the four year old on radar near the
> side of the road, detects a possible hazard, and slows down before
> arriving near the four year old.

Gee, I don't know which of my 2 comments to post first...

No, the problem is that you did not read the description of the scenario
carefully enough: "Can't go right because of the line of parked
cars." Unless the radar is airborne or can see through metal, it won't
detect the kid in time.

But - and this is a big but - it ain't about seeing the kid or not.
You could shoot a thousand arrows through the scenario. It was never
meant to be perfect. The point is that there are still humans involved
who have to write the rules about which person or persons to kill.
People have to decide when it's OK to push the big guy off the bridge.
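The occlusion point is easy to check with arithmetic. A back-of-envelope
sketch; the speed, latency, and braking figures are all assumptions:

V = 13.4          # m/s, roughly 30 mph city speed (assumed)
REACTION = 0.2    # s, generous sense-and-actuate latency for a computer
MAX_BRAKE = 7.0   # m/s^2, hard braking on dry pavement (assumed)

def stopping_distance(v, reaction=REACTION, brake=MAX_BRAKE):
    """Distance covered while reacting, plus distance to brake to zero."""
    return v * reaction + v * v / (2 * brake)

print(round(stopping_distance(V), 1))  # ~15.5 m needed to stop from 30 mph

If the kid steps out from behind the parked cars 8 m ahead, no sensor suite
closes that gap; the car hits him or swerves, and which one it does is
exactly the rule the thread is arguing about.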
On Wednesday, November 22, 2017 at 7:12:18 PM UTC-5, Leon wrote:
> On 11/22/2017 1:17 PM, OFWW wrote:
> > On Wed, 22 Nov 2017 12:45:11 -0600, Leon <lcb11211@swbelldotnet>
> > wrote:
> >
> > >...snip...
> >
> > Rational thinking would state that trains run on a schedule, the
> > switch would be locked, and for better or worse the five were not
> > supposed to be there in the first place.
>
> No, you are adding "what-if's" to the given restraints. This is easy: you
> either choose to move the switch or not. There is no other situation to
> consider.

I tried, I really tried:

"Please just accept that the situation is as stated and that you only have
2 choices. If we get into "Well, in a real life situation, you'd have to
factor in this, that and the other thing" we'll never get through this
exercise."

> >
> > So how can I make a decision more rational than the scheduler, even if
> > I had the key to the lock?
>
> Again, you are adding what-if's.
On Thursday, November 23, 2017 at 11:40:13 AM UTC-5, OFWW wrote:
> On Wed, 22 Nov 2017 12:36:05 -0800 (PST), DerbyDad03
> <[email protected]> wrote:
>
> >...snip...
>
> I reconsidered my thoughts on this one as well.
>
> The AV should do as it was designed to do, to the best of its
> capabilities: staying in the lane when there is no option to swerve
> safely.
>
> There is already a legal reason for that: the swerving driver assumes
> all the damages that result from his action, including manslaughter.

So in the following brake-failure scenario, if the AV stays in lane and
kills the four "highly rated" pedestrians there are no charges, but if
it changes lanes and takes out the 4 slugs, jail time may ensue.

http://static6.businessinsider.com/image/58653ba0ee14b61b008b5aea-800

Interesting.
On Wed, 22 Nov 2017 12:36:05 -0800 (PST), DerbyDad03
<[email protected]> wrote:
>On Wednesday, November 22, 2017 at 1:51:05 PM UTC-5, Ed Pawlowski wrote:
>> On 11/22/2017 1:20 PM, DerbyDad03 wrote:
>>
>> >
>> > Oh, well, no sense in waiting...
>> >
>> > 2nd scenario:
>> >
>> > 5 workers are standing on the railroad tracks. A train is heading in their
>> > direction. They have no escape route. If the train continues down the tracks,
>> > it will most assuredly kill them all.
>> >
>> > You are standing on a bridge overlooking the tracks. Next to you is a fairly
>> > large person. We'll save you some trouble and let that person be a stranger.
>> >
>> > You have 2, and only 2, options. If you do nothing, all 5 workers will
>> > be killed. If you push the stranger off the bridge, the train will kill
>> > him but be stopped before the 5 workers are killed. (Don't question the
>> > physics, just accept the outcome.)
>> >
>> > Which option do you choose?
>> >
>>
>> I don't know. It was easy to pull the switch as there was a bit of
>> disconnect there. Now it is up close and you are doing the pushing.
>> One alternative is to jump yourself, but I'd not do that. Don't think I
>> could push the guy either.
>>
>
>And there in lies the rub. The "disconnected" part.
>
>Now, as promised, let's bring this back to technology, AI and most
>certainly, people. Let's talk specifically about autonomous vehicles,
>but please avoid the rabbit hole and realize that the concept applies
>to just about any where AI is used and people are involved. Autonomus
>vehicles (AV) are just one example.
>
>Imagine it's X years from now and AV's are fairly common. Imagine that an AV
>is traveling down the road, with its AI in complete control of the vehicle.
>The driver is using one hand get a cup of coffee from the built-in Keurig
>machine and choosing a Pandora station with the other. He is completely
>oblivious to what's happening outside of his vehicle.
>
>Now imagine that a 4 year old runs out into the road. The AI uses all of the
>data at its disposal (speed, distance, weather conditions, tire pressure,
>etc.) and decides that it will not be able to stop in time. It checks the
>input from its 360° cameras. Can't go right because of the line of parked
>cars. They won't slow the vehicle enough to avoid hitting the kid. Using
>facial recognition the AI determines that the mini-van on the left contains
>5 elderly people. If the AV swerves left, it will push the mini-van into
>oncoming traffic, directly into the path of an 18 wheeler. The AI communicates
>with the 18 wheeler's AI who responds and says "I have no place to go. If
>you push the van into my lane, I'm taking out a bunch of Grandmas and
>Grandpas."
>
>Now the AI has to make basically the same decision as in my first scenario:
>Kill 1 or kill 5. For the AI, it's as easy as it was for us, right?
>
>"Bye Bye, kid. You should have stayed on the sidewalk."
>
>No emotion, right? Right, not once the AI is programmed, not once the initial
>AI rules have been written, not once the facial recognition database has
>been built. The question is who wrote those rules? Who decided it's OK to
>kill a young kid to save the lives of 5 rickety old folks? Oh wait, maybe
>it's better to save the kid and let the old folks die. They've had a full
>life. Who wrote that rule? In other words, someone(s) have to decide whose
>life is worth more than another's. They are essentially standing on a bridge
>deciding whether to push the guy or not. They have to write the rule. They
>are either going to kill the kid or push the car into the other lane.
>
>I, for one, don't think that I want to be sitting around that table. Having
>to make the decisions would be one thing. Having to sit next to the person
>that would push the guy off the bridge with a gleam in his eye would be a
>totally different story.
I reconsidered my thoughts on this one as well.
The AV should do as it was designed to do, to the best of its
capabilities. Staying in the lane when there is no option to swerve
safely.
There is already a legal reason for that, that being that the swerving
driver assumes all the damages that result from his action, including
manslaughter.
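For what it's worth, the "stay in the lane unless a safe swerve exists"
default described above is easy to state as code. Here is a minimal
Python sketch; every name in it (Option, choose_maneuver, the boolean
"safe" flag standing in for all the sensor fusion and prediction) is
invented for illustration and is nobody's actual AV software:

    from dataclasses import dataclass

    @dataclass
    class Option:
        name: str              # e.g. "stay", "swerve_left", "swerve_right"
        is_lane_keeping: bool  # True only for the stay-in-lane maneuver
        safe: bool             # True if no collision at all is predicted

    def choose_maneuver(options):
        # Prefer any provably safe maneuver; among safe ones, prefer
        # staying in lane. If nothing is safe, keep the lane and brake:
        # the swerving driver assumes the damages, so the default is
        # not to swerve into a different harm.
        safe = [o for o in options if o.safe]
        if safe:
            return min(safe, key=lambda o: not o.is_lane_keeping)
        return next(o for o in options if o.is_lane_keeping)

    options = [Option("stay", True, False),          # hits the kid
               Option("swerve_right", False, False), # line of parked cars
               Option("swerve_left", False, False)]  # van into the 18 wheeler
    print(choose_maneuver(options).name)             # -> "stay"

Note that the sketch never ranks one harm against another; the moment
any of those "safe" flags has to become a weighted score, someone is
back at that table writing the rule.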
On Nov 24, 2017, J. Clarke wrote
(in article<[email protected]>):
> > > > GlobalHawk drones do have human pilots. Although they are not on board,
> > > > they are in control via a satellite link and can be thousands of miles away.
> > > >
> > > > <http://www.aviationtoday.com/2017/03/16/day-life-us-air-force-drone-pilot/>
> > >
> > > You are conflating Reaper and Globalhawk and totally missing the
> > > point.
> >
> > Could you be more specific? Exactly what is wrong?
>
> Reaper is a combat drone and is normally operated manually. We don't
> let robots decide to shoot people yet. Globalhawk is a recon drone
> and is normally autonomous. It has no weapons so shooting people is
> not an issue. It can be operated manually and normally is in high
> traffic areas for exactly the "what if it hits an airliner" reason,
> but for most of its mission profile it is autonomous.
So GlobalHawk is autonomous in the same sense as an airliner under autopilot
during the long flight to and from the theater. It is the human pilot who is
responsible for the whole flight.
> The article mentions Globalhawk in passing but then goes on to spend
> the rest of its time discussing piloting Predator, which while still
> in the inventory is ancestral to Reaper.
Yep.
Joe Gwinn
On Fri, 24 Nov 2017 16:23:59 -0500, J. Clarke
<[email protected]> wrote:
>On Fri, 24 Nov 2017 11:58:06 -0600, Markem <[email protected]>
>wrote:
>
>>On Fri, 24 Nov 2017 00:53:07 -0500, J. Clarke
>><[email protected]> wrote:
>>
>>>On Thu, 23 Nov 2017 23:46:52 -0600, Markem <[email protected]>
>>>wrote:
>>>
>>>>On Fri, 24 Nov 2017 00:37:20 -0500, J. Clarke
>>>><[email protected]> wrote:
>>>>
>>>>>It was suggested that someone would go to jail. I still want to know
>>>>>who and what crime they committed.
>>>>
>>>>Damages would be a tort case,
>>>
>>>So why do you mention damages?
>>>
>>>> as to who and what crime that would be
>>>>determined in court. Some DA looking for publicity would bring
>>>>charges.
>>>
>>>What charges? To bring charges there must have been a chargeable
>>>offense, which means that a plausible argument can be made that some
>>>law was violated. So what law do you believe would have been
>>>violated? Or do you just _like_ being laughed out of court?
>>
>>I am not looking for political office. Ever heard the saying that a DA
>>can indict a ham sandwich?
>
>But when was the last time a ham sandwich was imprisoned?
It transformed into a penicillin-based mold and could no longer be
held.
On Fri, 24 Nov 2017 18:39:03 -0500, Joseph Gwinn
<[email protected]> wrote:
>> > GlobalHawk drones do have human pilots. Although they are not on board, they
>> > are in control via a satellite link and can be thousands of miles away.
>> >
>> > <http://www.aviationtoday.com/2017/03/16/day-life-us-air-force-drone-pilot/>
>>
>> You are conflating Reaper and Globalhawk and totally missing the
>> point.
>
>Could you be more specific? Exactly what is wrong?
Reaper is a combat drone and is normally operated manually. We don't
let robots decide to shoot people yet. Globalhawk is a recon drone
and is normally autonomous. It has no weapons so shooting people is
not an issue. It can be operated manually and normally is in high
traffic areas for exactly the "what if it hits an airliner" reason,
but for most of its mission profile it is autonomous.
The article mentions Globalhawk in passing but then goes on to spend
the rest of its time discussing piloting Predator, which while still
in the inventory is ancestral to Reaper.
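The mode split J. Clarke describes (autonomous for most of the mission
profile, a human on the stick where an airliner conflict is plausible)
amounts to a one-line policy. A hypothetical Python sketch, not any
actual avionics interface:

    def control_mode(in_high_traffic_airspace: bool) -> str:
        # Hand control to the remote pilot only where traffic density
        # makes a collision with manned aircraft a realistic risk.
        return "manual (remote pilot)" if in_high_traffic_airspace else "autonomous"

    for leg, busy in [("ocean transit", False), ("terminal area", True)]:
        print(leg, "->", control_mode(busy))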
On Wednesday, November 22, 2017 at 11:21:24 AM UTC-5, Spalted Walt wrote:
> DerbyDad03 <[email protected]> wrote:
>
> > On Wednesday, November 22, 2017 at 10:32:54 AM UTC-5, Ed Pawlowski wrote:
> > > On 11/22/2017 7:52 AM, DerbyDad03 wrote:
> > > > On Tuesday, November 21, 2017 at 10:04:43 AM UTC-5, Spalted Walt wrote:
> > > >> [email protected] wrote:
> > > >>
> > > >>> I have to say, I am sorry to see that.
> > > >>
> > > >> technophobia [tek-nuh-foh-bee-uh]
> > > >> noun -- abnormal fear of or anxiety about the effects of advanced technology.
> > > >>
> > > >> https://www.youtube.com/embed/NzEeJca_YaQ?autoplay=1&autohide=1&showinfo=0&iv_load_policy=3&rel=0
> > > >
> > > > I'm not sure how this will work out on usenet, but I'm going to present
> > > > a scenario and ask for an answer. After some amount of time, maybe 48 hours,
> > > > since tomorrow is Thanksgiving, I'll expand on that scenario and ask for
> > > > another answer.
> > > >
> > > > Trust me, this will eventually lead back to technology, AI and most
> > > > certainly, people.
> > > >
> > > > In the following scenario you must assume that all options have been
> > > > considered and narrowed down to only 2. Please just accept that the
> > > > situation is as stated and that you only have 2 choices. If we get into
> > > > "Well, in a real life situation, you'd have to factor in this, that and
> > > > the other thing" we'll never get through this exercise.
> > > >
> > > > Here goes:
> > > >
> > > > 5 workers are standing on the railroad tracks. A train is heading in their
> > > > direction. They have no escape route. If the train continues down the tracks,
> > > > it will most assuredly kill them all.
> > > >
> > > > You are standing next to the lever that will switch the train to another
> > > > track before it reaches the workers. On the other track is a lone worker,
> > > > also with no escape route.
> > > >
> > > > You have 2, and only 2, options. If you do nothing, all 5 workers will
> > > > be killed. If you pull the lever, only 1 worker will be killed.
> > > >
> > > > Which option do you choose?
> > > >
> > >
> > > The short answer is to pull the switch and save as many lives as possible.
> > >
> > > The long answer, it depends. Would you make that same decision if the
> > > lone person was a family member? If the lone person was you? Five old
> > > people or one child? Of course, AI would take all the emotions out of
> > > the decision making. I think that is what you may be getting at.
> >
> > AI will not take *all* of the emotion out of it. More on that later.
>
> When do we get to the 'pushing the fat guy off the bridge' part of
> this moral dilemma quiz? ;')
>
> https://www.youtube.com/embed/bOpf6KcWYyw?autoplay=1
After we get enough "Pull the lever" answers. ;-)
Oh, well, no sense in waiting...
2nd scenario:
5 workers are standing on the railroad tracks. A train is heading in their
direction. They have no escape route. If the train continues down the tracks,
it will most assuredly kill them all.
You are standing on a bridge overlooking the tracks. Next to you is a fairly
large person. We'll save you some trouble and let that person be a stranger.
You have 2, and only 2, options. If you do nothing, all 5 workers will
be killed. If you push the stranger off the bridge, the train will kill
him but be stopped before the 5 workers are killed. (Don't question the
physics, just accept the outcome.)
Which option do you choose?
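To see why the second scenario feels different even though the
arithmetic is identical, note that a pure body-count rule cannot even
tell the two scenarios apart. A toy Python sketch (the function name is
made up, and this is nobody's real policy):

    def utilitarian_choice(deaths_if_act, deaths_if_do_nothing):
        # Act whenever acting kills fewer people.
        return "act" if deaths_if_act < deaths_if_do_nothing else "do nothing"

    # Scenario 1: pull the lever, 1 dies instead of 5.
    print(utilitarian_choice(1, 5))  # -> "act"
    # Scenario 2: push the stranger, 1 dies instead of 5.
    print(utilitarian_choice(1, 5))  # -> "act", same inputs, same answer

    # The rule never sees the pushing; the "disconnect" people report
    # between the two cases is exactly the part the numbers leave out.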
On Thu, 23 Nov 2017 23:00:51 -0800, OFWW <[email protected]>
wrote:
>So can we now return to the question or, at the least, woodworking?
Probably not
On Friday, November 24, 2017 at 9:11:22 AM UTC-5, J. Clarke wrote:
> On Thu, 23 Nov 2017 23:00:51 -0800, OFWW <[email protected]>
> wrote:
> >So can we now return to the question or, at the least, woodworking?
>
> You're the one who started feeding the troll.
...and then you joined the meal.
On Thu, 23 Nov 2017 23:00:51 -0800, OFWW <[email protected]>
wrote:
>>>>>Yes, and I've been warned that by my taking evasive action I could
>>>>>cause someone else to respond likewise and that I would be held
>>>>>accountable for what happened.
>>>>
>>>>I find the assumption that a fatality involving a robot car would lead
>>>>to someone being jailed to be amusing. The people who assert this
>>>>never identify the statute under which someone would be jailed or who,
>>>>precisely this someone might be. They seem to assume that because a
>>>>human driving a car could be jailed for vehicular homicide or criminal
>>>>negligence or some such, it is automatic that someone else would be
>>>>jailed for the same offense--the trouble is that the car is legally an
>>>>inanimate object and we don't put inanimate objects in jail. So it
>>>>gets down to proving that the occupant is negligent, which is a hard
>>>>sell given that the government allowed the car to be licensed with the
>>>>understanding that it would not be controlled by the occupant, or
>>>>proving that the engineering team responsible for developing it was
>>>>negligent, which given that they can show the logic the thing used and
>>>>no doubt provide legal justification for the decision it made, will be
>>>>another tall order. So who goes to jail?
>>>>
>>>
>>>You've taken it to the next level, into the real world scenario and out
>>>of the programming stage.
>>>
>>>Personally I would assume that anything designed would have to
>>>co-exist with real world laws and responsibilities. Even the final
>>>owner could be held responsible. See the laws regarding experimental
>>>aircraft, hang gliders, etc.
>>
>>Experimental aircraft and hang gliders are controlled by a human. If
>>they are involved in a fatal accident, the operator gets scrutinized.
>>An autonomous car is not under human control, it is its own operator,
>>the occupant is a passenger.
>>
>>We don't have "real world law" governing fatalities involving
>>autonomous vehicles. The engineering would, initially (I hope) be
>>based on existing case law involving human drivers and what the courts
>>held that they should or should not have done in particular
>>situations. But there won't be any actual law until either the
>>legislatures write statutes or the courts issue rulings, and the
>>latter won't happen until there are such vehicles in service in
>>sufficient quantity to generate cases.
>>
>>Rather than hang gliders and homebuilts, consider a Globalhawk that
>>hits an airliner. Assuming no negligence on the part of the airliner
>>crew, who do you go after? Do you go after the Air Force, Northrop
>>Grumman, Raytheon, or somebody else? And of what are they likely to
>>be found guilty?
>>
>>>But we should be sticking to this hypothetical example given us.
>>
>>It was suggested that someone would go to jail. I still want to know
>>who and what crime they committed.
>
>The person who did not stay in their own lane, and ended up committing
>involuntary manslaughter.
Are you arguing that an autonomous vehicle is a "person"? You
really don't seem to grasp the concept. Rather than a car with an
occupant, make it a car, say a robot taxicab, that is going somewhere
or other unoccupied.
>In the case you bring up the AV can be currently over ridden at
>anytime by the occupant. There are already AV vehicles operating on
>the streets.
In what case that I bring up? Globalhawk doesn't _have_ an occupant.
(when people use words with which you are unfamiliar, you should at
least Google those words before opining). There are very few
autonomous vehicles and currently they are for the most part operated
with a safety driver, but that is not anybody's long-term plan. Google
already has at least one demonstrator with no steering wheel or pedals
and Uber is planning on using driverless cars in their ride sharing
service--ultimately those would also have no controls accessible to
the passenger.
>Regarding your "who's at fault" scenario, just look at the court cases
>against gun makers, as if guns kill people.
I have not introduced a "who's at fault" scenario. I have asked what
law would be violated and who would be jailed. "At fault" decides who
pays damages, not who goes to jail. I am not discussing damages, I am
discussing JAIL. You do know what a jail is, do you not?
>So can we now return to the question or, at the least, woodworking?
You're the one who started feeding the troll.
On Fri, 24 Nov 2017 10:09:56 -0500, Ed Pawlowski <[email protected]> wrote:
>On 11/24/2017 12:37 AM, J. Clarke wrote:
>
>>>>
>>>> I find the assumption that a fatality involving a robot car would lead
>>>> to someone being jailed to be amusing. The people who assert this
>>>> never identify the statute under which someone would be jailed or who,
>>>> precisely this someone might be. They seem to assume that because a
>>>> human driving a car could be jailed for vehicular homicide or criminal
>>>> negligence or some such, it is automatic that someone else would be
>>>> jailed for the same offense--the trouble is that the car is legally an
>>>> inanimate object and we don't put inanimate objects in jail.
>>>>
>>>
>
>They can impound your car in a drug bust. Maybe they will impound your
>car for the offense. We'll build special long term impound lots for
>serious offenses, just disconnect the battery for lesser ones.
And of course that impoundment was ordered by a jury. You seem not to
understand the difference between seizure of property and jail, and
you also totally miss the point.
>>> You've taken it to the next level, into the real world scenario and out
>>> of the programming stage.
>>>
>>> Personally I would assume that anything designed would have to
>>> co-exist with real world laws and responsibilities. Even the final
>>> owner could be held responsible. See the laws regarding experimental
>>> aircraft, hang gliders, etc.
>>
>> Experimental aircraft and hang gliders are controlled by a human. If
>> they are involved in a fatal accident, the operator gets scrutinized.
>> An autonomous car is not under human control, it is its own operator,
>> the occupant is a passenger.
>
>The programmer will be jailed. Or maybe they will stick a pin in a
>Voodoo doll to punish him.
Which programmer? This isn't some guy working alone in his basement.
Is it the guy who wrote the code, the one who wrote the spec he
implemented, the manager who approved it? And when has anyone ever
been jailed because a device on which he was an engineer worked as
designed and someone came to harm?
>> We don't have "real world law" governing fatalities involving
>> autonomous vehicles. The engineering would, initially (I hope) be
>> based on existing case law involving human drivers and what the courts
>> held that they should or should not have done in particular
>> situations. But there won't be any actual law until either the
>> legislatures write statutes or the courts issue rulings, and the
>> latter won't happen until there are such vehicles in service in
>> sufficient quantity to generate cases.
>
>The sensible thing would be to gather the most brilliant minds of the TV
>ambulance-chasing lawyers and let them come up with guidelines for
>liability. Can you think of anything more fair than that?
You might actually have something.
On Thu, 23 Nov 2017 23:46:52 -0600, Markem <[email protected]>
wrote:
>On Fri, 24 Nov 2017 00:37:20 -0500, J. Clarke
><[email protected]> wrote:
>
>>It was suggested that someone would go to jail. I still want to know
>>who and what crime they committed.
>
>Damages would be a tort case,
So why do you mention damages?
> as to who and what crime that would be
>determined in court. Some DA looking for publicity would bring
>charges.
What charges? To bring charges there must have been a chargeable
offense, which means that a plausible argument can be made that some
law was violated. So what law do you believe would have been
violated? Or do you just _like_ being laughed out of court?
On 2017-11-21, Ed Pawlowski <[email protected]> wrote:
> Just think of the lives of depressed people it will save.
????
nb
On Monday, November 27, 2017 at 1:35:23 AM UTC-5, [email protected] wrote:
> Leon <lcb11211@swbelldotnet> wrote in
> news:[email protected]:
>
>
> > ;~) BUT that was not one of the options. You have 2, and only 2,
> > options
>
> There's always the third option... Probably the only good part of that
> movie:
> The only winning move is not to play.
>
If you choose not to decide, you still have made a choice. "Freewill", Rush, 1980
Not playing is the same thing as Option 1, doing nothing. 5 workers die.
On 11/21/2017 2:04 AM, [email protected] wrote:
> I have to say, I am sorry to see that.
>
> It means that all over the internet, in a high concentration here, and at the old men's table at Woodcraft the teeth gnashing will start.
>
> Screams of civil rights violations, chest thumping of those declaring that their generation had no guards or safety devices and they were fine, the paranoids buying saws now before the nanny state Commie/weenies make safety some kind of bullshit issue... all of it.
>
> Ready for the first 250 thread here for a long, long time. Nothing like getting a good bitch on to fire one up, though.
>
> Robert
>
There was a suicide by bandsaw. Just think of the lives of depressed
people it will save.
DerbyDad03 <[email protected]> wrote:
> On Tuesday, November 21, 2017 at 10:04:43 AM UTC-5, Spalted Walt wrote:
> > [email protected] wrote:
> >
> > > I have to say, I am sorry to see that.
> >
> > technophobia [tek-nuh-foh-bee-uh]
> > noun -- abnormal fear of or anxiety about the effects of advanced technology.
> >
> > https://www.youtube.com/embed/NzEeJca_YaQ?autoplay=1&autohide=1&showinfo=0&iv_load_policy=3&rel=0
>
> I'm not sure how this will work out on usenet, but I'm going to present
> a scenario and ask for an answer. After some amount of time, maybe 48 hours,
> since tomorrow is Thanksgiving, I'll expand on that scenario and ask for
> another answer.
>
> Trust me, this will eventually lead back to technology, AI and most
> certainly, people.
>
> In the following scenario you must assume that all options have been
> considered and narrowed down to only 2. Please just accept that the
> situation is as stated and that you only have 2 choices. If we get into
> "Well, in a real life situation, you'd have to factor in this, that and
> the other thing" we'll never get through this exercise.
>
> Here goes:
>
> 5 workers are standing on the railroad tracks. A train is heading in their
> direction. They have no escape route. If the train continues down the tracks,
> it will most assuredly kill them all.
>
> You are standing next to the lever that will switch the train to another
> track before it reaches the workers. On the other track is a lone worker,
> also with no escape route.
>
> You have 2, and only 2, options. If you do nothing, all 5 workers will
> be killed. If you pull the lever, only 1 worker will be killed.
>
> Which option do you choose?
I think humans have an aversion to harming others that needs to be
overridden by something (artificial intelligence). By rational
thinking we can sometimes override it -- by thinking about the people
we will save, for example. But for some people, that increase in
anxiety may be so overpowering that they don't make the utilitarian
choice, the choice for the greater good.
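To put that bare utilitarian rule in concrete terms, here is a minimal
sketch in Python; the option names and counts are invented for
illustration, not anyone's actual decision procedure:

# Minimal sketch of the utilitarian rule in the trolley scenario:
# choose whichever of the two options leaves fewer expected deaths.
def utilitarian_choice(deaths_if_nothing, deaths_if_pull):
    """Return the option with the lower body count."""
    if deaths_if_pull < deaths_if_nothing:
        return "pull the lever"
    return "do nothing"

# The scenario as given: 5 deaths if you do nothing, 1 if you pull.
print(utilitarian_choice(deaths_if_nothing=5, deaths_if_pull=1))
# -> pull the lever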
On Wed, 22 Nov 2017 18:12:06 -0600, Leon <lcb11211@swbelldotnet>
wrote:
>On 11/22/2017 1:17 PM, OFWW wrote:
>> On Wed, 22 Nov 2017 12:45:11 -0600, Leon <lcb11211@swbelldotnet>
>> wrote:
>>
>>> On 11/22/2017 8:45 AM, Leon wrote:
>>>> On 11/22/2017 6:52 AM, DerbyDad03 wrote:
>>>>> On Tuesday, November 21, 2017 at 10:04:43 AM UTC-5, Spalted Walt wrote:
>>>>>> [email protected] wrote:
>>>>>>
>>>>>>> I have to say, I am sorry to see that.
>>>>>>
>>>>>> technophobia [tek-nuh-foh-bee-uh]
>>>>>> noun -- abnormal fear of or anxiety about the effects of advanced
>>>>>> technology.
>>>>>>
>>>>>> https://www.youtube.com/embed/NzEeJca_YaQ?autoplay=1&autohide=1&showinfo=0&iv_load_policy=3&rel=0
>>>>>>
>>>>>
>>>>> I'm not sure how this will work out on usenet, but I'm going to present
>>>>> a scenario and ask for an answer. After some amount of time, maybe 48
>>>>> hours,
>>>>> since tomorrow is Thanksgiving, I'll expand on that scenario and ask for
>>>>> another answer.
>>>>>
>>>>> Trust me, this will eventually lead back to technology, AI and most
>>>>> certainly, people.
>>>>>
>>>>> In the following scenario you must assume that all options have been
>>>>> considered and narrowed down to only 2. Please just accept that the
>>>>> situation is as stated and that you only have 2 choices. If we get into
>>>>> "Well, in a real life situation, you'd have to factor in this, that and
>>>>> the other thing" we'll never get through this exercise.
>>>>>
>>>>> Here goes:
>>>>>
>>>>> 5 workers are standing on the railroad tracks. A train is heading in
>>>>> their
>>>>> direction. They have no escape route. If the train continues down the
>>>>> tracks,
>>>>> it will most assuredly kill them all.
>>>>>
>>>>> You are standing next to the lever that will switch the train to another
>>>>> track before it reaches the workers. On the other track is a lone worker,
>>>>> also with no escape route.
>>>>>
>>>>> You have 2, and only 2, options. If you do nothing, all 5 workers will
>>>>> be killed. If you pull the lever, only 1 worker will be killed.
>>>>>
>>>>> Which option do you choose?
>>>>>
>>>>
>>>> Pull the lever. Choosing to do nothing is the choice to kill 5.
>>>
>>> Well I have mentioned this before, and it goes back to comments I have
>>> made in the past about decision making. It seems the majority here use
>>> emotional over rational thinking to come up with a decision.
>>>
>>> It was said you only have two choices and who these people are or might
>>> be is not a consideration. You can't make a rational decision with
>>> what-if's. You only have two options, kill 5 or kill 1. Rational for
>>> me says save 5, for the rest of you that are bringing in scenarios past
>>> what should be considered will waste too much time and you end up with a
>>> kill before you decide what to do.
>>
>> Rational thinking would state that trains run on a schedule, the
>> switch would be locked, and for better or worse the five were not
>> supposed to be there in the first place.
>
>No, you are adding "what-if's" to the given constraints. This is easy, you
>either choose to move the switch or not. There is no other situation to
>consider.
>
>>
>> So how can I make a decision more rational than the scheduler, even if
>> I had the key to the lock.
>>
>
>Again you are adding what-if's.
I understand what you are saying, but I would consider them inherent
to the scenario.
On Wed, 22 Nov 2017 12:36:05 -0800 (PST), DerbyDad03
<[email protected]> wrote:
>On Wednesday, November 22, 2017 at 1:51:05 PM UTC-5, Ed Pawlowski wrote:
>> On 11/22/2017 1:20 PM, DerbyDad03 wrote:
>>
>> >
>> > Oh, well, no sense in waiting...
>> >
>> > 2nd scenario:
>> >
>> > 5 workers are standing on the railroad tracks. A train is heading in their
>> > direction. They have no escape route. If the train continues down the tracks,
>> > it will most assuredly kill them all.
>> >
>> > You are standing on a bridge overlooking the tracks. Next to you is a fairly
>> > large person. We'll save you some trouble and let that person be a stranger.
>> >
>> > You have 2, and only 2, options. If you do nothing, all 5 workers will
>> > be killed. If you push the stranger off the bridge, the train will kill
>> > him but be stopped before the 5 workers are killed. (Don't question the
>> > physics, just accept the outcome.)
>> >
>> > Which option do you choose?
>> >
>>
>> I don't know. It was easy to pull the switch as there was a bit of
>> disconnect there. Now it is up close and you are doing the pushing.
>> One alternative is to jump yourself, but I'd not do that. Don't think I
>> could push the guy either.
>>
>
>And therein lies the rub. The "disconnected" part.
>
>Now, as promised, let's bring this back to technology, AI and most
>certainly, people. Let's talk specifically about autonomous vehicles,
>but please avoid the rabbit hole and realize that the concept applies
>to just about anywhere AI is used and people are involved. Autonomous
>vehicles (AV) are just one example.
>
>Imagine it's X years from now and AV's are fairly common. Imagine that an AV
>is traveling down the road, with its AI in complete control of the vehicle.
>The driver is using one hand to get a cup of coffee from the built-in Keurig
>machine and choosing a Pandora station with the other. He is completely
>oblivious to what's happening outside of his vehicle.
>
>Now imagine that a 4 year old runs out into the road. The AI uses all of the
>data at its disposal (speed, distance, weather conditions, tire pressure,
>etc.) and decides that it will not be able to stop in time. It checks the
>input from its 360° cameras. Can't go right because of the line of parked
>cars. They won't slow the vehicle enough to avoid hitting the kid. Using
>facial recognition the AI determines that the mini-van on the left contains
>5 elderly people. If the AV swerves left, it will push the mini-van into
>oncoming traffic, directly into the path of an 18 wheeler. The AI communicates
>with the 18 wheeler's AI who responds and says "I have no place to go. If
>you push the van into my lane, I'm taking out a bunch of Grandmas and
>Grandpas."
>
>Now the AI has to make basically the same decision as in my first scenario:
>Kill 1 or kill 5. For the AI, it's as easy as it was for us, right?
>
>"Bye Bye, kid. You should have stayed on the sidewalk."
>
>No emotion, right? Right, not once the AI is programmed, not once the initial
>AI rules have been written, not once the facial recognition database has
>been built. The question is who wrote those rules? Who decided it's OK to
>kill a young kid to save the lives of 5 rickety old folks? Oh wait, maybe
>it's better to save the kid and let the old folks die. They've had a full
>life. Who wrote that rule? In other words, someone(s) have to decide whose
>life is worth more than another's. They are essentially standing on a bridge
>deciding whether to push the guy or not. They have to write the rule. They
>are either going to kill the kid or push the car into the other lane.
>
>I, for one, don't think that I want to be sitting around that table. Having
>to make the decisions would be one thing. Having to sit next to the person
>that would push the guy off the bridge with a gleam in his eye would be a
>totally different story.
Then there is the added input of the possible wealth of those rickety
old people, or the political power the office holder wields, and the
disruption to the economy or social power.
Who lives and who dies would have to be state-mandated, or lawsuits
against the programmer would ensue.
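To make the "who writes the rules" objection concrete, here is a
minimal sketch of the kind of valuation table someone would have to
commit to code; every name and weight below is hypothetical, which is
precisely the point under debate:

# Sketch of the kind of rule somebody would have to write for the AV
# scenario above. Every weight here is invented for illustration --
# the argument is that *someone* has to pick these numbers.
LIFE_WEIGHTS = {"child": 1.0, "adult": 1.0, "elderly": 1.0}

def expected_loss(people):
    """Sum the (hypothetical) weights of the lives at risk for one maneuver."""
    return sum(LIFE_WEIGHTS[p] for p in people)

def choose_maneuver(options):
    """Pick the maneuver whose projected casualties weigh the least."""
    return min(options, key=lambda m: expected_loss(options[m]))

options = {
    "stay in lane": ["child"],       # hits the kid
    "swerve left": ["elderly"] * 5,  # pushes the van into the truck
}
print(choose_maneuver(options))      # -> "stay in lane" under equal weights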
On 22-Nov-17 9:39 AM, Spalted Walt wrote:
...
> I think humans have an aversion to harming others that needs to be
> overridden by something (artificial intelligence). By rational
> thinking we can sometimes override it -- by thinking about the people
> we will save, for example. But for some people, that increase in
> anxiety may be so overpowering that they don't make the utilitarian
> choice, the choice for the greater good.
But if the one happens to be Einstein or similar vis-à-vis the five "just
ordinary folks", what's utilitarian?
On Tue, 21 Nov 2017 05:18:37 +0000
Spalted Walt <[email protected]> wrote:
> "BladeStop=E2=84=A2 improves band saw safety, with the ability to stop a
> bandsaw blade within a fraction of a second from when contact is made
Why does SawStop have to move over? Do they have a bandsaw product too?
Seems like a good idea, but I think they need to make a razor knife
that will not cut the operator, and also wood that has no splinters.
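For a rough sense of what a 9 ms stop means at the blade, here is a
back-of-envelope estimate; the 15 m/s blade speed is an assumed
typical figure (roughly 3000 ft/min), not a BladeStop specification:

# Rough estimate of how far a bandsaw blade travels during a 9 ms stop.
blade_speed = 15.0  # m/s, assumed nominal blade speed (~3000 ft/min)
stop_time = 0.009   # s, per the BladeStop marketing quote

travel_unbraked = blade_speed * stop_time      # if it never slowed: ~135 mm
travel_braked = 0.5 * blade_speed * stop_time  # constant deceleration: ~68 mm

print(f"unbraked travel: {travel_unbraked * 1000:.0f} mm")
print(f"braked travel:   {travel_braked * 1000:.0f} mm")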
On 11/22/2017 1:20 PM, DerbyDad03 wrote:
>
> Oh, well, no sense in waiting...
>
> 2nd scenario:
>
> 5 workers are standing on the railroad tracks. A train is heading in their
> direction. They have no escape route. If the train continues down the tracks,
> it will most assuredly kill them all.
>
> You are standing on a bridge overlooking the tracks. Next to you is a fairly
> large person. We'll save you some trouble and let that person be a stranger.
>
> You have 2, and only 2, options. If you do nothing, all 5 workers will
> be killed. If you push the stranger off the bridge, the train will kill
> him but be stopped before the 5 workers are killed. (Don't question the
> physics, just accept the outcome.)
>
> Which option do you choose?
>
I don't know. It was easy to pull the switch as there was a bit of
disconnect there. Now it is up close and you are doing the pushing.
One alternative is to jump yourself, but I'd not do that. Don't think I
could push the guy either.
Next, are the answers you get to the question what would actually
happen? It is easy to say "sure, I'd push the guy and save the other
lives," but IRL, would that happen? I can sit at my computer and
rationalize, but if the time came, emotion might take over.
On Nov 24, 2017, J. Clarke wrote
(in article<[email protected]>):
> On Fri, 24 Nov 2017 11:33:41 -0500, Joseph Gwinn
> <[email protected]> wrote:
>
> > On Nov 24, 2017, OFWW wrote
> > (in article<[email protected]>):
> >
> > > On Fri, 24 Nov 2017 00:37:20 -0500, J. Clarke
> > > <[email protected]> wrote:
> > >
> > > > On Thu, 23 Nov 2017 20:52:09 -0800, OFWW<[email protected]>
> > > > wrote:
> > > >
> > > > > On Thu, 23 Nov 2017 23:10:05 -0500, J. Clarke
> > > > > <[email protected]> wrote:
> > > > >
> > > > > > On Thu, 23 Nov 2017 18:44:05 -0800, OFWW<[email protected]>
> > > > > > wrote:
> > > > > >
> > > > > > > On Thu, 23 Nov 2017 11:53:47 -0800 (PST), DerbyDad03
> > > > > > > <[email protected]> wrote:
> > > > > > >
> > > > > > > > On Thursday, November 23, 2017 at 11:40:13 AM UTC-5, OFWW wrote:
> > > > > > > > > On Wed, 22 Nov 2017 12:36:05 -0800 (PST), DerbyDad03
> > > > > > > > > <[email protected]> wrote:
> > > > > > > > >
> > > > > > > > > > On Wednesday, November 22, 2017 at 1:51:05 PM UTC-5, Ed Pawlowski
> > > > > > > > > > wrote:
> > > > > > > > > > > On 11/22/2017 1:20 PM, DerbyDad03 wrote:
> > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > > > Oh, well, no sense in waiting...
> > > > > > > > > > > >
> > > > > > > > > > > > 2nd scenario:
> > > > > > > > > > > >
> > > > > > > > > > > > 5 workers are standing on the railroad tracks. A train is heading
> > > > > > > > > > > > in their
> > > > > > > > > > > > direction. They have no escape route. If the train continues down
> > > > > > > > > > > > the tracks,
> > > > > > > > > > > > it will most assuredly kill them all.
> > > > > > > > > > > >
> > > > > > > > > > > > You are standing on a bridge overlooking the tracks. Next to you
> > > > > > > > > > > > is
> > > > > > > > > > > > a fairly
> > > > > > > > > > > > large person. We'll save you some trouble and let that person be a
> > > > > > > > > > > > stranger.
> > > > > > > > > > > >
> > > > > > > > > > > > You have 2, and only 2, options. If you do nothing, all 5 workers
> > > > > > > > > > > > will
> > > > > > > > > > > > be killed. If you push the stranger off the bridge, the train will
> > > > > > > > > > > > kill
> > > > > > > > > > > > him but be stopped before the 5 workers are killed. (Don't
> > > > > > > > > > > > question
> > > > > > > > > > > > the
> > > > > > > > > > > > physics, just accept the outcome.)
> > > > > > > > > > > >
> > > > > > > > > > > > Which option do you choose?
> > > > > > > > > > >
> > > > > > > > > > > I don't know. It was easy to pull the switch as there was a bit of
> > > > > > > > > > > disconnect there. Now it is up close and you are doing the pushing.
> > > > > > > > > > > One alternative is to jump yourself, but I'd not do that. Don't
> > > > > > > > > > > think I
> > > > > > > > > > > could push the guy either.
> > > > > > > > > >
> > > > > > > > > > And therein lies the rub. The "disconnected" part.
> > > > > > > > > >
> > > > > > > > > > Now, as promised, let's bring this back to technology, AI and most
> > > > > > > > > > certainly, people. Let's talk specifically about autonomous
> > > > > > > > > > vehicles,
> > > > > > > > > > but please avoid the rabbit hole and realize that the concept
> > > > > > > > > > applies
> > > > > > > > > > to just about anywhere AI is used and people are involved.
> > > > > > > > > > Autonomous
> > > > > > > > > > vehicles (AV) are just one example.
> > > > > > > > > >
> > > > > > > > > > Imagine it's X years from now and AV's are fairly common. Imagine
> > > > > > > > > > that an AV
> > > > > > > > > > is traveling down the road, with its AI in complete control of the
> > > > > > > > > > vehicle.
> > > > > > > > > > The driver is using one hand to get a cup of coffee from the built-in
> > > > > > > > > > Keurig
> > > > > > > > > > machine and choosing a Pandora station with the other. He is
> > > > > > > > > > completely
> > > > > > > > > > oblivious to what's happening outside of his vehicle.
> > > > > > > > > >
> > > > > > > > > > Now imagine that a 4 year old runs out into the road. The AI uses
> > > > > > > > > > all
> > > > > > > > > > of the
> > > > > > > > > > data at its disposal (speed, distance, weather conditions, tire
> > > > > > > > > > pressure,
> > > > > > > > > > etc.) and decides that it will not be able to stop in time. It
> > > > > > > > > > checks
> > > > > > > > > > the
> > > > > > > > > > input from its 360° cameras. Can't go right because of the line of
> > > > > > > > > > parked
> > > > > > > > > > cars. They won't slow the vehicle enough to avoid hitting the kid.
> > > > > > > > > > Using
> > > > > > > > > > facial recognition the AI determines that the mini-van on the left
> > > > > > > > > > contains
> > > > > > > > > > 5 elderly people. If the AV swerves left, it will push the mini-van
> > > > > > > > > > into
> > > > > > > > > > oncoming traffic, directly into the path of an 18 wheeler. The AI
> > > > > > > > > > communicates
> > > > > > > > > > with the 18 wheeler's AI who responds and says "I have no place to
> > > > > > > > > > go. If
> > > > > > > > > > you push the van into my lane, I'm taking out a bunch of Grandmas
> > > > > > > > > > and
> > > > > > > > > > Grandpas."
> > > > > > > > > >
> > > > > > > > > > Now the AI has to make basically the same decision as in my first
> > > > > > > > > > scenario:
> > > > > > > > > > Kill 1 or kill 5. For the AI, it's as easy as it was for us, right?
> > > > > > > > > >
> > > > > > > > > > "Bye Bye, kid. You should have stayed on the sidewalk."
> > > > > > > > > >
> > > > > > > > > > No emotion, right? Right, not once the AI is programmed, not once
> > > > > > > > > > the
> > > > > > > > > > initial
> > > > > > > > > > AI rules have been written, not once the facial recognition database
> > > > > > > > > > has
> > > > > > > > > > been built. The question is who wrote those rules? Who decided it's
> > > > > > > > > > OK to
> > > > > > > > > > kill a young kid to save the lives of 5 rickety old folks? Oh wait,
> > > > > > > > > > maybe
> > > > > > > > > > it's better to save the kid and let the old folks die. They've had a
> > > > > > > > > > full
> > > > > > > > > > life. Who wrote that rule? In other words, someone(s) have to decide
> > > > > > > > > > whose
> > > > > > > > > > life is worth more than another's. They are essentially standing on
> > > > > > > > > > a
> > > > > > > > > > bridge
> > > > > > > > > > deciding whether to push the guy or not. They have to write the
> > > > > > > > > > rule.
> > > > > > > > > > They
> > > > > > > > > > are either going to kill the kid or push the car into the other
> > > > > > > > > > lane.
> > > > > > > > > >
> > > > > > > > > > I, for one, don't think that I want to be sitting around that table.
> > > > > > > > > > Having
> > > > > > > > > > to make the decisions would be one thing. Having to sit next to the
> > > > > > > > > > person
> > > > > > > > > > that would push the guy off the bridge with a gleam in his eye would
> > > > > > > > > > be a
> > > > > > > > > > totally different story.
> > > > > > > > >
> > > > > > > > > I reconsidered my thoughts on this one as well.
> > > > > > > > >
> > > > > > > > > The AV should do as it was designed to do, to the best of its
> > > > > > > > > capabilities. Staying in the lane when there is no option to swerve
> > > > > > > > > safely.
> > > > > > > > >
> > > > > > > > > There is already a legal reason for that, that being that the
> > > > > > > > > swerving
> > > > > > > > > driver assumes all the damages that result from his action, including
> > > > > > > > > manslaughter.
> > > > > > > >
> > > > > > > > So in the following brake failure scenario, if the AV stays in lane
> > > > > > > > and
> > > > > > > > kills the four "highly rated" pedestrians there are no charges, but if
> > > > > > > > it changes lanes and takes out the 4 slugs, jail time may ensue.
> > > > > > > >
> > > > > > > > http://static6.businessinsider.com/image/58653ba0ee14b61b008b5aea-800
> > > > > > > >
> > > > > > > > Interesting.
> > > > > > >
> > > > > > > Yes, and I've been warned that by my taking evasive action I could
> > > > > > > cause someone else to respond likewise and that I would be held
> > > > > > > accountable for what happened.
> > > > > >
> > > > > > I find the assumption that a fatality involving a robot car would lead
> > > > > > to someone being jailed to be amusing. The people who assert this
> > > > > > never identify the statute under which someone would be jailed or who,
> > > > > > precisely this someone might be. They seem to assume that because a
> > > > > > human driving a car could be jailed for vehicular homicide or criminal
> > > > > > negligence or some such, it is automatic that someone else would be
> > > > > > jailed for the same offense--the trouble is that the car is legally an
> > > > > > inanimate object and we don't put inanimate objects in jail. So it
> > > > > > gets down to proving that the occupant is negligent, which is a hard
> > > > > > sell given that the government allowed the car to be licensed with the
> > > > > > understanding that it would not be controlled by the occupant, or
> > > > > > proving that the engineering team responsible for developing it was
> > > > > > negligent, which given that they can show the logic the thing used and
> > > > > > no doubt provide legal justification for the decision it made, will be
> > > > > > another tall order. So who goes to jail?
> > > > >
> > > > > You've taken it to the next level, into the real world scenario and out
> > > > > of the programming stage.
> > > > >
> > > > > Personally I would assume that anything designed would have to
> > > > > co-exist with real world laws and responsibilities. Even the final
> > > > > owner could be held responsible. See the laws regarding experimental
> > > > > aircraft, hang gliders, etc.
> > > >
> > > > Experimental aircraft and hang gliders are controlled by a human. If
> > > > they are involved in a fatal accident, the operator gets scrutinized.
> > > > An autonomous car is not under human control, it is its own operator,
> > > > the occupant is a passenger.
> > > >
> > > > We don't have "real world law" governing fatalities involving
> > > > autonomous vehicles. The engineering would, initially (I hope) be
> > > > based on existing case law involving human drivers and what the courts
> > > > held that they should or should not have done in particular
> > > > situations. But there won't be any actual law until either the
> > > > legislatures write statutes or the courts issue rulings, and the
> > > > latter won't happen until there are such vehicles in service in
> > > > sufficient quantity to generate cases.
> > > >
> > > > Rather than hang gliders and homebuilts, consider a Globalhawk that
> > > > hits an airliner. Assuming no negligence on the part of the airliner
> > > > crew, who do you go after? Do you go after the Air Force, Northrop
> > > > Grumman, Raytheon, or somebody else? And of what are they likely to
> > > > be found guilty?
> >
> > GlobalHawk drones do have human pilots. Although they are not on board, they
> > are in control via a satellite link and can be thousands of miles away.
> >
> > .<http://www.aviationtoday.com/2017/03/16/day-life-us-air-force-drone-pilot/>
>
> You are conflating Reaper and Globalhawk and totally missing the
> point.
Could you be more specific? Exactly what is wrong?
Joe Gwinn
On Fri, 24 Nov 2017 09:11:16 -0500, J. Clarke
<[email protected]> wrote:
>>>>But we should be sticking to this hypothetical example given us.
>>>
>>>It was suggested that someone would go to jail. I still want to know
>>>who and what crime they committed.
>>
>>The person who did not stay in their own lane, and ended up committing
>>involuntary manslaughter.
>
>Are you arguing that an autonomous vehicle is a "person"? You
>really don't seem to grasp the concept. Rather than a car with an
>occupant, make it a car, say a robot taxicab, that is going somewhere
>or other unoccupied.
>
Is not a "who" a person? and yes, I realize the optimum goal is for a
stand alone vehicle independent of owner operator. The robotic taxicab
is already in test mode.
>>In the case you bring up the AV can currently be overridden at
>>any time by the occupant. There are already AV vehicles operating on
>>the streets.
>
>In what case that I bring up?
The case of the option of switching lanes. Your questioning as to who
can be at fault. I brought up the fact that experimental aircraft have
a lifetime indebtedness going back to the original maker and designer.
It was to answer just who was culpable.
> Globalhawk doesn't _have_ an occupant.
>(when people use words with which you are unfamiliar, you should at
>least Google those words before opining). There are very few
>autonomous vehicles and currently they are for the most part operated
>with a safety driver, but that is not anybody's long-term plan. Google
>already has at least one demonstrator with no steering wheel or pedals
>and Uber is planning on using driverless cars in their ride sharing
>service--ultimately those would also have no controls accessible to
>the passenger.
>
There are a lot of autonomous vehicles running around, it just depends
on where you are; some have already been in real-world accidents.
Uber was already testing vehicles, but required a person in the car,
just in case.
And yes, I knew Globalhawks do not have an occupant resident in the
vehicle, but they are all monitored. As to vehicles, some have a safety
driver and some do not. The Globalhawks have built-in sensory devices
for alarming, etc., plus all the data from radar, satellites,
etc. The full technology that they and the operators have
is not disclosed. Plus it is a secret as to who all are operating the
vehicles, so the bottom line would be the government operating them.
But thank you for your comment on my knowledge and how to fix it. :)
>>Regarding your "whose at fault" scenario, just look at the court cases
>>against gun makers, as if guns kill people.
>
>I have not introduced a "who's at fault" scenario. I have asked what
>law would be violated and who would be jailed. "At fault" decides who
>pays damages, not who goes to jail. I am not discussing damages, I am
>discussing JAIL. You do know what a jail is, do you not?
>
Sorry, my Internet connection is down and I cannot google it.
>>So can we now return to the question or, at the least, woodworking?
>
>You're the one who started feeding the troll.
Sorry, I am not privy to the list, so I'll just make this my last post
on the subject, but I will read your reply.
On Fri, 24 Nov 2017 11:33:41 -0500, Joseph Gwinn
<[email protected]> wrote:
>> > Rather than hang gliders and homebuilts, consider a Globalhawk that
>> > hits an airliner. Assuming no negligence on the part of the airliner
>> > crew, who do you go after? Do you go after the Air Force, Northrop
>> > Grumman, Raytheon, or somebody else? And of what are they likely to
>> > be found guilty?
>
>GlobalHawk drones do have human pilots. Although they are not on board, they
>are in control via a satellite link and can be thousands of miles away.
>
>.<http://www.aviationtoday.com/2017/03/16/day-life-us-air-force-drone-pilot/>
>
>Joe Gwinn
Yes, I know. Some versions can even be refueled in the air.
On Fri, 24 Nov 2017 11:33:41 -0500, Joseph Gwinn
<[email protected]> wrote:
>On Nov 24, 2017, OFWW wrote
>(in article<[email protected]>):
>
>> On Fri, 24 Nov 2017 00:37:20 -0500, J. Clarke
>> <[email protected]> wrote:
>>
>> > On Thu, 23 Nov 2017 20:52:09 -0800, OFWW<[email protected]>
>> > wrote:
>> >
>> > > On Thu, 23 Nov 2017 23:10:05 -0500, J. Clarke
>> > > <[email protected]> wrote:
>> > >
>> > > > On Thu, 23 Nov 2017 18:44:05 -0800, OFWW<[email protected]>
>> > > > wrote:
>> > > >
>> > > > > On Thu, 23 Nov 2017 11:53:47 -0800 (PST), DerbyDad03
>> > > > > <[email protected]> wrote:
>> > > > >
>> > > > > > On Thursday, November 23, 2017 at 11:40:13 AM UTC-5, OFWW wrote:
>> > > > > > > On Wed, 22 Nov 2017 12:36:05 -0800 (PST), DerbyDad03
>> > > > > > > <[email protected]> wrote:
>> > > > > > >
>> > > > > > > > On Wednesday, November 22, 2017 at 1:51:05 PM UTC-5, Ed Pawlowski
>> > > > > > > > wrote:
>> > > > > > > > > On 11/22/2017 1:20 PM, DerbyDad03 wrote:
>> > > > > > > > >
>> > > > > > > > > >
>> > > > > > > > > > Oh, well, no sense in waiting...
>> > > > > > > > > >
>> > > > > > > > > > 2nd scenario:
>> > > > > > > > > >
>> > > > > > > > > > 5 workers are standing on the railroad tracks. A train is heading
>> > > > > > > > > > in their
>> > > > > > > > > > direction. They have no escape route. If the train continues down
>> > > > > > > > > > the tracks,
>> > > > > > > > > > it will most assuredly kill them all.
>> > > > > > > > > >
>> > > > > > > > > > You are standing on a bridge overlooking the tracks. Next to you is
>> > > > > > > > > > a fairly
>> > > > > > > > > > large person. We'll save you some trouble and let that person be a
>> > > > > > > > > > stranger.
>> > > > > > > > > >
>> > > > > > > > > > You have 2, and only 2, options. If you do nothing, all 5 workers
>> > > > > > > > > > will
>> > > > > > > > > > be killed. If you push the stranger off the bridge, the train will
>> > > > > > > > > > kill
>> > > > > > > > > > him but be stopped before the 5 workers are killed. (Don't question
>> > > > > > > > > > the
>> > > > > > > > > > physics, just accept the outcome.)
>> > > > > > > > > >
>> > > > > > > > > > Which option do you choose?
>> > > > > > > > >
>> > > > > > > > > I don't know. It was easy to pull the switch as there was a bit of
>> > > > > > > > > disconnect there. Now it is up close and you are doing the pushing.
>> > > > > > > > > One alternative is to jump yourself, but I'd not do that. Don't
>> > > > > > > > > think I
>> > > > > > > > > could push the guy either.
>> > > > > > > >
>> > > > > > > > And therein lies the rub. The "disconnected" part.
>> > > > > > > >
>> > > > > > > > Now, as promised, let's bring this back to technology, AI and most
>> > > > > > > > certainly, people. Let's talk specifically about autonomous vehicles,
>> > > > > > > > but please avoid the rabbit hole and realize that the concept applies
>> > > > > > > > to just about anywhere AI is used and people are involved. Autonomous
>> > > > > > > > vehicles (AV) are just one example.
>> > > > > > > >
>> > > > > > > > Imagine it's X years from now and AV's are fairly common. Imagine
>> > > > > > > > that an AV
>> > > > > > > > is traveling down the road, with its AI in complete control of the
>> > > > > > > > vehicle.
>> > > > > > > > The driver is using one hand to get a cup of coffee from the built-in
>> > > > > > > > Keurig
>> > > > > > > > machine and choosing a Pandora station with the other. He is
>> > > > > > > > completely
>> > > > > > > > oblivious to what's happening outside of his vehicle.
>> > > > > > > >
>> > > > > > > > Now imagine that a 4 year old runs out into the road. The AI uses all
>> > > > > > > > of the
>> > > > > > > > data at its disposal (speed, distance, weather conditions, tire
>> > > > > > > > pressure,
>> > > > > > > > etc.) and decides that it will not be able to stop in time. It checks
>> > > > > > > > the
>> > > > > > > > input from its 360° cameras. Can't go right because of the line of
>> > > > > > > > parked
>> > > > > > > > cars. They won't slow the vehicle enough to avoid hitting the kid.
>> > > > > > > > Using
>> > > > > > > > facial recognition the AI determines that the mini-van on the left
>> > > > > > > > contains
>> > > > > > > > 5 elderly people. If the AV swerves left, it will push the mini-van
>> > > > > > > > into
>> > > > > > > > oncoming traffic, directly into the path of an 18 wheeler. The AI
>> > > > > > > > communicates
>> > > > > > > > with the 18 wheeler's AI who responds and says "I have no place to
>> > > > > > > > go. If
>> > > > > > > > you push the van into my lane, I'm taking out a bunch of Grandmas and
>> > > > > > > > Grandpas."
>> > > > > > > >
>> > > > > > > > Now the AI has to make basically the same decision as in my first
>> > > > > > > > scenario:
>> > > > > > > > Kill 1 or kill 5. For the AI, it's as easy as it was for us, right?
>> > > > > > > >
>> > > > > > > > "Bye Bye, kid. You should have stayed on the sidewalk."
>> > > > > > > >
>> > > > > > > > No emotion, right? Right, not once the AI is programmed, not once the
>> > > > > > > > initial
>> > > > > > > > AI rules have been written, not once the facial recognition database
>> > > > > > > > has
>> > > > > > > > been built. The question is who wrote those rules? Who decided it's
>> > > > > > > > OK to
>> > > > > > > > kill a young kid to save the lives of 5 rickety old folks? Oh wait,
>> > > > > > > > maybe
>> > > > > > > > it's better to save the kid and let the old folks die. They've had a
>> > > > > > > > full
>> > > > > > > > life. Who wrote that rule? In other words, someone(s) have to decide
>> > > > > > > > whose
>> > > > > > > > life is worth more than another's. They are essentially standing on a
>> > > > > > > > bridge
>> > > > > > > > deciding whether to push the guy or not. They have to write the rule.
>> > > > > > > > They
>> > > > > > > > are either going to kill the kid or push the car into the other lane.
>> > > > > > > >
>> > > > > > > > I, for one, don't think that I want to be sitting around that table.
>> > > > > > > > Having
>> > > > > > > > to make the decisions would be one thing. Having to sit next to the
>> > > > > > > > person
>> > > > > > > > that would push the guy off the bridge with a gleam in his eye would
>> > > > > > > > be a
>> > > > > > > > totally different story.
>> > > > > > >
>> > > > > > > I reconsidered my thoughts on this one as well.
>> > > > > > >
>> > > > > > > The AV should do as it was designed to do, to the best of its
>> > > > > > > capabilities. Staying in the lane when there is no option to swerve
>> > > > > > > safely.
>> > > > > > >
>> > > > > > > There is already a legal reason for that, that being that the swerving
>> > > > > > > driver assumes all the damages that result from his action, including
>> > > > > > > manslaughter.
>> > > > > >
>> > > > > > So in the following brake failure scenario, if the AV stays in lane and
>> > > > > > kills the four "highly rated" pedestrians there are no charges, but if
>> > > > > > it changes lanes and takes out the 4 slugs, jail time may ensue.
>> > > > > >
>> > > > > > http://static6.businessinsider.com/image/58653ba0ee14b61b008b5aea-800
>> > > > > >
>> > > > > > Interesting.
>> > > > >
>> > > > > Yes, and I've been warned that by my taking evasive action I could
>> > > > > cause someone else to respond likewise and that I would be held
>> > > > > accountable for what happened.
>> > > >
>> > > > I find the assumption that a fatality involving a robot car would lead
>> > > > to someone being jailed to be amusing. The people who assert this
>> > > > never identify the statute under which someone would be jailed or who,
>> > > > precisely this someone might be. They seem to assume that because a
>> > > > human driving a car could be jailed for vehicular homicide or criminal
>> > > > negligence or some such, it is automatic that someone else would be
>> > > > jailed for the same offense--the trouble is that the car is legally an
>> > > > inanimate object and we don't put inanimate objects in jail. So it
>> > > > gets down to proving that the occupant is negligent, which is a hard
>> > > > sell given that the government allowed the car to be licensed with the
>> > > > understanding that it would not be controlled by the occupant, or
>> > > > proving that the engineering team responsible for developing it was
>> > > > negligent, which given that they can show the logic the thing used and
>> > > > no doubt provide legal justification for the decision it made, will be
>> > > > another tall order. So who goes to jail?
>> > >
>> > > You've taken it to the next level, into the real world scenario and out
>> > > of the programming stage.
>> > >
>> > > Personally I would assume that anything designed would have to
>> > > co-exist with real world laws and responsibilities. Even the final
>> > > owner could be held responsible. See the laws regarding experimental
>> > > aircraft, hang gliders, etc.
>> >
>> > Experimental aircraft and hang gliders are controlled by a human. If
>> > they are involved in a fatal accident, the operator gets scrutinized.
>> > An autonomous car is not under human control, it is its own operator,
>> > the occupant is a passenger.
>> >
>> > We don't have "real world law" governing fatalities involving
>> > autonomous vehicles. The engineering would, initially (I hope) be
>> > based on existing case law involving human drivers and what the courts
>> > held that they should or should not have done in particular
>> > situations. But there won't be any actual law until either the
>> > legislatures write statutes or the courts issue rulings, and the
>> > latter won't happen until there are such vehicles in service in
>> > sufficient quantity to generate cases.
>> >
>> > Rather than hang gliders and homebuilts, consider a Globalhawk that
>> > hits an airliner. Assuming no negligence on the part of the airliner
>> > crew, who do you go after? Do you go after the Air Force, Northrop
>> > Grumman, Raytheon, or somebody else? And of what are they likely to
>> > be found guilty?
>
>GlobalHawk drones do have human pilots. Although they are not on board, they
>are in control via a satellite link and can be thousands of miles away.
>
>.<http://www.aviationtoday.com/2017/03/16/day-life-us-air-force-drone-pilot/>
You are conflating Reaper and Globalhawk and totally missing the
point.
On Fri, 24 Nov 2017 00:53:07 -0500, J. Clarke
<[email protected]> wrote:
>On Thu, 23 Nov 2017 23:46:52 -0600, Markem <[email protected]>
>wrote:
>
>>On Fri, 24 Nov 2017 00:37:20 -0500, J. Clarke
>><[email protected]> wrote:
>>
>>>It was suggested that someone would go to jail. I still want to know
>>>who and what crime they committed.
>>
>>Damages would be a tort case,
>
>So why do you mention damages?
>
>> as to who and what crime that would be
>>determined in court. Some DA looking for publicity would bring
>>charges.
>
>What charges? To bring charges there must have been a chargeable
>offense, which means that a plausible argument can be made that some
>law was violated. So what law do you believe would have been
>violated? Or do you just _like_ being laughed out of court?
I am not looking for political office. Ever heard the saying that a DA
can indict a ham sandwich?
[email protected] wrote:
> I have to say, I am sorry to see that.
technophobia [tek-nuh-foh-bee-uh]
noun -- abnormal fear of or anxiety about the effects of advanced technology.
https://www.youtube.com/embed/NzEeJca_YaQ?autoplay=1&autohide=1&showinfo=0&iv_load_policy=3&rel=0
On Thu, 23 Nov 2017 07:36:23 -0800 (PST), DerbyDad03
<[email protected]> wrote:
>On Thursday, November 23, 2017 at 10:21:38 AM UTC-5, Leon wrote:
>> On 11/23/2017 1:14 AM, OFWW wrote:
>> > On Wed, 22 Nov 2017 18:12:06 -0600, Leon <lcb11211@swbelldotnet>
>> > wrote:
>> >
>> >> On 11/22/2017 1:17 PM, OFWW wrote:
>> >>> On Wed, 22 Nov 2017 12:45:11 -0600, Leon <lcb11211@swbelldotnet>
>> >>> wrote:
>> >>>
>> >>>> On 11/22/2017 8:45 AM, Leon wrote:
>> >>>>> On 11/22/2017 6:52 AM, DerbyDad03 wrote:
>> >>>>>> On Tuesday, November 21, 2017 at 10:04:43 AM UTC-5, Spalted Walt wrote:
>> >>>>>>> [email protected] wrote:
>> >>>>>>>
>> >>>>>>>> I have to say, I am sorry to see that.
>> >>>>>>>
>> >>>>>>> technophobia [tek-nuh-foh-bee-uh]
>> >>>>>>> noun -- abnormal fear of or anxiety about the effects of advanced
>> >>>>>>> technology.
>> >>>>>>>
>> >>>>>>> https://www.youtube.com/embed/NzEeJca_YaQ?autoplay=1&autohide=1&showinfo=0&iv_load_policy=3&rel=0
>> >>>>>>>
>> >>>>>>
>> >>>>>> I'm not sure how this will work out on usenet, but I'm going to present
>> >>>>>> a scenario and ask for an answer. After some amount of time, maybe 48
>> >>>>>> hours,
>> >>>>>> since tomorrow is Thanksgiving, I'll expand on that scenario and ask for
>> >>>>>> another answer.
>> >>>>>>
>> >>>>>> Trust me, this will eventually lead back to technology, AI and most
>> >>>>>> certainly, people.
>> >>>>>>
>> >>>>>> In the following scenario you must assume that all options have been
>> >>>>>> considered and narrowed down to only 2. Please just accept that the
>> >>>>>> situation is as stated and that you only have 2 choices. If we get into
>> >>>>>> "Well, in a real life situation, you'd have to factor in this, that and
>> >>>>>> the other thing" we'll never get through this exercise.
>> >>>>>>
>> >>>>>> Here goes:
>> >>>>>>
>> >>>>>> 5 workers are standing on the railroad tracks. A train is heading in
>> >>>>>> their
>> >>>>>> direction. They have no escape route. If the train continues down the
>> >>>>>> tracks,
>> >>>>>> it will most assuredly kill them all.
>> >>>>>>
>> >>>>>> You are standing next to the lever that will switch the train to another
>> >>>>>> track before it reaches the workers. On the other track is a lone worker,
>> >>>>>> also with no escape route.
>> >>>>>>
>> >>>>>> You have 2, and only 2, options. If you do nothing, all 5 workers will
>> >>>>>> be killed. If you pull the lever, only 1 worker will be killed.
>> >>>>>>
>> >>>>>> Which option do you choose?
>> >>>>>>
>> >>>>>
>> >>>>> Pull the lever. Choosing to do nothing is the choice to kill 5.
>> >>>>
>> >>>> Well I have mentioned this before, and it goes back to comments I have
>> >>>> made in the past about decision making. It seems the majority here use
>> >>>> emotional over rational thinking to come up with a decision.
>> >>>>
>> >>>> It was said you only have two choices and who these people are or might
>> >>>> be is not a consideration. You can't make a rational decision with
>> >>>> what-if's. You only have two options, kill 5 or kill 1. Rational for
>> >>>> me says save 5, for the rest of you that are bringing in scenarios past
>> >>>> what should be considered will waste too much time and you end up with a
>> >>>> kill before you decide what to do.
>> >>>
>> >>> Rational thinking would state that trains run on a schedule, the
>> >>> switch would be locked, and for better or worse the five were not
>> >>> supposed to be there in the first place.
>> >>
>> >> No, you are adding "what-if's" to the given constraints. This is easy, you
>> >> either choose to move the switch or not. There is no other situation to
>> >> consider.
>> >>
>> >>>
>> >>> So how can I make a decision more rational than the scheduler, even if
>> >>> I had the key to the lock.
>> >>>
>> >>
>> >> Again you are adding what-if's.
>> >
>> > I understand what you are saying, but I would consider them inherent
>> > to the scenario.
>> >
>>
>> LOL. Yeah well blame Derby for leaving out details to consider. ;~)
>
>The train schedule, labor contract and key access process were not available
>at the time of my posting. Sorry.
Thinking along the lines of "if I were the programmer for the code," I
would have to conclude there is insufficient info and let what happens
happen until such time as there is more info.
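As a sketch of what "conclude insufficient info and let what happens
happen" could look like in code -- the sensor fields and the confidence
threshold here are invented for illustration:

# Sketch of the "insufficient info -> do the default thing" position:
# if the inputs are not trustworthy enough to justify a swerve, hold
# the lane and brake. Field names and the threshold are hypothetical.
from dataclasses import dataclass

@dataclass
class Perception:
    obstacle_ahead: bool
    camera_confidence: float  # 0.0 .. 1.0

def plan(p):
    if not p.obstacle_ahead:
        return "continue"
    if p.camera_confidence < 0.9:  # not enough info to justify swerving
        return "stay in lane and brake hard"
    return "evaluate evasive options"

print(plan(Perception(obstacle_ahead=True, camera_confidence=0.5)))
# -> stay in lane and brake hard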
replying to Spalted Walt, Ernesto wrote:
I recently read Max Tegmark's new book "Life 3.0" and have read others by Ray
Kurzweil and other physicists and engineers that touch on this subject. As a
result I know that everything in this video is accurate and disturbingly
likely. I don't believe those who are never content with the amount of power
and wealth they have will refrain from developing this technology, especially
because in their ignorance and arrogance they will mistakenly believe they
will be able to control it once they have it. As such, I don't think the
efforts to restrict and guide the development of AI will be successful in
keeping us safe from the dark side of strong AI.
On Thu, 23 Nov 2017 11:53:47 -0800 (PST), DerbyDad03
<[email protected]> wrote:
>On Thursday, November 23, 2017 at 11:40:13 AM UTC-5, OFWW wrote:
>> On Wed, 22 Nov 2017 12:36:05 -0800 (PST), DerbyDad03
>> <[email protected]> wrote:
>>
>> >On Wednesday, November 22, 2017 at 1:51:05 PM UTC-5, Ed Pawlowski wrote:
>> >> On 11/22/2017 1:20 PM, DerbyDad03 wrote:
>> >>
>> >> >
>> >> > Oh, well, no sense in waiting...
>> >> >
>> >> > 2nd scenario:
>> >> >
>> >> > 5 workers are standing on the railroad tracks. A train is heading in their
>> >> > direction. They have no escape route. If the train continues down the tracks,
>> >> > it will most assuredly kill them all.
>> >> >
>> >> > You are standing on a bridge overlooking the tracks. Next to you is a fairly
>> >> > large person. We'll save you some trouble and let that person be a stranger.
>> >> >
>> >> > You have 2, and only 2, options. If you do nothing, all 5 workers will
>> >> > be killed. If you push the stranger off the bridge, the train will kill
>> >> > him but be stopped before the 5 workers are killed. (Don't question the
>> >> > physics, just accept the outcome.)
>> >> >
>> >> > Which option do you choose?
>> >> >
>> >>
>> >> I don't know. It was easy to pull the switch as there was a bit of
>> >> disconnect there. Now it is up close and you are doing the pushing.
>> >> One alternative is to jump yourself, but I'd not do that. Don't think I
>> >> could push the guy either.
>> >>
>> >
>> >And therein lies the rub. The "disconnected" part.
>> >
>> >Now, as promised, let's bring this back to technology, AI and most
>> >certainly, people. Let's talk specifically about autonomous vehicles,
>> >but please avoid the rabbit hole and realize that the concept applies
>> >to just about anywhere AI is used and people are involved. Autonomous
>> >vehicles (AV) are just one example.
>> >
>> >Imagine it's X years from now and AV's are fairly common. Imagine that an AV
>> >is traveling down the road, with its AI in complete control of the vehicle.
>> >The driver is using one hand to get a cup of coffee from the built-in Keurig
>> >machine and choosing a Pandora station with the other. He is completely
>> >oblivious to what's happening outside of his vehicle.
>> >
>> >Now imagine that a 4 year old runs out into the road. The AI uses all of the
>> >data at its disposal (speed, distance, weather conditions, tire pressure,
>> >etc.) and decides that it will not be able to stop in time. It checks the
>> >input from its 360° cameras. Can't go right because of the line of parked
>> >cars. They won't slow the vehicle enough to avoid hitting the kid. Using
>> >facial recognition the AI determines that the mini-van on the left contains
>> >5 elderly people. If the AV swerves left, it will push the mini-van into
>> >oncoming traffic, directly into the path of an 18 wheeler. The AI communicates
>> >with the 18 wheeler's AI who responds and says "I have no place to go. If
>> >you push the van into my lane, I'm taking out a bunch of Grandmas and
>> >Grandpas."
>> >
>> >Now the AI has to make basically the same decision as in my first scenario:
>> >Kill 1 or kill 5. For the AI, it's as easy as it was for us, right?
>> >
>> >"Bye Bye, kid. You should have stayed on the sidewalk."
>> >
>> >No emotion, right? Right, not once the AI is programmed, not once the initial
>> >AI rules have been written, not once the facial recognition database has
>> >been built. The question is who wrote those rules? Who decided it's OK to
>> >kill a young kid to save the lives of 5 rickety old folks? Oh wait, maybe
>> >it's better to save the kid and let the old folks die. They've had a full
>> >life. Who wrote that rule? In other words, someone(s) have to decide whose
>> >life is worth more than another's. They are essentially standing on a bridge
>> >deciding whether to push the guy or not. They have to write the rule. They
>> >are either going to kill the kid or push the car into the other lane.
>> >
>> >I, for one, don't think that I want to be sitting around that table. Having
>> >to make the decisions would be one thing. Having to sit next to the person
>> >that would push the guy off the bridge with a gleam in his eye would be a
>> >totally different story.
>>
>> I reconsidered my thoughts on this one as well.
>>
>> The AV should do as it was designed to do, to the best of its
>> capabilities. Staying in the lane when there is no option to swerve
>> safely.
>>
>> There is already a legal reason for that, that being that the swerving
>> driver assumes all the damages that result from his action, including
>> manslaughter.
>
>So in the following brake failure scenario, if the AV stays in lane and
>kills the four "highly rated" pedestrians there are no charges, but if
>it changes lanes and takes out the 4 slugs, jail time may ensue.
>
>http://static6.businessinsider.com/image/58653ba0ee14b61b008b5aea-800
>
>Interesting.
Yes, and I've been warned that by my taking evasive action I could
cause someone else to respond likewise and that I would be held
accountable for what happened.
On Sat, 25 Nov 2017 12:25:28 -0500, Joseph Gwinn
<[email protected]> wrote:
>On Nov 24, 2017, J. Clarke wrote
>(in article<[email protected]>):
>
>> On Fri, 24 Nov 2017 18:39:03 -0500, Joseph Gwinn
>> <[email protected]> wrote:
>>
>> > On Nov 24, 2017, J. Clarke wrote
>> > (in article<[email protected]>):
>> >
>> > > On Fri, 24 Nov 2017 11:33:41 -0500, Joseph Gwinn
>> > > <[email protected]> wrote:
>> > >
>> > > > On Nov 24, 2017, OFWW wrote
>> > > > (in article<[email protected]>):
>> > > >
>> > > > > On Fri, 24 Nov 2017 00:37:20 -0500, J. Clarke
>> > > > > <[email protected]> wrote:
>> > > > >
>> > > > > > On Thu, 23 Nov 2017 20:52:09 -0800, OFWW<[email protected]>
>> > > > > > wrote:
>> > > > > >
>> > > > > > > On Thu, 23 Nov 2017 23:10:05 -0500, J. Clarke
>> > > > > > > <[email protected]> wrote:
>> > > > > > >
>> > > > > > > > On Thu, 23 Nov 2017 18:44:05 -0800, OFWW<[email protected]>
>> > > > > > > > wrote:
>> > > > > > > >
>> > > > > > > > > On Thu, 23 Nov 2017 11:53:47 -0800 (PST), DerbyDad03
>> > > > > > > > > <[email protected]> wrote:
>> > > > > > > > >
>> > > > > > > > > > On Thursday, November 23, 2017 at 11:40:13 AM UTC-5, OFWW wrote:
>> > > > > > > > > > > On Wed, 22 Nov 2017 12:36:05 -0800 (PST), DerbyDad03
>> > > > > > > > > > > <[email protected]> wrote:
>> > > > > > > > > > >
>> > > > > > > > > > > > On Wednesday, November 22, 2017 at 1:51:05 PM UTC-5, Ed Pawlowski
>> > > > > > > > > > > > wrote:
>> > > > > > > > > > > > > On 11/22/2017 1:20 PM, DerbyDad03 wrote:
>> > > > > > > > > > > > >
>> > > > > > > > > > > > > >
>> > > > > > > > > > > > > > Oh, well, no sense in waiting...
>> > > > > > > > > > > > > >
>> > > > > > > > > > > > > > 2nd scenario:
>> > > > > > > > > > > > > >
>> > > > > > > > > > > > > > 5 workers are standing on the railroad tracks. A train is
>> > > > > > > > > > > > > > heading
>> > > > > > > > > > > > > > in their
>> > > > > > > > > > > > > > direction. They have no escape route. If the train continues
>> > > > > > > > > > > > > > down
>> > > > > > > > > > > > > > the tracks,
>> > > > > > > > > > > > > > it will most assuredly kill them all.
>> > > > > > > > > > > > > >
>> > > > > > > > > > > > > > You are standing on a bridge overlooking the tracks. Next to you
>> > > > > > > > > > > > > > is
>> > > > > > > > > > > > > > a fairly
>> > > > > > > > > > > > > > large person. We'll save you some trouble and let that person
>> > > > > > > > > > > > > > be a
>> > > > > > > > > > > > > > stranger.
>> > > > > > > > > > > > > >
>> > > > > > > > > > > > > > You have 2, and only 2, options. If you do nothing, all 5
>> > > > > > > > > > > > > > workers
>> > > > > > > > > > > > > > will
>> > > > > > > > > > > > > > be killed. If you push the stranger off the bridge, the train
>> > > > > > > > > > > > > > will
>> > > > > > > > > > > > > > kill
>> > > > > > > > > > > > > > him but be stopped before the 5 workers are killed. (Don't
>> > > > > > > > > > > > > > question
>> > > > > > > > > > > > > > the
>> > > > > > > > > > > > > > physics, just accept the outcome.)
>> > > > > > > > > > > > > >
>> > > > > > > > > > > > > > Which option do you choose?
>> > > > > > > > > > > > >
>> > > > > > > > > > > > > I don't know. It was easy to pull the switch as there was a bit
>> > > > > > > > > > > > > of
>> > > > > > > > > > > > > disconnect there. Now it is up close and you are doing the
>> > > > > > > > > > > > > pushing.
>> > > > > > > > > > > > > One alternative is to jump yourself, but I'd not do that. Don't
>> > > > > > > > > > > > > think I
>> > > > > > > > > > > > > could push the guy either.
>> > > > > > > > > > > >
>> > > > > > > > > > > > And therein lies the rub. The "disconnected" part.
>> > > > > > > > > > > >
>> > > > > > > > > > > > Now, as promised, let's bring this back to technology, AI and most
>> > > > > > > > > > > > certainly, people. Let's talk specifically about autonomous
>> > > > > > > > > > > > vehicles,
>> > > > > > > > > > > > but please avoid the rabbit hole and realize that the concept
>> > > > > > > > > > > > applies
>> > > > > > > > > > > > to just about anywhere AI is used and people are involved.
>> > > > > > > > > > > > Autonomous
>> > > > > > > > > > > > vehicles (AV) are just one example.
>> > > > > > > > > > > >
>> > > > > > > > > > > > Imagine it's X years from now and AV's are fairly common. Imagine
>> > > > > > > > > > > > that an AV
>> > > > > > > > > > > > is traveling down the road, with its AI in complete control of the
>> > > > > > > > > > > > vehicle.
>> > > > > > > > > > > > The driver is using one hand to get a cup of coffee from the built-in
>> > > > > > > > > > > > Keurig
>> > > > > > > > > > > > machine and choosing a Pandora station with the other. He is
>> > > > > > > > > > > > completely
>> > > > > > > > > > > > oblivious to what's happening outside of his vehicle.
>> > > > > > > > > > > >
>> > > > > > > > > > > > Now imagine that a 4 year old runs out into the road. The AI uses
>> > > > > > > > > > > > all
>> > > > > > > > > > > > of the
>> > > > > > > > > > > > data at its disposal (speed, distance, weather conditions, tire
>> > > > > > > > > > > > pressure,
>> > > > > > > > > > > > etc.) and decides that it will not be able to stop in time. It
>> > > > > > > > > > > > checks
>> > > > > > > > > > > > the
>> > > > > > > > > > > > input from its 360° cameras. Can't go right because of the line
>> > > > > > > > > > > > of
>> > > > > > > > > > > > parked
>> > > > > > > > > > > > cars. They won't slow the vehicle enough to avoid hitting the kid.
>> > > > > > > > > > > > Using
>> > > > > > > > > > > > facial recognition the AI determines that the mini-van on the left
>> > > > > > > > > > > > contains
>> > > > > > > > > > > > 5 elderly people. If the AV swerves left, it will push the
>> > > > > > > > > > > > mini-van
>> > > > > > > > > > > > into
>> > > > > > > > > > > > oncoming traffic, directly into the path of a 18 wheeler. The AI
>> > > > > > > > > > > > communicates
>> > > > > > > > > > > > with the 18 wheeler's AI who responds and says "I have no place to
>> > > > > > > > > > > > go. If
>> > > > > > > > > > > > you push the van into my lane, I'm taking out a bunch of Grandmas
>> > > > > > > > > > > > and
>> > > > > > > > > > > > Grandpas."
>> > > > > > > > > > > >
>> > > > > > > > > > > > Now the AI has to make basically the same decision as in my first
>> > > > > > > > > > > > scenario:
>> > > > > > > > > > > > Kill 1 or kill 5. For the AI, it's as easy as it was for us,
>> > > > > > > > > > > > right?
>> > > > > > > > > > > >
>> > > > > > > > > > > > "Bye Bye, kid. You should have stayed on the sidewalk."
>> > > > > > > > > > > >
>> > > > > > > > > > > > No emotion, right? Right, not once the AI is programmed, not once
>> > > > > > > > > > > > the
>> > > > > > > > > > > > initial
>> > > > > > > > > > > > AI rules have been written, not once the facial recognition
>> > > > > > > > > > > > database
>> > > > > > > > > > > > has
>> > > > > > > > > > > > been built. The question is who wrote those rules? Who decided
>> > > > > > > > > > > > it's
>> > > > > > > > > > > > OK to
>> > > > > > > > > > > > kill a young kid to save the lives of 5 rickety old folks? Oh
>> > > > > > > > > > > > wait,
>> > > > > > > > > > > > maybe
>> > > > > > > > > > > > it's better to save the kid and let the old folks die. They've
>> > > > > > > > > > > > had a
>> > > > > > > > > > > > full
>> > > > > > > > > > > > life. Who wrote that rule? In other words, someone(s) have to
>> > > > > > > > > > > > decide
>> > > > > > > > > > > > whose
>> > > > > > > > > > > > life is worth more than another's. They are essentially standing
>> > > > > > > > > > > > on
>> > > > > > > > > > > > a
>> > > > > > > > > > > > bridge
>> > > > > > > > > > > > deciding whether to push the guy or not. They have to write the
>> > > > > > > > > > > > rule.
>> > > > > > > > > > > > They
>> > > > > > > > > > > > are either going to kill the kid or push the car into the other
>> > > > > > > > > > > > lane.
>> > > > > > > > > > > >
>> > > > > > > > > > > > I, for one, don't think that I want to be sitting around that
>> > > > > > > > > > > > table.
>> > > > > > > > > > > > Having
>> > > > > > > > > > > > to make the decisions would be one thing. Having to sit next to
>> > > > > > > > > > > > the
>> > > > > > > > > > > > person
>> > > > > > > > > > > > that would push the guy off the bridge with a gleam in his eye
>> > > > > > > > > > > > would
>> > > > > > > > > > > > be a
>> > > > > > > > > > > > totally different story.
>> > > > > > > > > > >
>> > > > > > > > > > > I reconsidered my thoughts on this one as well.
>> > > > > > > > > > >
>> > > > > > > > > > > The AV should do as it was designed to do, to the best of its
>> > > > > > > > > > > capabilities. Staying in the lane when there is no option to swerve
>> > > > > > > > > > > safely.
>> > > > > > > > > > >
>> > > > > > > > > > > There is already a legal reason for that, that being that the swerving
>> > > > > > > > > > > driver assumes all the damages that incur from his action, including
>> > > > > > > > > > > manslaughter.
>> > > > > > > > > >
>> > > > > > > > > > So in the following brake failure scenario, if the AV stays in lane
>> > > > > > > > > > and kills the four "highly rated" pedestrians there are no charges,
>> > > > > > > > > > but if it changes lanes and takes out the 4 slugs, jail time may ensue.
>> > > > > > > > > >
>> > > > > > > > > > http://static6.businessinsider.com/image/58653ba0ee14b61b008b5aea-800
>> > > > > > > > > >
>> > > > > > > > > > Interesting.
>> > > > > > > > >
>> > > > > > > > > Yes, and I've been warned that by my taking evasive action I could
>> > > > > > > > > cause someone else to respond likewise and that I would be held
>> > > > > > > > > accountable for what happened.
>> > > > > > > >
>> > > > > > > > I find the assumption that a fatality involving a robot car would lead
>> > > > > > > > to someone being jailed to be amusing. The people who assert this
>> > > > > > > > never identify the statute under which someone would be jailed or who,
>> > > > > > > > precisely, this someone might be. They seem to assume that because a
>> > > > > > > > human driving a car could be jailed for vehicular homicide or criminal
>> > > > > > > > negligence or some such, it is automatic that someone else would be
>> > > > > > > > jailed for the same offense--the trouble is that the car is legally an
>> > > > > > > > inanimate object and we don't put inanimate objects in jail. So it
>> > > > > > > > gets down to proving that the occupant is negligent, which is a hard
>> > > > > > > > sell given that the government allowed the car to be licensed with the
>> > > > > > > > understanding that it would not be controlled by the occupant, or
>> > > > > > > > proving that the engineering team responsible for developing it was
>> > > > > > > > negligent, which given that they can show the logic the thing used and
>> > > > > > > > no doubt provide legal justification for the decision it made, will be
>> > > > > > > > another tall order. So who goes to jail?
>> > > > > > >
>> > > > > > > You've taken it to the next level, into the real world scenario and out
>> > > > > > > of the programming stage.
>> > > > > > >
>> > > > > > > Personally I would assume that anything designed would have to
>> > > > > > > co-exist with real world laws and responsibilities. Even the final
>> > > > > > > owner could be held responsible. See the laws regarding experimental
>> > > > > > > aircraft, hang gliders, etc.
>> > > > > >
>> > > > > > Experimental aircraft and hang gliders are controlled by a human. If
>> > > > > > they are involved in a fatal accident, the operator gets scrutinized.
>> > > > > > An autonomous car is not under human control, it is its own operator,
>> > > > > > the occupant is a passenger.
>> > > > > >
>> > > > > > We don't have "real world law" governing fatalities involving
>> > > > > > autonomous vehicles. The engineering would, initially (I hope) be
>> > > > > > based on existing case law involving human drivers and what the courts
>> > > > > > held that they should or should not have done in particular
>> > > > > > situations. But there won't be any actual law until either the
>> > > > > > legislatures write statutes or the courts issue rulings, and the
>> > > > > > latter won't happen until there are such vehicles in service in
>> > > > > > sufficient quantity to generate cases.
>> > > > > >
>> > > > > > Rather than hang gliders and homebuilts, consider a Globalhawk that
>> > > > > > hits an airliner. Assuming no negligence on the part of the airliner
>> > > > > > crew, who do you go after? Do you go after the Air Force, Northrop
>> > > > > > Grumman, Raytheon, or somebody else? And of what are they likely to
>> > > > > > be found guilty?
>> > > >
>> > > > GlobalHawk drones do have human pilots. Although they are not on
>> > > > board, they are in control via a satellite link and can be thousands
>> > > > of miles away.
>> > > >
>> > > > <http://www.aviationtoday.com/2017/03/16/day-life-us-air-force-drone-pilot/>
>> > >
>> > > You are conflating Reaper and Globalhawk and totally missing the
>> > > point.
>> >
>> > Could you be more specific? Exactly what is wrong?
>>
>> Reaper is a combat drone and is normally operated manually. We don't
>> let robots decide to shoot people yet. Globalhawk is a recon drone
>> and is normally autonomous. It has no weapons so shooting people is
>> not an issue. It can be operated manually and normally is in high
>> traffic areas for exactly the "what if it hits an airliner" reason,
>> but for most of its mission profile it is autonomous.
>
>So GlobalHawk is autonomous in the same sense as an airliner under autopilot
>during the long flight to and from the theater. It is the human pilot who is
>responsible for the whole flight.
How is any of this relevant to criminal offenses regarding autonomous
vehicles?
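An aside on the "decides that it will not be able to stop in time" step
in the scenario quoted above: that part is plain kinematics, not ethics.
A minimal sketch in Python, where every constant is an invented
assumption (the friction, latency, and speeds are illustrative only, not
figures from the thread or from any real AV):

# Rough stopping-distance check an AV might run before any moral
# question arises. All constants are illustrative assumptions.
MU = 0.7        # assumed tire-road friction coefficient, dry pavement
G = 9.81        # gravitational acceleration, m/s^2
REACTION = 0.1  # assumed sensing/actuation latency, seconds

def can_stop(speed_ms, gap_m):
    # Distance rolled during the latency, plus the braking distance
    # v^2 / (2 * mu * g) under maximum braking.
    stopping = speed_ms * REACTION + speed_ms ** 2 / (2 * MU * G)
    return stopping <= gap_m

print(can_stop(13.4, 25.0))  # ~30 mph, obstacle 25 m ahead -> True
print(can_stop(13.4, 12.0))  # same speed, 12 m ahead -> False

Only when that check comes back False does the kill-1-or-kill-5 question
in the scenario exist at all.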
DerbyDad03 <[email protected]> wrote:
> On Wednesday, November 22, 2017 at 1:51:05 PM UTC-5, Ed Pawlowski wrote:
> > On 11/22/2017 1:20 PM, DerbyDad03 wrote:
> >
> > > [snip: 2nd scenario and the AV discussion, quoted in full above]
https://pbs.twimg.com/media/Cp0D5oCWIAAxSUT.jpg
LOL!
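DerbyDad03's "someone has to write the rule" point can be made concrete.
A toy sketch in Python: the casualty numbers and the bare minimum-deaths
rule are invented here for illustration, and choosing them is exactly
the table-sitting job the post describes, not any real AV vendor's
logic:

# Toy rule table for the swerve decision in the AV story. Writing
# these numbers down IS the value judgment the thread argues about.
ACTIONS = {
    "brake_in_lane": 1,  # the kid
    "swerve_right":  1,  # parked cars won't slow the car enough
    "swerve_left":   5,  # van pushed into the 18 wheeler's path
}

def choose(actions):
    # Pure casualty count, no emotion: killing 1 beats killing 5.
    return min(actions, key=actions.get)

print(choose(ACTIONS))  # -> "brake_in_lane" (the first action with
                        # the minimal count; ties are not addressed)

Note that the rule says nothing about who the 1 and the 5 are; weighting
a kid against five elderly people would mean adding still more invented
numbers to the table.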
DerbyDad03 <[email protected]> wrote:
> On Wednesday, November 22, 2017 at 10:32:54 AM UTC-5, Ed Pawlowski wrote:
> > On 11/22/2017 7:52 AM, DerbyDad03 wrote:
> > > [snip: technophobia aside and first trolley scenario, quoted in full above]
> >
> > The short answer is to pull the switch and save as many lives as possible.
> >
> > The long answer: it depends. Would you make that same decision if the
> > lone person was a family member? If the lone person was you? Five old
> > people or one child? Of course, AI would take all the emotions out of
> > the decision making. I think that is what you may be getting at.
>
> AI will not take *all* of the emotion out of it. More on that later.
When do we get to the 'pushing the fat guy off the bridge' part of
this moral dilemma quiz? ;')
https://www.youtube.com/embed/bOpf6KcWYyw?autoplay=1
On Nov 24, 2017, OFWW wrote
(in article<[email protected]>):
> On Fri, 24 Nov 2017 00:37:20 -0500, J. Clarke
> <[email protected]> wrote:
>
> > [snip: earlier thread quoted in full; see above]
> > Rather than hang gliders and homebuilts, consider a Globalhawk that
> > hits an airliner. Assuming no negligence on the part of the airliner
> > crew, who do you go after? Do you go after the Air Force, Northrop
> > Grumman, Raytheon, or somebody else? And of what are they likely to
> > be found guilty?
GlobalHawk drones do have human pilots. Although they are not on board, they
are in control via a satellite link and can be thousands of miles away.
<http://www.aviationtoday.com/2017/03/16/day-life-us-air-force-drone-pilot/>
Joe Gwinn
On Fri, 24 Nov 2017 00:37:20 -0500, J. Clarke
<[email protected]> wrote:
>[snip: earlier thread quoted in full; see above]
>
>>But we should be sticking to this hypothetical example given us.
>
>It was suggested that someone would go to jail. I still want to know
>who and what crime they committed.
The person who did not stay in his own lane and ended up committing
involuntary manslaughter.
In the case you bring up, the AV can currently be overridden at any
time by the occupant. There are already AVs operating on the streets.
Regarding your "who's at fault" scenario, just look at the court cases
against gun makers, as if guns kill people.
So can we now return to the question or, at the least, to woodworking?
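OFWW's stay-in-the-lane position amounts to a different rule than bare
casualty counting: never leave the lane unless a swerve is predicted to
harm no one. A hypothetical sketch in Python (the function and its
inputs are made up for illustration, not any actual AV logic or legal
standard):

# "Stay in lane unless a swerve is provably safe." Reflects the
# legal point quoted above: whoever swerves assumes the damages.
def plan(swerve_outcomes):
    # swerve_outcomes maps maneuver name -> predicted casualties.
    safe = [m for m, deaths in swerve_outcomes.items() if deaths == 0]
    return safe[0] if safe else "brake_in_lane"

print(plan({"swerve_left": 5, "swerve_right": 1}))  # -> brake_in_lane
print(plan({"swerve_left": 0, "swerve_right": 1}))  # -> swerve_left

Under this rule the kill-1-or-kill-5 comparison never runs; liability
exposure, not the casualty count, picks the action.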
On Fri, 24 Nov 2017 00:37:20 -0500, J. Clarke
<[email protected]> wrote:
>It was suggested that someone would go to jail. I still want to know
>who and what crime they committed.
Damages would be a tort case; who committed what crime would be
determined in court. Some DA looking for publicity would bring
charges.
On Tue, 21 Nov 2017 09:22:05 -0500, Ed Pawlowski <[email protected]> wrote:
>On 11/21/2017 2:04 AM, [email protected] wrote:
>> I have to say, I am sorry to see that.
>>
>> It means that all over the internet, in a high concentration here, and at the old men's table at Woodcraft the teeth gnashing will start.
>>
>> Screams of civil rights violations, chest thumping of those declaring that their generation had no guards or safety devices and they were fine, the paranoids buying saws now before the nanny state Commie/weenies make safety some kind of bullshit issue... all of it.
>>
>> Ready for the first 250 thread here for a long, long time. Nothing like getting a good bitch on to fire one up, though.
>>
>> Robert
>>
>
>There was a suicide by bandsaw. Just think of the lives of depressed
>people it will save.
They'll just buy a meat saw. When bandsaws are outlawed...
On Wed, 22 Nov 2017 12:45:11 -0600, Leon <lcb11211@swbelldotnet>
wrote:
>On 11/22/2017 8:45 AM, Leon wrote:
>> On 11/22/2017 6:52 AM, DerbyDad03 wrote:
>>> [snip: technophobia aside and first trolley scenario, quoted in full above]
>>
>> Pull the lever. Choosing to do nothing is the choice to kill 5.
>
>Well I have mentioned this before, and it goes back to comments I have
>made in the past about decision making. It seems the majority here use
>emotional over rational thinking to come up with a decision.
>
>It was said you only have two choices and who these people are or might
>be is not a consideration. You can't make a rational decision with
>what-if's. You only have two options, kill 5 or kill 1. Rational for
>me says save 5; the rest of you who are bringing in scenarios past
>what should be considered will waste too much time and end up with a
>kill before you decide what to do.
Rational thinking would state that trains run on a schedule, the
switch would be locked, and for better or worse the five were not
supposed to be there in the first place.
So how can I make a decision more rational than the scheduler's, even
if I had the key to the lock?
On 11/22/2017 6:52 AM, DerbyDad03 wrote:
> [snip: technophobia aside and first trolley scenario, quoted in full above]
Pull the lever. Choosing to do nothing is the choice to kill 5.