Driverless Cars, Good Idea, Horrible Idea?

By Ramyrez 2015-12-18 15:02:08
 
Clinpachi said: »
This is the type of roadway I've dealt with all my life. Farmland for forever, SOMETIMES two lanes in each direction, and if your *** goes over that speed limit by more than 5 you're going to get a ticket faster than you can blink.

I grew up with that kind, but my experience was not the same as yours in regard to speeding. Not at all.

Also, you can't be ticketed for going up to 5 MPH over. Again, at least in Pennsylvania. They have to give you 5. The vast majority will give you 10, and if they don't, it's a local, not a statey, and it's not because they care about safety, it's because their municipality has a quota to fill and needs the money.
By Asura.Kingnobody 2015-12-18 15:14:54
 
Ramyrez said: »
Clinpachi said: »
Did you know that in certain areas you can actually get ticketed for going 20 under the speed limit?

Driving 20 under is by far a greater threat to others than driving 20 over in all but the worst of weather conditions.

But Ihina was just displaying some sarcasm.
Most (actually, almost all, some just don't post them) states have a minimum speed limit attached to the maximum.

/themoreyouknow.jpg
By Clinpachi 2015-12-18 15:30:47
 
Ramyrez said: »
Clinpachi said: »
This is the type of roadway I've dealt with all my life. Farmland for forever, SOMETIMES two lanes in each direction, and if your *** goes over that speed limit by more than 5 you're going to get a ticket faster than you can blink.

I grew up with that kind, but my experience was not the same as yours in regard to speeding. Not at all.

Also, you can't be ticketed for going up to 5 MPH over. Again, at least in Pennsylvania. They have to give you 5. The vast majority will give you 10, and if they don't, it's a local, not a statey, and it's not because they care about safety, it's because their municipality has a quota to fill and needs the money.

The saying many people use in my area is "7 you're fine, 8 you're mine" when it comes to speeding. Pretty much true in my experience locally.
By Asura.Ninjaface 2015-12-18 15:35:10
 
Clinpachi said: »
"7 You're fine 8 you're mine"
m80, it's "8 you're great, 9 you're mine."
By Asura.Leonlionheart 2015-12-18 15:45:43
 
I'm 98% certain Elon Musk is from the future and therefore believe this will work with 100% certainty.
By Bahamut.Milamber 2015-12-19 09:58:51
 
Phoenix.Dabackpack said: »
Altimaomega said: »
Phoenix.Dabackpack said: »
In a real-life scenario, you, as a driver, might have to solve moral dilemmas. In an extreme case, you might have to decide between saving your own life and saving a pedestrian's life. In the real world, the decision is on you, the driver.

This was already covered on page one.

Asura.Floppyseconds said: »
The rest is silly to debate about as there is not going to be a school bus of kids and a cliff and such an accident all happening. That is just a ridiculous notion.

I dunno if Floppy lives in the real world however..

I know, I just wanted to state that this literally is not a solved problem in the industry.

And @Floppy, it doesn't matter "if it will never happen", because the agent needs to be able to handle those situations if they do occur. This isn't a trivial point. Autonomous car producers literally need to consult and decide how to prioritize safety between passengers, pedestrians, and other cars. I'm not bullshitting you, this is real life, not theorycrafting.

All of the above is for a truly autonomous vehicle, though.

EDIT: Reiterating that the above is for pure automation.
No, you really don't. That's simply a false dilemma. If you are entering a use case where that is necessary to consider, you've basically been screwed multiple times, in terms of failures in redundancy combined with failures in decision logic. Which basically boils down to: even if you had to make a decision of that nature, there would be no real means to enact it.
By fonewear 2015-12-19 10:06:23
 
I'm just glad we have an authority figure on driverless cars. Before you know it they will rule us all with their advanced AI and high gas mileage!

Could Google make a car so smart they couldn't possibly program it?

Answer: Toyota beat them to it. Ladies and gentlemen, the Prius!

The bold styling and general good looks scream

"I'm a liberal, look at how much I care about the environment!"

By Bahamut.Milamber 2015-12-19 10:44:29
 
On topic:
By Phoenix.Dabackpack 2015-12-19 15:06:00
 
Bahamut.Milamber said: »
Phoenix.Dabackpack said: »
Altimaomega said: »
Phoenix.Dabackpack said: »
In a real-life scenario, you, as a driver, might have to solve moral dilemmas. In an extreme case, you might have to decide between saving your own life and saving a pedestrian's life. In the real world, the decision is on you, the driver.

This was already covered on page one.

Asura.Floppyseconds said: »
The rest is silly to debate about as there is not going to be a school bus of kids and a cliff and such an accident all happening. That is just a ridiculous notion.

I dunno if Floppy lives in the real world however..

I know, I just wanted to state that this literally is not a solved problem in the industry.

And @Floppy, it doesn't matter "if it will never happen", because the agent needs to be able to handle those situations if they do occur. This isn't a trivial point. Autonomous car producers literally need to consult and decide how to prioritize safety between passengers, pedestrians, and other cars. I'm not bullshitting you, this is real life, not theorycrafting.

All of the above is for a truly autonomous vehicle, though.

EDIT: Reiterating that the above is for pure automation.
No, you really don't. That's simply a false dilemma. If you are entering a use case where that is necessary to consider, you've basically been screwed multiple times, in terms of failures in redundancy combined with failures in decision logic. Which basically boils down to: even if you had to make a decision of that nature, there would be no real means to enact it.

It's not a false dilemma. You're assuming that all adverse use cases can be prevented by direct action by the agent. That's simply not true, especially when there are human drivers still on the road.

In your example, this kind of decision-making would act as another layer of redundancy.

I'm repeating that these kinds of dilemmas are actually being considered by academia and industry. (Even though my only source is direct communication with engineers in the field)
By Phoenix.Dabackpack 2015-12-19 15:06:52
 
Bahamut.Milamber said: »
On topic:

The dilemma here is that a human being needs to describe "what the tasks are" and "how to execute them perfectly".
By Phoenix.Dabackpack 2015-12-19 15:12:02
 
This video describes pretty well how human values are inscribed into any piece of technology designed by humans. People like to say, "machines are perfectly logical and infallible" but the truth is that they have moral, ethical, and logical values ascribed to them by the developers and programmers.

This is a huge topic of discussion in human-computer interaction and human-centered computing research.

EDIT: For those interested, this is a seminal work about the notion that technology has politics assigned to it.
By Bahamut.Milamber 2015-12-20 01:32:46
 
Phoenix.Dabackpack said: »
Bahamut.Milamber said: »
Phoenix.Dabackpack said: »
Altimaomega said: »
Phoenix.Dabackpack said: »
In a real-life scenario, you, as a driver, might have to solve moral dilemmas. In an extreme case, you might have to decide between saving your own life and saving a pedestrian's life. In the real world, the decision is on you, the driver.

This was already covered on page one.

Asura.Floppyseconds said: »
The rest is silly to debate about as there is not going to be a school bus of kids and a cliff and such an accident all happening. That is just a ridiculous notion.

I dunno if Floppy lives in the real world however..

I know, I just wanted to state that this literally is not a solved problem in the industry.

And @Floppy, it doesn't matter "if it will never happen", because the agent needs to be able to handle those situations if they do occur. This isn't a trivial point. Autonomous car producers literally need to consult and decide how to prioritize safety between passengers, pedestrians, and other cars. I'm not bullshitting you, this is real life, not theorycrafting.

All of the above is for a truly autonomous vehicle, though.

EDIT: Reiterating that the above is for pure automation.
No, you really don't. That's simply a false dilemma. If you are entering a use case where that is necessary to consider, you've basically been screwed multiple times, in terms of failures in redundancy combined with failures in decision logic. Which basically boils down to: even if you had to make a decision of that nature, there would be no real means to enact it.

It's not a false dilemma. You're assuming that all adverse use cases can be prevented by direct action by the agent. That's simply not true, especially when there are human drivers still on the road.

In your example, this kind of decision-making would act as another layer of redundancy.

I'm repeating that these kinds of dilemmas are actually being considered by academia and industry. (Even though my only source is direct communication with engineers in the field)
If you are "suddenly" in a situation where this occurs, you are already beyond the point of being in trouble. You have either not given yourself sufficient distance/time to act, or suffered sufficient failures to be incapable of acting in the necessary period. If you've screwed up that badly to begin with, it doesn't make a damned bit of difference what ethical choices you program; you probably can't successfully enact them.

This is also where insurance plays a role.

It isn't as if the moral or ethical implications of engineering are a new topic, or special to this field; hell, there are discussions about things as mundane as distributed vs centralized power systems and their relation to modern sociopolitical structures, or the use of pen and paper instead of a computer, or speed dial versus rote memorization.

This specific topic of the prioritization of safety is already in the automotive field. Crumple zones and bumper heights spring to mind.

It doesn't get more special because a computer is more involved.
By Phoenix.Dabackpack 2015-12-20 01:59:29
 
Bahamut.Milamber said: »
Phoenix.Dabackpack said: »
Bahamut.Milamber said: »
Phoenix.Dabackpack said: »
Altimaomega said: »
Phoenix.Dabackpack said: »
In a real-life scenario, you, as a driver, might have to solve moral dilemmas. In an extreme case, you might have to decide between saving your own life and saving a pedestrian's life. In the real world, the decision is on you, the driver.

This was already covered on page one.

Asura.Floppyseconds said: »
The rest is silly to debate about as there is not going to be a school bus of kids and a cliff and such an accident all happening. That is just a ridiculous notion.

I dunno if Floppy lives in the real world however..

I know, I just wanted to state that this literally is not a solved problem in the industry.

And @Floppy, it doesn't matter "if it will never happen", because the agent needs to be able to handle those situations if they do occur. This isn't a trivial point. Autonomous car producers literally need to consult and decide how to prioritize safety between passengers, pedestrians, and other cars. I'm not bullshitting you, this is real life, not theorycrafting.

All of the above is for a truly autonomous vehicle, though.

EDIT: Reiterating that the above is for pure automation.
No, you really don't. That's simply a false dilemma. If you are entering a use case where that is necessary to consider, you've basically been screwed multiple times, in terms of failures in redundancy combined with failures in decision logic. Which basically boils down to: even if you had to make a decision of that nature, there would be no real means to enact it.

It's not a false dilemma. You're assuming that all adverse use cases can be prevented by direct action by the agent. That's simply not true, especially when there are human drivers still on the road.

In your example, this kind of decision-making would act as another layer of redundancy.

I'm repeating that these kinds of dilemmas are actually being considered by academia and industry. (Even though my only source is direct communication with engineers in the field)
If you are "suddenly" in a situation where this occurs, you are already beyond the point of being in trouble. You have either not given yourself sufficient distance/time to act, or suffered sufficient failures to be incapable of acting in the necessary period. If you've screwed up that badly to begin with, it doesn't make a damned bit of difference what ethical choices you program; you probably can't successfully enact them.

This is also where insurance plays a role.

It isn't as if the moral or ethical implications of engineering are a new topic, or special to this field; hell, there are discussions about things as mundane as distributed vs centralized power systems and their relation to modern sociopolitical structures, or the use of pen and paper instead of a computer, or speed dial versus rote memorization.

This specific topic of the prioritization of safety is already in the automotive field. Crumple zones and bumper heights spring to mind.

It doesn't get more special because a computer is more involved.

It's not new by any means, but it still applies.

You're right, there are plenty of safety prioritization examples in the design of automobiles and equipment. Airbags, traffic signals, and ABS are a few of these. However, people definitely expect more from AI agents, especially since the behavior and "safety prioritization" aren't transparent. People will ask questions. You know they will. Especially if they have to find out the hard way.

The best thing that can happen in this situation is for Google or Tesla to make it very clear what will happen in these kinds of scenarios and to make their value judgments explicit.

I'm still protesting the notion that "if you encounter an unfavorable situation, you can't do anything, you're screwed."

If a deer jumps in front of my car, it's a shitty situation but I have to make some reaction. Humans will understand either scenario: "he hit the deer because he prioritized his own safety", or "he swerved because he thought he could save both lives." These decisions (removing panic from the situation) are value judgments. Humans understand because there is a reason.

Imagine an analogous situation with an autonomous vehicle.

If the autonomous vehicle is a purely rational agent and doesn't "panic", then its decisions in this scenario mirror value judgments. The populace will question "why did the car hit the deer instead of swerving?" and the developers might be compelled to provide an answer if the issue becomes big enough. If the answer is "there is no reason" then people will be dissatisfied.

I don't feel like finding them now, but there are studies in AI that purport that humans tend to feel positively when AI agents are able to justify their reasoning behind decisions, even if the reasoning doesn't totally make logical sense.

An answer like "there was no reason why the agent hit the deer" will not make people happy. People will call it a hole in the agent's reasoning. I still think that the developers need to be ready to answer those kinds of questions.
By Phoenix.Dabackpack 2015-12-20 02:08:46
 
When it comes to safety equipment, the functions and value judgments tend to be clear. "The hard hat protects my head." "The airbag stops my momentum when the car crashes."

When it comes to a decision-making agent, people tend to hold it to human standards. The functions and value judgments will need to be clear in these kinds of situations.

That's all I'm really saying. If people know what to expect, they will be happy.

When it comes to the rules of the road, this is immediately obvious, because rules are rules. Moral judgments are foggier and don't have a correct answer.

However, a transparent answer is important.

TL;DR people tend to be OK if they are given this kind of information in advance. If you tell them that the car doesn't have airbags, that's OK. If you tell them that the car has airbags and they don't deploy, then you have a problem.
By fonewear 2015-12-20 04:14:41
 
They can't make good AI in Fallout 4 and they want me to get in a car that drives itself... yea ok!
By Valefor.Endoq 2015-12-20 04:27:29
 
fonewear said: »
They can't make good AI in Fallout 4 and they want me to get in a car that drives itself... yea ok!
Driverless Cars brought to you by Bethesda...
By Artemicion 2015-12-20 04:43:54
 
While you programmers and engineers might feel uncomfortable about the underlying nuances and circumstances within driverless-car technology, I assure you, your average layman feels very much the same about your average human driver.
By Odin.Blizzy 2015-12-20 05:04:24
 
"Also, you can't be ticketed up to 5 MPH. Again, at least in Pennsylvania. They have to give you 5. The vast majority will give you 10, and if they don't, it's a local, not a statey, and it's not because they care about safety, it's because their municipality has a quota to fill and needs the money."

Sorry this is wrong in so many ways. First speed limits are set by DOT which is federal. You can be ticketed for "speeding" for going 1 mile over the posted speed limit in any state. Do they normaly do it no. 5 is a normal grace leway given by officers. 11 miles over the posted speed limit can get you a bigger fine and you more screwed if your driving a com vehicle with a cdl. Much bigger fines.
Prior to any shift starting the officer shall perform a standard test on radar to ensure its working properly. That is standard for any state in the USA. And yes it is about safety. Corners, grade in the roads, and the width and speed are all factors set in place by DOT.
Giving out tickets is no way the money maker for a town city, and I have never been to a single town or city that has a Quota. go to any city hall or town hall and ask for one and see how fast you get laughed out.
Most officers don't have to give tickets its their choice Would you give a citation for someone going 30 in a 25 in a school zone? I know I would. or how about 50 in a 45 in a work zone? I would again.
A vehicle in motion hits a stationary object at the point of impact the velocity doubles. Ie hit a kid in a school zone 30 mph is no diffrent then being in a 60 mph acccident vs 50 which decreases the lvl of trama.
Officers fav ticket to give out is the all mighty catch 22. Driving to fast for conditions. if for any reason you lose traction, wreck, ect you can get this.
Most departments make there money from federal grants for seatbelt checks, DWI checkpoints. Money seized from drug related crimes can only go to fund in that department to enhance drug enforcement.

Any traffic stop is the start of an investigation to look for any other possible crimes. And if an officer does give a ticket for 1 mile over posted speed limit and always does the same action it will hold up in court. But if the officer cant justify why out of so much time on the job he only gave out this ticket so many times you might have a chance.

And for multiple offenders ie a group of speeders there is a way this is also enforced. you ever see the sign Speed enforced by Aircraft. find speeder take pic of vehicle and plate mail ticket. Face on drivers face when gets the fine Priceless.

Also several states have already put into force monitoring stations on the ground you go 1-2 miles over, even downhill red flash pic taken ticket mailed to you. no diffrent then cameras at stoplights.
By Garuda.Chanti 2015-12-20 09:37:33
 
Sorry Blizzy, the federal government only sets speed limits in Washington, DC and on military bases.

Many officers work under a quota system. They HAVE TO give out a certain number of citations per week or month.

Many municipalities rely on their police forces as revenue generators.

Robot speed traps are FAR different than red light cameras. Red light cameras produce unquestionable evidence of an infraction. Robot speed traps do not. Nor can they be cross-examined in court.

If you are going to get so much stuff wrong you can at least be amusing. Fone is a good example of amusing while wrong.
By Asura.Ninjaface 2015-12-20 10:05:01
 
Valefor.Endoq said: »
fonewear said: »
They can't make good AI in Fallout 4 and they want me to get in a car that drives itself... yea ok!
Driverless Cars brought to you by Bethesda...
definitely wouldn't get into those for a couple years. gotta have the modding community take a crack at it first.
By Bahamut.Milamber 2015-12-20 13:21:15
 
Phoenix.Dabackpack said: »
I'm still protesting the notion that "if you encounter an unfavorable situation, you can't do anything, you're screwed."

If a deer jumps in front of my car, it's a shitty situation but I have to make some reaction. Humans will understand either scenario: "he hit the deer because he prioritized his own safety", or "he swerved because he thought he could save both lives." These decisions (removing panic from the situation) are value judgments. Humans understand because there is a reason.
If we look at US interstate standards, they specify a minimum lane width of 3.7m and a minimum external shoulder width of 3m, with a minimum of 30m of access control from the edge of the shoulder.
Most cars are approximately 1.8m wide (or less).
Let's take a deer at 80km/h (~22m/s) orthogonal to the road. (And a vehicle at 130km/h orthogonal to the deer's vector!)
The time that it would take for the deer to move from the edge of the access control to the middle of the lane (assuming a constant velocity) would be in the range of 1.5s.
That's quick.
For the minimum time intercept (again, assuming constant velocity), the vehicle would be 75m away from intercept at first target acquisition. We can see what the expected stopping distance is by using the following formula:

d = v^2 / (2 * mu * g)

where the coefficient of friction (mu) is 0.7 and g is gravitational acceleration (9.8 m/s^2).
Or you can use this nifty table from Virginia, which is in imperial units, but hey. (130kmh is approximately 80mph, which gives 305ft, which is *approximately* 100m.)
What does that tell you?

It means that the vehicle was already moving too fast to be able to come to a complete stop in the allotted time. A speed of 100kmh would be more appropriate.

Either that, or breed slower deer.

The point being, things don't magically appear in front of vehicles. It may appear that way, primarily due to human reaction time (the link above gives the average reaction time as 1.5s, which is the entire encounter time in the above scenario).
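
For anyone who wants to poke at the numbers, here's a minimal Python sketch of the scenario above. The 75m intercept distance is taken from the post's geometry and the braking formula is the standard idealized one, so treat it as a ballpark check, not gospel:

MU, G = 0.7, 9.8  # coefficient of friction and gravitational acceleration (m/s^2)

def stopping_distance(kmh):
    # Ideal braking distance d = v^2 / (2 * mu * g), with zero reaction time.
    v = kmh / 3.6  # km/h -> m/s
    return v ** 2 / (2 * MU * G)

# Deer crosses 30m of access control, a 3m shoulder, and half a 3.7m lane at ~22 m/s.
encounter_time = (30 + 3 + 3.7 / 2) / (80 / 3.6)  # ~1.6 s
intercept = 75  # metres from intercept at first acquisition, per the numbers above

print(f"encounter time: {encounter_time:.1f} s")
for kmh in (130, 100):
    d = stopping_distance(kmh)
    verdict = "can stop" if d <= intercept else "cannot stop"
    print(f"{kmh} km/h: needs {d:.0f} m of the {intercept} m available -> {verdict}")

At 130km/h the car needs ~95m but only has 75m; at 100km/h it needs ~56m, which fits.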
Phoenix.Dabackpack said: »
Imagine an analogous situation with an autonomous vehicle.
One shouldn't even exist. If such a situation is encountered, you've already screwed up by making a compromise where you shouldn't have. Such as driving faster than you could safely stop within, given the geometry and visibility of the situation.
Phoenix.Dabackpack said: »
The populace will question "why did the car hit the deer instead of swerving?" and the developers might be compelled to provide an answer if the issue becomes big enough.
Which is the wrong question in the first place, but anyways...
Phoenix.Dabackpack said: »
I don't feel like finding them now, but there are studies in AI that purport that humans tend to feel positively when AI agents are able to justify their reasoning behind decisions, even if the reasoning doesn't totally make logical sense.
This isn't new, and certainly isn't specific to AI. It pretty much applies to all decision making.


I personally think it is amusing that we are looking at modifying autonomous driving algorithms to play better with bad drivers, rather than waiting for human behavior to change to accommodate autonomous drivers.

Which arguably says more about ethics and morals than any decision regarding hypothetical buses full of undetectable children.
 
By Altimaomega 2015-12-27 02:42:11
 
Asura.Floppyseconds said: »
Self driving cars in two years

As someone who uses the autopilot on the P85, I can see this happening after a few delays.

Quote:
Carmakers developing semi-autonomous or fully-autonomous cars will have to work with government agencies such as the DOT and set up regulations. Autonomous cars could also pose safety risks for those in the car as well as for those around the car, which means that the guidelines regarding insurance of such cars will also have to be established before they could make their way to the streets.

Good Luck with that.

Example: You're cruising at 60mph and there is a blind drive up ahead. Normally you would be paying attention and be ready to slow down or react if someone decides to pull out. That is, if you even know it is there.

So car pulls out, you literally have 1/4 of a nanosecond to react, you swerve right and barely miss the jerk.

No way is a semi or fully automated car going to react like that. There is no way to program it. Even if it were possible to program, it would take too much time for the car to process that information. Otherwise you are going to have cars abruptly stopping or swerving around on the roads.

This isn't even mentioning the fact that when one hits another driver, that driver is going to own the owner of that car and the company that designed it. The legal implications of this concept are astounding.
 
By Bahamut.Milamber 2015-12-27 05:46:22
 
Altimaomega said: »
Asura.Floppyseconds said: »
Self driving cars in two years

As someone who uses the autopilot on the P85, I can see this happening after a few delays.

Quote:
Carmakers developing semi-autonomous or fully-autonomous cars will have to work with government agencies such as the DOT and set up regulations. Autonomous cars could also pose safety risks for those in the car as well as for those around the car, which means that the guidelines regarding insurance of such cars will also have to be established before they could make their way to the streets.

Good Luck with that.

Example: You're cruising at 60mph and there is a blind drive up ahead. Normally you would be paying attention and be ready to slow down or react if someone decides to pull out. That is, if you even know it is there.

So car pulls out, you literally have 1/4 of a nanosecond to react, you swerve right and barely miss the jerk.
No. We'll go into more detail as to why a little later.
Altimaomega said: »
No way is a semi or fully automated car going to react like that. There is no way to program it. Even if it were possible to program, it would take too much time for the car to process that information.
Here, you are actually correct, if for the utterly wrong reasons.
The speed of light is approximately 3*10^8 meters per second. A quarter of a nanosecond is 0.25*10^-9 seconds.
Light would travel about 7.5 cm in that timeframe. Neither a human nor a computer would be able to react in that span, let alone have any ability to interact with the vehicle.
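
For the curious, the arithmetic fits in one line of Python:

print(3e8 * 0.25e-9)  # speed of light * 0.25 ns = 0.075 m, i.e. 7.5 cm

Any physical sense-and-actuate loop, human or silicon, is many orders of magnitude slower than that window.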
By Altimaomega 2015-12-27 10:58:03
 
Asura.Floppyseconds said: »
The car does and will react appropriately to this situation.

I don't know why you are insistent that it doesn't, can't, and won't.

I'm insistent because all the articles I read pretty much insinuate the AI isn't nearly fast enough for high-speed problems.

Bahamut.Milamber said: »
Here, you are actually correct, if for the utterly wrong reasons.
The speed of light is approximately 3*10^8 meters per second. A quarter of a nanosecond is 0.25*10^-9 seconds.
Light would travel about 7.5 cm in that timeframe. Neither a human nor a computer would be able to react in that span, let alone have any ability to interact with the vehicle.

Thanks for replying, Mr. Obvious.
By Bahamut.Milamber 2015-12-27 11:28:08
 
Altimaomega said: »
Asura.Floppyseconds said: »
The car does and will react appropriately to this situation.

I don't know why you are insistent that it doesn't, can't, and won't.

I'm insistent because all the articles I read pretty much insinuate the AI isn't nearly fast enough for high-speed problems.
Such as? I'd be interested in actually knowing what their rationale is.
Altimaomega said: »
Bahamut.Milamber said: »
Here, you are actually correct, if for the utterly wrong reasons.
The speed of light is approximately 3*10^8 meters per second. A quarter of a nanosecond is 0.25*10^-9 seconds.
Light would travel about 7.5 cm in that timeframe. Neither a human nor a computer would be able to react in that span, let alone have any ability to interact with the vehicle.

Thanks for replying, Mr. Obvious.
In any scenario where a human can react, it is possible for a computer/automated system to react quicker and more accurately.

The problem with an autonomous system isn't the reaction/computation time. It's handling the ruleset(s)/boundary conditions.
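
To put rough numbers on that first point, here's a small Python sketch; the 0.1s machine latency is an assumed illustrative figure for a full sense-decide-actuate loop, not a measured spec:

v = 60 * 0.44704  # 60 mph in metres per second (~26.8 m/s)

# Distance covered before braking even begins, for each reaction latency.
for who, t in [("human (~1.5 s average reaction)", 1.5),
               ("automated system (0.1 s, assumed)", 0.1)]:
    print(f"{who}: {v * t:.1f} m of dead travel at 60 mph")

A human burns roughly 40m just reacting; the machine's bottleneck isn't that loop, it's whether its ruleset covers the situation at all.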
By Altimaomega 2015-12-27 15:08:40
 
Bahamut.Milamber said: »
The problem with an autonomous system isn't the reaction/computation time. It's handling the ruleset(s)/boundary conditions.
So basically what I said, except we are in disagreement on reaction time. Which, without being able to handle that vast number of rule sets and other conditions, is still a problem.
 