NoBSU said:
I will putter around in my Red Barchetta.

But only around your uncle's country place.
whitetrash said:
But only around your uncle's country place.

No one knows about it, though. He says it used to be a farm.
Dia del DougO said:
So now a car can actually kill people without a human operating it, and we want more of those. What a country.

I absolutely cannot believe how many $$$$$ damages lawsuits these automakers are setting themselves up for. Boggles the mind.
OsoCoreyell said:
No one knows about it, though. He says it used to be a farm.

Is that uncle named Junior?
Quote:
This is where I thought self-driving cars were beheaded.

FTFY
quash said:
Safer, more efficient, less need for parking lots. Can't happen soon enough.

Plaintiff attorney's dream.
NoBSU said:
Plaintiff attorney's dream.

Just the opposite: there will be far fewer accidents.
quash said:
Just the opposite: there will be far fewer accidents.

So you hope.
NoBSU said:
So you hope.

Electric cars are a great idea, and the coming changes to semi-truck idling are positive ideas that will challenge battery technology. But battery technology never was where we needed it, and it still isn't. Musk's plant isn't any gain in new tech there. I have seen new types in beta testing that may be an answer, but they have not been tested in volume over time. The Navistar bid testing will probably hash this out.

So you hope all the tech is where we need it. That real conditions will match test conditions.
Mr Tulip said:
In situations like this, we'll just go see what went wrong. In essence, the car has a whole lot of sensors (cameras that take images, radar types that detect objects behind other objects, etc.). The video is crummy, but that has absolutely nothing to do with the car's ability to navigate.

The car gathers data, then sends it to a computer. The computer is just playing a huge, unending game of "if-then". "If" an object is in the roadway, "then" decide if it's a collision threat. "If" it isn't, "then" keep on trucking. "If" it is, "then" do something else.

Did the car not detect the woman (the sensors provided no data that she was even there), or did the decision tree have a hole in it (her presence was registered, but the logic did not indicate a need to do something different)?

These are situations that can be identified and corrected, forever making self-driving cars safer. The conundrum comes when I'm forced to program the logic for "if" there's a group of kindergarten kids running into the road, "then" do I choose to hit them, or do I choose to send the vehicle (at presumably high speed) into a wall or opposing traffic?

It's one thing for a human to make this decision spontaneously in the microsecond that his reactions allow him in an ugly situation. It's quite another for a logic programmer to ponder the answer, then hard-code it into a machine.

Isn't it amazing to what lengths the world will go to try to exempt human beings from personal responsibility?
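The "huge, unending game of if-then" described above can be sketched roughly as follows. This is a toy illustration, not real autonomous-vehicle code: every field name, threshold, and action label here is invented for the example.

```python
# Toy sketch of the sensor -> decision-tree loop described in the thread.
# All object fields, thresholds, and action names are hypothetical.

def classify_threat(obj, lane_width_m=3.5, ttc_threshold_s=2.0):
    """Decide whether a detected object is a collision threat."""
    # "If" an object is in the roadway ...
    in_roadway = abs(obj["lateral_offset_m"]) < lane_width_m / 2
    if not in_roadway:
        return "keep_driving"
    # "... then" decide if it's a collision threat: a time-to-collision check.
    closing_speed = obj["closing_speed_mps"]
    if closing_speed <= 0:  # object holding distance or moving away
        return "keep_driving"
    time_to_collision = obj["distance_m"] / closing_speed
    if time_to_collision < ttc_threshold_s:
        return "brake"      # "then" do something else
    return "keep_driving"

# A pedestrian 10 m ahead, in-lane, closing at 15 m/s: under a second to impact.
print(classify_threat({"lateral_offset_m": 0.3,
                       "distance_m": 10.0,
                       "closing_speed_mps": 15.0}))  # brake
```

The two failure modes in the post map directly onto this sketch: "the sensors provided no data" means `obj` never arrives at all, while "the decision tree have a hole in it" means `obj` arrives but falls through to `keep_driving` when it shouldn't.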
bearlyafarmer said:
Isn't it amazing to what lengths the world will go to try to exempt human beings from personal responsibility?

haha exactly. We can make the car learn from this accident and improve its ability to make different (better) decisions in the future to prevent the same accident from happening again.
BaylorBJM said:
First fatality in over 10,000,000 miles driven. That's incredible. Tragic story nonetheless, but the numbers behind driverless vehicles are more than clear. Automated cars are exponentially safer than having a human behind the wheel.

According to the National Safety Council there are 1.25 auto accident deaths (including the type here) per 100,000,000 miles driven, so, in fact, they are not exponentially safer... they would be just the opposite of that.
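The disagreement above is just rate arithmetic. Normalizing both figures quoted in the thread to deaths per 100,000,000 miles makes it explicit (with the obvious caveat that one fatality is far too small a sample to establish a rate):

```python
# Normalize both fatality figures from the thread to deaths per 100M miles.
MILES_PER_100M = 100_000_000

human_rate = 1.25              # NSC figure: deaths per 100M human-driven miles
driverless_deaths = 1          # one fatality ...
driverless_miles = 10_000_000  # ... in "over 10,000,000 miles driven"
driverless_rate = driverless_deaths / driverless_miles * MILES_PER_100M

print(driverless_rate)               # 10.0 deaths per 100M miles
print(driverless_rate / human_rate)  # 8.0x the human rate, on these numbers
```

On the thread's own numbers, 1 death in 10M miles is 10 per 100M miles, eight times the quoted human rate, which is the point being made in reply.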