Uber Self-drive Fatality

Discussion in 'Lounge' started by Chris, Mar 28, 2018.

  1. I'm going to ask them to microchip my bell end
     
    • Nuke Post x 1
  2. Oi!

    No chipping the finm!
     
    • Funny x 1
  3. Predicaments like this occur all the time in real life with human drivers. The human has to decide in a split second whether to kill A or B. Whichever they choose, somebody gets killed and the driver very likely gets prosecuted for it. There has never been any clear resolution as to what the "correct" course of action is for the human driver, even a perfect one. The same applies to the self-driving car. Uber cannot resolve ethical dilemmas definitively, any more than anyone else can.
     
    • Agree x 1
  4. But they have to, as circumstances need to be predicted, coded, tested and applied.

    One person will run someone over rather than swerve off the road; another will swerve. AI has to make that choice the same way, every time, in a repeatable and predictable manner.
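    A minimal sketch of what "repeatable and predictable" might look like in code (purely illustrative Python; the manoeuvre names and harm scores are invented here, and a real AV planner is vastly more complex):

        # Hypothetical sketch: a deterministic collision-response policy.
        # Manoeuvre names and harm scores are invented for illustration.
        def choose_manoeuvre(options):
            # Pick the manoeuvre with the lowest predicted harm.
            # Ties break alphabetically, so identical inputs always
            # produce the identical decision - repeatable and predictable.
            return min(options, key=lambda name: (options[name], name))

        options = {"brake_straight": 0.7, "swerve_left": 0.4, "swerve_right": 0.4}
        print(choose_manoeuvre(options))  # always "swerve_left", every run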
     
  5. I used to do this as a kid, I liked the sparks it made, still here btw. No luck eh?
    As for automation, best start thinking about how to deploy a universal wage system.
     
  6. You can actually help out with this if you like. MIT runs a site to gather data on the general population's moral decision-making.

    http://moralmachine.mit.edu/

    It's going to be an interesting time. Humans are actually pretty terrible at individual risk analysis; our brains are geared towards a world that no longer exists. E.g. you probably know more people scared of flying than of climbing into their car, yet we all know which you're more likely to die in...
     
    • Useful x 1
  7. Tech guys and morality rarely go hand in hand
     
    • Like x 1
  8. I've got to call you on that. What total horse shit.

    Remember Bill Gates? I'd say he's a tech guy. The Gates Foundation is one of the largest philanthropic organisations going. He donates more money a day than most of us will earn in a lifetime. And Silicon Valley isn't full of people driving Teslas & Priuses because they are more fun...

    I've worked in several industries & in my limited anecdotal experience, I'd say the people I work with now are generally better in this respect. Tech people tend to be pretty switched on and fairly well educated, and therefore a bit more aware of the world outside of their own tiny bubble.

    But whatever man, let's hear some more about your augmented bell end.
     
    • Like x 3
  9. You can call the bingo numbers for all I care. The Gates Foundation is a tax dodge; that is why people like U2 and most ex-presidents set them up in the States. Even Zuck realised that, late to the party. Foundations also get less scrutiny from the IRS. Morality from tech is like Peter Stringfellow being a judge on Strictly.

    Facebook, the NSA, Cambridge Analytica, Google DeepMind seeing NHS patients' details without the patients even knowing... and the list goes on.
     
    • Like x 1
    • Agree x 1
  10. Don't confuse big business/government agencies with your average 'tech guy.' Corporate entities & oppressive governments have been fucking us over since way before the internet was even a badly drawn sketch on some DARPA notepad.

    Calling the majority of tech people immoral is like calling all greasy spoon owners twats because McDonald's made all your kids fat and fucked the rainforests...
     
    • Like x 1
  11. I used to know a girl called rainforest
     
    • Funny x 1
    • Useful x 1
  12. McDonald's hasn't fucked the rainforests or made children fat. It's the people who've bought what McDonald's sells who've done that.
     
    • Agree x 1
  13. Agreed. I wish everyone upset over the Facebook stuff would realise they signed up and willingly handed over all their personal details to a business that makes money from selling those personal details.

    It's a cliche but not enough people get it... If it's free, you're not the customer, you're the product.
     
    #93 Joooooooosh, Mar 30, 2018
    Last edited: Mar 30, 2018
  14. That’s why they call theirs semi-autonomous. They know it’s not ready and therefore place the onus on the driver to remain in control and treat it like a better version of cruise control.

    Even so, 2 deaths in that many hundreds of thousands of miles. One where the driver should have paid attention and the other where nothing would have changed the outcome.

    Regarding the moral aspect: at least a computer can be programmed to behave the same every time. Humans would make different decisions based on split-second reactions. Some would kill themselves and avoid the bus stop, whereas some would opt for self-preservation and plough into the bus stop. They wouldn't necessarily think about the consequences; it would be an instant decision they would probably regret afterwards.
     
  15. Can you imagine autonomous cars in really busy city centres rather than on open roads? Imagine London, with motorcycles and scooters shooting in and out, cyclists who think red lights are optional fashion accessories, or pedestrians with their heads down on their smartphones with their earphones in?
     
  16. Funnily enough, the first person to die in a Tesla in Autopilot mode shared my name, Josh Brown :joy::joy::joy:

    Tesla explicitly states their Autopilot mode is not perfect and requires you to keep your hands on the wheel at all times.
    It's not a true level 5 (fully autonomous) system. Both instances cited in that article involved the drivers ignoring those rules...

    For anyone interested I found an actual study into the crash statistics of autonomous cars:
    http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0184952

    The conclusion seems to be that the sample size of autonomous cars is currently still too small to draw a reliable safety conclusion. They estimate that generating suitable data would need 100 cars operating 24/7, 365 days a year, for 12.5 years.
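    As a rough back-of-the-envelope check of that figure (my own arithmetic, not the paper's, and the 25 mph average speed is an assumption):

        # Sanity check of the data requirement quoted above.
        # The 25 mph average speed is an assumption, not a figure from the paper.
        cars = 100
        hours = cars * 24 * 365 * 12.5        # ~11 million vehicle-hours
        avg_speed_mph = 25                    # assumed urban/suburban average
        miles = hours * avg_speed_mph
        print(f"{hours:,.0f} hours -> {miles / 1e6:.0f} million miles")
        # 10,950,000 hours -> 274 million miles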

    But, with the limited data available, they've found that not a single level 5 test car has been involved in a serious head-on collision (cars travelling in opposite directions).

    The huge majority of accidents have been caused by the autonomous car being hit from behind by a regular car and attributed to human error.

    AV’s have been involved in this type of crash approx every 45,000 Miles, where on average, regular cars are involved in a similar crash every 500,000 Miles. Again, it’s a bit unfair to draw conclusions from that as auto cars are mainly tested in busy, urban areas and regular car crash stats are drawn from cars driven nationwide. (In the US)

    These stats are all from 2017, before the Uber crash obviously.

    What I took from reading that report, though, is that the safety of the cars is rapidly improving as development continues. The safety of human driving has pretty much plateaued; the safety potential of automated cars seems much higher!

    Doesn’t mean I want to give up driving myself mind. I just feel like I’d trust a google car to see me much more than a teenager checking their hair.
     
  17. Actually, this is the kind of situation you'd assume autonomous cars would thrive in. A human driver has two eyes, what, 60-degree vision, and varying levels of attention.

    An autonomous car is effectively 100% alert and has 360-degree vision at all times.

    I read a while back that some cyclists posed a problem. Your typical bike courier was prone to not taking their feet off the pedals at a stop and the Google car would refuse to move because of this, thinking the bike was about to move off. It caused big traffic flow problems. The auto car was actually being TOO cautious.

    Any fault in driverless cars can be reviewed, improved and an update pushed to a whole fleet and the problem removed. How do you do the same with humans?
    We’ve been making the same repeat mistakes for decades and decades.
     
    • Agree x 1
  18. Meh, the technology is not ready. It's being beta-tested on stupid people who trust it. Just IMO of course.
     
    • Agree x 1
  19. But your opinion is wrong.
    Even in this early stage, with a small number of test cars, fully autonomous cars appear to be much safer.
    They still crash but mostly because normal cars drive into the back of them and any injuries are minor.

    Semi-autonomous cars like Teslas are a different story though. I'd agree that drivers trusting them too much is worrying, and I don't like it.

    I’d still wager that like for like, there are less serious accidents in Tesla’s than cars with no auto features.
    It’s just that someone dying in a Ford Focus doesn’t make the news, because it happens all the time...
     
    • Agree x 1