
Self driving buses, trains, trucks etc


Comments

  • Registered Users, Registered Users 2 Posts: 20,101 ✭✭✭✭cnocbui


    mdebets wrote: »
    An AI would be even better at it than the train driver, because it could calculate which response, braking or accelerating, would be the best solution, and could execute it more precisely, also looking further ahead on the line. What good would it be if the driver accelerated (more than he needed to) and moved the cow out of his way, but was then derailed on the next bend because the train couldn't handle the higher speed and he couldn't brake fast enough for it?

    It couldn't calculate the best solution because the whole situation - an out of the blue scenario not foreseen and allowed for - requires a general intelligence solution and GI is just not on the cards.


  • Posts: 31,118 ✭✭✭✭ [Deleted User]


    cnocbui wrote: »
    It couldn't calculate the best solution because the whole situation - an out of the blue scenario not foreseen and allowed for - requires a general intelligence solution and GI is just not on the cards.
    It's correct to say that AI will never be able to make the right decision in every situation, but humans don't always make the right decision either.
    I believe that self driving vehicles will be restricted to "controlled environments" where they don't have to interact with random humans.
    So this means that they'll probably be restricted to rail and motorways.


  • Moderators, Category Moderators, Arts Moderators, Sports Moderators Posts: 50,039 CMod ✭✭✭✭magicbastarder


    cnocbui wrote: »
    You can't program in changes to AI that are derived from neural networks as you don't know anything about the 'solution' neural networks come up with.
    so there's no 'baseline' programmed into autonomous driving systems? anything they do is something they've learned to derive themselves?
    btw, i repeated the use of 'AI' because that was the term used in the post i was quoting.
    i would assume - citation required - that the basic rules and actions of driving are hard-coded in, and the benefit of neural-net technology is in recognising which situation applies and deciding what to do based on what has been programmed in.
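    the split assumed here, a learned classifier feeding hard-coded rules, can be sketched in a few lines. purely illustrative: the "classifier" is a stub, and every name and rule below is hypothetical, not taken from any real driving system.

```python
# Illustrative sketch of the speculated split: a learned perception model
# classifies the scene, and hard-coded rules decide the action.
# The classifier is a stub; every name and rule is hypothetical.

def classify_obstacle(sensor_frame):
    """Stand-in for a trained neural network's classification output."""
    return sensor_frame.get("label", "unknown")

# The hard-coded, auditable part: rules mapping classes to actions.
RULES = {
    "pedestrian": "emergency_brake",
    "cyclist": "slow_and_give_space",
    "plastic_bag": "continue",
    "unknown": "slow_down",
}

def decide(sensor_frame):
    label = classify_obstacle(sensor_frame)
    # Unrecognised classes fall back to the cautious default.
    return RULES.get(label, "slow_down")

print(decide({"label": "pedestrian"}))  # emergency_brake
print(decide({"label": "deer"}))        # slow_down
```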


  • Registered Users, Registered Users 2 Posts: 1,747 ✭✭✭mdebets


    cnocbui wrote: »
    It couldn't calculate the best solution because the whole situation - an out of the blue scenario not foreseen and allowed for - requires a general intelligence solution and GI is just not on the cards.
    But it's not an out-of-the-blue situation; it's one of three scenarios the AI would have to check for in this particular situation: is the track defective, is something on the track, or is something next to the track moving and likely to be on it when the train reaches it? Especially for a train, you don't need general intelligence for such events; you need reflexes, as the driver has only a split second to make a decision before the train reaches the obstacle. An AI would be able to calculate many different possible scenarios to come to the right decision.
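    The three-way check described here can be sketched as a toy rule-based routine. Names and thresholds are invented for illustration; real train-control logic is nothing this simple.

```python
# Toy sketch of the three-way check: defective track, an obstacle already
# on the track, or something moving towards it. Names and thresholds are
# invented for illustration only.

def choose_action(track_defective, obstacle_on_track, obstacle_closing,
                  distance_m, stopping_distance_m):
    """Return one reflex action for a single look-ahead point."""
    if track_defective:
        return "brake"        # never run onto damaged track
    if obstacle_on_track:
        return "brake"        # shed as much speed as possible before impact
    if obstacle_closing and distance_m < stopping_distance_m:
        return "brake"        # can't stop in time, but reduce impact energy
    return "continue"

print(choose_action(False, True, False, 400, 800))   # brake
print(choose_action(False, False, False, 400, 800))  # continue
```

    A real system would also have to weigh the accelerate option from the cow anecdote, which is exactly where simple reflex rules stop being enough.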


  • Registered Users, Registered Users 2 Posts: 7,354 ✭✭✭plodder


    cnocbui wrote: »
    You can't program in changes to AI that are derived from neural networks as you don't know anything about the 'solution' neural networks come up with.

    It's called the black-box problem and it is a serious concern. If AI guidance is deployed generally and the cars inexplicably start driving straight over people wearing yellow, green and pink clothing on crosswalks, there is no way of finding out why and simply tweaking the offending lines of code, because there aren't any in human-legible form. You would have to try retraining the neural network to stop doing that and eventually issue a software update, but in the meantime you would either have to put up with the deaths, or incapacitate all vehicles running that AI until you had a fix.
    I'm not really sure why a sufficiently powerful artificial neural network would do that, any more than a human would. Sudden changes in behaviour are much more likely to be due to a hardware fault than to a failure of the network itself. A neural network would probably have to be proven in much the same way that human drivers are today: with a public test. Which raises a point I have found strange in all of this: the authorities seem happy to let the manufacturers do all the running, and seem to see their own role as insurance regulation rather than testing.

    The public have so little visibility of what is going on in these companies like Waymo and Uber. I've always been sceptical about Uber and the recent accident tends to prove that. But, we've only got google's own words to show for their claims that they are much more advanced. The public are much more likely to accept the technology if it can be proven independently that it works. So, for example, if a particular system can be tested hundreds of times by a regular driving test in Dublin say, would people trust the technology at that point? I think it's a lot more likely anyway.


  • Registered Users, Registered Users 2 Posts: 15,087 ✭✭✭✭loyatemu


    prinzeugen wrote: »
    Rail came up before. There was an accident recently in the UK (Watford tunnel landslide) that proved there will always need to be a human present on heavy rail.

    The outcome of that would have been different if it was not for the quick and non standard thinking of one of the drivers.

    Another case was a driver doing 100mph on the East Coast route. He spotted a cow on the line ahead. AI would have hit the brakes. Him? He accelerated, so that when the train hit it, the cow would explode or be thrown clear and prevent a derailment. It worked.

    2 examples, versus how many rail accidents caused by human error over the years? How many SPADs would you expect to see with an automated system?


  • Closed Accounts Posts: 1,857 ✭✭✭TheQuietFella


    I was watching a docu' on this during the week and it showed two guys 'driving' a car through the streets of Berlin, and it was impressive. The sensor on the roof of the car performed one million scans per second as it rotated, BUT the driver wasn't comfortable with how close another car was to him, so he assumed control of the car until he felt the danger had passed.

    I still don't believe this will happen, in my lifetime at least, and most certainly not in Ireland, whatever about more modern countries!


  • Closed Accounts Posts: 2,891 ✭✭✭prinzeugen


    loyatemu wrote: »
    2 examples, versus how many rail accidents caused by human error over the years? How many SPADs would you expect to see with an automated system?

    About the same number. Most SPADs in the UK are caused by poor rail conditions.

    Drivers hit the brakes, the wheels lock up, and the train slides. That's classed as a SPAD. Numbers went up with the increased use of disc brakes. TPWS has almost eliminated driver-caused SPADs, however.

    And when was a passenger last killed in the UK or Ireland by driver error? Paddington was the last that I can recall.

    Yes there have been fatal accidents but they were 100% outside the train drivers control and no amount of automation could have prevented them.


  • Closed Accounts Posts: 2,891 ✭✭✭prinzeugen


    what a weird point.
    if we know speeding up can prevent a derailment, we program it in. you seem to think AI (or driving autonomy) exists frozen in time and that current challenges can't even be addressed, let alone solved.

    But how will the AI know what is a human, a human on a cycle, a sheep, a cow??

    Speeding up in this case worked. That would not work with a car on the tracks. It would make it 100 times worse.

    You might have vandals leaving shopping carts on the tracks? There are millions of scenarios that any AI would need to be programmed with.

    And there will always be one that nobody could have predicted (the Great Heck/Selby crash, for example).


  • Registered Users, Registered Users 2 Posts: 4,963 ✭✭✭fly_agaric


    Carol25 wrote: »
    I would love to see a self driving car in somewhere like bangalore in India.
    But i do think it will be possible eventually.
    Huge banks of data will need to be collected & fed into AI over time before it would be possible.

    I wonder what the road accident/casualty statistics are like for cities like that...if they record them properly. They probably show that the human beings aren't coping too well with the chaos.
    Joe/Josephine Soap Irish driver would probably struggle badly plonked down to navigate the roads of an Indian city (edit: without running into something/someone)...:pac:


  • Moderators, Category Moderators, Arts Moderators, Sports Moderators Posts: 50,039 CMod ✭✭✭✭magicbastarder


    prinzeugen wrote: »
    But how will the AI know what is a human, a human on a cycle, a sheep, a cow??
    i suspect it is already able to tell the difference between a human and a cow. just speculating.


  • Closed Accounts Posts: 1,452 ✭✭✭Twenty Grand


    prinzeugen wrote: »
    About the same number. Most SPADs in the UK are caused by poor rail conditions.

    Drivers hit the brakes, the wheels lock up, and the train slides. That's classed as a SPAD. Numbers went up with the increased use of disc brakes. TPWS has almost eliminated driver-caused SPADs, however.

    And when was a passenger last killed in the UK or Ireland by driver error? Paddington was the last that I can recall.

    Yes there have been fatal accidents but they were 100% outside the train drivers control and no amount of automation could have prevented them.

    While the argument for safety is touted as the main reason, replacing drivers is the main focus for most companies.
    The first haulage company to buy autonomous artics is going to have an unbeatable advantage.


  • Posts: 31,118 ✭✭✭✭ [Deleted User]


    While the argument for safety is touted as the main reason, replacing drivers is the main focus for most companies.
    The first haulage company to buy autonomous artics is going to have an unbeatable advantage.
    The haulage companies are going to wait until a few thousand have been used in taxis and the like; the tech will be much cheaper by then.


    But yes, it's all about the bottom line. People are a great expense for most companies. But if you put too many out of work, it will backfire as you lose customers for your products.


  • Closed Accounts Posts: 1,452 ✭✭✭Twenty Grand


    The haulage companies are going to wait until a few thousand have been used in taxis and the like; the tech will be much cheaper by then.
    How much does a driver cost? 40k a year maybe? And he can drive for 10 hours a day max?

    So an automated system driving 20 hours a day will save the company maybe 80k a year. That's nearly a quarter of a million over 3 years.

    A quick google says an automated system costs anywhere from 15k to 75k (sure, they're not for sale yet :rolleyes:). That's less than a year's payback.
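    The back-of-the-envelope sums here check out (all figures are the rough guesses from this post, not real prices):

```python
# Payback check using the rough figures above; none of these numbers
# are real prices, just the guesses from the post.

driver_salary = 40_000        # one driver, per year
drivers_replaced = 2          # 20 h/day of driving ~= two 10 h shifts
annual_saving = driver_salary * drivers_replaced

system_cost_high = 75_000     # top of the guessed 15k-75k range

print(annual_saving)                     # 80000 a year
print(annual_saving * 3)                 # 240000, "nearly a quarter million"
print(system_cost_high / annual_saving)  # 0.9375, under a year's payback
```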
    But if you put too many out of work, it will backfire as you lose customers for your products.

    Yeah, but who really cares about that? If it's the case that your haulage/taxi/delivery/limo/farm/whatever company either adapts or dies, no one is going to be worrying about long-term societal impact.


  • Posts: 0 [Deleted User]


    Railroads should have seat at the autonomous vehicle table, AAR says

    The Association of American Railroads today called on the federal government to “include our industry” in the national conversation over the future of autonomous vehicles, saying the Department of Transportation’s efforts to develop policies governing the safety of AVs should be extended to the rails.

    Full article at https://www.freightwaves.com/news/railroads/rails-should-have-seat-at-av-table-aar-says


    Info on the AAR
    The Association of American Railroads (AAR) is an industry trade group representing primarily the major freight railroads of North America (Canada, Mexico and the United States). Amtrak and some regional commuter railroads are also members.

    AAR Full Members
    Alaska Railroad
    Railroads owned by Anacostia & Pacific Company, Inc.
    BNSF Railway Company
    Canadian National Railway: US operations
    Canadian Pacific Railway: US operations
    CSX Transportation
    Florida East Coast Railway
    Railroads owned by Genesee & Wyoming Inc.
    Indiana Rail Road
    Iowa Interstate Railroad
    Railroads owned by Iowa Pacific Holdings
    Kansas City Southern Railway
    Metra
    National Railroad Passenger Corporation (Amtrak)
    Norfolk Southern Railway
    Pan Am Railways, Inc
    Union Pacific Railroad
    Vermont Railway
    Railroads owned by WATCO Companies
    Wheeling and Lake Erie Railway (1990)
    AAR Special Members
    Canadian Pacific Railway: Canadian operations
    Ferrocarril del Istmo S.A. de C.V.
    Ferrocarril del Sureste
    Ferrocarril Mexicano
    Ferrocarril Y Terminal del Valle de Mexico, S.A. de C.V.
    Kansas City Southern de Mexico, S.A. de C.V.

    https://en.m.wikipedia.org/wiki/Association_of_American_Railroads


  • Posts: 0 [Deleted User]


    The autonomous buses are nearly here! Kinda, sorta

    This is the 2nd of 4 rollouts by this particular company
    Driverless Mini-Buses Headed to Providence Next Year

    PROVIDENCE — First there were electric bikes, then electric scooters, and next on the way are autonomous electric vehicles.

    In a pilot project sponsored by the Rhode Island Department of Transportation (RIDOT), a small fleet of six-person shuttle buses is expected to begin circulating through a stretch of the city this spring.

    Full article - https://www.ecori.org/transportation/2018/12/3/driverless-mini-buses-heading-to-providence


  • Posts: 0 [Deleted User]


    The autonomous revolution moves on. No "backup driver", totally autonomous.

    Self Driving Cars to Deliver Groceries

    https://usaherald.com/self-driving-cars-deliver-groceries/


  • Posts: 0 [Deleted User]


    As for trains, totally autonomous too

    Australian Autonomous Train Is Being Called The "World's Largest Robot"

    https://www.sciencealert.com/australian-autonomous-train-is-being-called-the-world-s-largest-robot


  • Posts: 0 [Deleted User]


    Next is the aviation industry, with Boeing investing in autonomous flights

    Self-flying planes could be ready for take-off within a decade

    https://www.smh.com.au/business/companies/self-flying-planes-could-be-ready-for-take-off-within-a-decade-20181218-p50mxi.html


  • Registered Users, Registered Users 2 Posts: 20,101 ✭✭✭✭cnocbui


    That's so reassuring, given that their 'we know better than you, Mr stupid pilot' software just killed 189 people.

    How does an autonomous system even know of, let alone decide on, whether to divert to a nearby airport due to an on-board medical emergency?

    It might happen for an air taxi as mentioned in the article, but even that I doubt very much. I usually avoid absolutes like saying 'never', but this is one case where I think never would apply to autonomous commercial flights carrying large numbers of passengers.


  • Closed Accounts Posts: 1,452 ✭✭✭Twenty Grand


    cnocbui wrote: »
    That's so reassuring given their 'we know better than you, Mr stupid pilot' software just killed 189 people.
    .

    You might want to call Air Crash Investigation with your information, because there's no evidence so far of a software fault.
    The last report is that it was a faulty sensor that gave incorrect information to pilots who were inadequately trained.


  • Registered Users, Registered Users 2 Posts: 20,101 ✭✭✭✭cnocbui


    You might want to call Air Crash Investigation with your information, because there's no evidence so far of a software fault.
    The last report is that it was a faulty sensor that gave incorrect information to pilots who were inadequately trained.

    And you might want to stow your ill-informed sarcasm. There is a desperate attempt on the part of Boeing to paint this as a maintenance failure by the airline, because of the damages they are potentially facing, not to mention the reputational damage and the considerable potential sales damage, which is likely in the many hundreds of millions.

    The sensor looks to have mis-reported the plane's angle of attack, causing the computers to suddenly and severely nose the plane down in order to prevent the imminent stall the system thought was about to occur, due to the wrong angle of attack being reported.
    If the sensor fails to send correct information, it can confuse both the plane's computer and its pilots, causing an aircraft to take a sudden dive.
    And from the moment they retracted the wing flaps at about 3,000 feet, the two pilots struggled -- in a 10-minute tug of war -- against a new anti-stall flight-control system that relentlessly pushed the jet's nose down 26 times before they lost control.

    Though the pilots responded to each nose-down movement by pulling the nose up again, mysteriously they didn't do what the pilots on the previous day's flight had done: simply switched off that flight-control system.

    So, in the absence of human pilots trying to override the computer, a completely autonomous system, given the malfunctioning sensor, would have driven the plane into the ground on the first of the 26 nose-downs.
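    That tug-of-war can be caricatured in a few lines: with a stuck angle-of-attack sensor, the anti-stall logic commands nose-down on every control cycle until someone disengages it. A deliberately crude toy model, nothing like real flight software:

```python
# Crude toy model of the tug-of-war: a stuck angle-of-attack sensor makes
# the anti-stall logic command nose-down every cycle until the crew
# disengages it. Not remotely real flight software.

STALL_AOA_DEG = 14.0

def fly(stuck_sensor_aoa, cycles, disengage_after=None):
    """Count nose-down commands over a number of control cycles."""
    nose_down_commands = 0
    for cycle in range(cycles):
        if disengage_after is not None and cycle >= disengage_after:
            break                     # crew switches the system off
        if stuck_sensor_aoa >= STALL_AOA_DEG:
            nose_down_commands += 1   # system trims nose-down...
            # ...and the crew pulls the nose back up, so the loop repeats.
    return nose_down_commands

print(fly(22.0, 26))                     # 26 - the count the data showed
print(fly(22.0, 26, disengage_after=5))  # 5 - cutting the system out ends it
print(fly(5.0, 26))                      # 0 - healthy sensor, no commands
```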

    There appears to have been pilot error in not disengaging the autopilot after it originally malfunctioned, but perhaps there is some reason they didn't/couldn't that will emerge.


  • Closed Accounts Posts: 1,452 ✭✭✭Twenty Grand


    cnocbui wrote: »
    And you might want to stow your ill-informed sarcasm. There is a desperate attempt on the part of Boeing to paint this as a maintenance failure by the airline because of the damages they are potentially facing, not to mention the reputational and considerable potential sales damage, which is likely in the many hundreds of millions.

    There appears to have been pilot error in not disengaging the autopilot after it originally malfunctioned, but perhaps there is some reason they didn't/couldn't that will emerge.
    My sarcasm is pointing out the incredible jumps to conclusions you're making based on the scant evidence available.

    If I had to put money on it, I'd place the blame on the aircraft mechanics who fitted the sensor, as the plane also had malfunctioning airspeed sensors on previous flights that were not fixed for some reason.

    Time will tell anyways.


  • Registered Users, Registered Users 2 Posts: 20,101 ✭✭✭✭cnocbui


    My sarcasm is pointing out the incredible jumps to conclusions you're making based on the scant evidence available.

    If I had to put money on it, I'd place the blame on the aircraft mechanics who fitted the sensor, as the plane also had malfunctioning airspeed sensors on previous flights that were not fixed for some reason.

    Time will tell anyways.

    I'm not jumping to conclusions; I am going on a commercial-aviation-oriented site's reporting of what the flight data recorders revealed. These are facts, not conjecture. I even provided a direct quote, which seems to have slipped your attention: https://www.aviationpros.com/news/12437907/lion-air-pilots-struggled-against-boeings-737-max-flight-control-system-black-box-data-shows

    The relevant issue here is that an AI flight-control system relies wholly on what the various flight-attitude sensors report to it. In this case its response was to kill 189 people. The human pilots were fighting the system to counteract its complete failure. Had they disconnected the autonomous system, they could have saved the aircraft and everyone on it. With no human pilots to discern that the AI was malfunctioning, the plane would have crashed far sooner.

    The failed sensors and perhaps faulty maintenance highlight how dangerous wholly AI flight-control systems would be, because maintenance glitches and faulty sensors happen, which is only one of many reasons why completely autonomous flight-control systems in charge of commercial passenger aircraft are a bad idea.


  • Posts: 31,118 ✭✭✭✭ [Deleted User]


    Which does of course beg the question, "who or what is likely to make more mistakes, a human or an AI device?"
    In the case of the air crash it appears the AI failure was compounded by the human error of not switching off the autopilot.


  • Registered Users, Registered Users 2 Posts: 20,101 ✭✭✭✭cnocbui


    Which does of course beg the question, "who or what is likely to make more mistakes, a human or an AI device?"
    In the case of the air crash it appears the AI failure was compounded by the human error of not switching off the autopilot.

    The pilots didn't 'compound' the AI failure - make the situation worse - they failed to mitigate it. Without the pilots there wouldn't have been any attempt at mitigation.


  • Posts: 31,118 ✭✭✭✭ [Deleted User]


    cnocbui wrote: »
    The pilot's didn't 'compound' the AI failure - make the situation worse - they failed to mitigate it. Without the pilots there wouldn't have been any attempt at mitigation.
    You're being oversensitive about the words used!!!
    There were two failures, one by the AI and one by the pilots. Does it matter if the word was 'compound' rather than 'mitigate'? The end result is the same.


  • Closed Accounts Posts: 1,452 ✭✭✭Twenty Grand


    From the same link.
    Note that most of the report you linked blames human error, either by the pilots or by maintenance. It does say there are "potential" design flaws, but more to do with there being no backup than with the actual sensor or software.

    The data points to three factors that seem to have contributed to the disaster:

    * A potential design flaw in Boeing's new anti-stall addition to the MAX's flight-control system and a lack of communication to airlines about the system.

    * The baffling failure of the Lion Air pilots to recognize what was happening and execute a standard procedure to shut off the faulty system.

    * And a Lion Air maintenance shortfall that allowed the plane to fly repeatedly without fixing the key sensor that was feeding false information to the flight computer on previous flights.


    Their failure to shut off the automated tail movements is baffling.

    "No one would expect a pilot to sit there and play tag with the system 25 times" before the system won out, said Lemme. "This airplane should not have crashed. There are human factors involved

    Lion Air has a very poor safety record and has been accused of skimping on maintenance to cut costs.

    One would imagine the reason for no redundancy is that there are pilots to take control if anything happens.

    The pilots failed here.


  • Moderators, Category Moderators, Arts Moderators, Sports Moderators Posts: 50,039 CMod ✭✭✭✭magicbastarder


    Bikes put spanner in works of Dutch driverless car schemes
    Report highlights problems bicycles cause to self-driving cars’ detection systems
    https://www.theguardian.com/world/2019/feb/13/bikes-put-spanner-in-works-of-dutch-driverless-car-schemes

