There is an argument against such help: drivers will learn to rely solely on the warnings and stop properly monitoring the road, so as soon as the system fails, accidents follow.
That doesn't match my experience. For an automatic system that replaces a human to be adopted, it has to be almost perfect in the eyes of the public and regulators. For example, an airplane autopilot is more reliable than the pilot, yet we still have pilots.
Yes, and autopilots are absolutely terrible at dealing with unexpected events. See: UA232, BA38, US1549, AC143. In each of those accidents, passengers survived who would have died at the hands of an autopilot.
1) In the BA case, the autothrottle blindly commanded full thrust and the autopilot pitched up in spite of decreasing airspeed. Humans correctly determined that the engines were not producing thrust and elected to make an off-runway landing instead of stalling the plane into the houses adjacent to the airport.
2) In the UA case, an uncontained engine failure disabled all three hydraulic systems, leaving the plane without primary flight controls. The autopilot had no provision to command differential thrust from the two working engines, and would have driven the plane into the ground (see the throttles-only sketch at the end of this comment).
3) In the AC case, humans glided the plane at best L/D after it ran out of fuel, which enabled them to land at a closed airport; any airport database would have told the autopilot it couldn't land there (a rough glide-range sketch follows this list).
4) In the US case, humans determined that reaching an airport would be impossible and elected to land the airplane on the Hudson River, which has no published approach procedure (and never will).
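To make the AC143 point concrete: a system that only trusts its airport database has no legal option once every "open" field is out of glide range. Here is a back-of-the-envelope sketch in Python; the glide ratio, distances, and the two-row "database" are all invented for illustration, not real performance data.

```python
GLIDE_RATIO = 12.0  # assumed rough L/D for a large airliner, clean configuration

def glide_range_nm(altitude_ft, glide_ratio=GLIDE_RATIO):
    """Still-air glide distance in nautical miles from a given altitude."""
    return altitude_ft * glide_ratio / 6076.0  # ~6,076 ft per nautical mile

def reachable_airports(altitude_ft, airports):
    """Filter a (hypothetical) airport database down to open fields in range."""
    max_nm = glide_range_nm(altitude_ft)
    return [a for a in airports
            if a["distance_nm"] <= max_nm and a["status"] == "open"]

# Illustrative numbers only, loosely evoking the AC143 situation.
airports = [
    {"name": "Winnipeg", "distance_nm": 65, "status": "open"},    # too far to glide
    {"name": "Gimli",    "distance_nm": 12, "status": "closed"},  # filtered out
]
print(reachable_airports(28000, airports))  # -> []
```

The empty list is the point: the nearest field is flagged closed, so a rule-following system filters out the one option the crew actually took.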
Please explain how you expect autopilot programmers to anticipate these scenarios and correctly choose a solution.
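For what it's worth, throttles-only control did later prove workable: NASA's Propulsion Controlled Aircraft experiments, begun after UA232, demonstrated it on real aircraft. Below is a toy sketch of the core idea; the gains, sign conventions, and limits are invented purely for illustration, and a real control law is far more involved.

```python
def clamp(x, lo, hi):
    """Keep a thrust command within physical throttle limits."""
    return max(lo, min(hi, x))

class ThrottleOnlyController:
    """Toy throttles-only controller: collective thrust nudges the flight
    path, and differential thrust yaws the aircraft, which banks it via
    dihedral effect. Gains are assumed, not tuned for any real aircraft."""

    def __init__(self, kp_heading=0.02, kp_path=0.05):
        self.kp_heading = kp_heading  # heading error (deg) -> thrust split
        self.kp_path = kp_path        # flightpath error (deg) -> collective thrust

    def command(self, heading_err_deg, path_err_deg, base_thrust=0.6):
        # Positive heading error = turn right = more thrust on the left engine.
        collective = clamp(base_thrust + self.kp_path * path_err_deg, 0.0, 1.0)
        split = self.kp_heading * heading_err_deg
        left = clamp(collective + split, 0.0, 1.0)
        right = clamp(collective - split, 0.0, 1.0)
        return left, right

ctl = ThrottleOnlyController()
print(ctl.command(heading_err_deg=5.0, path_err_deg=-2.0))  # ~ (0.6, 0.4)
```

And that is rather the point: this control law exists at all only because UA232 happened first. Nobody shipped it in anticipation.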