On March 10th, 2018, a 950-ton, privately built pedestrian bridge was swung into place across an eight-lane roadway in Miami in a record six hours.
On March 15th it set a grimmer record, failing catastrophically after only five days and killing six people (the toll may rise) by crushing them and their cars on the roadway below. The blame game and finger pointing have now begun.
As the author of a book on crash rate theory, I realized I should comment. So I waited for the first signs of data. No, I will not wait for every last scrap of it, for two reasons:
1. You do not have every last scrap of data when designing and constructing a bridge, and that is the time to act: before the disaster, not after.
2. No NTSB investigation ever prevented a crash. Something else does, and that something is not operative here.
I realize my #2 statement is shocking, but bear with me. I’m going to take #1 first.
Crashes can only be prevented beforehand, and beforehand there are no dead bodies, crying relatives, or grandstanding politicians (Marco Rubio in this case) giving urgency to the cause. There are only accountants clawing back duly budgeted funds for margin and testing, and managers and customers demanding the highest value at the lowest cost. Solid data proving there will be a crash does not exist. Designers, constructors and customers need methods that survive the accountants and the managers, in short, the economic pressures.
A dramatically valuable technology, like erecting a bridge across eight lanes of traffic in six hours, might itself increase the risk of a crash, or virtually guarantee one, so countermeasures have to be deployed. If you leave valuables on the street, thieves will take them. If you use a lot of energy, entropy will increase. If you have great ideas, they will fail. These are laws of nature, despite our culture's insistence that no idea is bad and no feature or cost saving has consequences. Read the book to see what you can do about it. In fact, the company that understands this will reach the optimal risk operating point first, and make the most money. That is why the book has the awkward title Economic Optimization of Innovation and Risk. Yes, I linked my book twice in one blog post. This is a book blog, in case you didn't get that.
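That optimal risk operating point can be made concrete with a toy model. The numbers below are invented for illustration and are not taken from the book: assume each dollar of mitigation spend (margin, testing, inspection) cuts crash probability exponentially, and a crash carries a fixed loss. Total expected cost then has an interior minimum; spend too little and expected crash losses dominate, spend too much and the mitigation itself becomes the loss.

```python
import math

# Invented illustrative numbers, not from the book:
P0 = 0.5   # assumed baseline crash probability with zero mitigation spend
L  = 50e6  # assumed total loss if the crash happens, dollars
K  = 2e6   # assumed dollars of mitigation per e-fold cut in crash probability

def expected_total_cost(spend):
    """Mitigation spend plus the expected crash loss remaining at that spend."""
    return spend + P0 * math.exp(-spend / K) * L

# Setting d/ds [s + P0 * L * exp(-s/K)] = 0 gives s* = K * ln(P0 * L / K),
# valid whenever P0 * L > K (i.e., mitigation is worth buying at all).
s_opt = K * math.log(P0 * L / K)

print(f"optimal mitigation spend:     ${s_opt / 1e6:.1f}M")
print(f"expected cost at zero spend:  ${expected_total_cost(0) / 1e6:.1f}M")
print(f"expected cost at the optimum: ${expected_total_cost(s_opt) / 1e6:.1f}M")
```

With these made-up numbers, spending nothing carries an expected cost of $25M, while spending about $5M at the optimum gets the total down to roughly $7M. That interior minimum is the whole argument in miniature.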
The book emphasizes test and inspection as cost-effective means of achieving the optimal risk goal, much cheaper than paying for crashes or simply overpaying for capabilities. It also recommends continuing to test after deployment, and that is what prompted this blog post: the bridge in question seems to have failed during a stress test applied after it was built and the road underneath had been opened to traffic.
What I said in the book was to leave test data gathering systems in place, keep looking at the data, and make appropriate adjustments. Not to place artificial test loads on an operational vehicle, structure or system while the public is exposed underneath. There is a very great difference, and I did not want my statement to be misinterpreted.
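To make the distinction concrete, here is a minimal sketch of the passive approach; the channel names, limits, and readings are hypothetical and come from no real bridge program. Instruments left in place record the loads the structure actually experiences under normal service, and a periodic review flags any channel creeping toward its design limit. No artificial load is ever applied.

```python
from statistics import mean

# Hypothetical design limit and alert threshold, invented for illustration.
DESIGN_LIMIT = 1200.0   # allowable strain, microstrain
ALERT_FRACTION = 0.75   # review well before the limit, not at it

def review(readings):
    """Flag any channel whose recent readings are creeping toward the limit."""
    return [
        (channel, mean(values[-10:]))
        for channel, values in readings.items()
        if mean(values[-10:]) > ALERT_FRACTION * DESIGN_LIMIT
    ]

# Hypothetical data from gauges left in place after construction; the
# structure only ever carries its normal loads while these are recorded.
readings = {
    "diagonal_truss": [880, 910, 905, 940, 960, 990, 1010, 1030, 1060, 1085],
    "deck_midspan":   [300, 310, 295, 305, 300, 310, 305, 300, 310, 305],
}
for channel, level in review(readings):
    print(f"REVIEW {channel}: recent mean {level:.0f} microstrain is above "
          f"{ALERT_FRACTION:.0%} of the design limit")
```

The point of the sketch is the asymmetry: the data costs almost nothing to keep collecting, and acting on a creeping trend means closing the road and investigating, not loading the structure harder to see what happens.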
OK, enough of that; on to my inflammatory statement about the NTSB. An NTSB investigation cannot prevent a crash because the NTSB only investigates afterward. Duh.
Hold on. I know you think they make recommendations and airline operators and manufacturers implement those recommendations, and that prevents future crashes.
It certainly does not. Recommendations, even mandatory ones, frequently go unimplemented; failure to implement old recommendations is a recurring finding in subsequent crash investigations. Yet the airline industry does have, by some measures, a better safety record than other forms of transportation. What gives?
What prevents future crashes is fear. Fear on the part of the public of traveling with operators or on equipment that has had a couple of major crashes. This fear drives the bad operators and manufacturers, from ValuJet to de Havilland, out of business one way or another. (ValuJet's assets were acquired by AirTran, which was later acquired by Southwest. Think about that for a minute.)
Inspections that happen before a crash can prevent one, but only if the results are heeded. Inspections revealed erosion of the O-ring seals in the Shuttle's solid rocket boosters, but no corrective action was taken, and the missions were deemed so valuable that one was launched on a cold day in January. That is how feature or mission value drives crash rate.
In fact, the three worst disasters in NASA's manned space program, the Apollo 1 fire, the Challenger explosion that killed the “Teacher in Space,” and the Columbia breakup caused by insulating foam striking the wing, a damage mechanism that had been known for a long time, occurred on January 27th, January 28th and February 1st respectively.
Foam shedding? Known for years? So the investigation into Challenger produced what? Nothing lasting, apparently. No institutional insight that a recurring, documented damage mechanism, waved off because the missions were so valuable, eventually kills a crew.
Post-crash investigations make people feel better, but they do not prevent crashes.
Regulatory action penalizing operators and manufacturers does not prevent crashes either. My good friend's wife was killed by a drunk driver whose license had been suspended for … wait for it … drunk driving.
Test and inspection work. Breathalyzers that lock out the car's ignition are imposed by many judges in Texas, but apparently they do not seem harsh enough for MADD or for some Texas judges, and one was not used on the driver who killed my friend's wife. So she is, well, dead.
For quantitative answers about how much to do, even lacking precise data, read the book. Why are quantitative answers needed? Why not just do everything possible?
Try that in any engineering organization. Engineers and managers quickly figure out that raising a safety issue brings funding, promotions, and so on. This is why “everybody stay vigilant” doesn't work: people stay vigilant only if their vigilance is heeded, and heeding all of it produces cost bloat, so you go out of business that way instead of from the public's reaction to a crash. A mathematical, quantitative theory is therefore essential.
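As a minimal sketch of what a quantitative rule looks like (the issues and numbers below are invented, and this is far cruder than the book's machinery): price each raised issue by the expected crash loss it would avoid, and fund it only when that exceeds its cost. Vigilance gets heeded exactly when it pays, which is what keeps both the crash rate and the cost bloat in check.

```python
# Hypothetical raised issues: (name, fix cost $, crash probability reduction,
# loss if that crash happens $). All values are invented for illustration.
issues = [
    ("gusset re-inspection",    200_000, 1e-3, 500e6),
    ("extra load sensors",       50_000, 2e-4, 500e6),
    ("redundant formwork",    3_000_000, 1e-6, 500e6),
    ("full design re-review", 8_000_000, 4e-3, 500e6),
]

# Rank by expected loss avoided per dollar; fund only issues that pay for
# themselves (treating the issues as independent for simplicity).
for name, cost, dp, loss in sorted(issues, key=lambda i: i[2] * i[3] / i[1],
                                   reverse=True):
    benefit = dp * loss  # expected crash loss this fix would avoid
    verdict = "FUND" if benefit > cost else "SKIP"
    print(f"{verdict} {name}: avoids ${benefit:,.0f} expected loss "
          f"for ${cost:,.0f}")
```

Note that two of the four issues get skipped even though each is a genuine safety concern. That is the point: a rule that funds everything teaches people to raise everything, while a rule with numbers in it rewards raising the issues that matter.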