What Really Happened Aboard Air France 447

Popular Mechanics has a fascinating and terrifying look at the decision-making failure in the Air France crash:

“We now understand that, indeed, AF447 passed into clouds associated with a large system of thunderstorms, its speed sensors became iced over, and the autopilot disengaged. In the ensuing confusion, the pilots lost control of the airplane because they reacted incorrectly to the loss of instrumentation and then seemed unable to comprehend the nature of the problems they had caused. AF447 was doomed not by weather or malfunction, nor by a complex chain of errors, but by a simple yet persistent mistake on the part of one of the pilots.

Human judgments, of course, are never made in a vacuum. Pilots are part of a complex system that can either increase or reduce the probability that they will make a mistake. After this accident, the million-dollar question is whether training, instrumentation, and cockpit procedures can be modified all around the world so that no one will ever make this mistake again—or whether the inclusion of the human element will always entail the possibility of a catastrophic outcome. After all, the men who crashed AF447 were three highly trained pilots flying for one of the most prestigious fleets in the world. If they could fly a perfectly good plane into the ocean, then what airline could plausibly say, ‘Our pilots would never do that’?”

Read full story.

  1. What I find lovely is the way that BEA basically covered up the pilot error, treating it as a technical malfunction. Regulatory capture at its finest.

  2. I read the Popular Mechanics article as well as the full BEA report. A real WTF moment for me was the Airbus design decision to *average* (!!) wildly disagreeing control inputs. Apparently Boeing uses force feedback, so that each pilot physically feels the other’s inputs.

    As a guy who writes code for a living, I can’t imagine feeling comfortable with that. “Uh, hey, guys, uh, if we’re going to go this way, then shouldn’t we alarm/indicate when the inputs disagree by more than some percentage?” With AF447 there was more than one instance where the pilots were commanding pitch up and pitch down simultaneously. If pitch down had won, they might all have lived.

    It got me thinking in general about discoverability. Here, you’d think the optimum would be the software equivalent of a debugging console: some kind of zen-master overview.

    Airspeed historical graph, GPS speed historical graph, altitude historical graph, angle of attack historical graph, input control positions, etc, etc.

    In this particular case, you’d think that if this had been available, then somebody would have figured out “hmm… hey, airspeed goes off a cliff at instant X, GPS speed didn’t, hey why the fuck are we increasing angle of attack, why the fuck does rookie have the stick pulled back (KNOCK THAT OFF!), pitch down, ignore airspeed, use GPS speed, lose altitude, ensure we’re de-iced, divert away from storm, emergency land to refuel…”

    The reason this issue was fatal is that the causes were NOT, in any meaningful way, ‘discoverable’ enough.

    • Interesting that Boeing uses force feedback. That’s definitely the most intuitive mechanism. In my answer to a Quora question about this, I suggested a flashing red light to signal the difference.

      This is a pretty complex FCS design problem, and I touched on some of the complexities in my answer, but in the end there is no way to prevent such things entirely. You kinda have to be hit by a black swan and then react. Perhaps you’re right that a different designer in the system would have questioned the averaging design upfront, but it is hard to put yourself in their shoes, since hindsight is hard to get rid of.

      That said, this crash, the Shuttle disasters, and many other aerospace failures all came from the same place: strange work cultures where risk management was delegated to bureaucratic rules, procedures, and systems instead of being programmed into the engineering organizations via a culture of risk sensitivity, one in which individuals can challenge assumed consensus and express dissent. In the NASA case, the final report documented entire meetings where managers would start by stating the assumed conclusion that the meeting was supposed to reach.

      It takes a pretty combative type of person to still express dissent in such cultures. Most just go with the flow, comfortable in the knowledge that the system has been designed to ensure nobody gets blamed so long as they follow the rules.
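The averaging-vs-alarm concern raised in comment 2 above can be sketched in a few lines of Python. This is a toy model, not the actual Airbus flight control law: the normalized input range, the `combine_inputs` helper, and the 50% disagreement threshold are all invented for illustration.

```python
# Toy model of the dual-input concern discussed above. NOT the real
# Airbus FCS -- just a sketch of "average the sticks, but alarm on
# disagreement". Inputs are normalized pitch commands in [-1.0, +1.0].

DISAGREE_THRESHOLD = 0.5  # hypothetical: alarm if sticks differ by more than this


def combine_inputs(left: float, right: float) -> tuple[float, bool]:
    """Return (combined command, disagreement alarm).

    Pure averaging means opposite full deflections cancel to zero --
    roughly what happened on AF447 when one pilot pushed while the
    other pulled. The alarm flag is the commenter's suggested fix.
    """
    combined = (left + right) / 2.0
    alarm = abs(left - right) > DISAGREE_THRESHOLD
    return combined, alarm


# One pilot commands full nose-up, the other full nose-down:
cmd, alarm = combine_inputs(+1.0, -1.0)
print(cmd, alarm)  # 0.0 True -- the inputs cancel, but the alarm fires
```

The point of the sketch is how cheaply the failure mode can be surfaced: the averaging line silently destroys information, while one extra comparison turns a hidden conflict into an annunciated one.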