THE GREAT SUCKING SOUND IN ENGINEERING • Part 1 of 4
Ross Perot was right. Unfortunately.
Rules-Based Design: Formalizing What We Couldn’t Fully Explain
Herbert Roberts, PE
The Directive
During the second presidential debate of the 1992 campaign, on October 15, third-party candidate Ross Perot warned George H.W. Bush, Bill Clinton, and the watching public of a “giant sucking sound” of U.S. jobs moving to Mexico due to lower wages and fewer regulations. Most Americans pictured factory floors. They imagined assembly lines disassembled, crated up, and shipped south. Few imagined the sound would reach into the engineering offices where gas turbine components were designed—where structural engineers analyzed variable vanes, compressor disks, and turbine cases for the next generation of jet engines. But it did not take long for aviation corporations to hear that sound and begin calculating how to lower their operational costs.
At a major engine OEM, the response arrived as a corporate directive. For “ISO 9000” and “quality” reasons, every design team was required to document each step of its engineering process. The instruction was straightforward: write out, in a cookbook-recipe style, exactly how you perform your work. Step one, step two, step three. From concept through analysis through final release. The stated purpose was standardization and quality assurance. The unstated purpose was portability.
If the work could be written down, the work could be moved.
What Lived in the Notebooks
Up until that directive, the “how” of structural engineering in gas turbine design existed as tribal knowledge. It lived in lab notebooks—handwritten records passed from senior engineer to junior engineer across generations. Each notebook contained empirical rules, lessons learned from hardware failures, calibration notes from engine tests, and the accumulated judgment of engineers who had watched parts succeed and fail under real operating conditions.
There were, of course, documented procedures that the organization maintained in its technical library. But like most technology-based organizations, those procedures began aging the moment they were written down. The engineers who paid attention to manufacturing feedback and engine test data would make quiet modifications to capture tribal knowledge as it revealed itself—pencil notes in margins, revised assumptions, updated boundary conditions that never made it back into the official document. The formal procedure was a snapshot. The real methodology was a living system.
Consider a simple analogy. If I asked my great-grandparents how to make spaghetti and meatballs, the recipe card would read: gather flour, yeast, tomatoes, garlic, and a pound of beef; get the rolling pin and the meat grinder; start from scratch. My parents’ version would be: buy a box of pasta, a jar of the “good” sauce, and some pre-rolled meatballs. In my generation, the instruction might be: head to the deli for the pre-made meal, or just go to a restaurant. And if you outsource the task entirely, the meal you receive is likely coming out of the frozen food section.
At every step, the output is spaghetti and meatballs. The costs in time and money shift—one could argue whether for better or worse—but as you move from one end of that spectrum to the other, something critical changes. If the flavor is not to your liking, the ability to apply continuous improvement grows increasingly limited. By the time the process has been fully abstracted, the only variables left to optimize are time and cost. The recipe is gone. The judgment that made it work is gone. And no amount of procedural documentation will bring either one back.
The methodology itself was not mysterious. A senior engineer would teach a new hire how to size a compressor disk by walking through the process: (a) define the load conditions from the engine cycle, (b) estimate stress concentrations using references like Peterson’s Stress Concentration Factors and Roark & Young’s Formulas for Stress and Strain, and (c) apply material property allowables that carried built-in conservatism reflecting decades of correlation between prediction and test. The method was learned by doing, which meant the engineer absorbed not just the calculation sequence but the reasoning behind each assumption.
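The three-step sequence above can be sketched numerically. The sketch below is illustrative only: every number in it (shaft speed, disk dimensions, the Kt value, the material allowable) is a hypothetical placeholder, not a value from any real engine program. The hoop-stress estimate is the classic closed-form rotating-disk formula of the kind found in Roark & Young.

```python
import math

# (a) Load conditions from the engine cycle (hypothetical values)
rpm = 8_000                       # max shaft speed, rev/min
omega = rpm * 2 * math.pi / 60    # angular velocity, rad/s
density = 8_190                   # nickel-alloy-class density, kg/m^3
r_bore, r_rim = 0.05, 0.25        # disk bore and rim radii, m

# Closed-form hoop stress at the bore of a rotating annular disk
# (uniform thickness, Poisson's ratio ~0.3) -- the kind of hand
# estimate a senior engineer would pull from Roark & Young:
nu = 0.3
sigma_nominal = (density * omega**2 / 4) * (
    (3 + nu) * r_rim**2 + (1 - nu) * r_bore**2
)  # Pa

# (b) Stress concentration for a feature such as a bolt hole,
# read off a chart like Peterson's (hypothetical Kt)
Kt = 2.4
sigma_peak = Kt * sigma_nominal

# (c) Material allowable carrying built-in conservatism (hypothetical)
sigma_allowable = 900e6  # Pa, e.g. a minimum-property allowable

margin = sigma_allowable / sigma_peak - 1.0
print(f"nominal hoop stress: {sigma_nominal / 1e6:.0f} MPa")
print(f"peak stress (Kt={Kt}): {sigma_peak / 1e6:.0f} MPa")
print(f"margin of safety: {margin:+.2f}")
```

What the sketch cannot capture is exactly the point of this section: why Kt is 2.4 and not 2.1, and why the allowable carries the knockdown it does, lived in the notebooks and the people, not in the steps.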
Equally important was what the notebooks did not contain. No one wrote down why a particular safety factor was chosen, or why certain boundary conditions were applied the way they were. Those decisions had been made by engineers who retired years or decades earlier. The rationale lived in institutional memory—in the space between the steps, not in the steps themselves.
The Tool Transition
As new computational tools became available, the nature of the tribal knowledge shifted. Hand calculations using Peterson and Roark & Young gave way to two-dimensional finite element analysis (FEA) in NASTRAN and ANSYS, which in turn gave way to full three-dimensional FEA models. Hand-drafted drawings gave way to two-dimensional computer-aided design (CAD), then to 3D parametric solid models. At each transition, the “how” was updated—new steps replaced old ones—but the underlying “why” remained undocumented.
The irony was structural. Senior engineers who had learned by hand methods understood the conservatism built into those methods, even if they could not always articulate it. The next generation learned the updated process—FEA modeling, CAD geometry, automated meshing—and followed the revised step-by-step procedures faithfully. But the procedures were a moving target, and not many engineers across any generation could give a definitive explanation for why the analysis process was structured the way it was. Those who had developed the original logic had long ago retired.
Accuracy Without Calibration
The original hand-calculation methods carried enough conservatism in material property allowables and simplifying assumptions that older designs had been flying successfully for years. The introduction of CAD and FEA moved new engineers’ analysis results away from those conservative estimates and closer to actual structural and manufacturing limits. The higher accuracy of computer-based models was real, but those models were not regularly re-calibrated against the empirical test data that had informed the original assumptions.
As the conservatism of the old methods faded, new structural issues began appearing—issues that the newly adopted analysis tools did not flag. Parts that had never cracked before were not meeting design life goals. Components whose vibration analyses had evaluated them in isolation were now encountering high-cycle fatigue (HCF) issues driven by proximity effects and operating conditions the model never considered. The new tools were not less effective. They were more accurate. But the operational assumptions embedded in the old methods were no longer valid, and no one had catalogued what those assumptions were.
The margin that hand calculations provided by being “wrong” in the right direction had quietly protected designs from failure modes that the new, more accurate tools now exposed without flagging.
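A hypothetical set of numbers makes the mechanism concrete. Suppose a part's true capability is 820 MPa against a published allowable of 800 MPa, the old hand method over-predicts stress by 25%, the new FEA predicts within ~2%, and an unmodeled proximity effect adds 10% stress in service. None of these values come from a real program; they are invented purely to show the arithmetic of margin erosion.

```python
true_capability = 820.0   # MPa, actual part capability (hypothetical)
allowable = 800.0         # MPa, published design allowable (hypothetical)
unmodeled_penalty = 1.10  # e.g. a proximity-driven HCF effect: +10% stress

def in_service_stress(prediction_bias):
    """If the designer pushes the design until the predicted stress
    equals the allowable, the stress the hardware really sees is the
    allowable divided by the method's over-prediction bias."""
    return allowable / prediction_bias

methods = [("hand method (25% conservative)", 1.25),
           ("accurate 3D FEA (~2% high)", 1.02)]

for name, bias in methods:
    nominal = in_service_stress(bias)
    with_surprise = nominal * unmodeled_penalty
    verdict = "survives" if with_surprise < true_capability else "fails"
    print(f"{name}: sized to {nominal:.0f} MPa; "
          f"with unmodeled effect {with_surprise:.0f} MPa -> {verdict}")
```

Under these invented numbers, the hand-sized part absorbs the 10% surprise inside its hidden 25% bias, while the accurately sized part, sitting just under the allowable, does not. The more accurate tool is not wrong; it simply no longer pays for mistakes no one knew they were making.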
The Illusion of Portability
This was the fundamental flaw in the executive directive to formalize rules-based design. The assumption was that engineering methodology could be captured in a Microsoft Word document—that if you wrote down the steps, anyone could follow them. The assumption confused process with understanding.
Beyond the documentation challenge, three deeper problems made the effort structurally unsound. First, the conservatism embedded in older methods was never explicitly identified, which meant it could not be deliberately preserved when transitioning to new tools. Second, the feedback loops that connected design engineers to manufacturing data and engine test results—the mechanisms by which tribal knowledge was generated in the first place—were not part of the documentation. You cannot write down a feedback loop. Third, the “why” behind each analytical choice had already been lost before the documentation effort began, which meant the formalized process was capturing an incomplete picture of an already degraded methodology.
None of this stopped the executives from pursuing their objective. The small cracks in the system—the parts not meeting life, the HCF issues no one predicted, the growing gap between analysis and test—were treated as individual problems rather than symptoms of a systemic failure in knowledge transfer. The documentation proceeded. The recipes were written. And the stage was set for those recipes to be handed to engineers who would have no access to the kitchen where they were developed.
What the Notebooks Knew
The lab notebooks of an earlier generation of structural engineers were never just step-by-step instructions. They were records of calibrated judgment—the residue of thousands of decisions made by engineers who could see their parts manufactured, tested, and returned from service. Every empirical correction, every conservative assumption, every handwritten note in the margin represented a data point in a feedback system that spanned design, build, test, and fly.
Formalizing that knowledge into a rules-based methodology was not impossible. But it required something the directive never demanded: understanding why the rules worked. Without the “why,” the documentation effort produced procedures that looked complete but were structurally hollow. They could be followed, but they could not be adapted. They could be transferred, but they could not be trusted.
In the next post, we’ll examine what happened when those hollow procedures met real hardware—specifically, the art and science of vibration and stress analysis for turbine vanes, where the gap between documented method and engineering judgment proved most consequential.
Please add your experience in the comments; I would like to compare notes and hear other perspectives.
© 2026 Herbert Roberts, PE • Engineering Mindset Blog • engineeringmindsetblog.com

