THE BIG WHY — March 2026
When the System Kills Its Own: Four Decades of Friendly Fire and the IFF Problem That Won’t Die
Herbert Roberts, P.E. | Inventor’s Mind Blog
On the night of March 1, 2026, three U.S. Air Force F-15E Strike Eagles were flying combat missions over Kuwait in support of Operation Epic Fury — the American-led campaign against Iran. The airspace was saturated: Iranian drones, ballistic missiles, and manned aircraft were striking targets across the Persian Gulf.¹ Kuwaiti air defense batteries were active, scanning for threats. And then those batteries found one — three fast-moving radar returns that matched the profile of something hostile.
They fired. All three Strike Eagles went down.
All six aircrew ejected safely, which is the good news.² The bad news is that each of those jets costs upward of $100 million fully equipped — with the total per-aircraft cost including EPAWSS, targeting pods, and IRST reaching as high as $117 million — which means coalition air defenses destroyed more than $300 million in allied combat power without the enemy firing a shot.³ The worse news — the news that should keep every defense systems engineer awake at night — is that this has happened before. And before. And before. The same failure chain, driven by the same unresolved engineering contradiction, has killed allied personnel in every major Western military engagement since 1982.
This is not a story about incompetent operators. This is a story about a systems-level defect that four decades of investigations, reforms, and technology upgrades have failed to eliminate.
The Pattern
Falklands, 1982: HMS Sheffield
The popular account holds that HMS Sheffield’s electronic warfare systems recognized the incoming Exocet missile as a NATO-origin weapon and therefore failed to trigger defensive countermeasures. The reality, which emerged through decades of Board of Inquiry reports — some suppressed until 2017⁴ — is more complex and more damning.
Sheffield was a Type 42 destroyer operating as part of the anti-aircraft picket west of the main British task force. On the morning of May 4, two Argentine Navy Super Étendards launched AM39 Exocet missiles from roughly 20 to 30 miles out, flying at 98 feet above sea level.⁵ Argentine pilots had spent two weeks practicing attack profiles against their own Type 42 destroyers, which meant they knew the radar horizon, detection distances, and reaction times of the very ship they were about to hit.⁶ Sheffield’s radar operators had been struggling for days to distinguish between Mirage and Super Étendard aircraft on their scopes. The ship may have lacked effective Identification Friend or Foe capability or radar jamming.⁷ Despite intelligence briefings identifying the Exocet threat as credible, Sheffield’s command had assessed the threat as overrated for two consecutive days and had dismissed a previous warning as a false alarm.⁸
At the exact moment of the attack, Sheffield was transmitting on her SATCOM system, which blinded the UAA1 masthead sensor — the one piece of equipment that could detect the electronic emissions of incoming missiles and aircraft.⁹ Sister ship HMS Glasgow detected the threat at 45 nautical miles and broadcast the warning codeword “Handbrake.”¹⁰ On HMS Invincible, radar operators detected the Argentine aircraft a full 19 minutes before impact. The senior officer responsible for air defense of the entire task force classified the contact as “spurious” — because the preceding days had been plagued by false contact reports.¹¹
The Exocet struck Sheffield amidships, creating a hole roughly 1.2 by 3 meters in the starboard hull. The missile ruptured the fire main and knocked out electrical systems, which meant the crew lost both the water supply and the power needed to fight the resulting fire.¹² Twenty men died. The ship burned for days and sank under tow on May 10.
The failure chain: (a) sensor degradation from routine operations — the SATCOM transmission blinding the UAA1 at the worst possible moment, (b) an inability to discriminate threat aircraft types on radar, (c) normalized dismissal of warnings after repeated false alarms, and (d) institutional complacency about a known threat vector. No single failure was lethal. The sequence was.
Beyond the immediate tactical failures, the loss of Sheffield exposed glaring institutional gaps in Royal Navy ship survivability. The fleet lacked effective Close-In Weapons Systems, with the exception of the new Type 22 frigates armed with Sea Wolf.¹³ Sheffield’s backup weapons consisted of two manually aimed 20mm Oerlikon cannons dating from World War II. The ship lacked basic electronic jammers that could confuse missile radars.¹⁴ Fire-fighting equipment was inadequate, portable fire pumps were unreliable, and interior design features including formica panels created lethal flying shrapnel under blast conditions.¹⁵
Iraq, 2003: The Patriot Fratricide
Twenty-one years later, the weapon system had changed but the failure architecture had not.
During the opening weeks of Operation Iraqi Freedom, U.S. Army Patriot missile batteries deployed under the assumption that they would face heavy ballistic missile attack, which meant the system needed to operate with significant autonomy — the engagement timeline against a ballistic missile is measured in seconds, not minutes.¹⁶ What the batteries actually faced was something entirely different: 41,000 allied aircraft sorties in the first 30 days, against only nine ballistic missile attacks. A 4,000-to-1 friendly-to-enemy ratio — precisely the environment where misidentification becomes statistically inevitable.¹⁷
On March 23, a Patriot battery shot down a Royal Air Force Tornado GR4 returning to base in Kuwait. Both crew members — Flight Lieutenant Kevin Main and Flight Lieutenant David Williams — were killed instantly. The Tornado’s IFF system was malfunctioning. The Patriot’s automated threat classification identified the returning fighter-bomber as an Iraqi missile.¹⁸ The U.K. Ministry of Defence investigation concluded that contributing factors included the Patriot’s threat classification criteria, rules of engagement, firing doctrine, crew training, IFF procedures, and the nature of autonomous battery operation.¹⁹ The engagement happened so fast that human operators never had a meaningful opportunity to intervene.
The next day, an F-16 pilot over An Najaf detected a Patriot radar locking onto his aircraft. He fired a HARM anti-radar missile that destroyed the Patriot’s sensor dish. The Air Force called it accidental. Pilots were less diplomatic. “Those guys were locking us up on a regular basis,” one F-16 pilot said.²⁰
Ten days after the Tornado shootdown, on April 2, a Patriot battery shot down a U.S. Navy F/A-18C Hornet near Karbala, killing Lieutenant Nathan White. The investigation revealed a confirmation cascade: one Patriot battery misidentified the Hornet as an Iraqi missile, a second battery independently reached the same erroneous conclusion, and the corroborating — but equally wrong — reports gave operators at the Information Coordination Center increasing confidence that they were tracking a legitimate threat. Two missiles were authorized. No disciplinary action was taken.²¹
Allied pilots were shaken. Navy Lieutenant Commander Ron Candiloro described the fear bluntly: although he remained wary of the Iraqi SAM threat, he was actually more afraid of the Patriot batteries.²² Benjamin Lambeth, in his Iraq war monograph The Unseen War, documented the broader sentiment among allied aircrews: many pilots believed the Patriot posed a greater threat to them than any surface-to-air missile in Iraq’s inventory.²³
The Defense Science Board Task Force, reviewing the Patriot’s performance, identified the core problem with precision: the system had deployed under assumptions about the threat environment that no longer applied, and the soldiers operating it were not in a position to question what the automated sensors were telling them.²⁴ The DSB issued a warning that proved prophetic: future wars “will likely be more stressing.”²⁵
An Army Center for Lessons Learned briefing was even more direct. It found that positive electronic means of identifying airborne objects — the entire foundation of automated IFF — had been demonstrated to have “low reliability.” The briefing urged a shift toward procedural methods of identification, which rely on tactics, techniques, and predetermined safe corridors rather than electronic interrogation.²⁶
Red Sea, 2024: USS Gettysburg
The Defense Science Board’s warning about more stressing future conflicts came due 19 years later, in the Red Sea.
On December 22, 2024, the guided-missile cruiser USS Gettysburg — the air defense commander for the Harry S. Truman Carrier Strike Group — fired a Standard Missile-2 at what its crew believed was a Houthi anti-ship cruise missile. The target was an F/A-18F Super Hornet from Strike Fighter Squadron 11, which had just launched from the Truman to provide air defense against an incoming barrage of Houthi drones and cruise missiles.²⁷
The Navy’s 152-page command investigation, published in December 2025, revealed a cascading series of systemic failures.²⁸ In the weeks leading up to the incident, Link 16 — the tactical datalink connecting ships and aircraft — had been “noticeably degraded,” with regular outages. The Identification Friend or Foe system had sustained a series of failures that went essentially unaddressed by the strike group. The E-2D Hawkeye airborne early warning aircraft overhead had radar problems. The carrier and cruiser were giving aircrews conflicting information. And the Gettysburg’s SPY-1 radar coverage had been reduced shortly before the engagement because the ship’s helicopter was landing.²⁹
The pilot — callsign “Fig” — was on approach to land on the Truman when he saw the SM-2 launch from the Gettysburg. He initially thought the missile was targeting a Houthi drone. Then the missile reached apogee and changed course — toward him. “Are you seeing this?” the pilot asked his weapons systems officer. “Yeah, I’m watching it,” the WSO responded. The pilot pulled the ejection handle without a radio call. He later described the ejection as the most violent five seconds of anything he had ever experienced.³⁰
Both aviators survived. The Gettysburg’s commanding officer was relieved of command in January 2025.³¹ The investigation attributed the shootdown to a convergence of degraded equipment, inadequate training, and operational tempo that exceeded the system’s capacity to function reliably.³² The investigator recommended the report be distributed to dozens of Navy commands as a cautionary document.³³
Kuwait, 2026: The Pattern Repeats
Which brings us back to March 1, 2026, and three burning F-15Es in the Kuwaiti desert.
The specific circumstances of the Kuwait shootdown remain under investigation. But the environmental conditions are already familiar: a saturated threat environment with Iranian drones, ballistic missiles, and aircraft flooding the airspace simultaneously. Coalition air defense batteries making split-second decisions under extreme operational stress. And a critical detail — the F-15E is not equipped with Missile Warning Sensors for infrared-guided missiles, which means the crews would have received no warning if an IR-guided SAM was inbound. They were flying over friendly territory, where they would not have expected a surface-to-air threat.³⁴
A retired U.S. Air Force brigadier general called it “a classic case of the fog and friction of war,” noting that coordination with allies is difficult in congested, threatened airspace.³⁵ A former Air Force colonel observed that it would be nearly impossible for Iran to score a hit against a fighter jet at that range, confirming the friendly-fire assessment.³⁶
Four incidents. Four decades. Three different weapon-system families. Different services, different nations, different theaters. The same result.
The Engineering Failure Chain: Why It Keeps Happening
The reason this pattern persists is not operator error, inadequate training, or insufficient technology investment. The reason is an unresolved technical contradiction at the architectural level of combat identification systems — a contradiction that no amount of procedural reform can eliminate.
The Core Contradiction
In the language of TRIZ — the Theory of Inventive Problem Solving — a technical contradiction exists when improving one parameter of a system necessarily degrades another. The IFF problem contains a textbook example:
Improving parameter: Speed of engagement. Modern threats — cruise missiles, hypersonic weapons, drone swarms — compress the decision timeline from minutes to seconds. Air defense systems must engage fast or they cannot engage at all. This drives automation, autonomous engagement modes, and reduced human oversight.
Worsening parameter: Accuracy of identification. Correct identification requires time — time to interrogate IFF transponders, time to correlate radar tracks with flight plans, time to visually confirm. Every second spent identifying a target is a second the target spends closing on its objective.
As threat speed increases, decision time compresses. As decision time compresses, identification reliability degrades. This is the fundamental contradiction, and it has remained unresolved since the Exocet era. Every post-incident reform has attempted to optimize within this contradiction rather than eliminate it.
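The squeeze can be made concrete with a toy model. This is entirely my own illustration, not any fielded system's math: assume identification confidence approaches certainty exponentially with interrogation time, while the decision window is simply detection range divided by closing speed. The 30-second time constant and 40 km detection range are invented for the example.

```python
import math

def decision_window_s(range_km: float, closing_speed_m_s: float) -> float:
    """Seconds available between detection and impact."""
    return range_km * 1000.0 / closing_speed_m_s

def id_reliability(window_s: float, tau_s: float = 30.0) -> float:
    """Toy model: identification confidence approaches 1 as interrogation
    time grows; tau is an assumed 30 s time constant, for illustration."""
    return 1.0 - math.exp(-window_s / tau_s)

# Same 40 km detection range against three closing speeds:
for name, speed_m_s in [("subsonic cruise missile", 250.0),
                        ("supersonic missile", 900.0),
                        ("hypersonic threat", 2000.0)]:
    w = decision_window_s(40.0, speed_m_s)
    print(f"{name:>24}: window {w:6.1f} s, id reliability {id_reliability(w):.2f}")
```

Under these assumed numbers, the subsonic threat leaves a 160-second window and near-certain identification; the hypersonic threat leaves 20 seconds and roughly coin-flip confidence. Nothing in the model is realistic except the shape of the tradeoff, which is the point.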
The Assumption Cascade
Applying Anticipatory Failure Determination — the TRIZ-derived methodology for proactively identifying how systems can fail — reveals a sequential and self-reinforcing chain of assumptions that collapse under combat stress:
First, the system assumes sensor data is reliable. But sensors degrade under operational tempo — Sheffield’s SATCOM blinded her UAA1; the Gettysburg’s Link 16 and IFF had been failing for weeks; the E-2D’s radar was malfunctioning.
Second, the system assumes IFF interrogation produces correct results. But IFF transponders malfunction (the Tornado GR4 in 2003), IFF systems sustain unaddressed failures (the Gettysburg in 2024), and some aircraft lack relevant warning sensors entirely (the F-15E in 2026).
Third, the system assumes the threat environment matches design parameters. But combat changes the ratio — from an expected missile-heavy environment to a 4,000-to-1 friendly aircraft ratio in Iraq, or from routine patrol to simultaneous saturation attack by Iranian drones, missiles, and aircraft over Kuwait.
Fourth, operators assume automation is correct because the automation was built by engineers who understood the physics. But the automation was designed for a different scenario, and when the scenario changes, the automation’s confidence does not. Corroborating reports from multiple degraded systems create false certainty — as two Patriot batteries did when they independently misidentified Lieutenant White’s Hornet.
Fifth, each assumption failure enables the next. The chain is not merely sequential; it is reinforcing. Degraded sensors feed bad data to automated classifiers, which produce high-confidence erroneous threat assessments, which are corroborated by other degraded systems, which give human operators no reason to doubt the machine. By the time anyone recognizes the error, a missile is already in flight.
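The false-confidence mechanism in step four is easy to demonstrate numerically. In this simulation sketch (my own, with invented probabilities), two classifiers read the same degraded sensor feed. Because their errors are correlated through that shared feed, their agreement is worth far less than the "independent corroboration" it appears to be:

```python
import random

random.seed(1)
P_CORRUPT = 0.10     # assumed: shared sensor feed is degraded 10% of the time
P_ERR_BAD = 0.90     # assumed: classifier error rate when the feed is corrupt
P_ERR_GOOD = 0.02    # assumed: error rate on a clean feed
TRIALS = 200_000

both_wrong = single_wrong = 0
for _ in range(TRIALS):
    corrupt = random.random() < P_CORRUPT
    p_err = P_ERR_BAD if corrupt else P_ERR_GOOD
    a_wrong = random.random() < p_err   # battery A's classifier
    b_wrong = random.random() < p_err   # battery B's classifier, SAME feed
    single_wrong += a_wrong
    both_wrong += a_wrong and b_wrong

marginal = single_wrong / TRIALS   # one classifier's error rate (~0.108)
actual = both_wrong / TRIALS       # joint error with a shared feed (~0.081)
naive = marginal ** 2              # what "independent corroboration" assumes (~0.012)
print(f"joint error (shared feed): {actual:.3f}  vs  assumed if independent: {naive:.3f}")
```

With these invented numbers, two agreeing classifiers are wrong together about seven times as often as a naive independence assumption predicts. That gap is the false certainty that killed Lieutenant White.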
The Automation Paradox
Engineering psychologist John Hawley, who was involved in the U.S. Army’s study of the 2003 Patriot fratricide incidents, identified the deeper problem: when an automated control system has been developed because it presumably performs better than a human operator, but the operator is retained to monitor performance and intervene when the system errs, the operator is placed in an impossible position.³⁷ The system is automated because humans cannot react fast enough. When the automation errs, the human monitor cannot override fast enough either — because monitoring an automated system requires maintaining expertise that the operator never gets to practice, precisely because the system is automated. The operator is simultaneously too slow to act and too trusting to intervene.
After the 2003 fratricide incidents, commanders ordered Patriot crews to switch from automatic to manual engagement mode.³⁸ This meant a human operator had to authorize every launch. The change provided comfort to pilots, but it did not resolve the contradiction — it merely shifted the failure mode from “automation fires before a human can stop it” to “a human must decide in seconds whether to trust sensors that may be wrong.” Three lives were lost to get that change made.³⁹
The Path Forward: Why the Lowest-Tech Solution May Be the Highest-Reliability One
The reflexive answer to the IFF problem — the answer you will hear from defense contractors, program managers, and congressional briefing slides — is artificial intelligence. Feed multi-spectral sensor data into a neural network. Train it on thousands of engagement profiles. Let the AI fuse radar, infrared, electronic emissions, and behavioral analysis into a single identification confidence score in milliseconds. Problem solved.
Except it is not solved. It is relocated.
The AI Mirage
Artificial intelligence does not eliminate the speed-versus-certainty contradiction. It abstracts the contradiction upward into a computational layer that is faster, yes — but also more opaque, more brittle under adversarial conditions, and fundamentally unauditable in real time by the human operators who are nominally responsible for the engagement decision.
Consider what an AI-driven identification system actually requires. It requires sensor inputs — the same radar feeds, the same IFF interrogations, the same infrared signatures, the same datalink tracks that have been failing and degrading in every incident we have examined. The AI does not generate new information. It processes the same information faster. When the underlying sensor data is corrupt — when Link 16 has been degraded for weeks, when IFF has sustained serial failures, when the E-2D overhead has radar problems, when SATCOM is blinding your emission detector — the AI is processing garbage at machine speed. It will produce a high-confidence classification that is confidently wrong, and it will do so faster than any human can question it.
Beyond the sensor-integrity problem, AI-driven combat identification introduces three new attack surfaces that did not exist in the procedural era: (a) adversarial inputs — machine learning classifiers are vulnerable to deliberate manipulation of the input space, which means an adversary who understands the classifier’s training data can craft radar signatures or flight profiles specifically designed to cause misclassification;⁴⁰ (b) cyber penetration — a networked identification mesh that fuses data across platforms creates a single logical system that can be compromised at any node, cascading false data through the entire architecture; and (c) electromagnetic jamming — the AI depends on the same electromagnetic spectrum that every military on earth is learning to deny, degrade, and manipulate.
The contradiction is not resolved. It is made more opaque. And when the system fails — when the AI-driven identification mesh misclassifies a coalition F-15E as a hostile cruise missile in a jammed, degraded, saturated battlespace — no one will understand why in time to matter, because the decision was made inside a black box operating on fused sensor data that no human can audit at engagement speed.
This is not an argument against AI in defense applications broadly. It is an argument against the specific belief that AI resolves the IFF contradiction. It does not. It optimizes within the contradiction at higher speed, which is useful right up until the moment it is catastrophic.
The Inversion: Kill the Friendly Weapon, Not the Friendly Crew
The real path forward requires inverting the default assumption that has governed air defense doctrine since the missile age began.
Current doctrine, particularly for high-speed threats, operates on a default-engage posture: the system assumes incoming objects are hostile unless positively identified as friendly. When the system works, threats are neutralized before they reach their targets. When the system fails — when sensors are degraded, when IFF is malfunctioning, when the threat environment has shifted from the design assumptions — the default-engage posture kills friendly aircraft.
The inversion is this: it is better to destroy a friendly weapon by accident than to destroy a friendly crew by accident.
That statement requires unpacking, because it sounds like an acceptance of vulnerability. It is not. It is a recognition of cost asymmetry — the same kind of cost-benefit analysis that engineers perform in every other domain where human life intersects with automated systems.
A cruise missile that strikes a friendly position is a tragedy. But a surface-to-air missile that destroys a friendly aircraft — with crew aboard — is also a tragedy, and in the calculus of repeated incidents, it is the far more frequent one. The historical record is unambiguous: from Sheffield in 1982 through Kuwait in 2026, the dominant mode of IFF failure is not “hostile weapon slips through and kills friendlies.” The dominant mode is “friendly weapon system kills friendly personnel.” Lieutenant White. Flight Lieutenants Main and Williams. Twenty sailors on Sheffield. The six aircrew over Kuwait who survived only because ejection seats worked as designed. The pattern is not adversary weapons defeating our identification systems. The pattern is our identification systems defeating our own people.
If the default posture were inverted — if the system assumed that an unidentified track is friendly unless proven hostile through multiple independent confirmations, at least one of which is non-electronic — the immediate effect would be that some hostile weapons get closer before engagement. That is a real cost, and it would need to be mitigated through other means: hardened defenses, distributed force postures, redundant point-defense systems. But the offsetting benefit is that the dominant fratricide mode is eliminated at the architectural level, not managed at the procedural level.
A missile that you choose not to fire is recoverable. A missile that destroys a friendly aircraft is not. A hostile cruise missile that penetrates deeper into defended airspace before engagement may still be killed by layered defenses — point-defense systems, CIWS, electronic countermeasures, decoys, hardened structures. A friendly F-15E with two crew members aboard, struck by a surface-to-air missile over allied territory with no warning because the aircraft lacks infrared missile warning sensors, has no second layer of defense. The crew’s only option is ejection — if they see it coming at all.
Default-engage doctrine optimizes for the scenario where every incoming object is hostile. The historical record tells us that in coalition operations, the vast majority of objects in defended airspace are friendly. Iraq 2003 demonstrated a 4,000-to-1 friendly-to-enemy ratio.⁴¹ Even in the saturated Iranian threat environment over Kuwait in 2026, the number of friendly sorties vastly exceeded the number of hostile tracks. Default-engage optimizes for the rare case while maximizing risk to the common case. This is an engineering design error, not a fog-of-war inevitability.
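The base-rate arithmetic behind that design error is worth making explicit. Assume, generously, a classifier that is 99% accurate in both directions, operating at the documented 4,000-to-1 friendly ratio (the 99% figure is my assumption for illustration; the ratio is from the DSB report):

```python
# Base-rate arithmetic for the Iraq 2003 environment.
p_hostile = 1 / 4001          # prior: one hostile track per 4,000 friendlies
p_friendly = 1 - p_hostile

sensitivity = 0.99            # assumed P(alarm | hostile)
false_positive_rate = 0.01    # assumed P(alarm | friendly)

p_alarm = sensitivity * p_hostile + false_positive_rate * p_friendly
p_hostile_given_alarm = sensitivity * p_hostile / p_alarm

print(f"P(actually hostile | classified hostile) = {p_hostile_given_alarm:.3f}")
```

The result is about 0.024: even a 99%-accurate classifier in that environment produces "hostile" classifications that are friendly aircraft roughly 98% of the time. No downstream automation speed changes this prior; only the engagement default does.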
Procedural Solutions That Actually Work
The 2003 Army Center for Lessons Learned briefing pointed directly at this path when it found that positive electronic means of identifying airborne objects had demonstrated “low reliability” and urged a shift toward procedural methods.⁴² The insight was correct then. It remains correct now — and it gains force with every additional fratricide incident.
Separation in space. TRIZ’s Separation Principle instructs us to separate contradictory requirements in space when they cannot be reconciled within a single system. Applied to IFF: instead of asking a SAM battery to electronically distinguish friend from foe among dozens of simultaneous overhead tracks, you establish hard geographic boundaries. Friendly aircraft transit through designated corridors at designated altitudes. Surface-to-air engagement zones are physically separated from those corridors. If your jets are not transiting over your SAM batteries, your SAM batteries cannot kill them. A 1995 modeling study at White Sands Missile Range found that hostile aircraft violated “friendly” flight paths by at least 90 seconds — meaning geography alone provides a discrimination window that no electronic system has matched in reliability under combat conditions.⁴³
Separation in time. Deconflict by scheduling rather than identification. Friendly strike packages transit a corridor during a defined window. Air defense batteries are weapons-free outside that window, weapons-tight during it. This approach predates electronic IFF entirely — it is how militaries managed airspace before transponders existed — and it imposes zero dependence on any electronic system that can be jammed, spoofed, or degraded.
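Both separation principles reduce to checks that require no electronic interrogation at all. A minimal sketch follows, with corridor geometry and window times invented for illustration and the lateral bound simplified to a latitude band:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class TransitCorridor:
    """A published friendly corridor: lateral band, altitude block, time window.
    (Simplified to a latitude band for brevity; real corridors are polygons.)"""
    min_lat: float
    max_lat: float
    min_alt_ft: float
    max_alt_ft: float
    open_utc: datetime
    close_utc: datetime

    def covers(self, lat: float, alt_ft: float, now: datetime) -> bool:
        in_space = (self.min_lat <= lat <= self.max_lat
                    and self.min_alt_ft <= alt_ft <= self.max_alt_ft)
        in_time = self.open_utc <= now <= self.close_utc
        return in_space and in_time

def weapons_posture(track_lat, track_alt_ft, now, corridors):
    """Default to TIGHT (hold fire) for any track inside an active corridor."""
    if any(c.covers(track_lat, track_alt_ft, now) for c in corridors):
        return "WEAPONS TIGHT"
    return "WEAPONS FREE"

# Example: a corridor open 02:00-04:00Z in the 20,000-30,000 ft block.
corridor = TransitCorridor(29.0, 29.5, 20_000, 30_000,
                           datetime(2026, 3, 1, 2, 0, tzinfo=timezone.utc),
                           datetime(2026, 3, 1, 4, 0, tzinfo=timezone.utc))
now = datetime(2026, 3, 1, 3, 0, tzinfo=timezone.utc)
print(weapons_posture(29.2, 25_000, now, [corridor]))  # inside band, block, and window
```

Nothing here can be jammed or spoofed: the discriminator is geometry and a clock, which is precisely why the 2003 CALL briefing pointed toward procedural identification.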
Hard gates, not soft monitors. The automation paradox demonstrates that a human assigned to “monitor” an automated system will defer to it. The Patriot incidents proved this: operators watched the automated classification, saw the high-confidence threat assessment, and authorized engagement. The solution is not better monitoring. It is a hard architectural gate where the engagement sequence physically cannot proceed without an affirmative human input derived from non-automated data. Not “press cancel within 3 seconds to abort” — which is the current paradigm and which has failed repeatedly — but “the missile does not leave the rail until you provide confirmation based on information that did not originate from the engagement radar.” Voice confirmation from an AWACS controller. Correlation with a published air tasking order. A second sensor modality operated by a different service or coalition partner. The gate must require positive action to fire, not positive action to abort. The distinction is critical: one architecture defaults to safety and requires effort to engage; the other defaults to engagement and requires effort to stop. Every fratricide incident we have examined occurred under the second architecture.
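The difference between the two architectures can be written down directly. In this sketch (mine, not any fielded interface), the release decision defaults to HOLD and cannot proceed without a confirmation object that did not originate from the engagement radar; the source names are illustrative stand-ins for the examples above:

```python
from dataclasses import dataclass
from enum import Enum

class Source(Enum):
    ENGAGEMENT_RADAR = "engagement_radar"
    AWACS_VOICE = "awacs_voice"              # human controller confirmation
    AIR_TASKING_ORDER = "air_tasking_order"  # correlation with the published ATO
    PARTNER_SENSOR = "partner_sensor"        # different service/nation modality

@dataclass(frozen=True)
class Confirmation:
    source: Source
    track_id: str

class EngagementGate:
    """Hard gate: release requires positive confirmation that did NOT
    originate from the engagement radar. The default state is HOLD."""

    def authorize(self, track_id: str, confirmations: list[Confirmation]) -> str:
        independent = [c for c in confirmations
                       if c.track_id == track_id
                       and c.source is not Source.ENGAGEMENT_RADAR]
        if independent:
            return "RELEASE"
        return "HOLD"  # staying safe requires no action; firing requires effort

gate = EngagementGate()
# The engagement radar corroborating itself is not enough; the gate holds:
print(gate.authorize("T-042", [Confirmation(Source.ENGAGEMENT_RADAR, "T-042")]))  # HOLD
# An independent, non-radar confirmation opens the gate:
print(gate.authorize("T-042", [Confirmation(Source.AWACS_VOICE, "T-042")]))  # RELEASE
```

Note what the structure enforces: there is no timeout after which HOLD becomes RELEASE. The failure mode of every incident above, automation firing unless a human objected in time, is unrepresentable in this architecture.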
Operational tempo caps. The Gettysburg investigation documented a ship operating beyond what its degraded systems could sustain — weeks of Link 16 outages, serial IFF failures, radar problems overhead, and crew training deficiencies, all compounding under high-tempo combat operations.⁴⁴ A non-technical solution is doctrinal: when multiple identification systems are simultaneously degraded, the correct response is to reduce sortie rates, widen engagement corridors, elevate engagement authority to higher command echelons, or temporarily pull air defense batteries offline. Continuing to operate in a mode where the probability of fratricide exceeds the probability of successful hostile engagement is not “accepting risk.” It is engineering malpractice.
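That doctrinal rule is simple enough to state as executable logic. The thresholds and system names below are my own illustration, not established doctrine:

```python
def recommended_posture(degraded_systems: set[str]) -> str:
    """Map simultaneously degraded identification systems to a doctrinal
    response. Thresholds and system names are illustrative only."""
    n = len(degraded_systems)
    if n == 0:
        return "normal operations"
    if n == 1:
        return "widen corridors; brief aircrews on the degraded system"
    if n == 2:
        return "elevate engagement authority; reduce sortie rate"
    return "pull affected batteries offline until systems are restored"

# The Gettysburg's documented state before the shootdown: Link 16 degraded,
# serial IFF failures unaddressed, E-2D radar problems overhead.
print(recommended_posture({"link16", "iff", "e2d_radar"}))
```

The value of writing the rule down is not the rule itself but the forcing function: a crew that must enumerate its degraded systems before each watch cannot quietly normalize three simultaneous failures the way the Truman strike group did.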
The Cost That Hides Behind “Fog of War”
Every one of these procedural solutions costs something. Airspace corridors reduce tactical flexibility. Time deconfliction slows operational tempo. Default-deny doctrine accepts the possibility that a hostile weapon penetrates deeper before engagement. Hard gates add seconds to the kill chain. Operational tempo caps mean fewer sorties per hour.
These costs are real, and they are the reason that commanders — under pressure to maximize sortie rates, maintain air superiority, and demonstrate decisive action — resist procedural constraints. The objection is always the same: we cannot afford to slow down.
But three F-15Es burning in the Kuwaiti desert is also a cost — north of $300 million in airframes alone, plus the combat power those aircraft would have delivered for the remainder of the campaign, plus the operational disruption of a fratricide investigation during active hostilities, plus the strategic damage to coalition trust when an ally shoots down your jets. Lieutenant Nathan White’s family bears a cost that no appropriations bill can quantify. Flight Lieutenants Main and Williams left behind families who were told their husbands were killed by the same alliance they served.
The current architecture does not avoid costs. It converts them from reduced flexibility and slower tempo into destroyed aircraft, dead aircrew, shattered coalition confidence, and hundred-million-dollar debris fields — and then labels the result “fog of war” as though it were an atmospheric condition rather than a design failure.
The Question That Matters
The fog of war is real. But the engineer’s obligation is not to accept fog as inevitable. It is to design systems that remain safe when visibility drops to zero. Every other safety-critical domain — aviation, nuclear power, chemical processing, medical devices — has internalized this principle. The combat identification architecture has not.
The Big Why this month is not “why did Kuwait shoot down allied jets?”
The Big Why is this: Why, after four decades and hundreds of investigations, does the same failure chain produce the same result?
The answer: because no amount of procedural reform resolves a technical contradiction, and no amount of computational sophistication resolves a contradiction that it merely relocates into a faster, more opaque decision layer. The contradiction must be eliminated — through architectural inversion of the engagement default, through physical separation in space and time, through hard gates that demand non-electronic confirmation before a weapon leaves the rail.
The Defense Science Board’s 2005 warning is now 21 years old. They told us future wars would be more stressing. They were right about 2024. They were right about 2026. The clock is still running. The question is whether the next investigation will read like all the others — or whether someone, somewhere, will finally stop optimizing within the contradiction and start eliminating it.
Until then, the system will continue to kill its own.
Thank you for following me on this journey. I would be very interested in reading about your experiences, opinions and feedback, good and bad. Please leave a comment, especially if you are outside the US or work in a different engineering field.
Notes
1. U.S. Central Command statement, March 2, 2026. Confirmed by Defense Secretary Pete Hegseth and Joint Chiefs Chairman Gen. Dan Caine in Pentagon briefing.
2. CENTCOM statement: “All six aircrew ejected safely, have been safely recovered, and are in stable condition.”
3. Cost analysis based on F-15EX production lot data reported by Air & Space Forces Magazine, October 2023. Lot costs ranged from $80.5M (Lot 1, excluding EPAWSS) to $97M (Lot 3), with full combat-ready configuration reaching approximately $117M per aircraft.
4. HMS Sheffield Board of Inquiry report: summary released 2006; full uncensored version released 2017. The Guardian reported that information had been suppressed from the 2006 summary, attributed to concurrent British Government efforts to sell remaining Type 42 destroyers.
5. Wikipedia, “HMS Sheffield (D80),” citing Board of Inquiry records and Ministry of Defence reports.
6. Ibid. Argentine pilots had spent two weeks practicing attack profiles against their own Type 42 destroyers, including the ARA Hércules.
7. Ibid. “Sheffield’s radar operators had been experiencing difficulty distinguishing Mirage and Super Étendard aircraft, and the destroyer may have lacked effective IFF or radar jamming.”
8. Ibid. “Despite intelligence briefings that identified an Exocet attack by Super Étendards as possible, Sheffield had assessed the Exocet threat as overrated for the previous two days.”
9. “In Perspective: The Loss of HMS Sheffield,” Navy Lookout, February 2025.
10. Wikipedia, “HMS Sheffield (D80).” Glasgow detected the aircraft at 45 nautical miles and communicated the warning codeword “Handbrake.”
11. Navy Lookout, op. cit. Radar operators on HMS Invincible detected the Argentine aircraft 19 minutes before impact; the senior officer classified the contact as “spurious” after days of false reports.
12. Wikipedia, “HMS Sheffield (D80).” The missile created a hole roughly 1.2 by 3 meters, ruptured the fire main, and damaged electrical systems.
13. Navy Lookout, op. cit. The Royal Navy fleet lacked effective CIWS except for Type 22 frigates with Sea Wolf.
14. Ibid. Sheffield lacked basic electronic jammers.
15. Ibid. References to formica panels, unreliable Rover portable fire pumps, inadequate fire-fighting equipment, and insufficient attention to smoke dangers in ventilation design.
16. “Understanding the Errors Introduced by Military AI Applications,” Brookings Institution, November 2022.
17. Ibid., citing 2005 Defense Science Board Task Force report: 41,000 aircraft sorties versus nine ballistic missile attacks in the first 30 days, a 4,000-to-1 friendly-to-enemy ratio.
18. “Friendly-Fire Incidents Are Nothing New in Modern Air Warfare,” The War Zone, March 2, 2026, citing U.K. MOD investigation findings.
19. Ibid. U.K. MOD investigation concluded contributing factors included threat classification criteria, rules of engagement, firing doctrine, crew training, IFF procedures, and autonomous battery operation.
20. “Why a U.S. Air Force Pilot Intentionally Fired on a Patriot Missile Battery,” The National Interest, November 2024, citing Benjamin Lambeth, The Unseen War.
21. The War Zone, op. cit. Also: “Aviation History: The 2003 Patriot Missile Friendly Fire Incident That Downed a US Navy F/A-18 in Iraq,” SOFREP, February 2025.
22. “Blue-On-Blue! The Story of the U.S. Navy F/A-18 That Was Shot Down by a U.S. Army PAC-3 Patriot Missile Battery During OIF,” The Aviation Geek Club, April 2020, quoting Lt Cdr Ron Candiloro.
Benjamin Lambeth, The Unseen War, cited in The National Interest, op. cit.
Brookings Institution, op. cit. The system deployed under assumptions about the threat environment that no longer applied, and operators were not positioned to question automated assessments.
Ibid., citing 2005 Defense Science Board Task Force report.
“Army Describes Patriot Missile Friendly Fire Problems,” Government Executive, July 2003, citing Army Center for Lessons Learned briefing.
“Navy Jet Shot Down in ‘Friendly Fire’ Incident Was Responding...,” Stars and Stripes, December 23, 2024.
“Five Minutes of Chaos: How the Navy Shot Down Its Own Jet,” Military Times, December 8, 2025.
Ibid. Link 16 degradation, IFF failures, E-2D radar problems, conflicting information between carrier and cruiser, and SPY-1 radar reduction during helicopter operations all documented in the 152-page command investigation.
Ibid. Pilot’s account, including dialogue with WSO and ejection narrative, from accounts shared with squadron mates and republished in the investigation.
“Commander of Navy Ship Involved in F/A-18 Friendly Fire Incident Turns Over Command,” Military.com, February 4, 2025.
Military Times, op. cit.
Ibid. Investigator Hakimzadeh recommended distribution to dozens of Navy units and commands.
“Three U.S. F-15E Strike Eagles Shot Down in Apparent Friendly Fire Incident in Kuwait,” The Aviationist, March 2, 2026.
Ibid., quoting retired USAF Brig Gen Marty France.
“US F-15 Friendly Fire Incident in Kuwait, All Pilots Safe,” Military.com, March 2, 2026, quoting retired USAF Col Jeffrey Fischer.
Brookings Institution, op. cit., citing John Hawley, 2017 report on human factors in automated air defense systems.
The Aviation Geek Club, op. cit. The Army changed Patriot engagement parameters from automatic to manual mode following the fratricide incidents.
Ibid. Lt Cdr Candiloro: the change cost three lives to implement.
For an overview of adversarial machine learning in military contexts, see Brookings Institution, op. cit.
Defense Science Board Task Force on Patriot System Performance, 2005, cited in Brookings Institution, op. cit.
Government Executive, op. cit.
“Feature: The Patriot’s Fratricide Record,” UPI, April 24, 2003, citing 1995 modeling study at White Sands Missile Range.
Military Times, op. cit.