No Man in the Machine: Has the Time Come for Artificial Intelligence Assessment?

by Amir Oren, November 2021
Illustration. Photo credit: Mike MacKenzie/Flickr

Deadline. This column must be completed by midnight. There is a problem, however. I must first get home and watch an exciting soccer match in the English Premier League before I sit down to write. It is rush hour, though, and the roads are jammed. Guessing at the fastest route may or may not pay off; the obvious solution is a navigation and live-traffic application such as Waze. The more people depend on driving for their livelihood, the more they rely on such apps rather than second-guessing them or trying to beat the odds.


Once I am safely on the couch, the match quickly turns into endless disputes. A goal. A penalty. An off-side. Several decisions are challenged. The referee reconsiders. He consults, as he now must, the Video Assistant Referee (VAR). The human eye and the traditional judgment call have again given way to technology, especially in fast-paced and partly obscured action.

And now, as the deadline looms for writing this column . . . 

Waze and VAR are just two of many improvements to the everyday tasks of data mining and decision making. Mankind has intelligently farmed out these jobs to computerized systems using artificial intelligence (AI). Humans are employing non-human tools (computers, sensors, networks, robots) to accomplish super-human missions. This is most evident in austere environments such as space, Mars, the ocean floor, and enemy territory. Drones of various sizes and uses are edging out manned aircraft. Why train, pay, and risk humans in the cockpit—and have them land after several hours—when the same results, or better ones, can be achieved by remote control?

In the ancient art of espionage, cyber penetration from remote, neon-lit rooms has largely displaced the adventurous infiltration of an individual into the target's inner circle. While not totally supplanting the spy (and his case officer), it is nevertheless more tempting—professionally, politically, and diplomatically—to collect from afar. After all, a captured spy can be tortured, executed, or used in a prisoner exchange, incurring political costs.

Applying AI to data collection is thus relatively straightforward. But what about research and assessment—taking raw data and refining it into digestible intelligence? Then comes production—editing, publishing, and distributing the finished product to its consumers. Finally, the work of intelligence agencies comes full circle through tasking—giving the collectors prioritized requests—which in turn determines how they will invest their resources, based on the assessments gleaned from what they collected earlier.

In real life, this cycle is much too neat to be applicable. Moreover, there is no separate laboratory compartment for intelligence, where it is kept pure and unsoiled. Rather, it is inevitable, and if done ethically even highly valuable, for intelligence to interact with both strategy and tactics; when it does not, the cost can be high. For example, the Israeli Air Force, basking in the glory of its victories in 1967 and of its continuous dogfighting dominance thereafter, missed the ominous significance of Egypt's ability (with Soviet assistance) to build a surface-to-air missile belt and fight the IAF to a draw in the War of Attrition along the Suez Canal in 1969–1970. The IAF paid a heavy price for this oversight again in October 1973.


One lesson learned was the need to fuse intelligence, operations, and command and control, as demonstrated by Operation Mole Cricket 19 against Syria in June 1982. The collaboration between the IDF’s most high-tech-oriented services, the Air Force and Intelligence, grew even tighter over the years, whereas intelligence support for the ground forces lagged behind. 

The Second Lebanon War in 2006 was a wake-up call in this respect, and in the years since, measurable progress has been noted. Reforms under former Directorate of Military Intelligence Chief Aviv Kochavi (currently the IDF’s chief of general staff) and his successors broke down barriers within the Directorate of Military Intelligence (DMI) as well as between it and the operational stakeholders. The DMI established an all-source architecture, fusing data from Unit 8200 (Israel’s equivalent of the NSA and Cyber Command) with HUMINT (including the civilian Israel Security Agency, charged with Palestinian affairs), GEOINT, VISINT, and open sources. This flow, in turn, is linked to the General Staff’s Operations Division—the military’s nerve center—as well as to the Air Force, Navy, territorial commands, and frontline divisions. 

In both the Northern Command, facing Syria and Lebanon, and the Southern Command, watching Gaza, a new intelligence-based industry was set up: the so-called "target factories." These commands not only painstakingly collect and analyze intelligence but also match thousands of potential targets with compatible fire units—aircraft, manned and unmanned; tanks; artillery; missile boats—and then train and simulate their missions. In real time, of course, the scheme may go awry: targets keep moving, are destroyed, or pop up unannounced; hence the need to speed up the sensor-to-shooter cycle.

Thus, when Operation Guardian of the Walls in Gaza ended, the DMI revealed one of the secrets behind its success in hitting Hamas's extensive tunnel network: AI was apparently, for the first time, embedded in the data-to-destruction continuum. There are simply too many incoming pieces of information and outgoing orders and authorizations for the human mind to digest and prioritize in the minutes and seconds before they become useless. Only with AI can the "target factories" operate at what is termed in the US "the speed of relevance."
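To illustrate the prioritization problem in the abstract (and only in the abstract; the sketch below is a toy, not a description of any actual IDF or DMI system), one can score perishable reports by their estimated value discounted by how little time they have left, and surface the most urgent first. The report names, values, and shelf lives here are invented for the example.

```python
import heapq
import time

# Toy illustration only: perishable reports are scored by estimated value
# discounted by remaining shelf life, so the most urgent surface first.
# Names, values, and expiry times are invented for the example.

def urgency(value: float, expires_at: float, now: float) -> float:
    remaining = max(expires_at - now, 0.0)
    return value / (1.0 + remaining)  # less time left -> higher urgency

now = time.time()
reports = [
    ("report A", 5.0, now + 600),   # moderately valuable, ten minutes left
    ("report B", 8.0, now + 30),    # very valuable, about to expire
    ("report C", 2.0, now + 3600),  # low value, can wait an hour
]

# heapq is a min-heap, so negate the score to pop the most urgent first.
queue = [(-urgency(value, expires_at, now), name) for name, value, expires_at in reports]
heapq.heapify(queue)

while queue:
    neg_score, name = heapq.heappop(queue)
    print(f"{name}: urgency {-neg_score:.3f}")
```

The point of the toy is only that a machine can re-rank such a queue continuously, at a tempo no human staff could match.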

There is no inoculation against intelligence mistakes, however. Even with AI, mistakes will inherently be made by humans. If a pattern is fed into the system, errors of interpretation are still possible and natural—until the deviation sets off an alarm. Even when based on a review of AI-generated options and recommendations, Go/No Go decisions are still matters of human policy. While these emerging systems and procedures are obviously priceless resources available to those who need to answer “how,” “when,” and “where”—questions regarding the conduct of battles—the fundamental question remains “whether” to fight. On this ultimate question, the AI algorithms may fall short.

In a notional game-theory exercise, one can construct a set of indicators that activates an alarm—or even a war warning. It is a sort of smoke alarm, or a tripwire that has been stepped on. Surely, such an alarm should trigger automatic, reflexive reactions, leaving the decision maker no choice; but such stark situations are atypical. When the weather forecaster on the evening news projects a "low probability" of rain—the term famously used by the DMI in 1973, when it misread Anwar Sadat's inclination to wage war—the forecaster may even translate "low" into a percentage: 30%. That ends her part of the process. The responsibility now lies on the viewer's shoulders. What should the viewer do with this assessment?

Take an umbrella, one is tempted to shout, even if there is a 70% chance of its staying dry and folded. Compared to the risk of singing in the rain, it is a no-brainer, rather than a no-rainer. Yet some “umbrellas”—such as a massive reserve call-up—are neither that cheap nor that useful. 
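To make that asymmetry concrete, here is a minimal sketch of the forecaster's handoff rendered as an expected-cost rule. All the figures are invented for illustration; a real mobilization decision is never reducible to two parameters.

```python
# Minimal sketch of the "umbrella" logic as an expected-cost comparison.
# All figures are hypothetical illustrations, not data from the column.

def should_act(p_event: float, cost_of_precaution: float, cost_if_unprepared: float) -> bool:
    """Act (take the umbrella, call up the reserves) when the expected loss
    from being caught unprepared exceeds the certain cost of the precaution."""
    return p_event * cost_if_unprepared > cost_of_precaution

# The weather case: a cheap umbrella against a soaking. Even at 30%, act.
print(should_act(p_event=0.30, cost_of_precaution=1, cost_if_unprepared=20))   # True

# The strategic case: a massive reserve call-up is itself very costly,
# so the same 30% forecast no longer yields an automatic answer.
print(should_act(p_event=0.30, cost_of_precaution=15, cost_if_unprepared=40))  # False
```

The arithmetic is trivial; what is not trivial is putting numbers on the costs, which is exactly where the machine hands the problem back to the decision maker.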

Israeli artillery forces during the 1973 war.

The cadence of deter-detect-defeat rhymes nicely (also in Hebrew: harta'ah, hatra'ah, hakhra'ah), but the detection, or early warning, mechanism was never automatic. Back in the 1950s, Ben-Gurion, by then a seasoned politician, explained to fellow ministers why he was deviating from his own doctrine of "early warning": even as Jordan was being goaded by Egypt into threatening an invasion of vulnerable Israel, he argued that he could not act on the assumption that hostilities were imminent. A reserve call-up would be very costly and could turn into a contest of attrition, bleeding Israel dry without a shot being fired.

He was right at the time; but in 1973 the same chain of reasoning left Israel ill-prepared for Sadat’s decision to go to war (and undo Ben-Gurion’s doctrine piece by piece—the Egyptian president even wondered aloud, in a mid-war speech, what Israel’s old and dying former leader would have done). Egypt and Syria shattered the concept of deterrence, avoided detection, and sought—less successfully—to deflect the IDF from the goal of scoring a decisive outcome. The logic that proved right in the 1950s proved wrong in 1973. 

No algorithm, then or now, would have been compelling enough to override human judgement. After all, Sadat’s threat to go to war was no secret. The entire Israeli political and military leadership was aware that without some diplomatic progress, war could come—but not, they reasoned, when the Egyptian Armed Forces still lacked some hardware (planes, missiles), and not on the eve of Knesset elections. In 1975, perhaps, but not in the first week of October 1973.

Some five months earlier, a similar debate, based on similar intelligence reports, had unsettled Prime Minister Golda Meir, Defense Minister Moshe Dayan, and IDF Chief of Staff David "Dado" Elazar. In contrast, Eli Zeira, then director of the DMI, boldly and correctly predicted that Sadat would find some pretext to abort his own war. Zeira's prediction remained unchanged in October, but Sadat's decision did not. Sadat outsmarted the best minds in the Israel of 1973; could today's leadership, helped by AI at the strategic level, do better?

The jury—consisting of humans, not robots—is still out, because there is no way of telling a future Golda what would be her best option. An intelligence assessment may be based on science, but it is still an art, as in artful—not artificial—intelligence. Can AI still be elevated from a command technology to a cabinet tool? Perhaps it can, by upending the process and making the AI product the default option, which the leadership will be urged to adopt, unless convincing counterarguments prevail. War is never inevitable. It is a human endeavor, and the assessment of warlike trends is too important to be left to machines.


Amir Oren
Columnist
Amir Oren has been covering national security, intelligence, and foreign affairs as a combat correspondent and commentator for decades. He is a regular lecturer at defense colleges and intelligence and diplomacy fora in Israel, Canada, the EU, and NATO. @Rimanero