Translating the Air Force's "Cyber Vision 2025"

A de-jargoned report shows USAF hoping to have a working version of Skynet in ten years

Edited by Michael Morisy

Early last year, we wrote about getting a copy of the Air Force’s “Cyber Vision 2025,” which outlines where it wanted the military’s tech to be in the next ten years. Back then, we joked about how hard it was to understand. Today, however, we have Tim Cooper, our Official Jargon Translator, take a crack at it, and in plain English, it’s pretty terrifying.

We’ve embedded the doc below, and you can follow along with Tim’s notes.

Pages 1-9

Form letters. Well, not so much form letters as Air Force Secretary Eric Fanning sending copies of the report to the committee members. There’s still a little jargon in there, so:

  • “Autonomy” in this case means “things that operate without human interference.” When you think of a modern drone like a Predator, those are mostly remote-piloted. They are not autonomous. Super-modern things like the Global Hawk and the Predator B and C are capable of flying patterns and between waypoints on their own, usually requiring pilots only to land and fire weapons. These are ‘semi-autonomous.’
  • “Decision support technologies” are what it says on the tin: whatever helps people make decisions. In context it’ll probably be things like various metrics, simulations that analyze possible decisions, and the like.
  • “Connectivity and dissemination technologies” are again what it says on the tin: communications methods like satellite links and encryption.
  • “Processing and exploitation technologies” is an interesting one. Processing is easy: it’s CPU stuff. “Exploitation” in this context regards teasing out information from data, which are two different things. It’s sort of like submarine movies where the sonar guy is looking at a sound spectrum and saying “that’s an enemy submarine.” The spectrum is data and the presence of the submarine is information. Exploitation technology is for automatically finding the submarine, as it were, from the spectrum.
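If you want that sonar example in concrete terms, here’s a minimal Python sketch (every name, number, and threshold in it is invented for illustration): the FFT spectrum is the data; the claim that there’s a strong tone at 300 Hz is the information the “exploitation” step produces automatically.

```python
import numpy as np

def exploit_spectrum(signal, sample_rate, threshold=8.0):
    """Turn raw 'data' (a time series) into 'information' (detected tones).

    A crude stand-in for the sonar example: the spectrum itself is data;
    the statement 'there is a strong tone at X Hz' is information.
    """
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    noise_floor = np.median(spectrum)
    # Any bin that towers over the noise floor is treated as a contact.
    hits = freqs[spectrum > threshold * noise_floor]
    return list(hits)

# Synthetic 'ocean': white noise plus a 300 Hz tone (our 'submarine').
rng = np.random.default_rng(0)
t = np.arange(0, 1.0, 1.0 / 4096)
signal = rng.normal(0, 1, t.size) + 5.0 * np.sin(2 * np.pi * 300 * t)
contacts = exploit_spectrum(signal, 4096)
```

The point is the hand-off: a human never looks at the 2,049 spectrum bins; they get a one-line report that there’s a contact at 300 Hz.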

The rest is the usual “this is what we’re gonna do” song and dance.

Page 12

Executive Summary: if they wrote this right, a lot of my de-jargoning would be unnecessary.

  • “High-payoff” is short form for “high return on investment.” This is what they think will maintain the American military technological edge, which is often called an “offset.”
  • This also defines their “near-, mid-, and far-term” concepts. Near-term is stuff that could be seen in the next Third World intervention. Mid-term is budgeting. Long-term is another ‘offset’ that will have strategic effect over decades.
  • “Core technical competency” is fancy-talk for “specialties.”
  • “Cyber science and technology” means “computers and networks, with an emphasis on the Internet.”
  • “Core function master plans” means “what we plan on being really good at and how we’re going to be really good at them.” An example of a USAF core function would be Prompt Global Strike (“hit anywhere on the planet quickly”); a core function master plan around this would be modernization of the ICBM missile force and/or development and deployment of the B-21 Long-Range Strike Bomber.
  • “Formal and informal methods” means “traditional plodding research and shit some guy just came up with.” The latter is more valuable than people might think, though it’s also where military research into weird stuff comes from.
  • “Transition pathways” means “upgrade cycles and methods.”
  • “Agile” in this context always means “flexible,” not fast. That’s why it’s usually paired with “rapid.”

Page 13

Now we’re getting into some meat.

  • “Security environment” means “threats and how we can deal with them.”
  • The use of “superiority” is telling. It does mean what it says: our technologies and capabilities are superior. If they’re superior to the point of being almost uncontestable, the buzzword is “dominance.” Reading between the lines, there’s a distinct concern that our superiority is at risk, which is why they’re aiming to “sustain superiority” and not something like ‘achieve dominance.’
  • Air superiority is conventional air combat, traditionally having to do with fighters being able to control airspace. Bombers are now a part of air superiority because they take out ground-based air defenses. Bombers hitting non-air-defense targets (like, say, tanks) are not part of air superiority.
  • Space superiority can be summed up as “GPS, satellite-based communications, and spy satellites; how we protect those; and how we can keep others from having things as good as we do.” We’re against anti-satellite missile technology because the resulting debris clouds can deny entire orbits to everyone. Whenever we (or the Chinese) announce some experiment about close-formation spaceflight or on-orbit repair, this doubles as demonstrating the offensive capability of flying close to a satellite and disabling it, which can be as simple as smashing an antenna or just turning it around the wrong way.
  • Cyber superiority is about hacking, counter-hacking, and anti-hacking defenses like encryption and firewalls. Plain and simple.
  • “Quick-reaction capability support” means “stuff we can turn around fast to do our current jobs better.” An example would be better firewall software.
  • “Revolutionary technologies that address far-term warfighting needs” would be something like “fully autonomous drones that can perform reconnaissance and surveillance at least as accurately as human experts, reducing political cost because we’re not risking pilots and reducing actual cost because we’re not paying for drone operators’ PTSD therapy.” Yes, drone pilots are prone to PTSD.
  • “Capability-based” means that they want to orient their research money to achieving specific ends rather than broadly exploring technological avenues. In sci-fi terms, they want to unlock Hunter-Killers rather than just have learning compyootahs.
  • “Support the current fight while advancing breakthrough S&T for tomorrow’s dominant warfighting capabilities” translates to “Make how we fight now better while still keeping in mind how we expect to fight in the future.” An example would be better automated identification technology now to simplify the job of drone pilots and intelligence analysts when it comes to figuring out what those few grainy pixels over there are, and making the response to the identification autonomous so as to either reduce that pilot/analyst manpower or to level them up (i.e. they’re managing/overriding a bunch of drones that have the autonomy and initiative to mostly look at what piques their interest).
  • “Balanced, integrated” just means both that they don’t want to dump all their money into one thing and that all the projects have to talk to each other. This is actually important; consider the 50s, when we were researching ICBMs and Project Pluto atomic-powered cruise missiles and atomic-powered bombers and whatnot all at the same time, with no real consideration for each other.
  • “Retain and shape the critical competencies needed” is a workforce and infrastructure statement. They don’t want to sell off all the wind tunnels, say, like NASA did in the 90s, and they need to make sure all the old scientists teach new scientists their tribal knowledge … while not losing new scientists to private industry. At the same time, electromechanical inertial guidance system designers aren’t a thing they need so much of, so if those guys can be replaced with computer scientists …
  • “Reducing cyber vulnerabilities” is pretty obvious: “make our things less likely to be hacked.” “Emphasizing mission assurance” is a bit more interesting: “mission assurance” is the confidence that the task can be attempted and achieved. When it comes to the tech development world, “mission assurance” is all about making sure that the new gizmo actually works. The Pentagon Wars exaggerated things for laughs, but it had more than a grain of truth in it - this is saying that the AFRL is working really, really hard to avoid the excesses of the 80s and 90s (and the Joint Strike Fighter) in making expensive doodads that don’t work like they’re supposed to.
  • This is emphasized in “assure and empower the mission.” That “assure” comes first is meaningful: reliability comes before increased potency.
  • “Develop next-generation cyber warriors” is a cooler way to say “train better hackers, coders, and security experts.”

Page 14

  • “Enhance agility and resilience” means that USAF offensive and defensive cyber capabilities have to both be more flexible - probably via a bigger toolkit - and more damage-tolerant. “Resilience” is a meaningful word here, suggesting ‘durable’ but with an emphasis on being able to spring back from attack, damage, or disadvantage. A guy who never gets sick is durable; one who gets over sicknesses quickly is resilient. This bullet point suggests that the AFRL is making the assumption that defenses will occasionally fail, so it’s extremely important that they pop back up or have redundancies that keep them from losing the farm, so to speak.
  • “Invent foundations of trust.” That’s a weird and interesting one. It suggests that no one trusts current USAF cyber technologies and capabilities, and they have to build that trust from the ground up. But who are they building that trust with? Private industry? End users (‘warfighters’)? The civil government? The general population? All of the above?
  • “firm, trustable foundation in cyberspace with the ability to incorporate assured mission capabilities that are more agile and resilient.” Translation: “we’ll have people, hardware, and software that can reliably do the job they’re expected to in any number of ways and won’t buckle when the first thing goes wrong.”
  • “Leap-ahead” is how one gets an ‘offset’: getting better than adversaries.
  • “Art-of-the-possible” is a meaningful shift from “state of the art.” State-of-the-art suggests technologies already in play. Art-of-the-possible is more sporty (which makes sense, seeing how they’re supposed to be developing new technology).
  • “Force multiplying military capabilities” is exactly what it sounds like: something that literally multiplies the quantitative effect of whatever military force is applied. A classic example is precision-guided munitions: in WW2, to destroy a given ball-bearing factory, you’d send a formation of at least several dozen bombers, each carrying a full load of bombs, to maybe hit a single industrial production plant enough to knock it out. With PGMs, you send one fighter and drop one 2000-lb JDAM (maybe two if you want to make sure) and the job’s done. Bombers got their force multiplied from “sorties per target” to “targets per sortie.”
  • The AFRL’s four groups (that they’ve talked about at length) include scientists performing general research to open up new ideas and technology, engineers finding ways to apply the technology, and developers getting the technology ready for prime-time.
  • These groups work with the major commands (individual large units responsible for overall missions; for example, Air Combat Command is responsible for the stereotypical flying of fighters and bombers) to prioritize their work. The “core function lead integrator” is probably just a fancy title for a systems engineer responsible for making sure that the MAJCOMs and the scientists are speaking the same language and aiming for the same things. Hence ensuring “the investment strategy employed is aligned to the critical operational needs” and so forth.
  • “Achieve dominance in cyberspace” is a new twist from the above. We feel our superiority is at risk, but we want to become dominant.
  • “Embedded, tactical components” would be something like a headset or a sensor pod: the stereotypical dingus.
  • “Enterprise systems” are bigger things: servers, networks, that sort of thing.
  • “Mission-aware routing of information based on user, priorities and mission objectives” is all about transferring data through a network while keeping in mind who’s using it and why. An example is a “kill-chain” scenario: remote-piloted drone spots some guys in a pickup truck; this information has to be relayed to intelligence specialists who determine if the guys are a threat or a previously designated target, tacticians who know the local terrain and the likelihood of collateral damage, strategists who are aware of local sensibilities and the probable consequences of atomizing these guys, mission planners (or airborne control) who can divert assets to atomizing these guys, and finally the trigger-puller who does the atomization. Everyone in this “kill-chain” has needs for information but not the same information and they have information to offer but not all the same information. What this mission-aware routing is all about is making sure that everyone gets what they need but only what they need, with the mission in mind: the trigger-puller gets coordinates and maybe a visual of the truck to bomb; the analysts get pictures of the people to check against records; the strategists and mission planners/controllers have overall force distributions that no one else really needs to know, and so forth. If the network is aware of the mission itself, then it can prevent too much information leading to too much action. In this same kill-chain example, let’s assume that the mission is surveillance and as such no atomization is authorized, even if one of the guys in the truck is Monster McBadguy, the most hated war criminal east of Peoria. If the network knows this, then it can cut the mission planners, tacticians, strategists, and trigger-pullers out of the loop; the info just goes to the intel guys to collate.
  • “Cyber agility by autonomously changing the internet addresses of assets” is a cool trick: let’s say someone decides to DDoS some Department of Defense website. The DoD’s servers detect this and then prevent the attack by automatically - without human input - changing the address of the site. The DDoS continues to ping a location that no longer exists. While this would prevent external authorized users from accessing the site since local Internet DNS nodes presumably wouldn’t be updated in real-time (since that would defeat the purpose of evading the attack), the targeted servers wouldn’t be overloaded or suffer other adverse effects like unauthorized entry or data loss.
  • “Delivery of cyber collection/analysis capabilities to programs of record” means ‘giving programs that already exist the functions they need to do their job when it comes to gathering data and figuring out what it means.’ Reading between the lines, this is probably coordination with the NSA or military intelligence.
  • “Scalable cyber mission framework” is talking architecture. I’d imagine we’re talking about tools that could, say, provide security for a wireless GPS device or a DoD server on the defensive side or, say, hack an enemy website or an air defense radar network on the offensive side.
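That address-hopping trick is simple enough to sketch. Here’s a toy Python simulation of the idea - all of the class names, the address pool, and the flood threshold are invented for illustration; a real moving-target defense also has to juggle DNS, routing, and live sessions, which this ignores:

```python
import secrets

class MovingTargetServer:
    """Toy sketch of 'cyber agility by autonomously changing addresses'."""

    FLOOD_THRESHOLD = 100  # requests per window before we assume a flood

    def __init__(self):
        self.address = self._new_address()
        self.hits = 0

    def _new_address(self):
        # Stand-in for pulling a fresh IP from an unused pool;
        # guaranteed to differ from the current one.
        old = getattr(self, "address", None)
        while True:
            addr = "10.0.%d.%d" % (secrets.randbelow(256), secrets.randbelow(256))
            if addr != old:
                return addr

    def handle(self, dest):
        """True if the request reached the live server at `dest`."""
        if dest != self.address:
            return False  # traffic aimed at an address that no longer exists
        self.hits += 1
        if self.hits > self.FLOOD_THRESHOLD:
            # Flood detected: hop to a new address, no human in the loop.
            self.address = self._new_address()
            self.hits = 0
        return True

# A DDoS bot hammers whatever address it first observed.
server = MovingTargetServer()
observed = server.address
delivered = sum(server.handle(observed) for _ in range(500))
# Only the requests before (and including) the hop get through;
# the other 399 ping a location that no longer exists.
```

Note the trade-off from the bullet above falls straight out of the code: legitimate users who only know `observed` get cut off too, until whatever out-of-band mechanism tells them the new address.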

Page 15

  • “Resilient architectures capable of adapting and utilizing yadda yadda yadda” is about having systems that can monitor their own traffic and, ‘knowing’ what they’re supposed to be for, automatically identify unauthorized or nonstandard traffic and figure out what kind of traffic it is (is it a hacker? some guy dorking about on SIPRNET?).
  • “Foundational mathematical basis for mission assurance and trust” sheds some light on that ‘trust’ bit above. The “trust” is about hacking. This is talking about developing mathematically unhackable systems. DARPA did something like that with a helicopter last year, IIRC; they gave the red-team hackers root access to the flight control computers and they weren’t able to make the helicopter do anything it wasn’t supposed to.
  • “Cross-Service framework for information operations” is just a network the entire military can use. Standardization stuff.
  • “Next generation Air Force infrastructure based on a provably correct design” probably means ‘making a new network architecture that is mathematically proven to be impossible to hack.’
  • “Autonomic response and recovery with self-healing code and modular, composable security and network architectures” is saying that a given network and software on it should be able to detect code injection attacks that would force it to act differently and automatically, without human influence, restore themselves and fix any problems that momentary out-of-nominal operation caused.
  • “Integrated/synchronized execution of effects in cyber, air, and space” is all about expanded combined arms theory: imagine, if you will, an airstrike against Country Red. In space, assets are positioned to observe Red’s forces and readiness while simultaneously neutralizing Red’s space assets, making them blind. In cyber, Red’s air defense and other command-and-control networks are jammed/shut down/rerouted/put into a loop/whatever, rendering Red blind and unable to respond in any way other than locally. In air, a stealth bomber drops a bomb on a thing. If done completely correctly, the only evidence that anything was ever going on would be after the operation is over and Country Red discovers a crater wherever that thing used to be (since, done perfectly, air defenses and command and control should be spoofed and not jammed; jamming lets them know something’s up).
  • Paragraph #2 is all about making new systems that will let people make better decisions faster. Key non-intuitive jargon is “unified planning and execution,” which means that the people who do the planning are also responsible for making it happen, and “continuous assessment enabled by mission-focused autonomy capabilities”: this last comes from intelligence assessment working in cycles - send up a recon sortie, get data, analyze it, make an assessment, act - but with long-endurance autonomous drones being a thing now, they want to be able to constantly update the intelligence information. Those few pixels there go from being a smudge to an object, a truck, a truck with a missile launcher on it, all in real time from the same orbiting sortie.
  • “Automated indications and warning tool capable of using current intelligence yadda yadda” is precrime stuff: ‘we want a computerized dingus that, based on all the intel we have now, can tell us what the enemy is going to do.’ This is somewhat dangerous territory; the Soviets were convinced in the late Cold War that we were going to hit them with an atomic first strike and so they came up with Project RYaN, a signals and human intelligence operation watching various parts of our command-and-control structure with the basic logic of “if this stuff happens, they’re about to hit us” - the idea being that if they knew we were going to hit them, they could hit us first. As it turns out, the Able Archer ‘83 exercises tripped off pretty much all of the RYaN alarms, which is why the Soviets increased their readiness to just-short-of-war. The only thing that convinced them we weren’t going to hit them was that, when they escalated, the NATO general in charge of forces in West Germany intentionally kept his forces, which were not part of the exercise, stood down.
  • This line suggests that we basically want a RYaN-type system, but made of computers. On the plus side, RYaN was trigger-happy because it was made of KGB agents absolutely certain that we were going to nuke them. On the minus side, Robo-RYaN’s intelligence bias would be dependent on whoever programs it.
  • “Optimize use … for small-scale and limited complexity engagements” is about things like blowing up Houthi rebels in Yemen. It’s not politically complex and our involvement consists of the occasional Predator atomizing the occasional SUV (or wedding party). How can we not be wasteful (be less wasteful?) in these sorts of operations? We want software and procedures that will help us reduce waste.
  • “Integrate non-kinetic assets/effects on par with kinetic options” is another interesting one. “Kinetic” in military terms means ‘violent’ in a classical shooty-pokey-stabby sense. Atomic missiles, bullets, and assegai spears are all ‘kinetic’ assets. “Non-kinetic assets/effects” would be “hacking someone’s network so they can’t see or talk.” To make the non-kinetic on par with the kinetic, both have to neutralize assets: you could neutralize an air-defense radar both by, say, hitting it with a bomb or by hacking its control motors to burn it out. You can neutralize centralized air defenses by using a stealth bomber to drop a bunker-buster on the central command post (the old model) or by hacking the network and putting the central command post’s screens on a constant “nothing happening” loop.
  • “Integrate” is the odd word out here, since there’s no “with” to go with it. If the non-kinetic were being integrated with the kinetic, that’d make more sense. How this reads, though, is that non-kinetic assets that are just as effective as kinetic ones already exist, they’re just not in a readily usable state - they need to be integrated with the force as it exists.
  • “Identification of command-and-control information system anomalies in seconds and develop mitigation plans in minutes” is the counter to the above: they want software that can tell when the command-and-control system has been hacked and doing weird things, which means someone’s doing something so we’d better have a way to fix it.
  • “Demonstrate self-organization among multiple autonomous information gatherers etc etc” is really interesting. They want to show - not deploy, just show - that with a bunch of “autonomous information gatherers” - say, automated robot drones - they can quickly answer tactical questions and figure out what else they need to find out (“identify gaps”) with the massive amount of data said drones produce. Modern intelligence is usually drowning in data; they want to show that a system, probably as automated as possible, can simultaneously winnow it down and point out where even more data is actually needed. This means that rather than staring at everything all the time, such a system could take a quick look at everything and then go ‘this stuff is extraneous, but this is important and we need to look at it harder.’
  • “Anticipate adversary intentions weeks ahead of time via automated data extraction, manipulation, and reason capabilities to enable timely mitigation of evolving threats” is more Robo-RYaN stuff, with the addition that in this case it’s not just a data collator, it’s a data collector as well. What they’re talking about here is an automatic robot cyberspy that’s also its own analyst. Now that’s the cyberpunk future I was promised.
  • “Proactive, resilient C2 service-based architectures able to forecast instabilities and recommend corrective courses of action” is the next step from that anomaly-detecting system above. This describes a network that’s able to predict, if not attacks, at least things that might make it go wonky like solar weather or ionospheric interference or regular communication channel use cycles or… whatever. It’d depend on what exactly the system is. At the same time, it can predict that, say, satellite communications are going to get clogged so users should switch to terrestrial station-to-station relays or the like.
  • “Plan, integrate, synchronize, and optimize global and theater air, cyber, and space assets to achieve maximal effects in dynamic, contested environments” is a doozy, and the next step from that optimizer mentioned above. Were the word ‘autonomous’ included here, it’d be Skynet. Instead, it means that they want software and procedures to assist in making worldwide and local forces work together as efficiently as possible. “Dynamic, contested environments” may as well mean ‘on a war footing,’ since they want it to work when the situation is always changing and when it’s “contested,” which means we’re fighting for it against an adversary roughly equivalent to ourselves.
  • “Adaptive interfaces that increase situational awareness across all domains” is very broad. These would be user interfaces that change based on the user and the situation so the user can better understand what’s going on.
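The “anomalies in seconds” idea above boils down to something you can sketch in a few lines: keep a rolling baseline of some per-second metric (say, C2 messages routed) and flag samples that sit far outside it. Everything here - the class name, the window size, the four-sigma cutoff - is an invented illustration, not how any actual AFRL system works:

```python
from collections import deque
from statistics import mean, stdev

class AnomalyWatch:
    """Flag metric samples that deviate wildly from a rolling baseline."""

    def __init__(self, window=60, min_baseline=10, sigma=4.0):
        self.history = deque(maxlen=window)  # last `window` samples
        self.min_baseline = min_baseline     # samples needed before judging
        self.sigma = sigma                   # how far out counts as weird

    def observe(self, value):
        """Record one sample; return True if it looks anomalous."""
        flagged = False
        if len(self.history) >= self.min_baseline:
            mu, sd = mean(self.history), stdev(self.history)
            flagged = sd > 0 and abs(value - mu) > self.sigma * sd
        self.history.append(value)
        return flagged

watch = AnomalyWatch()
steady = [100, 102, 98, 101, 99, 100, 103, 97, 100, 101, 99, 102]
baseline_flags = [watch.observe(v) for v in steady]  # ordinary traffic
spike_flag = watch.observe(10_000)  # e.g. an injected message burst
```

The “mitigation plans in minutes” half of the bullet is the hard part this sketch punts on: deciding what the flag means and what to do about it.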

Page 16

  • “Autonomous teaming for understanding new data and tasks, decreasing uncertainty and increasing the speed of analysis and decision support” is getting closer to Skynet. “Autonomous teaming” is talking about networks, platforms, drones, dinguses, whatever automatically and without human interference talking to each other in order to collate their data and figure out what to do next - it’s the next stage past all that swarming research that you can find YouTube videos on. Say you’ve got a satellite, a long-duration drone, and some robotic cyberspies; with this they can find each other, share their information - the satellite sees movement in a region that looks like fighting positions being dug and directs the drone to look closer, the drone sees Mr McBadGuy in his truck driving around between the positions and confirms them as positions, and the cyberspies use signals intelligence to hack into McBadGuy’s dark-web networks to find that he’s planning on doing something. Some financial hacking later, front groups associated with McBadGuy have bought sarin precursors in bulk and human intelligence in a different country reports that an entirely different group has just sold off some old Katyushas capable of carrying as much sarin as McBadGuy could be making. Conclusion reported to intelligence analysts: McBadGuy is planning on firing Katyusha rockets filled with sarin nerve gas from these fighting positions and, given his position, he’s going to do it soon, and here’s the chain of evidence. (And this is mid-term. That’s either sporty or scary. Maybe both.)
  • “Accurately and rapidly etc etc” is the next step of pre-crime. “Adversary hostile options” are basically the tactical choices available to the enemy given the situation; taken together, they want a system that, given the known intelligence, can determine what actions an enemy will probably take so they can be preemptively countered.
  • “Conduct agile operations by recognizing, seizing, and assessing opportunities in dynamically changing environments in real time” is that countering; they want systems that can make the military more flexible and more responsive even when things change. The “real time” is important for the same reasons mentioned above; intelligence usually has a cycle time. Really, they want military operations to work more like real-time strategy computer games.
  • “Comprehensive in-depth understanding etc etc” is what it says on the tin: they want to know how cause and effect are linked in the military sphere. Believe it or not, nowadays it’s still a matter of tribal knowledge. Warfare is still, tactically and strategically, an art; military doctrine is the state of that art. With cause and effect known, they want ‘military science’ to become a reality.
  • “Autonomous execution and reconfiguration of C2 information systems in contested environments” isn’t as Skynet as it sounds; it means that it wants command and control networks to set themselves up automatically and readjust themselves during combat when nodes are neutralized. Think of it as a flexible tactical Internet.
  • “Self-adaptive and self-configuring for increased resilience and mission assurance” just emphasizes this; these combat networks need to be damage-tolerant and reliable, and do so by rerouting and reconfiguring themselves as necessary.
  • “The translation of sensory data into actionable information” means ‘turning what we see into information we can act on.’ In human terms, what you see with your eyes is shapes and colors and motion; the optic centers of your brain interpret it into actionable information of cars and stop signs and children chasing balls into the street.
  • The next sentence takes some dissection: “command, control, communications, computers, intelligence, surveillance, and reconnaissance (C4ISR)” is a catch-all military term that’s supposed to cover the entire information back-end of combat, from acquiring information (the ISR part) to planning, ordering, and coordination (the C4 part).
  • As such, “assuring tailored dissemination from the [C4ISR] enterprise to the tactical edge” means ‘making sure the people and machines that kill people and break things (“the tactical edge”) get only the information they need to do their jobs (“tailored dissemination”).’
  • “Interdependent physical and logical layered networking” means networks that work together and are organized both by space-and-time and by mission requirements.
  • “Anti-access/area denial environments” (or A2AD) is the buzz-phrase that means ‘places where we can’t go indiscriminately because other people have put up systems to keep us out.’ Most of the Middle East and Africa, for example, are not A2AD. China, central Iran, Russia, et al are because they have loads of air defense infrastructure. Being able to get into A2AD was the entire point of research into stealth.
  • “Low probability of intercept/low probability of detection communications” means what it sounds like; “intercept” means ‘listening’ and “detection” means ‘hearing,’ with the associated connotations.
  • “Hybrid radio frequency/optical data links to provide reliable and resilient communications” is all about damage tolerance again. If you use radio, your frequencies can be jammed. If you use “optical data links” (think ‘communication lasers’ or, less fancy, those message lights ships at sea use), someone can throw smoke in the way. If you use both, that means the enemy also has to try and jam both, which makes their job more difficult.
  • “Cross-security domain” means ‘between different systems with different security protocols’. They want different systems to talk to each other and share information via VOIP. They’re probably specifically looking for cross-system voice communications.
  • “Pod-based tactical information management” is interesting. “Pod-based” is a special sort of modular; this is the sort of thing you’d sling under an airplane’s wing in lieu of more bombs or missiles or whatever. They want a plug-and-play pod they can stick on legacy aircraft that will aid in “tactical information management,” which is about networking. Think of it as a strap-on cell tower and you wouldn’t be far from wrong, I bet.
  • “Mission-assured operations in a cloud environment” means, most probably, ‘moving communication systems onto distributed networks.’ This only really makes sense given the C2 stuff above; yes, the cloud is just somebody else’s computer, but it’s a lot of somebody else’s computers so if one gets blown up you don’t lose everything.
  • “Automated information sharing of full motion video across security boundaries” is the same as that VOIP stuff above, but for video too. (Due to a lot of history, the Army, Navy, and Air Force all do things differently. This has led to a lot of trouble in trying to get their independently developed systems talking to each other.)
  • “Mission-prioritized, on-demand managed information objects” means ‘instantly-generated reports that immediately apply to the mission.’ An everyday example would be, say, the ‘mileage remaining’ readouts that can be requested with the push of a button on most modern-day cars.
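The hybrid RF/optical logic is worth making concrete, because the payoff is pure redundancy math: the adversary has to deny both channels at once to cut the link. A toy Python model (the class and channel names are hypothetical stand-ins for real transceivers):

```python
class HybridLink:
    """Toy model of a hybrid RF/optical data link with automatic fallback."""

    def __init__(self):
        # True = channel currently usable; tried in this order.
        self.channels = {"rf": True, "optical": True}

    def jam(self, name):
        self.channels[name] = False

    def restore(self, name):
        self.channels[name] = True

    def send(self, message):
        """Deliver over the first working channel, or None if both are denied."""
        for name, usable in self.channels.items():
            if usable:
                return (name, message)
        return None

link = HybridLink()
normal = link.send("sitrep")    # goes out over RF
link.jam("rf")
fallback = link.send("sitrep")  # automatically falls back to optical
link.jam("optical")
severed = link.send("sitrep")   # only now is the link actually cut
```

If jamming the radio and blinding the laser are independent problems for the enemy, the chance of both succeeding at once is the product of two already-hard probabilities, which is the whole argument for the hybrid link.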

Page 17

Almost there!

  • Global interference-tolerant, spectrum-efficient, and agile networks including new, uncontested spectrum frontiers” means that they want the flexible and damage-tolerant networks of the future to use only small portions of bandwidth that no one uses, preferably on bits of the spectrum no one’s even heard about yet, and they have to work when the entire planet is being jammed. If I want to put my tinfoil hat on, ‘global interference’ sounds a lot like the results of, say, a high-altitude EMP weapon detonation or similar ionospheric interference.
  • Space-qualified communications at higher bands” means, simply, ‘satellites that use laser links rather than high-frequency radio like they do now.’
  • Multi-level virtualization platforms supporting over 30 security domains” is interesting. “Virtualization platform” means ‘dashboard’ or, say, ‘real-time strategy map’ to me, and given the “30 security domains” bit, “multi-level” is almost certainly multi-level security: one system juggling data at different classification levels at once. If any of you have ever played Supreme Commander, an RTS that let you fluidly zoom between individual units and entire landmasses, it sounds like they want that, and they want it to be able to transfer information between all of the various security protocols we have now.
  • Mission responsive information management services within dynamic resource constraints” means that they want their network to be able to change what they offer based on what the mission is and be able to do it despite the fact that nodes may be appearing and disappearing all the time (due to jamming, destruction, reinforcement, whatever).
  • Processing and exploitation” means ‘crunching numbers and then using the results for useful ends.’
  • migrate from ‘sensing’ to ‘information’” means that they want to move from collecting a bunch of data to actually knowing what that data means. As I may have mentioned already, modern intelligence analysts are flooded with data from all sorts of sensors. Most of it is probably extraneous.
  • Information exploitation capabilities” just means ‘stuff what can get useful information out of data.’
  • ‘Platforms’ to ‘capabilities’” means that the military is moving from having specific dinguses that are expected to do things–say, F-16s that are supposed to be light fighter-bombers that get sent on specific missions–to spreading missions across multiple dinguses–say, a target gets identified and the first thing with a bomb that’s within range is used to smash it.
  • Performance-based analysis” means that stuff is getting measured based on results.
  • Modern radar signal and signature detection and exploitation” basically means ‘seeing and identifying stealthy things.’
  • Layered analysis” is having analysts come to conclusions based on multiple levels and kinds of data. “Activity-based intelligence” means concentrating intel resources on things that are actually happening, rather than the panopticon SEE EVERYTHING strategy that we’ve been trying for a while now.
  • Forensic analysis” is basically ‘determining the legitimacy of evidence.’ In this case, it’s about separating signal from noise.
  • Adaptive signal characterization for advanced spectrum radars” means ‘being better able to interpret radar results based on environmental conditions.’
  • Automatic fusion of ISR data with cyber and text sources” means ‘taking what we get from recon platforms and matching it with uploaded network data.’ Using our McBadGuy example, if a drone sees his face and he’s identified, military Google immediately brings up everything they know about McBadGuy - just like in the movies.
  • Embedded real-time processing for onboard exploitation” means that they want their recon platforms (e.g. drones) to have the computing power to interpret their own data, rather than sending it all back for analysis on the ground. Think ‘onboard facial recognition software’ - the drone can tell that who it’s looking at is McBadGuy without having to send the image to the cloud.
  • Extremely low power autonomous pattern recognition” means ‘we want systems that see trends without the use of a supercomputer.’
  • Petascale” means ‘on the scale of quadrillions’ - think petabytes of data or petaflops of computing. “Cognitive functions for embedded ISR autonomy” means they want drones to think* with onboard systems and thus be able to operate independently. (Not ‘think’ in terms of Cartesian cogito ergo sum, ‘think’ in terms of ‘here’s what I got, what do I do with it?’ Think intelligence equivalent to small mammals or lizards.)
  • Electronic intelligence” or ELINT is signals information; “adaptive/cognitive radar countermeasures” suggests that this ELINT is supposed to be used so a given doohickey can modify itself, probably to become more stealthy (or, in reverse, modify how it uses its radar to better see something that’s trying to be stealthy).
  • Autonomous operations reasoning on uncertain, incomplete data” means ‘we want things that can intuit and take risks despite gaps in what they know.’ The next bullet point, having to do with A2AD (anti-access/area denial), suggests that they want drones that can operate independently of ground control in places adversaries desperately want to keep us out of.
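The data-versus-information thread running through these bullets - processing, exploitation, fusion - can be sketched in a few lines. Everything here (the threshold, the dossier, McBadGuy himself) is invented for illustration:

```python
# Hypothetical sketch of "processing and exploitation" plus "fusion":
# raw sensor readings (data) get reduced to a detection (information),
# which then gets matched against other sources. All values invented.

DOSSIER = {"McBadGuy": "Known associate of McHenchman; last seen in Badguyistan."}

def exploit(samples: list, threshold: float = 0.8) -> bool:
    """Processing: crunch the raw numbers. Exploitation: decide whether
    anything in them rises to the level of an actual detection."""
    return max(samples) >= threshold

def fuse(detected: bool, identity: str) -> str:
    """Fusion: a detection on its own is just a blip; matching it against
    text sources (the dossier) turns it into usable intelligence."""
    if detected and identity in DOSSIER:
        return f"{identity}: {DOSSIER[identity]}"
    return "No actionable intelligence."

sensor_data = [0.1, 0.2, 0.95, 0.3]  # one spike above the noise floor
print(fuse(exploit(sensor_data), "McBadGuy"))
```

Doing the `exploit` step onboard the drone, and the `fuse` step without a human in the loop, is more or less what the “embedded real-time processing” and “automatic fusion” bullets are asking for.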

Page 18

  • The leveraging of resources creates cross-collaboration across the government” means ‘we use what we have to get people to work together.’
  • technical competencies” means ‘what we’re good at.’
  • Cyber” generally means ‘hacking.’

The rest of this page basically translates to ‘the Air Force figures out what it needs to do, then figures out what it has to research in order to make things to meet those needs.’

Page 19

The first half of this page is all about acquisitions. It’s worth noting that rapid cyber acquisition (RCA) is looser than normal government acquisitions policies, which makes it simultaneously faster and less prone to oversight.

  • Innovation and integration of critical interim cyber capabilities” means ‘jury-rigging from what we have on hand already.’
  • Network attack and cyber exploit capabilities” means hacking and similar (e.g. denial of service).

Page 20

  • Payback commitment” - if the government, especially the military, pays for your degree, you have to work for it for a set amount of time. An 88% retention rate after the commitment is pretty impressive.

Page 21

This page is about how the Air Force trains current members in cyberwar.

  • Computer science-cyber warfare” - the kind of degree you could only get at a service academy. It’s probably a BS in Computer Science with a specialization in cyber warfare.
  • Cyberspace operators” probably means ‘network administrators’ for the most part. Could also mean ‘hackers.’

Page 22 - 23

Doesn’t appear to have much in the way of overwhelming jargon. It’s about how training works, where it happens, and where they do research.

Page 24

A distribution list and a copyright notice. It is (functionally) in the public domain and unclassified.


Image via AirForce.com