One of my former minions just finished his Operations Research master's degree out at the Naval Postgraduate School, and I thought I'd share his work for critique/comment. Note the use of wargaming/game theory in this thesis. The author has recently been assigned to the J9 at Joint Forces Command.

The work asks some fundamental questions and attempts to provide one way to answer them:

"Information superiority is a leading concept driving joint future force development. Proponents view it as a force multiplier; given forces of equal size and ability, the one that possesses information superiority can achieve superior results to that of the other. Research suggests that this is, in fact, the case. Yet, what are the risks associated with units relying on information superiority? How can we measure the degree of superiority that an information advantage provides? How much is enough? In a world constrained by budgets, these are important questions to be answered so that a proper balance can be made between equipment meant to destroy our adversaries and equipment that facilitates information superiority."

My question is fairly fundamental--can one ever ASSUME information superiority? This is one of the arguments John Keegan tackles in his recent work, INTELLIGENCE IN WAR--he would argue that you can't. Any attempt to economize on more traditional concepts of mass on the assumption that information superiority will make up the difference should therefore be taken with extreme care. The logic that works for trading bomb/missile payload size against accuracy (thanks to supporting sensor and guidance packages) may not work for larger aggregates of forces, although it's tempting to think it will.



Replies to This Discussion

Gee - this sounds like a question out of the Joint Warfare School....
Hi Eric,

I sat down and read the thesis last night in its entirety. Let me start by saying that the paper is very thorough in its handling of the experiment, and the experimental setup is quite intriguing. I'd like to congratulate your former minion on some fine work.

The following thoughts and critiques are not necessarily a commentary on the paper itself, but on certain methodologies in general.

Hard to pick a place to start, but I'll start with the statistics. I didn't see in the paper what statistical method was used to generate the coefficients and significance values, and I have some concern there. Because of the deterministic results we get from expected-value systems, the outputs from each model run should cluster into fairly distinct bell curves. And although the results should be non-linear across the various control variables, the dependent variable should nevertheless trend strongly. As a result, it comes as no surprise that the statistical results are uniformly significant and explain almost all of the variation. How could they not? The system is closed, with very little stochastic process going on. Teasing out any real variation would require a fair amount of data manipulation, and I didn't see any of that applied in the paper. That said, I think those manipulations would likely strengthen the result, owing to the nature of the data--which in turn still wouldn't change the fact that these are statistics performed on data generated by a closed, deterministic system.

The paper goes on to point out that the stochastic processes that were introduced, namely time delays and information accuracy, seemed to have little effect on the outcomes. I think this is due largely to just how closed the system really is. Given that there are only three attack paths, what I lack in information I can make up for by spreading my forces to minimize Red's gain. Because of this, the variance introduced by making bad decisions on poor information is quite restrained. If the Blue player could defend only two of the three approaches (owing to doctrine, terrain, or any other justification you care to use), you might see a bigger spread between successes and failures, which would give that particular variable more meaning and thus greater impact on outcomes.
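
A toy minimax sketch makes the hedging point concrete (my own simplification, not the thesis's game): Red attacks one of three paths, and Red's gain on a path is 1 minus whatever defense Blue put there. An uninformed Blue who simply spreads evenly caps Red's gain; a Blue constrained to cover only two paths leaves a worst case wide open.

```python
def red_best_gain(defense):
    """Red attacks the least-defended path; gain is 1 minus the defense there."""
    return max(1.0 - d for d in defense)

spread_all = [1.0 / 3] * 3      # Blue hedges evenly across all three paths
spread_two = [0.5, 0.5, 0.0]    # doctrine/terrain limits Blue to two paths

print(round(red_best_gain(spread_all), 3))  # 0.667 -- Red is capped everywhere
print(red_best_gain(spread_two))            # 1.0 -- the undefended path is open
```

With only three paths and the option to hedge everywhere, poor information costs Blue very little, which is exactly why the stochastic terms wash out.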

Another finding I found a bit obvious was that sensor effectiveness topped out at 5 (of the nine possible approaches). Knowing where Red isn't is just as valuable as knowing where he is: if I can cover 4.5 of those 9 points, then, barring the stochastic processes, I get maximum effectiveness. I didn't really need a complex model to tell me that, but at least it puts some rigor behind it; the point can be proven easily enough with some basic mathematics. So that particular conclusion falls into the "no kidding" category. Any variation caused by limited information accuracy and time delays is largely washed away by force size. There's no real way to make a catastrophic mistake in this system.
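
One way to put numbers on "knowing where Red isn't" (my own entropy framing, not the thesis's measure): if Red uses one of nine approaches uniformly and sensors watch k of them, then with probability k/9 you spot him outright, and otherwise the all-clear reports narrow him to the 9 - k unwatched approaches.

```python
# Expected residual uncertainty (bits) about Red's approach, given
# perfect sensors on k of n approaches and a uniform prior.
from math import log2

def residual_bits(k, n=9):
    if k >= n - 1:                 # negative information alone pinpoints Red
        return 0.0
    return (n - k) / n * log2(n - k)

for k in range(10):
    print(k, round(residual_bits(k), 3))
```

The starting uncertainty is log2(9), about 3.17 bits; by k = 5 the expected residual is already under one bit, so most of the value of sensing is realized around half coverage--which is the "no kidding" result the model rediscovered.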

One thing I've noticed in my travels and discussions around military academia is that game theory is very popular for analyzing these sorts of questions. I think this is a serious limitation in the way these problems are approached. Game theory is a great way to model decision processes, but it's not well suited to modeling dynamic environments. I would contend that applying game-theoretic methods to something as dynamic as information gathering on the battlefield can provide only limited insight. It is the structure of the game that determines the outcomes; as a result, game theory can tell you pretty much anything you want it to, which means it's often not telling you the truth. It can vet a likely decision process given a specific set of circumstances, but it can't help you vet whether the assumptions behind those circumstances are of any real use. Thus the conceptual model of the game in the paper is very useful in that it lets us see the effectiveness of intelligence vs. force size given a specific environment structure, but it tells us very little about what the problem might look like in other structures, especially more dynamic ones where Red is actively trying to manipulate the information space.

All that said, the paper is quite right to say that the problem is very difficult to address. It shows us that some of our intuition about information vs. size may be correct, but it certainly doesn't prove it.

If I were going to suggest an avenue of approach, I'd couple the game-theoretic work in this paper with agent-based modeling, where individual agents use some game-theoretic structure to manage their own decisions, but those decisions are then tempered by environmental factors. The environment could in turn be made dynamic, reacting to the forces overall through a system of dynamic equations. The Blue player (or Red) could then interact with this environment at both the agent level and the environment level. All that, however, is a tall order.
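
A bare-bones sketch of that hybrid (every name and dynamic here is an illustrative assumption, not a real framework): each agent plays a best response to a noisy belief about the enemy, tempered by a shared terrain state, while the terrain itself degrades in reaction to traffic--so the environment feeds back into later decisions.

```python
import random

random.seed(1)
N_PATHS = 3

class Agent:
    def __init__(self, side):
        self.side = side
        self.path = random.randrange(N_PATHS)

    def decide(self, belief, terrain):
        """Game-theoretic best response to a noisy belief, tempered by terrain."""
        if self.side == "blue":   # defend where the enemy seems thickest
            self.path = max(range(N_PATHS), key=lambda p: belief[p] - terrain[p])
        else:                     # attack where defenses seem thinnest
            self.path = min(range(N_PATHS), key=lambda p: belief[p] + terrain[p])

def step(blue, red, terrain):
    red_density = [sum(a.path == p for a in red) for p in range(N_PATHS)]
    blue_density = [sum(a.path == p for a in blue) for p in range(N_PATHS)]
    noisy = lambda dens: [x + random.random() for x in dens]  # imperfect info
    for a in blue:
        a.decide(noisy(red_density), terrain)
    for a in red:
        a.decide(noisy(blue_density), terrain)
    for p in range(N_PATHS):      # dynamic environment: traffic degrades a path
        terrain[p] += 0.1 * (red_density[p] + blue_density[p])

terrain = [0.0] * N_PATHS
blue = [Agent("blue") for _ in range(6)]
red = [Agent("red") for _ in range(6)]
for _ in range(5):
    step(blue, red, terrain)
print([round(t, 1) for t in terrain])
```

Even at this toy scale, the outcome depends on the interaction of agent decisions with an evolving environment rather than on the structure of a fixed game matrix, which is the property the fuller hybrid would need.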

Anyway, hope this is of some use. Please pass on my regards.

