WSC 2005

WSC 2005 Final Abstracts


Military Applications Track


Monday 10:30 AM - 12:00 PM
Military Keynote

Chair: Raymond Hill (Wright State University)

New Paradigms and New Challenges
Paul K. Davis (RAND)

Abstract:
This paper lays out provocative assertions about major challenges for the modeling and simulation community. One relates to building M&S for the purpose of assisting the search for strategies that are flexible, adaptive, and robust despite uncertainty. A key aspect of this search is exploratory analysis, coupled with selective zoom. These, in turn, require multiresolution modeling with sound models (albeit, with uncertain data). But sound models must be adaptive models, since humans are adaptive. And rigorous analysis with adaptive models, such as those involving agents, requires new methods and attitudes, as well as new tools.

Monday 1:30 PM - 3:00 PM
Data Farming Agent Models Related to Military Questions

Chair: Gary Horne (MITRE Corp)

Marine Corps Applications of Data Farming
Adam J. Forsythe, Gary E. Horne, and Stephen C. Upton (Referentia Systems Incorporated)

Abstract:
Project Albert is a modeling and simulation initiative of the United States Marine Corps that combines the rapid prototyping of agent-based distillations with the exploratory power of data farming to rapidly generate insight into military questions (Fry 2002). Data farming focuses on the complete landscape of possible system responses, rather than attempting to pinpoint an answer. This “big picture” solution landscape is an invaluable aid to the decision maker in light of the complex nature of the modern battlespace. And while there is no such thing as an optimal decision in a system where the enemy has a vote, data farming allows the decision maker to more fully understand the landscape of possibilities and thereby make an informed decision. The goal of data farming is that decision makers will no longer be surprised by surprise. This paper outlines some data farming explorations conducted over the past few years.

Data Farming: Discovering Surprise
Gary E. Horne (Referentia Systems Inc.) and Theodore E. Meyer (The MITRE Corporation)

Abstract:
Data Farming is a methodology and capability that makes use of high performance computing to run models many times. This capability gives modelers and their clients the enhanced ability to discover trends and outliers in results, do sensitivity studies, verify and validate over extended ranges of input parameters, and consider modeling and analyzing non-linear phenomena with characteristics that cannot be precisely defined. As high performance computing, in the form of distributed computing capabilities and commodity node systems, becomes more pervasive and cost effective, Data Farming can become more available to modelers. In this paper the authors summarize Data Farming and the processes and data architecture of Data Farming systems that make high performance computing readily available to modelers.
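
The core data-farming loop is, in essence, a large sweep over the model's input landscape with many replications per design point. Below is a minimal illustrative sketch of that idea in Python; the model function, parameter names, and ranges are placeholders, not anything from the paper.

    # Illustrative data-farming sweep: run a (placeholder) stochastic model over the
    # full grid of input parameters and keep every outcome for later landscape analysis.
    import itertools
    import random

    def combat_model(sensor_range, num_agents, seed):
        """Stand-in for one run of an agent-based distillation."""
        rng = random.Random(seed)
        return rng.gauss(0.1 * sensor_range + 0.05 * num_agents, 1.0)

    def data_farm(param_grid, replications=30):
        keys = sorted(param_grid)
        landscape = []
        for values in itertools.product(*(param_grid[k] for k in keys)):
            point = dict(zip(keys, values))
            outcomes = [combat_model(seed=r, **point) for r in range(replications)]
            landscape.append((point, outcomes))   # keep the whole distribution, not a single answer
        return landscape

    results = data_farm({"sensor_range": range(5, 30, 5), "num_agents": range(10, 60, 10)})

Trends and outliers are then read off the retained landscape rather than from a single optimized point.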

Simulation Environment to Assess Technology Insertion Impact and Optimized Manning
Niraj Srivastava and Frank Pietryka (The Sevaan Group LLC), Gary E. Horne (Referentia Systems Inc) and Mark Theroff (NSWC Crane)

Abstract:
The reduction in life-cycle costs for Naval vessels is critical for operating a cost-efficient and robust Navy. Computer-based simulations are an effective tool for human system integration optimization, as well as for studying the risks associated with complex interactions between crew and systems. The proposed modular simulation environment empowers analysts to choose and integrate the best combination of agent, discrete event, and physics-based simulations to address questions of manning. The environment embraces advances in complexity theory for simulating non-linear systems, knowledge discovery for data analysis, and distributed computing for the execution environment.

Using Agent Models and Data Farming to Explore Network Centric Operations
Henrik Friman (Swedish National Defense College) and Gary E. Horne (Referentia Systems Incorporated)

Abstract:
Network Centric Operations are difficult to quantify in many respects with models or other methods. Data Farming is a methodology and capability that makes use of high performance computing to run models many times. In the case of agent-based models that are relatively small, many runs can be performed in a short period of time. This capability gives modelers and their clients the enhanced ability to discover trends and outliers in results in a variety of areas. In this paper the authors discuss some notional efforts to begin to explore questions in the area of network centric operations using the agent model MANA and Data Farming. By observing the network behaviors and the output for traditional and Information Age warfare we have created comparisons that illustrate when networked forces outfight non-networked forces.

Monday 3:30 PM - 5:00 PM
Issues in Human Performance Modeling for Military Systems

Chair: Janet Miller (Air Force Research Laboratory)

The Pairwise Escape-g Metric: A Measure of Air Combat Maneuvering Performance
Antoinette M. Portrey (Lockheed Martin), Brian Schreiber (S&D Statistical Consulting Services) and Winston Bennett, Jr. (AFRL/HEAS)

Abstract:
The Air Force Research Laboratory, Warfighter Readiness Research Division, is continuously researching tools to measure performance of knowledge and skills from an individual level to the Command and Control (C2) level, within both high fidelity distributed simulation environments and live training environments. Using the Performance Effectiveness Tracking System (PETS), we ran preliminary testing of a metric called Pairwise Escape-G that uses a concept called the Theoretical Instantaneous Probability of Weapon Intercept (TIPWI). TIPWI takes into account the current geometry of one aircraft against another for each given weapon (i.e., the physics-based envelope parameters) and is the weapon’s probability of threat intercept at any instant during an engagement. This paper will describe the initial application of the Escape-G metric within the Distributed Mission Operations Testbed (four high-fidelity F-16 simulators, one Airborne Warning and Control System console, and an Instructor Operator Station), preliminary outcomes, and suggested applications for this metric.
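
As a purely illustrative sketch of the kind of quantity TIPWI represents, the toy function below maps a pairwise engagement geometry (range and aspect angle) onto an instantaneous intercept probability through a notional weapon envelope. The envelope shape, limits, and numbers are invented for illustration; the published metric uses the actual physics-based envelope parameters of each weapon.

    # Toy instantaneous intercept probability from pairwise geometry.
    # NOT the published TIPWI functional form; envelope shape and limits are hypothetical.
    import math

    def intercept_probability(range_nm, aspect_deg, r_min=1.0, r_max=20.0):
        """Highest probability mid-envelope and near head-on; zero outside [r_min, r_max]."""
        if not (r_min <= range_nm <= r_max):
            return 0.0
        radial = math.sin(math.pi * (range_nm - r_min) / (r_max - r_min))
        angular = max(0.0, math.cos(math.radians(aspect_deg)))
        return radial * angular

    print(intercept_probability(range_nm=10.0, aspect_deg=20.0))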

Simulating Scenarios for Research on Culture & Cognition Using a Commercial Role-play Game
Rik Warren (Air Force Research Laboratory), David E. Diller, Alice Leung, and William Ferguson (BBN Technologies) and Janet L. Sutton (Army Research Laboratory)

Abstract:
Most research on culture and cognition uses self-report tasks such as paper and pencil questionnaires. Such tasks are inexpensive, quick, and easy to score, but they are vulnerable to response bias and manipulation effects. Action-based or performance tasks can be more absorbing and permit more of someone’s natural behavior to emerge but are rarer due to increased costs, lower experimenter control, and difficult logistics. Computer games can potentially regain the benefits of real performance and immersive play while retaining experimenter control and keeping costs low. Properly constructed, computer games can simulate action-demanding scenarios which embed opportunities for personality and culturally-conditioned behaviors to manifest themselves. This is especially true when computer-simulated non-player characters are included which exhibit carefully modeled behaviors. However, such simulations are not themselves panaceas. This paper examines some of the concepts we have tried, the challenges we have faced, and the lessons we have learned.

An Approach to Human Behavior Modeling in an Air Force Simulation
Brooke H. McNally (ASC/XREM SIMAF)

Abstract:
This paper presents a multi-level approach to incorporating more realistic human behavior models into military simulation environments. The Air Force is incorporating different levels of intelligent agents within the Enhanced Air-to-Air Air-to-Ground Linked Environment Simulation (EAAGLES) to represent the human decision making processes required in military simulations. This will provide users the ability to determine at what level of fidelity they need to represent human behavior to achieve their study objectives. EAAGLES is currently incorporating two mental models - Situational Assessment Model for Pilot in the Loop Evaluation (SAMPLE) and Soar. This paper will present an introduction to these mental models and discuss how they can be used in the EAAGLES environment. This paper will also introduce and discuss the difficulties associated with validating human behavior models that are used in military simulations.

Tuesday 8:30 AM - 10:00 AM
Advanced Military Modeling I

Chair: Brooke McNally (ASC/XR)

Sports Analogy for Modelling of Combat in the Air Domain
Alan Cowdale (Air Warfare Centre)

Abstract:
Aggregated models of Air Warfare invariably rely on a user input value for probability of success (kill) or ‘exchange ratio’ in Air-to-Air Combat. There is limited historical data available to validate these parameters for engagements between non-peer opponents. This paper explores the potential for gaining insights into non-peer Air-to-Air outcomes from the world of sport, and examines the results from Association Football competitions in England.

On Using SPEEDES as a Platform for a Parallel Swarm Simulation
Matthew A Russell, Gary B. Lamont, and Kenneth Melendez (Air Force Institute of Technology)

Abstract:
Unmanned Aerial Vehicle (UAV) research is an increasingly important pillar of national security and military interest. A high fidelity discrete event simulation is prerequisite to any systems implementation. The Synchronous Parallel Environment for Emulation and Discrete Event Simulation (SPEEDES) is a versatile and powerful tool that can be used for realization of this objective. A suite of five experiments measures the efficiency of a parallel UAV swarming SPEEDES application. Results indicate that conservative time management produces more than twice the speedup of optimistic time management.
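
The efficiency comparison described above reduces to standard speedup bookkeeping: speedup is the single-node run time divided by the parallel run time, and efficiency is speedup divided by the node count. The sketch below shows that arithmetic with made-up timings; none of the numbers come from the paper.

    # Speedup/efficiency bookkeeping with hypothetical wall-clock times.
    def speedup(t_serial, t_parallel):
        return t_serial / t_parallel

    def efficiency(t_serial, t_parallel, nodes):
        return speedup(t_serial, t_parallel) / nodes

    t1 = 3600.0                        # single-node time (s), hypothetical
    runs = {"conservative": 450.0,     # 8-node times (s), hypothetical
            "optimistic": 1100.0}
    for name, t_n in runs.items():
        print(name, round(speedup(t1, t_n), 1), round(efficiency(t1, t_n, 8), 2))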

Simulation Validation with Historic Outcomes
Raymond R. Hill (Wright State University) and Lance E. Champagne (Logistics Management Agency)

Abstract:
Combat, unlike many real-world processes, tends to be singular in nature. That is, there are not multiple occurrences from which to hypothesize a probability distribution model of the real world system. Mission-level models may offer more flexibility on some measures due to their extended time frame. Additionally, the parameters involved in the mission-level model may be unchanged for significant stretches of the total simulation time. In these cases, time periods may be devised so that the periods hold sufficiently similar traits such that the incremental results may be assumed to come from a common distribution. This paper details a new statistical methodology for use in validating an agent-based mission-level model. The test is developed within the context of the Bay of Biscay agent-based simulation and uses the monthly data from the extended campaign as a basis of comparison to the simulation output.
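
One simple way to frame such a comparison, shown here only as an illustration and not as the test developed in the paper, is to treat the historical monthly outcomes and the pooled simulated monthly outcomes as two samples and apply a nonparametric two-sample test. All data values below are hypothetical.

    # Illustrative two-sample comparison of simulated vs. historical monthly outcomes.
    # This is a generic nonparametric check, not the statistical test proposed in the paper.
    from scipy.stats import mannwhitneyu

    historical_monthly = [3, 5, 2, 7, 4, 6]          # hypothetical campaign counts
    simulated_monthly = [4, 4, 3, 6, 5, 5, 2, 7]     # hypothetical pooled replication output

    stat, p_value = mannwhitneyu(historical_monthly, simulated_monthly, alternative="two-sided")
    print(f"U = {stat}, p = {p_value:.3f}")          # a large p-value gives no evidence of mismatch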

Tuesday 10:30 AM - 12:00 PM
Effects-Based Military Modeling

Chair: Angela Dowling (ARA)

Implementation of a Framework for Vulnerability/Lethality Modeling and Simulation
Kim J. Allen and Craig Black (Applied Research Associates)

Abstract:
The Department of Defense (DoD) has employed Modeling and Simulation (M&S) tools in Vulnerability and Lethality (V/L) assessments of weapon/target systems for many years. A wide variety of simulation tools exist that are used to conduct specific aspects of analyzing weapon systems effectiveness and/or target susceptibility to blast, fragment penetration, hardened target penetration, etc. Previously, a somewhat natural separation of domains existed for these models among surface mobile, ground fixed and airborne target classes. However, it has become evident that many of the methods implemented as part of their respective simulations have applicability across domains. Where applications provide a concrete solution for a particular problem, frameworks are meant to provide a generic solution mechanism for a set of similar or related problems. This simple concept was the key to the implementation of a reusable, extensible architecture known as the Endgame Framework.
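
The application-versus-framework distinction the abstract draws can be pictured as a generic endgame interface with interchangeable V/L methodologies plugged in behind it. The sketch below is only a conceptual illustration; the class names, methods, and numbers are hypothetical and do not reflect the actual Endgame Framework API.

    # Conceptual framework sketch: a generic endgame interface with pluggable methodologies.
    from abc import ABC, abstractmethod

    class EndgameMethod(ABC):
        @abstractmethod
        def probability_of_kill(self, threat, target, geometry):
            ...

    class FragmentPenetrationMethod(EndgameMethod):
        def probability_of_kill(self, threat, target, geometry):
            return 0.4    # placeholder for a fragment-penetration methodology

    class BlastMethod(EndgameMethod):
        def probability_of_kill(self, threat, target, geometry):
            return 0.2    # placeholder for a blast methodology

    def assess(methods, threat, target, geometry):
        """Generic driver: combine the kill mechanisms as if independent."""
        p_survive = 1.0
        for method in methods:
            p_survive *= 1.0 - method.probability_of_kill(threat, target, geometry)
        return 1.0 - p_survive

    print(assess([FragmentPenetrationMethod(), BlastMethod()], "warhead", "vehicle", "broadside"))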

A Validation of First-order Detonation Shock Dynamics Theory
David E. Lambert (Air Force Research Laboratory, Munitions Directorate) and D. Scott Stewart and Sunhee Yoo (University of Illinois, Urbana-Champaign)

Abstract:
High energy explosives are used in a variety of applications, from military to industrial processes. The use of embedded, inert material “wave shapers” is a primary method to customize the detonation front for desired explosive applications. These systems create detonation states that do not follow the simple line of sight, or Huygens model and, hence, advanced detonation physics with associated theory are required. The theory of detonation shock dynamics (DSD) is one such description used to provide high fidelity modeling of complex wave structures. A collection of experiments using ultra-high speed cameras is presented as a means of obtaining spatial and temporal characteristics of complex detonation fronts that validate the DSD descriptions. The method of test, operational conditions and results are given to demonstrate the use of high-rate imaging of detonation events and how this validates our understanding of the physics and the capability of advanced detonation wave tracking models.
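
The leading-order DSD result referred to above is commonly written as a relation between the normal detonation speed and the local front curvature, roughly D_n = D_CJ (1 - alpha * kappa), so that a Huygens (line-of-sight) construction is recovered when the curvature correction vanishes. The coefficient values below are placeholders, not calibrated to any explosive in the paper.

    # Sketch of a linear D_n(kappa) propagation rule; D_CJ and alpha are hypothetical.
    def normal_detonation_speed(kappa, d_cj=8000.0, alpha=5.0e-4):
        """kappa in 1/m, speeds in m/s; alpha = 0 recovers the Huygens model."""
        return d_cj * (1.0 - alpha * kappa)

    print(normal_detonation_speed(kappa=100.0))   # slightly below D_CJ on a diverging front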

Simulation-based Performance Modeling for War Fighter in Loop Minefield Detection System
Abhilash Rajagopal, Sanjeev Agarwal, and Sreeram Ramakrishnan (University of Missouri-Rolla)

Abstract:
There has been significant recent interest in airborne reconnaissance for target detection using high resolution airborne images collected from an Unmanned Aerial Vehicle (UAV). Even if Automatic Target Recognition (ATR) algorithms are able to produce satisfactory results in terms of probability of detection for a certain false alarm rate, there is a need for a Warfighter-in-the-Loop (WIL) to reduce false alarms further and to verify and validate detections to attain the operational performance requirements. We develop a simulation model to assess the effectiveness of the warfighter in the decision loop for airborne minefield detection. The warfighter effectiveness is measured in terms of average waiting time, number of minefield segments in queue, and the expected false alarms and missed detections. Various parameters which potentially affect warfighter performance are identified with the help of prior studies with human operators in laboratory settings. Simulation trials were conducted to evaluate the dependence of warfighter performance on these parameters.
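
The waiting-time and queue measures mentioned above come from treating the warfighter as a server reviewing ATR-cued minefield segments. A minimal single-operator queueing sketch is shown below; the arrival and review rates are hypothetical, and the real model also accounts for false alarms and missed detections.

    # Minimal single-operator (FIFO) review queue; rates are hypothetical.
    import random

    def mean_wait(arrival_rate=1/40.0, review_rate=1/30.0, n_segments=10000, seed=1):
        rng = random.Random(seed)
        clock = 0.0
        operator_free_at = 0.0
        total_wait = 0.0
        for _ in range(n_segments):
            clock += rng.expovariate(arrival_rate)      # next cued segment arrives
            start = max(clock, operator_free_at)        # wait while the operator is busy
            total_wait += start - clock
            operator_free_at = start + rng.expovariate(review_rate)
        return total_wait / n_segments

    print("mean wait (s):", round(mean_wait(), 1))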

Tuesday 1:30 PM - 3:00 PM
Distributed Modeling for Military Applications

Chair: Deborah Hileman (Air Force Material Command)

Enabling 1,000,000-Entity Simulations on Distributed Linux Clusters
Gene Wagenbreth, Ke-Thia Yao, Dan M. Davis, and Robert F. Lucas (Information Sciences Institute, University of Southern California) and Thomas D. Gottschalk (Center for Advanced Computing Research, Caltech)

Abstract:
The Information Sciences Institute and Caltech are enabling USJFCOM and the Institute for Defense Analyses to conduct entity-level simulation experiments using hundreds of distributed computer nodes on Linux Clusters as a vehicle for simulating millions of JSAF entities. Included below is the experience with the design and implementation of the code that increased scalability, thereby enabling two orders of magnitude growth and the effective use of DoD high-end computers. A typical JSAF experiment generates several terabytes of logged data, which is queried in near-real-time and for months afterward. The amount of logged data and the desired database query performance mandated the redesign of the original logger system’s monolithic database, making it distributed and incorporating several advanced concepts. System procedures and practices were established to reliably execute the global-scale simulations, effectively operate the distributed computers, efficiently process and store terabytes of data, and provide straightforward access to the data by analysts.

A Framework for Fault-Tolerance in HLA-Based Distributed Simulations
Martin Eklöf and Farshad Moradi (Swedish Defence Research Agency (FOI)) and Rassul Ayani (Royal Institute of Technology (KTH))

Abstract:
The widespread use of simulation in future military systems depends, among other things, on the degree of reuse and availability of simulation models. Simulation support in such systems must also cope with failure in software or hardware. Research in fault-tolerant distributed simulation, especially in the context of the High Level Architecture (HLA), has been quite sparse, nor does the HLA standard itself cover fault-tolerance extensively. This paper describes a framework, named Distributed Resource Management System (DRMS), for robust execution of federations. The implementation of the framework is based on Web Services and Semantic Web technology, and provides fundamental services and a consistent mechanism for description of resources managed by the environment. To evaluate the proposed framework, a federation has been developed that utilizes a time-warp mechanism for synchronization. In this paper, we describe our approach to fault tolerance and give an example to illustrate how DRMS behaves when it faces faulty federates.

Language Based Simulation, Flexibility, and Development Speed in the Joint Integrated Mission Model
David W. Mutschler (NAVAIR)

Abstract:
The Joint Integrated Mission Model (JIMM) uses generic system components and a simulation language that allows developers to program specific system, platform, and player characteristics, tactics, and doctrine. This permits great flexibility in simulation design and rapid modification of system types in complex simulations. However, the time and expense of developing complex simulations can be longer than desired. These costs can be mitigated by constructing scenarios for reuse and providing example scenarios for common use. In addition, a graphical user interface (GUI) can also facilitate reuse and perform some functions faster and more easily than can be achieved directly through simulation language text editing. This paper will discuss efforts in simulation construction, simulation reuse, and GUI development currently undertaken by the JIMM Model Management Office (JMMO).

Tuesday 3:30 PM - 5:00 PM
Military Acquisition and Employment Modeling

Chair: Raymond Hill (Wright State University)

An Approach to Design and Development of Decentralized Data Fusion Simulator
Chandresh Mehta, Govindarajan Srimathveeravalli, and Thenkurussi Kesavadas (State University of New York at Buffalo)

Abstract:
This paper discusses the ongoing efforts on development of a Decentralized Data Fusion (DDF) simulator for analysis and design of a distributed fusion-based tracking system. We have identified the requirements for a DDF simulator and have developed a fully interactive, graphical user interface based scenario generation tool called SceneGen (Srimathveeravalli, Subramanian and Kesavadas 2004) for creating battlefield scenarios, and a simulation tool called VizSim for running various DDF algorithms on scenarios created in SceneGen and displaying the simulation results in an easy-to-understand fashion. SceneGen and VizSim have been designed with a full complement of user utilities, including an efficient terrain database generation module, a sensor report generation module, and database connectivity to store and retrieve scenarios and simulation results. The innovative visualization techniques used in the simulator help in displaying the data in a fashion that transfers maximum information to a user.

Exploring C4ISR Employment Methods
Terri G. Chang (Center for Army Analysis)

Abstract:
We will investigate several employment schemes for Command, Control, Communications, Computers, Intelligence, Surveillance and Reconnaissance (C4ISR) collection assets in a simulated Force combat model. These collection assets include Unmanned Aerial Vehicles (UAV) and any ground platforms normally part of a conventional coalition force laydown. Samples of ground assets include: armored personnel carriers (APC), helicopters, tanks, trucks, binoculars and eyes. Collection asset performance characteristics along with obtained sensor scans enable probabilistic identification of participating adversaries or their weapon systems. Comparative analysis focuses on the time to initial enemy observation, the threshold of commander’s critical information requirements met, and prevention of collection asset losses. The analyst controls all thresholds via the user interface. Additionally, a paradigm for information management, i.e. intelligence fusion, is presented. We explore procedures for reducing data volume within this paradigm. We will also discuss implications for the coordination of simulation, analysis, and acquisition activities.
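
The probabilistic identification step described above can be thought of as a Bayesian belief update over candidate target types as successive sensor scans arrive. The sketch below is illustrative only; the hypothesis set and confusion-matrix values are hypothetical and not drawn from the combat model or its sensor characterizations.

    # Illustrative Bayesian identification from successive (hypothetical) sensor reports.
    HYPOTHESES = ("tank", "APC", "truck")

    # P(sensor reports class | true class), hypothetical single-sensor confusion matrix.
    CONFUSION = {"tank":  {"tank": 0.70, "APC": 0.20, "truck": 0.10},
                 "APC":   {"tank": 0.25, "APC": 0.60, "truck": 0.15},
                 "truck": {"tank": 0.10, "APC": 0.20, "truck": 0.70}}

    def update(prior, reported):
        posterior = {h: prior[h] * CONFUSION[h][reported] for h in HYPOTHESES}
        total = sum(posterior.values())
        return {h: p / total for h, p in posterior.items()}

    belief = {h: 1.0 / len(HYPOTHESES) for h in HYPOTHESES}
    for report in ("tank", "tank", "APC"):      # three successive scan reports
        belief = update(belief, report)
    print(belief)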

Acquisition-Based Simulation
Grant Martin, Jeffrey Schamburg, and Michael K. Kwinn, Jr. (United States Military Academy)

Abstract:
The Army acquisition community requires high-resolution simulations that represent the dismounted infantry soldier in enough detail to conduct an analysis of alternatives (AOA) for individual weapons and equipment. These models must also be capable of assessing future, proposed capabilities and technologies. Previous work established a detailed, representative set of soldier functions which should be modeled, as well as proposed coordination among three different models. This paper describes the technique used for implementing that coordination on behalf of the acquisition community. It does so in two parts. First, we discuss the methodology used to transform the needs of the acquisition community into analysis needs. Second, we describe how we integrated the soldier functions into those analysis needs to derive simulation requirements. We will conclude with a discussion of how effective the technique has been in practice.

Wednesday 8:30 AM - 10:00 AM
Advanced Military Modeling II

Chair: John Gilmer (Wilkes University)

The HITVICE VV&A Environment
Fang Ke, Yang Ming, and Wang Zicai (Harbin Institute of Technology)

Abstract:
The VV&A process needs to be systematically organized and involves a heavy workload. Conventional VV&A tools are mostly designed for specific simulation needs and lack flexible workflow processing, CSCW support, integrated resource management, and similar capabilities. The simulation community needs a synthetic environment, rather than scattered tools, to speed the VV&A process. In this paper, we introduce our HITVICE VV&A environment for comprehensively facilitating VV&A. The system provides workflow automation, hierarchical evaluation, CSCW, lifecycle data management, and flexible authority control. External VV&A tools can also be integrated into HITVICE and exchange data with the environment. HITVICE has been applied to a large geographically distributed simulation, coded 40301. The system proved effective in assisting VV&A, and it is being improved to satisfy new requirements from prospective users.

The Application of Evaluation Method Based on HMM for Results Validity of Complex Simulation System
Hengjie Song, Ping Ma, and Ming Yang (Harbin Institute of Technology)

Abstract:
Given the stochastic and sequential-logic characteristics of complex simulation systems, a new evaluation method based on the hidden Markov model (HMM) is presented, which applies multivariate statistical theory to quantitatively evaluate the results validity of a complex simulation system. By introducing a matrix of observed state vectors, the method enhances the clarity of describing the scenario and running states of simulation systems, and ultimately implements an exploratory approach to quantitative analysis of results validity. Furthermore, a quantitative evaluation criterion for results validity is given, and the critical algorithm adopted in the quantitative evaluation process is discussed in detail.
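
One way to picture an HMM-based validity score, offered here only as an illustration of the general idea and not as the authors' criterion, is to specify a reference HMM for acceptable system behavior and then score an observed simulation output sequence by its likelihood under that model. The two-state model and probabilities below are hypothetical.

    # Forward-algorithm log-likelihood of an observation sequence under a reference HMM.
    import math

    def forward_log_likelihood(obs, start_p, trans_p, emit_p):
        alpha = {s: start_p[s] * emit_p[s][obs[0]] for s in start_p}
        for o in obs[1:]:
            alpha = {s: emit_p[s][o] * sum(alpha[r] * trans_p[r][s] for r in alpha)
                     for s in start_p}
        return math.log(sum(alpha.values()))

    start = {"nominal": 0.8, "degraded": 0.2}                      # hypothetical reference model
    trans = {"nominal":  {"nominal": 0.9, "degraded": 0.1},
             "degraded": {"nominal": 0.3, "degraded": 0.7}}
    emit  = {"nominal":  {"ok": 0.85, "fault": 0.15},
             "degraded": {"ok": 0.40, "fault": 0.60}}

    print(forward_log_likelihood(["ok", "ok", "fault", "ok"], start, trans, emit))

A lower likelihood than that of trusted reference runs would then flag the output sequence for closer inspection.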

Issues in Event Analysis for Recursive Simulation
Frederick J. Sullivan, John B. Gilmer and Jr. (Wilkes University)

Abstract:
Recursive simulation allows decisionmaking entities within a simulation to themselves use simulation as a way of projecting their situation into the future. In these imagined futures, events occur that can significantly affect the entity, and if the information about those events can be captured and related to the entity's present, better decisionmaking may result. This paper explores this concept, and some of the issues that arise, in the context of force on force military simulation.