Exploratory Analysis Enabled by Multiresolution, Multiperspective Modeling
Paul K. Davis (RAND and the RAND Graduate School)
The objective of exploratory analysis is to gain a broad understanding of a problem domain before going into details for particular cases. Its focus is on comprehensively understanding the consequences of uncertainty, which requires a good deal more than routine sensitivity analysis. Such analysis is facilitated by multiresolution, multiperspective modeling (MRMPM) structures that are becoming increasingly practical. Knowledge of the related design principles can also help in building interfaces to more conventional legacy models, which can likewise be used for exploration.
Circumstance Descriptors: A Method for Generating Plan
Modifications and Fragmentary Orders
John B. Gilmer, Jr. (Wilkes University)
Circumstance Descriptors are offered as a way to organize spatial and other military knowledge that may be difficult to formulate, particularly the kinds of details that are most often illustrated by example. The goal is better modeling of military command elements in simulations. These Circumstance Descriptors are applied to assimilate features, both terrain objects and units, into a frame-based Understanding of the Situation that organizes them into roles oriented around the decision-making unit's plan. A Circumstance represents a configuration of objects that may be present on the battlefield. If it is recognized, the effect is to splice new roles into the frame, extending it to cover the new features. A prototype has been built that demonstrates the use of these Circumstance Descriptors in the contexts of both planning and execution.
The ARGESIM-Comparisons on Discrete Simulation:
Results and Evaluation
Felix Breitenecker and Martin Lingl (Vienna University of Technology) and Erwin Rybin (Austrian Research Center Seibersdorf)
This paper describes how to set up courses in (advanced) simulation techniques based on the ARGESIM/EUROSIM Comparisons. SNE has defined 13 Software Comparisons, of which 6 concern discrete models, and has collected solutions over the last 8 years. These solutions have now been evaluated and made accessible via the World Wide Web. This evaluation may be used as the basis for a course on modeling and simulation. Finally, ETCA is briefly introduced, and it is shown how it uses the ARGESIM/EUROSIM Comparisons to give advice on which simulators to use in the field of environmental technologies.
Informing and Calibrating a Multiresolution
Exploratory Analysis Model with High Resolution Simulation: The Interdiction
Problem as a Case History
Paul K. Davis, James H. Bigelow, and Jimmie McEver (RAND)
Exploratory analysis uses a low-resolution model for broad survey work. High-resolution simulation can sometimes be used to inform the development and calibration of such a model. This paper is a case history of such an effort. The problem at issue was characterizing the effectiveness of long-range precision fires in interdicting an invading army. After observing puzzling results from high-resolution simulation, we developed a multiresolution personal-computer model called PEM to explain the phenomena analytically. We then studied the simulation data in depth to assess, adjust, and calibrate PEM, while at the same time discovering and accounting for various shortcomings or subtleties of the high-resolution simulation and data. The resulting PEM model clarified results and allowed us to explore a wide range of additional circumstances. It credibly predicted changes in effectiveness over two orders of magnitude, depending on situational factors involving C4ISR, maneuver patterns, missile and weapon characteristics, and type of terrain. The insights gained appear valid, and a simplified version of PEM could be used for scaling adjustments in comprehensive theater-level models.
Abstract Modeling for Engineering and Engagement Simulations
Robert M. McGraw and Richard A. MacDonald (RAM Laboratories, Inc.)
While modern simulation infrastructures address many cost-related issues, they do not fully address issues related to model re-use. Simulations that utilize model re-use may result in large, complex system models comprised of a diverse set of subsystem component models covering varying amounts of detail and fidelity. Often, a complex simulation that re-uses high fidelity subcomponent models may result in a more detailed system model than the simulation objective requires. Simulating such a system model wastes simulation time with respect to the simulation goals. These simulation costs, however, can be reduced through the use of abstract modeling techniques. These techniques can reduce the subcomponent model complexity by eliminating, grouping, or estimating model parameters or variables at a less detailed level without grossly affecting the simulation results. Key issues in the abstraction process involve identifying the variables or parameters that can be abstracted away for a given simulation objective and applying the proper abstraction technique to replace those parameters. This paper presents approaches for both identifying and replacing these candidate variables.
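As a minimal sketch of the grouping idea (all names and numbers are hypothetical), several serial stage delays in a detailed model can be collapsed into a single aggregate parameter of an abstract model:

    # Hypothetical sketch: collapse per-stage delays of a detailed model into
    # one aggregate delay for an abstract model, assuming the stages are serial.
    detailed_params = {"stage1_delay": 1.2, "stage2_delay": 0.8, "stage3_delay": 2.5}

    def abstract_by_grouping(params):
        """Group serial stage delays into a single aggregate parameter."""
        return {"total_delay": sum(params.values())}

    print(abstract_by_grouping(detailed_params))  # {'total_delay': 4.5}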
Model Abstraction for Discrete Event Systems
Using Neural Networks and Sensitivity Information
Christos G. Panayiotou and Christos G. Cassandras (Boston University) and Wei-Bo Gong (University of Massachusetts)
Simulation is one of the most powerful tools for modeling and evaluating the performance of complex systems; however, it is computationally slow. One approach to overcoming this limitation is to develop a ``metamodel'': a ``surrogate'' model of the original system that accurately captures the relationships between input and output, yet is computationally more efficient than simulation. Neural networks (NNs) are known to be good function approximators and thus make good metamodel candidates. During training, a NN is presented with several input/output pairs and is expected to learn the functional relationship between the inputs and outputs of the simulation model. A trained network can then predict the output for inputs other than the ones presented during training. This ability of NNs to generalize depends on the number of training pairs used. In general, a large number of such pairs is required and, since they are obtained through simulation, metamodel development is slow. In discrete event system (DES) simulation it is often possible to use perturbation analysis to also obtain sensitivity information with respect to various input parameters. In this paper, we investigate the use of this sensitivity information to reduce the simulation effort required to train a NN metamodel.
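To illustrate how sensitivity data can stretch a small training set, here is a minimal sketch in which a cubic polynomial stands in for the paper's neural network (so the fit reduces to linear least squares); the data are toy stand-ins for simulation outputs and perturbation-analysis derivatives:

    import numpy as np

    def fit_metamodel(xs, ys, dys, degree=3):
        # Rows for output observations: [1, x, x^2, x^3]
        A_out = np.vander(xs, degree + 1, increasing=True)
        # Rows for derivative observations: [0, 1, 2x, 3x^2]
        A_sen = np.zeros_like(A_out)
        for k in range(1, degree + 1):
            A_sen[:, k] = k * xs ** (k - 1)
        A = np.vstack([A_out, A_sen])      # each run contributes two rows
        b = np.concatenate([ys, dys])
        coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
        return coeffs

    xs = np.linspace(0.0, 1.0, 5)      # few simulation runs
    ys = xs ** 2                       # simulated outputs (toy)
    dys = 2 * xs                       # PA sensitivity estimates (toy)
    print(fit_metamodel(xs, ys, dys))  # close to [0, 0, 1, 0]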
System Dynamics Modelling in Supply Chain
Management: Research Review
Bernhard J. Angerhofer and Marios C. Angelides (Brunel University)
The use of System Dynamics Modelling in Supply Chain Management has only recently re-emerged after a lengthy slack period. Current research on System Dynamics Modelling in supply chain management focuses on inventory decision and policy development, time compression, demand amplification, supply chain design and integration, and international supply chain management. The paper first gives an overview of recent research work in these areas, then discusses the research issues that have evolved, and finally presents a taxonomy of research and development in System Dynamics Modelling in supply chain management.
Analysis of the Virtual Enterprise Using Distributed
Supply Chain Modeling and Simulation: An Application of e-SCOR
Michael W. Barnett and Charles J. Miller (Gensym Corporation)
Supply chains are large systems consisting of many entities interacting in complex ways. The challenge faced by companies is how to design and manage such systems. Modeling and simulation enables analysis of complex systems, but as the model increases in size and realism, or when it is necessary to locate model components geographically, a distribution capability is needed. The High Level Architecture (HLA), developed by the Department of Defense, provides the infrastructure needed for large-scale distributed simulation. The supply chain management field is characterized by a lack of standards and definitions. The Supply Chain Council has established a standard way to examine and analyze supply chains with its Supply Chain Operations Reference (SCOR) model. The SCOR model provides a standard way of viewing a supply chain, a common set of manipulable variables, and a set of accepted metrics for understanding the dynamic behavior of supply chains. The e-SCOR modeling and simulation environment is based on SCOR and adds discrete event simulation capabilities. This paper describes the architectural components used to implement a distributed supply chain modeling tool (e-SCOR) and applications of e-SCOR that demonstrate how enterprises are modeled and analyzed to determine the validity of alternative, virtual business models.
Distributed Supply Chain Simulation in GRIDS
Rajeev Sudra, Simon J. E. Taylor, and Tharumasegaram Janahan (Brunel University)
Within the growing body of work on supply chain simulation, papers have emerged that examine model distribution. Executing simulations on distributed hosts as a coupled model requires both coordination and a facilitating infrastructure. A distributed environment, the Generic Runtime Infrastructure for Distributed Simulation (GRIDS), is proposed to provide the binding required for such a model. Transparently connecting the distributed components of a supply chain simulation allows the construction of a conceptual simulation while freeing the modeler from the complexities of the underlying network. The infrastructure presented demonstrates scalability without losing flexibility for future extensions based on open industry standards.
PERT Scheduling with Resources Using Qualitative Simulation Graphs
Ricki G. Ingalls (Compaq Computer Corporation) and Douglas J. Morrice (The University of Texas at Austin)
The Qualitative Simulation Graph Methodology (QSGM) is well suited to the PERT scheduling with resources problem. The coverage property of QSGM has two important implications for this problem. First, it means that all possible schedules are represented. Second, it means that, as long as the delay time intervals are not violated, we can characterize all possible outcomes of a decision that needs to be made in the schedule. This opens the possibility of robust point-in-time scheduling decisions without needing to re-run the simulation to obtain the results.
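For flavor only (this is not the QSGM algorithm itself), the following sketch propagates interval-valued activity durations through a small PERT network, bounding every schedule the intervals permit; the network and numbers are hypothetical:

    # Hypothetical sketch: bound all possible PERT finish times when activity
    # durations are known only as (min, max) delay intervals.
    durations = {"A": (2, 4), "B": (3, 6), "C": (1, 2)}
    preds = {"A": [], "B": ["A"], "C": ["A", "B"]}    # precedence DAG

    def finish_intervals(durations, preds):
        fin = {}
        for task in ("A", "B", "C"):                  # topological order
            lo = max((fin[p][0] for p in preds[task]), default=0)
            hi = max((fin[p][1] for p in preds[task]), default=0)
            fin[task] = (lo + durations[task][0], hi + durations[task][1])
        return fin

    print(finish_intervals(durations, preds))
    # {'A': (2, 4), 'B': (5, 10), 'C': (6, 12)}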
An Integrated Object Model for Activity Network Based Simulation
Gert Zülch and Jörg Fischer (University of Karlsruhe) and Uwe Jonsson (Axion GmbH)
This paper describes an object-oriented simulation approach to the integrated planning of production systems. The main obstacle to an integrated use of simulation across different planning areas and stages is the existence of different views on a production system. Therefore, an object model is developed that enables the co-existence of different views and levels of detail in the same simulation model while maintaining its consistency. This is achieved by combining object-oriented technology with a network-based simulation approach. The prevailing idea is to offer the opportunity to re-use existing models for the investigation of different aspects of a production system. The approach is described abstractly as a conceptual object model and is thus independent of a concrete simulation language, tool, or environment. The last part of this paper introduces the simulation tool OSim, which implements this object model, and demonstrates its usage through an example.
Mathematical Programming Models for Discrete Event System Dynamics
Lee W. Schruben (University of California at Berkeley)
Analytical models for the dynamics of discrete event systems are introduced where the system trajectories are solutions to linear and mixed-integer programs.
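To give the flavor of this idea with a standard single-server example (the paper's own formulations may differ), the departure times of a first-come-first-served single-server queue solve the linear program

    minimize    D_1 + D_2 + ... + D_n
    subject to  D_i >= A_i + S_i        (service cannot start before arrival)
                D_i >= D_{i-1} + S_i    (one job in service at a time)

where A_i and S_i are the i-th arrival and service times and D_0 = 0; the optimum reproduces the familiar recursion D_i = max(A_i, D_{i-1}) + S_i.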
Organization and Selection of Reconfigurable Models
Antonio Diaz-Calderon, Christiaan J.J. Paredis, and Pradeep K. Khosla (Carnegie Mellon University)
This paper introduces the concept of reconfigurable simulation models and describes how these models can be used to support simulation-based design. As in object-oriented programming, a reconfigurable model consists of a separate interface and multiple implementations. An AND-OR tree represents which implementations can be bound to each interface. From the resulting model space, a designer can quickly select the simulation model that is most appropriate for the current design stage. We conclude the paper with an example that illustrates the XML-based implementation of reconfigurable models.
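A minimal sketch of the model-space idea follows (all model names are hypothetical; the paper's implementation is XML-based, and plain Python is used here only to show the structure):

    # OR nodes (interfaces) list alternative implementations; AND nodes
    # (implementations) list the sub-interfaces they in turn require.
    model_space = {
        "Motor": ["IdealMotor", "DetailedMotor"],        # OR: pick one
        "IdealMotor": {"requires": []},
        "DetailedMotor": {"requires": ["Friction"]},     # AND: needs all
        "Friction": ["CoulombFriction", "ViscousFriction"],
        "CoulombFriction": {"requires": []},
        "ViscousFriction": {"requires": []},
    }

    def configure(interface, choice):
        """Bind one implementation to each interface, depth-first."""
        impl = choice[interface]
        assert impl in model_space[interface]
        bound = {interface: impl}
        for sub in model_space[impl]["requires"]:
            bound.update(configure(sub, choice))
        return bound

    print(configure("Motor", {"Motor": "DetailedMotor",
                              "Friction": "ViscousFriction"}))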
Toward a Standard Process: The Use of UML for
Designing Simulation Models
Hendrik Richter and Lothar März (Fraunhofer Institut für Produktionstechnik und Automatisierung)
Designing complex simulation models is a task essentially associated with software engineering. In this paper, the Unified Modeling Language (UML) is used to specify simulation models. It is shown that, similar to the ``Unified Process'' in software engineering, such a methodology forms a sound base for developing complex simulation models. An example is provided to illustrate how this approach supports the design process.
Computer Assistance for Model Definition
Henk de Swaan Arons and Eelco van Asperen (Erasmus University Rotterdam)
Modeling requires considerable knowledge of the various stages of the simulation process. The modeler needs to know a great deal about the system to be modeled (domain-specific knowledge), about the ins and outs of the modeling process itself (such as the appropriate degree of detail of the model), and about how to implement the model in a simulation language. Each of these stages would benefit from some kind of knowledgeable support. In this article a decision-making process is described that supports the modeler in building a model step by step. The Arena simulation environment has been used as a vehicle. The support is based on information provided by the modeler and is essentially data-driven. It suggests which modules could best be used, indicates which parameters need to be determined, and helps to formulate route information. This research aims at an implementation of this support using a knowledge-based system.
Aggressiveness/Risk Effects Based Scheduling in Time Warp
Vittorio Cortellessa (West Virginia University) and Francesco Quaglia (Università di Roma)
The Time Warp synchronization protocol for parallel discrete event simulation is characterized by aggressiveness and risk. The former property refers to greediness in the execution of unsafe events; the latter refers to greediness in the notification of new events produced by aggressive event execution. Both properties are potential sources of rollback occurrence and spreading. In this paper we present a scheduling algorithm for selecting the next LP to run on a processor that tends to keep the joint impact of these two properties on the amount of rollback low. The negative effects of aggressiveness and risk are reduced by giving higher priority to LPs whose next event has a low probability of being undone by rollback and has low fan-out, that is, notifies few new events. Our algorithm differs from most previous solutions in that they lack direct control over the effects of risk. Those solutions can yield poor performance for applications with high variance in the number of new events notified, which is an indicator of the risk associated with event execution.
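As an illustrative sketch only (the scoring rule below is not the paper's statistic), a scheduler in this spirit might rank LPs by the expected wasted work of running them next:

    # Pick the next LP to run, favoring low rollback probability
    # (aggressiveness) and low fan-out (risk). Numbers are hypothetical.
    lps = [
        {"name": "LP0", "p_rollback": 0.05, "avg_fanout": 3.0},
        {"name": "LP1", "p_rollback": 0.30, "avg_fanout": 1.0},
        {"name": "LP2", "p_rollback": 0.10, "avg_fanout": 0.5},
    ]

    def schedule_next(lps):
        # Expected events wasted if the next event is undone:
        # the event itself plus the notifications it sent out.
        return min(lps, key=lambda lp: lp["p_rollback"] * (1 + lp["avg_fanout"]))

    print(schedule_next(lps)["name"])   # LP2: 0.10 * 1.5 = 0.15 is smallest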
Parallel Execution of a Sequential Network Simulator
Kevin G. Jones (The University of Texas at San Antonio) and Samir R. Das (University of Cincinnati)
Parallel discrete event simulation (PDES) techniques have not yet made a substantial impact on the network simulation community because of the need to recast the simulation models using a new set of tools. To address this problem, we present a case study in transparently parallelizing a widely used network simulator, called ns. The use of this parallel ns does not require the modeler to learn any new tools or complex PDES techniques. The paper describes our approach and design choices to build the parallel ns and presents preliminary performance results, which are very encouraging.
Cost/Benefit Analysis of Interval Jumping in Wireless Network Simulations
David M. Nicol (Dartmouth College) and L. Felipe Perrone (College of William and Mary)
Power-control calculations are among the most time-consuming aspects of simulating wireless communication systems. These calculations are critical to understanding how a wireless network will perform, and so cannot conveniently be ignored. Power-control calculations implement solutions to discretized differential equations, and so are essentially time-stepped. In a previous paper we proposed a technique called "interval jumping" that allows a substantial number of time-steps to be jumped over, thereby reducing the amount of computation needed to reach the same state as straightforward time-stepping. The technique involves identifying a region of simulation time during which no channel assignments change due to limits on transmitter power, and ``jumping'' over that region. In this paper we examine the cost/benefit tradeoffs between policies that seek to minimize the work done to identify a jump interval and the cost of computing those policies. We find that a tiered dynamic programming approach yields policies that very nearly minimize the searching overhead, while enjoying substantially lower computation costs than the policy that strictly minimizes the searching overhead.
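The idea can be shown on a toy time-stepped update (the stand-in below, p <- min(p_max, g*p), is purely illustrative and far simpler than the paper's wireless model): whenever the power cap is provably inactive for the next n steps, those steps collapse into one closed-form jump.

    import math

    g, p_max = 1.05, 10.0                 # hypothetical gain and power cap

    def steps_until_cap(p):
        # Largest n with g**n * p <= p_max, i.e. the jumpable interval.
        return max(0, math.floor(math.log(p_max / p) / math.log(g)))

    def simulate(p, total_steps):
        t = 0
        while t < total_steps:
            n = min(steps_until_cap(p), total_steps - t)
            if n > 0:
                p = (g ** n) * p          # jump over n time-steps at once
                t += n
            else:
                p = min(p_max, g * p)     # ordinary single time-step
                t += 1
        return p

    print(simulate(1.0, 100))             # ~10.0, in far fewer iterations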
Software Engineering Best Practices Applied to the Modeling Process
David H. Withers (Dell Computer Corporation)
We present a mapping of Best Practices from the field of software engineering to the practice of discrete event simulation model construction. There are obvious parallels between the two activities. We therefore hypothesize that there are opportunities to improve the model construction process by taking advantage of these parallels. This research extends the prior work (Withers, 1993) that provided a structured definition of the modeling process.
Models and Representation of Their Ownership
Hessam S. Sarjoughian and Bernard P. Zeigler (University of Arizona)
Models, similar to other intellectual properties, are increasingly being treated as commodities worthy of protection. Providing ownership for models is key to promoting model reusability, composability, and distributed simulation. However, to date, it appears that no principled approach has been developed to support ownership of models. Instead, individuals such as modelers and legal personnel employ ad hoc means to obtain and (re)use models developed and owned by others. In this article, we briefly describe the access control capabilities offered by computer languages, operating systems, and HLA ownership management services. Examination of these methods suggests the need for formal ownership specification. The article discusses, in an informal setting, requirements for model ownership from the point of view of the increasing demand and necessity for model reuse, distributed simulation, and future trends in collaborative model development. We develop concepts for model ownership suitable for collaborative model development and distributed execution. Based on these concepts, we present an approach, within the DEVS modeling and simulation framework, for specifying model ownership. The article closes with a consideration of the proposed approach for the Collaborative DEVS Modeling environment and a brief discussion of HLA services relevant to model ownership.
On Simulation Model Complexity
Leonardo Chwif and Marcos Ribeiro Pereira Barretto (University of São Paulo) and Ray J. Paul (Brunel University)
The size and complexity of simulation models are growing steadily, forcing modelers to face problems they are not accustomed to. Before studying ways to deal with complex models, a more fundamental question to explore is whether there is any means of avoiding the generation of complex models in the first place. The primary purpose of this paper is to discuss several issues regarding the complexity of simulation models, summarizing the findings in this area so far and calling attention to an area that, despite its importance, appears to remain at the bottom of simulation research agendas.
A Method for Achieving Stable Distributions of Wireless
Mobile Location in Motion Simulations
Tony Dean (Motorola, Inc.)
A cellular engineer typically estimates system performance via simulation. Most cellular operations software provides data from which one can infer the average busy-hour subscriber location distribution, which becomes an input to the simulation. When the simulation does not include mobility, as is typical of Monte Carlo simulations, modeling this distribution is a straightforward task. However, when the simulation models mobility, it must do so in such a way that the subscriber location distribution is stable. We introduce a stochastic mobility model for the purpose of achieving and stabilizing a priori subscriber location distributions.
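One standard way to realize such a model (a sketch, not necessarily the paper's construction) is a Metropolis-style random walk whose stationary distribution is the desired a priori location density:

    import math
    import random

    def target_density(x, y):
        # Assumed example: subscribers concentrate near a downtown at (0, 0).
        return math.exp(-(x * x + y * y))

    def mobility_step(x, y, step=0.5):
        # Symmetric random-walk proposal, accepted with Metropolis probability,
        # so the long-run location distribution matches target_density.
        nx = x + random.uniform(-step, step)
        ny = y + random.uniform(-step, step)
        accept = target_density(nx, ny) / target_density(x, y)
        return (nx, ny) if random.random() < accept else (x, y)

    pos = (0.0, 0.0)
    for _ in range(10000):                # one mobile wandering
        pos = mobility_step(*pos)
    print(pos)                            # samples from the target density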
Using Simulation and Critical Points to Define States
in Continuous Search Spaces
Marc S. Atkin and Paul R. Cohen (University of Massachusetts at Amherst)
Many artificial intelligence techniques rely on the notion of a ``state'' as an abstraction of the actual state of the world, and an ``operator'' as an abstraction of the actions that take you from one state to the next. Much of the art of problem solving depends on choosing an appropriate set of states and operators. However, in realistic, and therefore dynamic and continuous, search spaces, finding the right level of abstraction can be difficult. If too many states are chosen, the search space becomes intractable; if too few are chosen, important interactions between operators might be missed, making the search results meaningless. We present the idea of simulating operators using critical points as a way of dynamically defining state boundaries; new states are generated as part of the process of applying operators. Critical point simulation allows the use of standard search and planning techniques in continuous domains, as well as the incorporation of multiple agents, dynamic environments, and non-atomic, variable-length actions into the search algorithm. We conclude with examples of implemented systems that show how critical points are used in practice.
Facilitating Level Three Cache Studies Using Set Sampling
Niki C. Thornock and J. Kelly Flanagan (Brigham Young University)
We discuss some of the difficulties present in trace collection and trace-driven cache simulation. We then describe our multiprocessor tracing technique and verify that it accurately collects long traces. We propose sampling as a method to reduce required disk space, enable simulations to run faster, and effectively enlarge the trace buffer of our hardware monitor, decreasing trace distortion. To this end, we investigate time sampling and two types of set sampling. We conclude that the second set sampling technique achieves the most accurate results. The miss rate for the second set sampling method is calculated as the number of misses to sampled sets divided by the total number of references scaled by the sample size. We determined that a 10% sample size was the most accurate while still reducing required disk space.
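Reading that description as a formula (with the study's 10% sample fraction):

    miss_rate = misses_to_sampled_sets / (total_references * sample_fraction)

For example, with purely illustrative numbers, 5,000 misses within the sampled sets of a 1,000,000-reference trace at a 10% sample give 5,000 / (1,000,000 * 0.10) = 5%.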
A Systematic Approach to Linguistic Fuzzy Modeling
Based on Input-Output Data
Hossein Salehfar (University of North Dakota), Nagy Bengiamin (California State University - Fresno) and Jun Huang (University of North Dakota)
A new systematic algorithm to build adaptive linguistic fuzzy models directly from input-output data is presented in this paper. Based on clustering and projection in the input and output spaces, significant inputs are selected, the number of clusters is determined, rules are generated automatically, and a linguistic fuzzy model is constructed. Then, using a simplified fuzzy reasoning mechanism, the Back-Propagation (BP) and Least Mean Squared (LMS) algorithms are implemented to tune the parameters of the membership functions. Compared to other algorithms, the new algorithm is both computationally and conceptually simple. The new algorithm is called the Linguistic Fuzzy Inference (LFI) model.
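To show only the flavor of the consequent-tuning stage (the clustering, input selection, and rule generation stages are omitted, and all shapes and rates below are assumed), here is a minimal one-input fuzzy model with Gaussian memberships whose rule outputs are tuned by LMS:

    import numpy as np

    centers = np.array([0.0, 0.5, 1.0])     # membership centers (assumed fixed)
    sigma = 0.2
    consequents = np.zeros(3)                # rule outputs, tuned below

    def fire(x):
        w = np.exp(-((x - centers) ** 2) / (2 * sigma ** 2))
        return w / w.sum()                   # normalized firing strengths

    def predict(x):
        return fire(x) @ consequents         # weighted-average defuzzification

    xs = np.linspace(0, 1, 50)               # toy input-output data
    ys = np.sin(3 * xs)
    for _ in range(200):
        for x, y in zip(xs, ys):
            w = fire(x)
            consequents += 0.1 * (y - w @ consequents) * w   # LMS step

    print(predict(0.25), np.sin(0.75))       # prediction vs. true value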
SNOOPy Calendar Queue
Kah Leong Tan and Li-Jin Thng (National University of Singapore)
Discrete event simulations often require a future event list structure to manage events according to their timestamps. The choice of an efficient data structure is vital to the performance of discrete event simulations, as 40% of the time may be spent on its management. The Calendar Queue (CQ) and Dynamic Calendar Queue (DCQ) are two data structures that offer O(1) complexity regardless of the size of the future event list. CQ is known to perform poorly over skewed event distributions or when the event distribution changes. DCQ improves on the CQ structure by detecting such scenarios in order to redistribute events. Both CQ and DCQ determine their operating parameters (bucket widths) by sampling events. However, this sampling technique fails if the samples do not accurately reflect the inter-event gap size. This paper presents a novel alternative approach for determining the optimum operating parameter of a calendar queue based on performance statistics. Stress testing of the new calendar queue, henceforth referred to as the Statistically eNhanced with Optimum Operating Parameter Calendar Queue (SNOOPy CQ), with widely varying and severely skewed event arrival scenarios shows that SNOOPy CQ offers consistent O(1) performance and can execute up to 100 times faster than DCQ and CQ in certain scenarios.
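For readers unfamiliar with the underlying structure, here is a deliberately simplified calendar-queue sketch (fixed bucket width, no resizing or year tracking; the paper's contribution is precisely how to choose the width well):

    import bisect

    class CalendarQueue:
        def __init__(self, nbuckets=8, width=1.0):
            self.n, self.w = nbuckets, width
            self.buckets = [[] for _ in range(nbuckets)]

        def enqueue(self, t):
            b = int(t / self.w) % self.n          # bucket = "day" of the "year"
            bisect.insort(self.buckets[b], t)     # keep each bucket sorted

        def dequeue(self):
            # Simplified: smallest head among non-empty buckets. A real CQ
            # tracks the current bucket and year so this step stays O(1).
            b = min((i for i in range(self.n) if self.buckets[i]),
                    key=lambda i: self.buckets[i][0])
            return self.buckets[b].pop(0)

    q = CalendarQueue()
    for t in (3.7, 0.2, 9.1, 0.9):
        q.enqueue(t)
    print([q.dequeue() for _ in range(4)])        # [0.2, 0.9, 3.7, 9.1]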
A Simulation Model of Backfilling and I/O Scheduling
in a Partitionable Parallel System
Helen D. Karatza (Aristotle University of Thessaloniki)
A special type of scheduling called backfilling is examined for a parallel system on which multiple jobs can execute simultaneously. Jobs consist of parallel tasks scheduled to execute concurrently on processor partitions, where each task starts at the same time and computes at the same pace. The impact of I/O scheduling on system performance is also examined. The goal is to achieve high system performance while maintaining fairness in terms of individual job execution. The performance of different backfilling schemes and different I/O scheduling strategies is compared over various processor service time coefficients of variation and various degrees of multiprogramming. Simulation results demonstrate that backfilling improves system performance while preserving job sequencing. The results also show that when there is contention for disk resources, trends in system performance can differ from those appearing in the research literature when I/O behavior is assumed to be negligible or is not explicitly considered.
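As a minimal sketch of one common backfilling test (illustrative only; the paper compares several schemes), a waiting job may jump ahead only if it both fits in the currently free processors and cannot delay the job at the head of the queue:

    def can_backfill(job, free_procs, now, head_start_time):
        fits = job["procs"] <= free_procs
        harmless = now + job["runtime"] <= head_start_time
        return fits and harmless

    job = {"procs": 4, "runtime": 30.0}   # hypothetical waiting job
    print(can_backfill(job, free_procs=6, now=100.0, head_start_time=140.0))  # True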