WSC 2008 Final Abstracts
Modeling Methodology II Track
Monday 10:30 AM – 12:00 PM
Chair: Andreas Tolk (Old Dominion University)
Transparent and Adaptive Computation-Block Caching for Agent-Based Simulation on a PDES Core
Yin Xiong, Maria Hybinette, and Eileen Kraemer (University of Georgia)
We present adaptive computation-block caching, an approach that improves performance and is well suited to agent-based simulations. The approach is illustrated in SASSY (Scalable Agents Simulation System). SASSY leverages a parallel discrete event simulation (PDES) kernel for performance but provides an agent-based API to the developer. Agent-based simulation is suited to computation-block caching because the relevant calculations completed at each event may be relatively heavyweight and may be repeated, so the savings of avoiding a computation entirely may offset the overhead cost of caching. The approach is refined through statistical methods for choosing which computation blocks should be cached: if the relevant computation is trivial, caching is not worth the cost; in other cases caching provides a substantial speedup. Our mechanism tracks these costs online and adjusts accordingly. It requires no additional coding and is integrated into applications automatically. We assess the performance of the approach in a benchmark application.
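The core idea of memoizing heavyweight, repeated computation blocks only while the measured savings beat the caching overhead can be sketched outside any simulator. The decorator below is an illustrative Python sketch of such online cost tracking, not SASSY's actual mechanism; the `min_saving` threshold and the disable policy are invented for the example.

```python
import time
from functools import wraps

def adaptive_cache(min_saving=1e-6):
    """Memoize a computation block, but only while caching pays off.

    Measures the cost of each miss (the real computation). If the
    observed cost falls below min_saving, the block is considered
    trivial and caching is switched off for it.
    """
    def decorator(fn):
        table = {}
        stats = {"hits": 0, "misses": 0, "enabled": True}

        @wraps(fn)
        def wrapper(*args):
            if not stats["enabled"]:
                return fn(*args)
            if args in table:
                stats["hits"] += 1
                return table[args]
            start = time.perf_counter()
            result = fn(*args)
            cost = time.perf_counter() - start
            stats["misses"] += 1
            table[args] = result
            # A trivial block saves nothing when skipped: stop caching
            # once the observed cost drops below the threshold.
            if cost < min_saving:
                stats["enabled"] = False
            return result

        wrapper.stats = stats
        return wrapper
    return decorator

@adaptive_cache(min_saving=0.0)  # keep caching on for the demo
def heavy_step(n):
    return sum(i * i for i in range(n))
```

Calling `heavy_step(1000)` twice records one miss and one hit; a real implementation would also weigh lookup overhead against the measured compute cost.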
Using Agent Technology to Move From Intention-Based to Effect-Based Models
Andreas Tolk, Robert J Bowen, and Patrick T Hester (Old Dominion University)
Following current modeling paradigms, most processes are captured by modeling a desired intent, often using success probabilities. In addition, only the special roles that entities are intended to play are modeled. For effect-based modeling, the unintended but nonetheless resulting effects are as important as the intended effects, and they are therefore modeled as well. Unexpected actions based on alternative roles are also important. By using agents to represent not only influencers and targets but also the processes, it becomes possible to capture all effects and move from "what I intended to accomplish" to "what I really accomplished," including side and secondary effects. The agent architecture and a prototype for this effect-based model are presented in this paper.
An Analysis of Emerging Behaviors in Large-Scale Queueing-Based Service Systems Using Agent-based Simulation
Wai Kin Victor Chan (Rensselaer Polytechnic Institute)
This paper considers a large-scale service system consisting of a number of service areas (cells). Each cell contains a queueing model that operates continuously and independently from the queueing models in other cells. Each cell changes its state between alive and dead based on certain rules that depend on the queueing status of its own queue, the neighboring queues, and the whole community, while satisfying a constraint on the number of live cells in a neighborhood. The objective is to examine emerging behaviors from the interactions of the cells under various rules. Chaotic, deterministic, and in-between emerging behaviors are presented.
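A toy version of such a cell community can be sketched in a few lines: each cell holds a queue and a live/dead flag updated from its own queue, its neighbors, and a neighborhood constraint. The arrival, departure, and life/death rules below are invented for illustration and are not the rules studied in the paper.

```python
import random

def step(queues, alive):
    """One update of a ring of cells, each holding a queue.

    Each live cell gets a random arrival and, if nonempty, a random
    departure. A cell dies when its queue overflows while its
    neighborhood is crowded, and revives when its own queue drains.
    All rules are illustrative only.
    """
    n = len(queues)
    new_alive = alive[:]
    for i in range(n):
        if alive[i]:
            queues[i] += random.random() < 0.6      # arrival
            if queues[i] > 0:
                queues[i] -= random.random() < 0.5  # departure
        live_neighbors = alive[(i - 1) % n] + alive[(i + 1) % n]
        if alive[i] and queues[i] >= 5 and live_neighbors >= 2:
            new_alive[i] = False                    # crowded and overloaded
        elif not alive[i] and queues[i] == 0:
            new_alive[i] = True                     # drained: revive
    return queues, new_alive
```

Iterating `step` and recording the pattern of live cells is the kind of experiment from which chaotic, deterministic, or in-between behavior can be observed, depending on the chosen rules.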
Monday 1:30 PM – 3:00 PM
Human Factors I
Chair: Olivier Dalle (Université de Nice)
Mental Simulation for Creating Realistic Behavior in Physical Security Systems Simulation
Volkan Ustun and Jeffrey S Smith (Auburn University)
Mental simulation is proposed by cognitive psychologists as a candidate for modeling the human reasoning process. In this paper, we propose a methodology that models mental simulation to create realistic human behavior in simulated environments. This methodology is used to generate realistic intruder and guard behavior in physical security systems simulation. The behaviors include moving to a target while avoiding detection or capture, for intruders, and following and apprehending intruders, for guards.
Integrated Human Decision Making Model under Belief-Desire-Intention Framework for Crowd Simulation
Seungho Lee and Young-Jun Son (The University of Arizona)
An integrated Belief-Desire-Intention (BDI) modeling framework is proposed for human decision making and planning, whose sub-modules are based on a Bayesian belief network (BBN), Decision Field Theory (DFT), and a probabilistic depth-first search (PDFS) technique. To mimic realistic human behaviors, attributes of the BDI framework are reverse-engineered from human-in-the-loop experiments conducted in the Cave Automatic Virtual Environment (CAVE). The proposed modeling framework is demonstrated for human evacuation behaviors under a terrorist bomb attack situation. The simulated environment and agents (human models) conforming to the proposed BDI framework are implemented in the AnyLogic agent-based simulation software, where each agent calls the external Netica BBN software to perform its perceptual processing function and the Soar software to perform its real-time planning and decision-execution functions. The constructed simulation has been used to test the impact of several factors (e.g., demographics of people, number of policemen) on evacuation performance (e.g., average evacuation time, percentage of casualties).
Introducing Age-Based Parameters into Simulations of Crowd Dynamics
D. J. Kaup, Thomas L. Clarke, Linda C. Malone, and Florian Jentsch (University of Central Florida) and Rex Oleson (SAIC)
Very few crowds consist of individuals who are exactly the same. Defining variables, such as age, and how they affect an individual's movement could increase realism in simulations of crowd movement. In this paper, we present and discuss how age variations of individuals can be included in crowd simulations. Starting with the Helbing, Molnar, Farkas, and Vicsek model (HMFV), we modeled age differences by modifying the strength of the existing social forces. We created simulation scenarios with the varied strengths and used multiple approaches for validation, including experts' subjective validation and experimental validation via comparison of model predictions with observed crowd movements. The results indicated that individual characteristics such as age can be modeled by social forces. Future extensions of our work will include individuals, small subgroups, and/or large groups of people to model multicultural crowd behavior.
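The modification the abstract describes, scaling the strength of an existing social force by an individual attribute, can be illustrated with a pairwise repulsion term in the style of the HMFV model. The parameter values and the age scaling below are illustrative, not the calibrated values from the paper.

```python
import math

def social_force(pos_i, pos_j, age_factor, A=2000.0, B=0.08, radius=0.5):
    """Repulsive social force exerted by pedestrian j on pedestrian i.

    Uses the exponential form A * exp((r_ij - d_ij) / B) common in
    social force models, with the strength A scaled by an
    age-dependent factor (an assumption for this sketch).
    """
    dx = pos_i[0] - pos_j[0]
    dy = pos_i[1] - pos_j[1]
    dist = math.hypot(dx, dy)
    if dist == 0:
        return (0.0, 0.0)
    # Unit vector pointing away from pedestrian j.
    nx, ny = dx / dist, dy / dist
    magnitude = age_factor * A * math.exp((2 * radius - dist) / B)
    return (magnitude * nx, magnitude * ny)
```

With `age_factor > 1` an individual keeps a larger distance from others; with `age_factor < 1` the repulsion weakens, one plausible way to encode age-dependent movement.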
Monday 3:30 PM – 5:00 PM
Human Factors II
Chair: Helena Szczerbicka (University of Hannover)
A Simplified Modeling Approach for Human System Interaction
Torbjörn Per Edvin Ilar (Luleå University of Technology)
Despite an increasing dependency on technology, the importance of humans is expected to grow, and to provide a realistic basis for decision support, both technical and organizational processes must be included in simulation models. In a rapidly changing environment, skill development is also important and should be considered when developing simulation models. This paper describes a straightforward three-step method that supports the implementation of supporting (human-dependent) processes, operator competence, and competence development using learning curves, in order to obtain a more representative simulation of system performance. The effect on the performance measures is also demonstrated based on a case study of a highly automated press line. The advantages and disadvantages of the proposed method are also discussed.
MMOHILS: A Simpler Approach to Valid Agents in Human Simulation Studies
Seth N. Hetu and Gary Tan (National University of Singapore)
A novel technique for accurately and inexpensively simulating large numbers of people is introduced: Massively Multiplayer Online Human In the Loop Simulation (MMOHILS). This technique is applicable to certain simulations that would normally use AI-based agents to model human behavior, and its validation techniques are substantially simpler. Prototypes for two types of MMOHILS (experimental and unannounced) are laid out in this paper, with examples given from a prototype currently in development for use in egress analysis.
Modelling and Simulation of Team Effectiveness Emerged from Member-Task Interaction
Shengping Dong, Bin Hu, and Jiang Wu (School of Management, Huazhong University of Science and Technology)
A team's task process consists of the allocation, processing, and evaluation of a series of tasks. Team effectiveness emerges from interactions among team members. Interactions between the task process and members occur when the team allocates and processes tasks. To address the effect of these interactions on team effectiveness, multi-agent based modelling and simulation is used to develop a multi-agent model of the team's task process, in which we put forward a member relation degree and a member-task matching degree to describe, respectively, the social relations existing in a team and how members' competence matches tasks' demands. We implement the model in Repast J and conduct experiments to validate it, using the face validation technique in an actual Chinese team. The implications of the model for the team are discussed, some suggestions are offered, and conclusions are given at the end.
Tuesday 8:30 AM – 10:00 AM
Chair: Gabriel Wainer (Carleton University)
Design Guidelines for Simulation Building Blocks
Alexander Verbraeck (TU Delft) and Edwin Valentin (Systems Navigator)
Component-based or building-block-based simulation model development is regularly mentioned as an interesting new development and a potential field of research. Most commercial simulation environments offer their users functions to group model constructs and upgrade them into advanced model constructs that can be used in future simulation studies. Unfortunately, the created model constructs are rarely reused and often stop being used after the first simulation study. In this paper we describe a list of guidelines to consider in the design of building blocks that enhance their reusability and flexibility, so that a simulation building block can be used in multiple simulation studies, including by model developers who were not involved in its design.
Extending DEVS to Support Multiple Occurrence in Component-based Simulation
Olivier Dalle (Université de Nice - Sophia Antipolis, CNRS and INRIA), Bernard P. Zeigler (University of Arizona) and Gabriel A. Wainer (Carleton University)
This paper presents a new extension of the DEVS formalism that allows multiple occurrences of a given instance of a DEVS component. This paper is a follow-up to a previous short paper in which the issue of supporting a new construction called a shared component was raised in the case of a DEVS model. In this paper, we first demonstrate formally that the multi-occurrence extended definition, which includes the case of shared components, is valid, because any model built using this extended definition admits an equivalent model built using standard DEVS. Then we recall the benefits of sharing components for modeling, and extend this analysis to the simulation area by investigating how shared components can help to design better simulation engines. Finally, we describe an existing implementation of simulation software that fully supports this shared-component feature at both the modeling and simulation levels.
Definition and Analysis of Composition Structures for Discrete-Event Models
Mathias Röhl and Adelinde M. Uhrmacher (University of Rostock)
The reuse of a model by someone other than the original developer is still an open challenge. This paper presents composition structures and interface descriptions for discrete-event models. Interfaces are introduced as separate units of description that complement model definitions. As XML documents, interfaces may be stored in databases to search, select, and analyze composition candidates based on publicly visible property descriptions. A meta-model formalizes interfaces, components, and compositions, such that the refinement of interfaces into model implementations and the compatibility of interfaces can be analyzed. The composition approach combines different hierarchical relations (type hierarchies, refinement hierarchies, and composition hierarchies) to simplify the modeling process.
Tuesday 10:30 AM – 12:00 PM
Conceptual Modeling I
Chair: Stewart Robinson (University of Warwick)
Conceptual Modelling: Knowledge Acquisition and Model Abstraction
Kathy Kotiadis and Stewart Robinson (Warwick Business School)
Conceptual modelling has gained a lot of interest in recent years and simulation modellers are particularly interested in understanding the processes involved in arriving at a conceptual model. This paper contributes to this understanding by discussing the artifacts of conceptual modelling and two specific conceptual modelling processes: knowledge acquisition and model abstraction. Knowledge acquisition is the process of finding out about the problem situation and arriving at a system description. Model abstraction refers to the simplifications made in moving from a system description to a conceptual model. Soft Systems Methodology has tools that can help a modeller with knowledge acquisition and model abstraction. These tools are drawing rich pictures, undertaking analyses ‘one’, ‘two’, ‘three’, and constructing a root definition and the corresponding purposeful activity model. The use of these tools is discussed with respect to a case study in health care.
Accomplishing Reuse with a Simulation Conceptual Model
Osman Balci and James D. Arthur (Virginia Tech) and Richard E. Nance (Orca Computer, Inc.)
Reuse has been very difficult or in some cases impossible in the Modeling and Simulation (M&S) discipline. This paper focuses on how reuse can be accomplished by using a conceptual model (CM) in a community of interest (COI). We address the issue of reuse in a multifaceted manner covering many areas (types) of M&S such as discrete, continuous, Monte Carlo, system dynamics, gaming-based, and agent-based. M&S is commonly employed and reuse is critically needed by many COIs such as air traffic control, automobile manufacturing, ballistic missile defense, business process reengineering, emergency response management, military training, network-centric operations and warfare, supply chain management, telecommunications, and transportation. We present how a CM developed for a COI can assist in reuse for the design of any type of large-scale complex M&S application in that COI. A CM becomes an asset for a COI and offers significant economic benefits through its effective reuse.
Mathematical Models Towards Self-organizing Formal Federation Languages Based on Conceptual Models of Information Exchange Capabilities
Andreas Tolk (Old Dominion University) and Saikou Y Diallo and Charles D Turnitsa (Virginia Modeling Analysis & Simulation Center)
Conceptual models capture information that is crucial for the composability of legacy solutions but is not formally captured in the derived technical artifacts. It is necessary to make this information available for the selection (or elimination) of available solutions, their orchestration, and their execution. Current standards barely address this class of problems. The approach presented in this paper is a first step towards self-organizing federation languages. The system interfaces are described in the form of exchangeable data. The context of information exchange (syntax, semantics, and pragmatics) is captured as metadata. These metadata are used to identify the elements of a formal federation language that links model composability and simulation interoperability based on conceptual model elements. The paper describes the formal process of selection, orchestration, and execution and the underlying mathematics for the information exchange specifications that bridge the conceptual and engineering levels of the federation process.
Tuesday 1:30 PM – 3:00 PM
Conceptual Modeling II
Chair: Kathy Kotiadis (University of Warwick)
Conceptual Simulation Modeling: The Structure of Domain Specific Simulation Environment
Kitti Setavoraphan and Floyd H. Grant (University of Oklahoma)
This study focuses on the development of a conceptual simulation modeling tool that can be used to structure a domain specific simulation environment. Issues in Software Engineering and Knowledge Engineering, such as object-oriented concepts and knowledge representations, are addressed to identify and analyze modeling frameworks and patterns of a specific problem domain. Thus, its structural and behavioral characteristics can be conceptualized and described in terms of simulation architecture and context. Moreover, symbols, notations, and diagrams are developed as a communication tool that creates a blueprint to be seen and recognized by both domain experts and simulation developers, which leads to effective and efficient simulation development for any specific domain.
Combined Use of Modeling Techniques for the Development of the Conceptual Model in Simulation Projects
José Arnaldo Barra Montevechi and Rafael Florêncio da Silva Costa (Universidade Federal de Itajubá), Fabiano Leal, Alexandre Ferreira de Pinho, and Fernando Augusto Silva Marins (Universidade Estadual Paulista), José Tadeu de Jesus (PadTec) and Fábio Ferreira Marins (Universidade Federal de Santa Catarina)
The objective of this paper is to utilize the SIPOC, flowchart, and IDEF0 modeling techniques in combination to elaborate the conceptual model of a simulation project. It is intended to identify the contribution of these techniques to the elaboration of the computational model. To illustrate such an application, a practical case of a high-end technology enterprise is presented. The paper concludes that the proposed approach eases the elaboration of the computational model.
Experience in the Broadening of a Single-Purpose Simulation Model
Reid L Kress, Pete Bereolos, Karen Bills, James Clinton, Jack Dixon, Phil Dunn, Julie Moore, and Rob Wilson (BW Y-12)
Simulation models are often developed for a single purpose. However, once a model is accepted by management and other stakeholders, it is quite common and desirable to wish to broaden the application of the model to several areas. This is not always a straightforward evolution, because a model designed to evaluate one performance measure may not be well suited for others. This paper summarizes the experience of the Y-12 National Security Complex's simulation modeling group in broadening its equipment-scoping simulation model into a model that could examine plant mass balance, model internal products including complex feedback loops, include chemical sampling and analysis, evaluate in-process storage, and perform basic scheduling analyses. The effort was successful, and the paper concludes that single-purpose-model broadening can be achieved with the correct mix of planning and execution.
Tuesday 3:30 PM – 5:00 PM
Chair: George Riley (Georgia Institute of Technology)
Distributed Multi-layered Workload Synthesis for Testing Stream Processing Systems
Eric Bouillet, Parijat Dube, David George, Zhen Liu, Dimitrios Pendarakis, and Li Zhang (IBM T. J. Watson Research Center)
Testing and benchmarking of stream processing systems requires workloads representative of real-world scenarios, with myriad users interacting through different applications, over different modalities, and with different underlying protocols. The workload should have realistic volumetric and contextual statistics at different levels: user level, application level, packet level, etc. Further, realistic workload is inherently distributed in nature. We present a scalable framework for the synthesis of distributed workload based on identifying different layers of workload corresponding to different time scales. The architecture is extensible and modular, promotes reuse of libraries at different layers, and offers the flexibility to add plug-ins at different layers without sacrificing efficiency.
A Methodology for Unit Testing Actors in Proprietary Discrete Event Based Simulations
Mark E Coyne, Scott R Graham, Kenneth Mark Hopkinson, and Stuart H Kurkowski (Air Force Institute of Technology)
This paper presents a dependency-injection-based methodology for unit testing the components, or actors, involved in discrete event based computer network simulation via an xUnit testing framework. The fundamental purpose of discrete event based computer network simulation is the verification of networking protocols used in physical (not simulated) networks. Thus, the use of rigorous unit testing and test-driven development methodologies mitigates the risk of modeling the wrong system. We validate the methodology through the design and implementation of OPNET-Unit, an xUnit-style unit testing application for an actor-oriented discrete event based network simulation environment, OPNET Modeler.
Measuring the Effectiveness of the S-Metric to Produce Better Network Models
Isabel Beichl and Brian Cloteaux (National Institute of Standards and Technology)
Recent research has shown that while many complex networks follow a power-law distribution for their node degrees, it is not sufficient to model these networks based only on their degree distribution. In order to better distinguish between these networks, the metric s was introduced to measure how interconnected the hub nodes of a network are. We examine the effectiveness of creating network models based on this metric. Through a series of computational experiments, we compare how well a set of common structural network metrics is preserved between instances of the autonomous system topology and a series of random models with identical degree sequences and similar s values. We demonstrate that creating models based on the s metric produces a moderate improvement in structural characteristics over strictly using the degree distribution. Our results also indicate that some interesting relationships exist between the s metric and the various structural metrics.
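The s metric is commonly defined as the sum, over all edges, of the product of the endpoint degrees, so graphs whose hubs connect to each other score higher. A minimal sketch:

```python
from collections import Counter

def s_metric(edges):
    """s metric of a graph: sum over all edges (u, v) of deg(u) * deg(v).

    A high value means high-degree nodes tend to attach to each other
    (interconnected hubs); a low value means hubs attach mostly to
    low-degree nodes.
    """
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return sum(deg[u] * deg[v] for u, v in edges)
```

For example, a star on four nodes (every edge touching the degree-3 hub) gives s = 9, while a triangle (all degrees 2) gives s = 12.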
Wednesday 8:30 AM – 10:00 AM
Parallel and Distributed Simulation I
Chair: Stephen Onggo (Lancaster University)
An Application of Parallel Monte Carlo Modeling for Real-Time Disease Surveillance
David W Bauer Jr (The MITRE Corporation)
Global health, threatened by emerging infectious diseases, pandemic influenza, and biological warfare, is becoming increasingly dependent on the rapid acquisition, processing, integration, and interpretation of massive amounts of data. In response to these pressing needs, new information infrastructures are needed to support active, real-time surveillance. Detection algorithms may have a high computational cost in both the time and space domains, and high performance computing platforms may be the best approach for efficiently computing these algorithms. Unfortunately, these platforms are unavailable to many health care agencies. Our work focuses on the efficient parallelization of outbreak detection algorithms within the context of cloud computing as a high throughput computing platform. Cloud computing is investigated as an approach to meet real-time constraints and to reduce or eliminate the costs associated with real-time disease surveillance systems.
Partial-Modular DEVS for Improving Performance of Cellular Space Wildfire Spread Simulation
Yi Sun and Xiaolin Hu (Georgia State University)
Simulation of wildfire spread remains a challenging task. In previous work, a cellular space fire spread simulation model was developed based on the Discrete Event System Specification (DEVS) formalism. There is a need to improve the simulation performance of this model for simulating large-scale wildfires. This paper develops a partial-modular implementation of the DEVS-based cellular space model that eliminates the large number of inter-cell message exchanges in order to improve simulation performance. Both the modular and partial-modular approaches are presented and experimental results are provided. The results show that the partial-modular implementation can significantly improve the simulation performance of the cellular space wildfire spread model.
Parallel Discrete-Event Simulation of Population Dynamics
Bhakti Stephan Onggo (Lancaster University)
Research in parallel simulation has been around for more than two decades. However, the number of papers reporting on its application to real-world problems is limited. At the 2002 PADS conference, researchers discussed the need to go beyond synchronization and performance issues and, in particular, to demonstrate that parallel simulation could be used in real-world applications outside military and network simulations. Since then, we have seen an increase in the number of papers on parallel simulation applications in areas such as operations management and the physical sciences. This paper advocates the application of parallel simulation to population dynamics, which is often used as the basis for policy planning and analysis. We show how the simulation model is implemented using the musik parallel simulation library, and we conduct experiments to measure the simulation performance. The results show that good event parallelism can be achieved.
Wednesday 10:30 AM – 12:00 PM
Parallel and Distributed Simulation II
Chair: Steffen Strassburger (Technical University of Ilmenau)
Deferred vs. Immediate Modification of Simulation State in a Parallel Discrete Event Simulator Using Threaded Worker Pools
David Wayne Mutschler (Naval Air Systems Command)
The Joint Integrated Mission Model (JIMM) is a real-time legacy battlefield simulator employed in detailed analyses and virtual exercises. To leverage more processors to improve real-time execution, a worker pool of threads optimistically executes events in parallel but avoids cascading rollback by executing only one future event per simulated object. Safeguards for the maintenance of simulation state are programmed explicitly, and either deferred or immediate modification of state variables can be employed in case of event rollback. In the beginning of the main parallelization effort, deferred modification was used, where simulation state is updated only when the event can be completed safely. However, after successful implementation, it was determined to be impractical. Later, all safeguard programming employed immediate modification, where the original state is restored in case of rollback. This paper discusses these techniques for the parallel execution of events in JIMM, from initial efforts through later code maintenance.
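The two safeguard styles the abstract contrasts can be sketched generically. The classes below are an illustrative Python sketch, not JIMM code: deferred modification buffers writes until the event commits, while immediate modification applies writes at once and records undo information for rollback.

```python
class DeferredState:
    """Writes go to a buffer and are applied only on commit;
    rollback simply discards the buffer."""
    def __init__(self, state):
        self.state = state      # committed simulation state
        self.pending = {}       # writes made by the in-flight event
    def write(self, key, value):
        self.pending[key] = value
    def read(self, key):
        return self.pending.get(key, self.state[key])
    def commit(self):
        self.state.update(self.pending)
        self.pending.clear()
    def rollback(self):
        self.pending.clear()

class ImmediateState:
    """Writes hit the state at once; the original value is saved
    so a rollback can restore it."""
    def __init__(self, state):
        self.state = state      # live simulation state
        self.undo = {}          # original values for rollback
    def write(self, key, value):
        self.undo.setdefault(key, self.state[key])
        self.state[key] = value
    def read(self, key):
        return self.state[key]
    def commit(self):
        self.undo.clear()
    def rollback(self):
        for key, old in self.undo.items():
            self.state[key] = old
        self.undo.clear()
```

Deferred modification makes rollback trivial but adds lookup overhead to every read; immediate modification keeps reads cheap at the cost of undo bookkeeping on writes, which matches the trade-off the paper reports encountering.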
Dynamic Entity Distribution in Parallel Discrete Event Simulation
Michael Slavik, Imad Mahgoub, and Ahmed Badi (Florida Atlantic University)
Event-based simulations are an important scientific application in many fields. With the rise of cluster computing, distributed event simulation optimization has become an essential research topic. This paper identifies cross-node event queues as a major source of slowdown in practical parallel event simulations and proposes dynamically moving entities between nodes to minimize such remote event queues. The problem statement is formalized, and an algorithm based on an approximation algorithm for the Capacitated Minimum K-Cut Problem is proposed. The algorithm is simulated, and results are presented that show its effectiveness. For simulations with reasonably regular structural relationships between entities, reductions in remote event queues of 80 to 90% are demonstrated.
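The underlying goal, placing communicating entities on the same compute node so that fewer events cross node boundaries, can be illustrated with a far simpler heuristic than the paper's capacitated-k-cut-based algorithm. The following toy sketch performs one greedy migration pass and ignores node capacities entirely.

```python
from collections import defaultdict

def greedy_rebalance(assignment, edges):
    """One greedy pass of entity migration between compute nodes.

    `assignment` maps entity -> node; `edges` are (entity, entity)
    communication pairs. Each entity moves to the node hosting most
    of its partners when that strictly reduces its remote edges.
    This is an uncapacitated toy heuristic, not the paper's algorithm.
    """
    neighbors = defaultdict(list)
    for u, v in edges:
        neighbors[u].append(v)
        neighbors[v].append(u)
    for entity, node in list(assignment.items()):
        counts = defaultdict(int)
        for partner in neighbors[entity]:
            counts[assignment[partner]] += 1
        if counts:
            best = max(counts, key=counts.get)
            if counts[best] > counts[node]:
                assignment[entity] = best   # fewer remote events now
    return assignment
```

A real scheme must also bound the load per node (the "capacitated" part of the k-cut formulation); without that constraint this heuristic can collapse all entities onto one node.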
Quantitative Assessment of an Agent-Based Simulation on a Time Warp Executive
George Vulov, Tianhao He, and Maria Hybinette (University of Georgia)
We recently introduced SASSY, the design for a hybrid simulator that provides an agent-based API atop a PDES kernel (Hybinette et al. 2006). Our hypothesis is that a design like SASSY offers the advantages of an agent-based paradigm to the application developer while also providing the performance advantages of a PDES kernel. Since the time of our initial publication, most aspects of SASSY's design have been implemented, and we are now assessing our hypotheses (e.g., He and Hybinette 2008). In this paper we investigate performance advantages for a simple agent-based application on SASSY. In most cases, agent-based simulation environments are configured using a time-step approach, where the simulation proceeds in discrete steps. We evaluate the performance of a simple application running in a traditional time-step simulation, and also its performance when running on SASSY with PDES support.