Title: Performance analysis of pools in soft real-time design architectures
Authors: C. Juiz, R. Puigjaner, H. Perros
Pub Date: 2001-08-15 | DOI: 10.1109/MASCOT.2001.948849
Published in: MASCOTS 2001, Proceedings Ninth International Symposium on Modeling, Analysis and Simulation of Computer and Telecommunication Systems
Abstract: Architectures for the design of soft real-time systems typically do not facilitate the use of performance tools that would enable a designer to analyse the performance of the system being designed. Intercommunication data areas, whose behaviours are characterised by interaction protocols, are the building blocks of many soft real-time systems. The interaction protocols are grouped into families depending on their reading/writing functionality. We present two members of the pool family. These components transfer data among tasks without synchronisation, in a non-selective manner. In the basic pool model no task has priority over other tasks, whereas in the priority pool model writer tasks have non-preemptive priority over reader tasks. Both queueing models are analysed approximately using a decomposition technique. Basic pool modelling is based on single-server and multi-server semaphore queues; priority pool modelling is inspired by classical non-preemptive priority analysis.
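The priority pool analysis leans on classical non-preemptive priority queueing. As an illustrative sketch only (the paper's actual model uses semaphore queues and a decomposition technique), the textbook mean waiting times for a non-preemptive priority M/M/1 queue, with writers as the high-priority class, can be computed as:

```python
def nonpreemptive_priority_waits(arrival_rates, service_rates):
    """Mean waiting times for a non-preemptive priority M/M/1 queue.

    Classes are listed from highest to lowest priority, each with
    Poisson arrivals and exponential service (so E[S^2] = 2/mu^2).
    Uses the standard formula W_k = R / ((1 - s_{k-1}) * (1 - s_k)),
    where R is the mean residual service time over all classes and
    s_k is the cumulative utilisation of classes 1..k.
    """
    # Mean residual service time: R = sum_i lambda_i * E[S_i^2] / 2
    R = sum(lam * (2.0 / mu**2) for lam, mu in zip(arrival_rates, service_rates)) / 2.0
    waits, cum_util = [], 0.0
    for lam, mu in zip(arrival_rates, service_rates):
        prev = cum_util
        cum_util += lam / mu
        assert cum_util < 1.0, "queue must be stable"
        waits.append(R / ((1.0 - prev) * (1.0 - cum_util)))
    return waits

# Writers (class 0) arriving at rate 0.2 and readers (class 1) at
# rate 0.3, both served at rate 1.0: writers wait less than readers.
w_writer, w_reader = nonpreemptive_priority_waits([0.2, 0.3], [1.0, 1.0])
```

The rates above are hypothetical; the point is only that giving writers non-preemptive priority shortens their wait at the readers' expense.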
Title: Simulation evaluation of a heterogeneous Web proxy caching hierarchy
Authors: M. Busari, C. Williamson
Pub Date: 2001-08-15 | DOI: 10.1109/MASCOT.2001.948890
Abstract: This paper uses trace-driven simulations to evaluate the performance of different cache management techniques for multi-level Web proxy caching hierarchies. In particular, the experiments consider heterogeneous cache replacement policies within a two-level caching hierarchy, and size-based partitioning across the levels of a caching hierarchy. Three different synthetic Web proxy workloads are used in the study, reflecting complete overlap, partial overlap, and no overlap in the workloads seen by the child-level proxies. The simulation results demonstrate that heterogeneous replacement policies and size-based partitioning each offer modest improvements in caching performance. The sensitivity of the results to the degree of workload overlap is also discussed.
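The hierarchy under study can be illustrated with a toy model. The following is a minimal sketch (not the authors' simulator) of LRU replacement at a child and a parent proxy, counting hits at each level of the hierarchy:

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache: access() returns True on a hit and
    inserts the object on a miss, evicting the least recently used
    entry when full."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()

    def access(self, key):
        if key in self.items:
            self.items.move_to_end(key)      # mark most recently used
            return True
        if len(self.items) >= self.capacity:
            self.items.popitem(last=False)   # evict least recently used
        self.items[key] = True
        return False

def run_hierarchy(requests, child, parent):
    """Drive a child/parent proxy pair; count hits at each level."""
    child_hits = parent_hits = 0
    for url in requests:
        if child.access(url):
            child_hits += 1
        elif parent.access(url):   # child miss falls through to parent
            parent_hits += 1
        # on a parent miss the object comes from the origin server;
        # access() has already inserted it at both levels
    return child_hits, parent_hits
```

Swapping in a different replacement policy at either level (the heterogeneous case the paper studies) only requires replacing one of the cache objects.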
Title: A queueing model for pipelined circuit-switched networks with the MMPP traffic
Authors: G. Min, M. Ould-Khaoua
Pub Date: 2001-08-15 | DOI: 10.1109/MASCOT.2001.948876
Abstract: The Markov-modulated Poisson process (MMPP) has been extensively used for modelling bursty traffic generated by multimedia applications because it is able to model the time-varying arrival rate and capture the important correlation between inter-arrival times. However, interconnection networks used in current multicomputers have been mainly analysed under the assumption of the "non-bursty" Poisson arrival process. This study extends further the application of the MMPP to queueing networks in order to develop a new analytical model for computing message latency in pipelined circuit-switched interconnection networks. Simulation experiments reveal that the model exhibits a good degree of accuracy under various operating conditions.
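A two-state MMPP of the kind used to model bursty traffic is easy to simulate. This sketch (illustrative only; the rates and switching parameters are hypothetical) uses the competing-exponentials view of the process, where the sooner of the next arrival and the next state switch happens first:

```python
import random

def mmpp2_arrivals(n, rate1, rate2, switch12, switch21, seed=42):
    """Generate n arrival times from a 2-state MMPP.

    In state 1 arrivals are Poisson with rate1; in state 2 with
    rate2. Sojourn times in states 1 and 2 are exponential with
    rates switch12 and switch21 respectively.
    """
    rng = random.Random(seed)
    t, state = 0.0, 1
    arrivals = []
    while len(arrivals) < n:
        lam = rate1 if state == 1 else rate2
        q = switch12 if state == 1 else switch21
        dt_arrival = rng.expovariate(lam)   # time to next arrival
        dt_switch = rng.expovariate(q)      # time to next state switch
        if dt_arrival < dt_switch:
            t += dt_arrival
            arrivals.append(t)
        else:
            t += dt_switch
            state = 2 if state == 1 else 1
    return arrivals
```

Making rate2 much larger than rate1 while keeping the switching rates small produces the long bursts and correlated inter-arrival times that a plain Poisson process cannot capture.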
Title: On-line simulation techniques for real-time management of systems
Authors: R. Fujimoto
Pub Date: 2001-08-15 | DOI: 10.1109/MASCOT.2001.948887
Abstract: Summary form only given, as follows. On-line simulation refers to the use of simulation tools to aid in the management of operational systems. For example, faster-than-real-time simulation of the air transportation network can be used to aid air traffic controllers in managing traffic flows to reduce congestion and improve safety. Live feeds from air traffic control centers are used to build a situation database indicating the current status of the transportation network. This database is used to initialize faster-than-real-time simulations that explore the impact of decisions made by traffic controllers in order to determine appropriate courses of action. Other applications of on-line simulation include managing military engagements, communication networks, and supply chains. I will describe techniques that we have developed to realize efficient on-line simulations. Parallel execution is used to achieve fast execution of simulation models. Techniques to incrementally clone running parallel simulations enable rapid, concurrent evaluation of alternate courses of action in order to aid decision-making processes. Our experiences in applying these techniques to applications such as air traffic control are described.
Title: A modular, analytical throughput model for modern disk arrays
Authors: Mustafa Uysal, G. A. Alvarez, A. Merchant
Pub Date: 2001-08-15 | DOI: 10.1109/MASCOT.2001.948868
Abstract: Enterprise storage systems depend on disk arrays for their capacity and availability needs. To design and maintain storage systems that efficiently satisfy evolving requirements, it is critical to be able to evaluate configuration alternatives without having to physically implement them. In this paper, we describe an analytical model to predict disk array throughput, based on a hierarchical decomposition of the internal array architecture. We validate the model against a state-of-the-art disk array for a variety of synthetic workloads and array configurations. To our knowledge, no previously published analytical model has either incorporated the combined effects of the complex optimizations present in modern disk arrays, or been validated against a real, commercial array. Our results are quite encouraging for an analytical model: predictions are within 32% of the observed array performance in most cases (15% on average) for our set of experiments.
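The bottleneck reasoning underlying such throughput models can be shown with a deliberately simplified operational-analysis sketch. This is not the paper's model (which decomposes a real array's internals and its optimizations); the component names and demand values here are hypothetical:

```python
def array_throughput_bound(controller_demand, bus_demand, disk_demand, n_disks):
    """Upper bound on array throughput (I/Os per second) from per-I/O
    service demands. Each I/O consumes controller and bus time, while
    disk work is assumed to spread evenly over n_disks drives, so the
    per-disk demand scales as 1/n_disks. The bottleneck component
    limits throughput to 1 / max(demand)."""
    demands = {
        "controller": controller_demand,
        "bus": bus_demand,
        "disk": disk_demand / n_disks,
    }
    bottleneck = max(demands, key=demands.get)
    return 1.0 / demands[bottleneck], bottleneck

# With 8 disks the disks dominate; adding disks eventually shifts
# the bottleneck to the controller.
iops, component = array_throughput_bound(0.0002, 0.0001, 0.008, 8)
```

A hierarchical model refines each of these flat demands into its own sub-model, which is the spirit of the decomposition the paper describes.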
Title: Modeling and analysis of an ABR flow control algorithm for a virtual source/virtual destination switch
Authors: C. Cseh
Pub Date: 2001-08-15 | DOI: 10.1109/MASCOT.2001.948893
Abstract: The ABR service class of ATM networks uses a feedback control mechanism to adapt to varying link capacities. The VS/VD technique offers the possibility to segment the otherwise end-to-end ABR control loop into separate loops. The improved feedback delay and the control of ABR traffic inside closed segments provide better performance for ABR connections. This paper presents the use of classical linear control theory to model and develop an ABR VS/VD flow control algorithm. Discrete event simulation is used to analyze the behavior of the algorithm with respect to transient behavior and correctness of the control model. Linear control theory offers the means to derive correct choices of parameters and to assess performance issues, such as stability of the system, during the design phase. The performance goals are high link utilization, fair bandwidth distribution and robust operation in various environments, which are verified by discrete event simulations. The major contribution of this work is the use of analytic methods (linear control theory) to model and design an ABR flow control algorithm tailored for the special layout of a VS/VD switch, and the use of simulation techniques to verify the result.
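The linear-control view can be given a flavour with a toy discrete-time loop. This sketch is not the paper's VS/VD algorithm; it shows a generic proportional controller steering a queue toward a target length, together with the kind of stability condition (0 < gain*dt < 2 for this loop) that linear analysis delivers at design time:

```python
def simulate_rate_loop(capacity=100.0, q_target=10.0, gain=0.5, steps=40, dt=1.0):
    """Toy queue-based rate controller.

    The explicit rate is link capacity plus a proportional correction
    toward a target queue length; the queue integrates the rate
    mismatch. The closed loop is q[k+1] = q[k] + gain*(q_target - q[k])*dt,
    which converges geometrically when 0 < gain*dt < 2.
    """
    q = 0.0
    rate = 0.0
    for _ in range(steps):
        rate = capacity + gain * (q_target - q)    # controller
        q = max(0.0, q + (rate - capacity) * dt)   # queue dynamics
    return rate, q
```

In steady state the rate settles at the link capacity (full utilization) while the queue sits at its target, which is exactly the operating point such controllers are designed around.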
Title: An evaluation of grouping techniques for state dissemination in networked multi-user games
Authors: L. Zou, M. Ammar, C. Diot
Pub Date: 2001-08-15 | DOI: 10.1109/MASCOT.2001.948851
Abstract: In a distributed multi-user game, entities need to communicate their state information to other entities. Usually only a subset of the game's entities are interested in the information being disseminated by any particular entity. In a large-scale distributed game, broadcasting each entity's state information to all participants and applying a relevance filter at the end host is wasteful of both network and processing resources. We consider techniques that address this problem by dividing the entities into groups and using multicast communication to disseminate information to the groups interested in it. We investigate two grouping strategies: cell-based grouping and entity-based grouping. Our goal is to understand the tradeoffs between grouping overhead and communication overhead and to compare the cost of both strategies under various conditions.
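Cell-based grouping can be sketched directly: positions map to grid cells, each cell corresponds to one multicast group, and a player subscribes to the cells overlapping its area of interest. A minimal illustration (the cell size and coordinates are hypothetical, and the circular area of interest is approximated by its bounding square):

```python
def cell_of(pos, cell_size):
    """Map a 2D position to its grid cell; each cell corresponds to
    one multicast group."""
    x, y = pos
    return (int(x // cell_size), int(y // cell_size))

def groups_for_interest(center, radius, cell_size):
    """Cells (multicast groups) overlapping the bounding square of a
    circular area of interest around `center`."""
    cx, cy = center
    x0, y0 = cell_of((cx - radius, cy - radius), cell_size)
    x1, y1 = cell_of((cx + radius, cy + radius), cell_size)
    return {(i, j) for i in range(x0, x1 + 1) for j in range(y0, y1 + 1)}

# An entity at (5, 5) publishes its state to the group for cell (0, 0);
# a player centered at (12, 12) with interest radius 8 subscribes to
# the cells covering [4, 20] x [4, 20] and therefore receives it.
publisher_cell = cell_of((5, 5), cell_size=10)
subscriptions = groups_for_interest((12, 12), radius=8, cell_size=10)
```

Entity-based grouping, the alternative the paper evaluates, would instead assign a group per entity (or per entity cluster), trading subscription churn for delivery precision.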
Title: PTPlan MPLS: a tool for MPLS network dimensioning
Authors: Luís Cardoso, Jorge Patrão, Carlos B. Lopes, A. Sousa, R. Valadas
Pub Date: 2001-08-15 | DOI: 10.1109/MASCOT.2001.948885
Abstract: This paper presents a tool for MPLS network dimensioning that supports multi-hour dimensioning of networks that simultaneously carry peer-to-peer and client-server services. The dimensioning model is able to take into account several LSP attributes: degree of survivability (link-disjoint and node-disjoint cases), maximum hop count, usable colours and preferred routes. The dimensioning problem is a combined capacity design and routing problem in which the LSP sets are calculated so as to minimise the network operational costs. This problem is formulated as an integer programming problem, which is solved through a heuristic based on Lagrangean relaxation with sub-gradient optimisation. The network design tool, named PTPlan MPLS, includes a graphical interface for easy entry and editing of network parameters. Results show that the tool can design networks of realistic size in seconds on a standard PC platform.
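The Lagrangean-relaxation-with-subgradient approach can be illustrated on a toy problem. This sketch is not the PTPlan MPLS heuristic; it maximises the Lagrangean dual of a small 0/1 knapsack written as a minimisation, where weak duality guarantees every dual value L(lam) is a valid lower bound on the optimum:

```python
def lagrangean_bound(values, weights, capacity, steps=200, step0=1.0):
    """Subgradient maximisation of the Lagrangean dual of
    min -value.x subject to weight.x <= capacity, x in {0,1}^n.

    For a fixed multiplier lam >= 0 the relaxed problem decomposes
    per item: take item i iff lam*w_i - v_i < 0. The multiplier is
    updated along the subgradient (capacity violation) with a
    diminishing step size."""
    lam, best = 0.0, float("-inf")
    for k in range(1, steps + 1):
        # Solve the relaxation for the current multiplier.
        x = [1 if lam * w - v < 0 else 0 for v, w in zip(values, weights)]
        L = sum((lam * w - v) * xi
                for v, w, xi in zip(values, weights, x)) - lam * capacity
        best = max(best, L)
        g = sum(w * xi for w, xi in zip(weights, x)) - capacity  # subgradient
        lam = max(0.0, lam + (step0 / k) * g)                    # projected step
    return best
```

In the dimensioning tool the relaxed constraints are link capacities and the per-multiplier subproblems are LSP routing problems, but the dual update has this same shape.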
Title: Performing file prediction with a program-based successor model
Authors: Tsozen Yeh, D. Long, S. Brandt
Pub Date: 2001-08-15 | DOI: 10.1109/MASCOT.2001.948869
Abstract: Increases in CPU performance have surpassed those in hard drives. As a result, disk operations have become more expensive in terms of the number of CPU cycles spent waiting for them to complete. File prediction can mitigate this problem by prefetching files into cache before they are accessed. Identifying relationships between individual files plays a key role in successfully performing file prefetching. It is well known that previous patterns of file references can be used to predict future references. Nevertheless, knowledge about the programs producing the relationships between individual files has rarely been investigated. We present a program-based successor (PBS) model that identifies relationships between files through the names of the programs accessing them. We develop a program-based last successor (PLS) model derived from PBS to do file prediction. Our simulation results show that PLS makes 21% fewer incorrect predictions and roughly the same number of correct predictions as the last-successor (LS) model. We also examine the cache hit ratio achieved by applying PLS to the least recently used (LRU) caching algorithm and show that a cache using PLS and LRU together can perform better than a cache up to 40 times larger using LRU alone. Finally, we argue that because program-based successors are more likely to be used soon, even incorrectly prefetched program-based successors are more likely to be used eventually, making such mispredictions less wasteful than files incorrectly prefetched by non-program-based models.
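The PLS idea is simple enough to sketch directly: per (program, file) pair, remember the file that followed last time and predict it on the next access. A minimal illustration (not the authors' implementation; the program and file names are hypothetical):

```python
class PLSPredictor:
    """Program-based last-successor sketch: the prediction key is the
    (program, file) pair rather than the file alone, so different
    programs accessing the same file keep separate successor tables."""
    def __init__(self):
        self.last_successor = {}   # (program, file) -> file that followed
        self.prev = {}             # program -> file it accessed last

    def on_access(self, program, filename):
        """Record that `filename` followed the program's previous file,
        then return the predicted successor of `filename` (or None)."""
        if program in self.prev:
            self.last_successor[(program, self.prev[program])] = filename
        self.prev[program] = filename
        return self.last_successor.get((program, filename))

# After gcc is seen opening a.c and then a.h, the next access to a.c
# by gcc predicts a.h, and a.h would be prefetched into the cache.
p = PLSPredictor()
p.on_access("gcc", "a.c")
p.on_access("gcc", "a.h")
prediction = p.on_access("gcc", "a.c")
```

A plain last-successor (LS) model would key the table on the file alone, which is exactly where interleaved accesses by unrelated programs pollute its predictions.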
Title: Effect of event orderings on memory requirement in parallel simulation
Authors: Y. M. Teo, B. Onggo, S. Tay
Pub Date: 2001-08-15 | DOI: 10.1109/MASCOT.2001.948852
Abstract: A new formal approach based on partially ordered set (poset) theory is proposed to analyze the space requirement of discrete-event parallel simulation. We divide the memory required by a simulation problem into memory to model the states of the real-world system, memory to maintain a list of future event occurrences, and memory required to implement the event synchronization protocol. We establish the relationship between poset theory and event orderings in simulation. Based on our framework, we analyze the space requirement using an open and a closed system as examples. Our analysis shows that, apart from problem size and traffic intensity, which affect the memory requirement, event ordering is an important factor that can be analyzed before implementation. In an open system, simulation under a weaker event ordering requires more memory than under strong ordering. In closed systems, however, the memory requirement is constant and independent of event ordering.