Model-Driven Performance Analysis Methodology for Distributed Software Systems

Swapna S. Gokhale, Paul Vandal
Dept. of CSE, Univ. of Connecticut, Storrs, CT
ssg@engr.uconn.edu

Aniruddha Gokhale, Dimple Kaul, Arundhati Kogekar
Dept. of EECS, Vanderbilt Univ., Nashville, TN
a.gokhale@vanderbilt.edu

Jeff Gray, Yuehua Lin
Dept. of CIS, Univ. of Alabama at Birmingham, Birmingham, AL
gray@cis.uab.edu

Abstract

A key enabler of the recently popularized assembly-centric development approach for distributed real-time software systems is QoS-enabled middleware, which provides reusable building blocks in the form of design patterns that codify solutions to commonly recurring problems. These patterns can be customized by choosing an appropriate set of configuration parameters. The configuration options of the patterns exert a strong influence on system performance, which is of paramount importance in many distributed software systems. Despite this considerable influence, there is currently little research on analyzing the performance of middleware at design time, where performance issues can be resolved at a much earlier stage of the application life cycle and at substantially lower cost. This project seeks to develop a methodology for design-time performance analysis of distributed software systems implemented using middleware patterns and their compositions. The methodology is illustrated on a producer/consumer system implemented using the Active Object (AO) pattern in middleware. Finally, broader impacts of the methodology for middleware specialization are also described.

I. INTRODUCTION

Society today is increasingly reliant on the services provided by distributed software systems. These services have become prevalent in many domains, including health care, finance, telecommunications, and avionics. In many of these domains, the performance of a service is just as important as the functionality it provides. To counter the dual pressures of developing systems that offer a rich set of services with good performance while simultaneously reducing their time to market, service providers are increasingly favoring the assembly-centric approach over the traditional development-centric approach.

A key facilitator of this assembly-centric approach has been QoS-enabled middleware [18]. Middleware consists of software layers that provide platform-independent execution semantics and reusable services that coordinate how system components are composed and interoperate. Middleware offers a large number of reusable building blocks in the form of design patterns [4], [20], which codify solutions to commonly recurring problems. These patterns can be customized with an appropriate set of configuration parameters as per system requirements. The choice of configuration parameters has a profound influence on the performance of a pattern, and hence on the performance of a system implemented using the pattern. Despite this influence, which is crucial for many software systems, current methods of selecting patterns and their configuration options are manual and ad hoc, and hence error-prone. The problem is further compounded because no techniques are available to analyze the impact of different configuration parameters on the performance of a pattern prior to building a system. Performance analysis is thus invariably conducted after a system is assembled, when it is often too late and too expensive to take corrective action if a particular selection of patterns and their configuration parameters cannot satisfy the desired performance expectations. The capability to conduct design-time performance analysis of middleware patterns and their compositions is thus necessary, especially for systems with stringent performance requirements.

This project seeks to develop an analysis methodology for design-time performance analysis of a system implemented using middleware patterns. The methodology comprises two steps. The first step consists of formulating and solving performance models of individual middleware patterns. In the second step, strategies to compose the performance models of individual patterns, mirroring their composition, and methods to solve the composite model to estimate system performance are developed. Our goal is to automate these processes via model-driven engineering (MDE) [19], in which the systems developer is provided artifacts that are intuitive and close to their domain for composing systems from building blocks. Generative tools [2] supported by the MDE approach can then automate the synthesis of performance analysis metadata that is subsequently used by back-end analysis tools. The illustration of the first step of the methodology on the Reactor, Proactor, and Active Object (AO) patterns demonstrates the feasibility of conducting performance analysis of a system implemented using a middleware pattern at design time using the model-driven paradigm.

The rest of the paper is organized as follows. Section II provides an overview of the performance analysis process of a middleware pattern. Section III illustrates the process using the AO pattern. Section IV discusses broader impacts of the project. Section V offers concluding remarks and directions for future research.
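As background for the AO-based illustration mentioned above, the sketch below shows the core of the Active Object pattern in a producer/consumer arrangement: callers enqueue method requests, and a dedicated servant thread dequeues and executes them, decoupling invocation from execution. This is a minimal sketch in Python for illustration only; the class and parameter names (ActiveObject, queue_capacity) are ours, and the paper's own illustration is built on the ACE C++ framework rather than in Python.

# Minimal, illustrative Active Object sketch (not the paper's ACE-based implementation).
# Producers enqueue method requests; a single servant thread executes them asynchronously.
import queue
import threading
import time

class ActiveObject:
    def __init__(self, queue_capacity=16):
        # Bounded request queue: its capacity is one of the configuration
        # parameters whose performance impact the methodology targets.
        self._requests = queue.Queue(maxsize=queue_capacity)
        self._servant = threading.Thread(target=self._run, daemon=True)
        self._servant.start()

    def enqueue(self, work, *args):
        # Invocation side: blocks the producer only if the bounded queue is full.
        self._requests.put((work, args))

    def _run(self):
        # Execution side: the servant thread drains the queue.
        while True:
            work, args = self._requests.get()
            work(*args)
            self._requests.task_done()

def consume(item):
    time.sleep(0.01)              # stand-in for request processing time
    print("processed", item)

if __name__ == "__main__":
    ao = ActiveObject(queue_capacity=8)
    for i in range(5):            # producer side enqueues method requests
        ao.enqueue(consume, i)
    ao._requests.join()           # wait until all queued requests have executed

Configuration options such as the request queue capacity and the number of servant threads are exactly the kinds of parameters whose performance impact the methodology aims to predict before the system is assembled.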
II. PERFORMANCE ANALYSIS OF A MIDDLEWARE PATTERN

In this section we discuss the process we follow for the performance analysis of an individual pattern. We describe the steps involved in the process, shown in Figure 1, and how they support each other.

Fig. 1. Performance analysis process of a middleware pattern.

Model formulation: The model formulation step consists of capturing the basic, or invariant, characteristics of a pattern in a performance model. Different modeling paradigms, such as Stochastic Reward Nets (SRNs) [17], Layered Queuing Networks [23], and Colored Petri Nets [11], may be used for this purpose. The performance model can then be solved or simulated using tools such as SPNP [10] and DesignCPN [12]. (A minimal numerical sketch of this step is given after this excerpt.)

Model validation: The performance estimates obtained by solving the performance model are validated in this step using simulation or experimentation. Experimentation is conducted by implementing a system with the pattern on the ACE framework (www.dre.vanderbilt.edu/ACE). Simulation is conducted using a general-purpose simulation language such as CSIM [21]. (A minimal simulation sketch in the same spirit follows the numerical one.)

Model generalization: In this step, the process of formulating the model is generalized to enable the system developer to customize the model according to the system at hand and to
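As a minimal numerical sketch of the model formulation step, the bounded request queue of a pattern such as AO can, under strong simplifying assumptions, be abstracted as an M/M/1/K birth-death chain: Poisson request arrivals at rate lam, exponentially distributed service at rate mu by a single servant, and a capacity of K requests. These assumptions and the code below are ours for illustration; the paper's actual models are SRNs solved with tools such as SPNP and capture considerably richer pattern behavior.

# Toy "model formulation" sketch: closed-form steady-state analysis of an
# M/M/1/K abstraction of a pattern's bounded request queue (single servant).
def mm1k_measures(lam, mu, K):
    rho = lam / mu
    # Steady-state probabilities p[n] of n requests in the system (0 <= n <= K).
    if abs(rho - 1.0) < 1e-12:
        p = [1.0 / (K + 1)] * (K + 1)
    else:
        p0 = (1.0 - rho) / (1.0 - rho ** (K + 1))
        p = [p0 * rho ** n for n in range(K + 1)]
    blocking = p[K]                          # probability an arriving request is rejected
    throughput = lam * (1.0 - blocking)      # accepted-request (and completion) rate
    mean_jobs = sum(n * p[n] for n in range(K + 1))
    mean_response = mean_jobs / throughput   # Little's law on accepted requests
    return {"blocking": blocking, "throughput": throughput,
            "mean_jobs": mean_jobs, "mean_response": mean_response}

if __name__ == "__main__":
    # How does queue capacity (a configuration parameter) affect the estimates?
    for K in (2, 8, 32):
        m = mm1k_measures(lam=80.0, mu=100.0, K=K)
        print(K, {k: round(v, 4) for k, v in m.items()})

Sweeping the capacity K in this way mimics, in miniature, the intended use of the pattern models: comparing configuration options at design time, before the system is built.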

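For the model validation step, the paper relies on experimentation with ACE and simulation in CSIM. As a stand-in, the following plain-Python discrete-event simulation of the same M/M/1/K abstraction can be used to cross-check the analytical estimates from the previous sketch; the parameter names and structure are again ours for illustration, not the paper's CSIM models.

# Toy "model validation" sketch: discrete-event simulation of the same M/M/1/K
# abstraction, used to cross-check the analytical estimates above.
import random

def simulate_mm1k(lam, mu, K, horizon=200_000.0, seed=1):
    rng = random.Random(seed)
    t = 0.0
    n = 0                                   # requests currently in the system
    next_arrival = rng.expovariate(lam)
    next_departure = float("inf")
    arrivals = accepted = completed = 0
    area = 0.0                              # time-integral of n, for mean occupancy

    while t < horizon:
        t_next = min(next_arrival, next_departure, horizon)
        area += n * (t_next - t)
        t = t_next
        if t >= horizon:
            break
        if t == next_arrival:               # arrival event
            arrivals += 1
            next_arrival = t + rng.expovariate(lam)
            if n < K:                       # accept unless the bounded queue is full
                n += 1
                accepted += 1
                if n == 1:                  # servant was idle: start service
                    next_departure = t + rng.expovariate(mu)
        else:                               # departure event
            n -= 1
            completed += 1
            next_departure = (t + rng.expovariate(mu)) if n > 0 else float("inf")

    return {"blocking": 1.0 - accepted / arrivals,
            "throughput": completed / horizon,
            "mean_jobs": area / horizon}

if __name__ == "__main__":
    print(simulate_mm1k(lam=80.0, mu=100.0, K=8))   # compare with the analytical sketch

With a sufficiently long simulation horizon, the simulated blocking probability, throughput, and mean occupancy should agree closely with the analytical values, which is the essence of the validation step described above.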
