System Evaluation Coursework


• You should attempt all exercises.
• Indicative marking: 25 marks for parts A and B, 50 marks for part C.
• Answers to each part should be submitted as a single MS Word or PDF file via NESS. You will therefore submit three files: one for Part A, one for Part B and one for Part C.
• For Part A you should include explanations of what you have done, showing how you computed all metrics. Show your working.
• For B and C, you should include PEPA input files, screen shots, graphs and diagrams as appropriate, along with explanations of what you have done.

Part A: Performance Measurement

In these questions you are going to compute metrics based on mean time to failure, mean time to repair, availability and unavailability, plus calculations of variance and confidence intervals. You should review the lecture material on these measures before you start this exercise.

1. Calculate the MTTR for each of the values of unavailability and MTTF given in the following table. Show your working.

MTTF: 10, 50, 100, 200, 1000

Unavailability: 0.1, 0.05, 0.01
2. Consider a study where measurements are taken for the length of periods between successive failure and repair events in a server. The server was measured to be operational over 10 durations of lengths 150, 100, 250, 175, 125, 200, 225, 195, 180 and 200 time units respectively. In between these periods of operation, the server was measured to be in a failed state for durations 30, 40, 70, 60, 100, 50, 60, 40 and 50 time units.

Hence calculate the following metrics:

i) MTTF
ii) MTTR
iii) The proportion of time during which the server is operational.
iv) The availability of the server.
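As a sketch of the computations involved, using made-up durations rather than the coursework data:

```python
# Illustrative data (NOT the coursework values): operational periods
# and the failed periods that separate them, in the same time units.
uptimes = [120, 80, 200]
downtimes = [20, 40]

mttf = sum(uptimes) / len(uptimes)      # mean time to failure
mttr = sum(downtimes) / len(downtimes)  # mean time to repair

# Proportion of the measured time the server was operational.
proportion = sum(uptimes) / (sum(uptimes) + sum(downtimes))

# Availability as defined from the mean durations.
availability = mttf / (mttf + mttr)

print(mttf, mttr, proportion, availability)
```

Note that the proportion computed from total times and the availability computed from MTTF/(MTTF + MTTR) coincide only when there are equally many up and down periods, which is worth bearing in mind when comparing your answers to iii) and iv).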

3. A website is monitored over a period of time to observe its response time to a particular query. Samples are taken at 10 instants giving the following values: 10, 20, 10, 15, 50, 25, 20, 30, 10, 20. Hence calculate the following:

i) Average (mean) response time.
ii) Variance of the response time.
iii) 95% confidence interval.

How could you gain more confidence in these results?
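The three statistics can be sketched as follows, again with illustrative samples rather than the coursework data (your lectures may prescribe the Student-t critical value rather than the normal 1.96 used here):

```python
import math

# Illustrative response-time samples (NOT the coursework values).
samples = [12, 18, 9, 21, 15]
n = len(samples)

mean = sum(samples) / n
# Sample variance with the n-1 (Bessel) correction; some courses divide by n.
var = sum((x - mean) ** 2 for x in samples) / (n - 1)

# 95% CI using the normal critical value 1.96; for small n the
# Student-t value is more appropriate.
half_width = 1.96 * math.sqrt(var / n)
ci = (mean - half_width, mean + half_width)
print(mean, var, ci)
```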

Part B: Performance Modelling

To tackle this exercise you will need to use PEPA (Performance Evaluation Process Algebra) http://www.dcs.ed.ac.uk/pepa/ and the PEPA Eclipse Plug-in, which is available from

http://www.dcs.ed.ac.uk/pepa/tools/. You will need to have Eclipse installed (which also requires you to have Java installed first). Install the tools before attempting the exercises.

a) Write a specification for an M/M/1/3 queue in PEPA and load your model into one of the PEPA tools to make sure that there are no errors.

As well as the lecture slides, you may find the information in the following paper useful:

• N. Thomas and J. Hillston. Using Markovian process algebra to specify interactions in queueing systems. Technical Report ECS-LFCS-97-373, Laboratory for Foundations of Computer Science, Department of Computer Science, The University of Edinburgh, 1997. (available on Canvas)

b) Use the PEPA Eclipse plug-in to derive the states in the underlying CTMC and (having chosen some appropriate rates) solve the model numerically to find the steady-state probability of being in each state.
c) Using the equations for M/M/n/k queues given in the lectures, show that the numerical solution of your model is correct.
d) Using your answer from part b), find the average queue size and hence use Little's law to find the average response time.
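For cross-checking the numerical solution, the closed-form M/M/1/k steady-state distribution can be computed directly. The sketch below uses assumed rates λ = 1, μ = 2 for a capacity of k = 3; your chosen rates will differ:

```python
# Closed-form M/M/1/k check (k = 3; lam and mu are assumed example rates).
lam, mu, k = 1.0, 2.0, 3
rho = lam / mu

# Steady-state probabilities p_n = rho^n (1 - rho) / (1 - rho^(k+1)), rho != 1.
norm = (1 - rho) / (1 - rho ** (k + 1))
p = [norm * rho ** n for n in range(k + 1)]

L = sum(n * p[n] for n in range(k + 1))  # mean number in the system
lam_eff = lam * (1 - p[k])               # arrivals are lost when the queue is full
W = L / lam_eff                          # Little's law: L = lam_eff * W
print(p, L, W)
```

These values should match the steady-state probabilities the PEPA Eclipse plug-in reports for the corresponding states, and the same Little's law step gives the average response time asked for in part d).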

You will produce a report that includes PEPA input files, screen shots, graphs and diagrams as appropriate, along with explanations of what you have done.

Part C

For this exercise you will need to analyse a more substantial model. You could define your own model, but a more reliable approach would be to take one from the literature. There is a substantial archive of papers on PEPA at http://www.dcs.ed.ac.uk/pepa/papers/ and there are more in Google Scholar. Some suggestions – all available in Canvas – for papers describing possible systems to study include:

• R.W. Holton and J.P.N. Glover. An SPA performance model of a production cell. In D. Kouvatsos, editor, Proceedings of the Thirteenth UK Performance Engineering Workshop, pages 6/1-6/6, Bradford, 1997.
• H. Bowman, J. Bryans, and J. Derrick. Analysis of a multimedia stream using stochastic process algebra. In C. Priami, editor, Sixth International Workshop on Process Algebras and Performance Modelling, pages 51-69, Nice, September 1998.
• Y Zhao and N Thomas. Efficient solutions of a PEPA model of a key distribution centre. Performance Evaluation 67 (8), 740-756, 2010.
• SNS Kamil, N Thomas, A case study in inspecting the cost of security in cloud computing, Electronic Notes in Theoretical Computer Science, 318, 179-196, 2015.
• C Abdullah, N Thomas, A PEPA model of IEEE 802.11b/g with hidden nodes, Computer Performance Engineering, LNCS 9951, 126-140, 2016.
• X Chen, J Ding, N Thomas, Dynamic Scheduling Policy for Patient Flow in a Smart Environment, Chinese Journal of Electronics 26 (3), 530-536, 2017.
• A Alssaiari, RA JM Gining, N Thomas, Modelling Energy Efficient Server management policies in PEPA. 3rd International Workshop on Energy-aware Simulation (ENERGYSIM’17), 2017.
• M Alotaibi, N Thomas, Performance Evaluation of a Secure and Scalable E-Voting Scheme Using PEPA. In: Balsamo S., Marin A., Vicario E. (eds) New Frontiers in Quantitative Methods in Informatics. InfQ 2017. Communications in Computer and Information Science, vol 825. Springer, 2018.

• A Alkoradees, N Thomas, Optimising Health Systems, 34th Annual UK Performance Engineering Workshop, 2018.
• O Almutairi and N Thomas. Performance Modelling of an Anonymous and Failure Resilient Fair-Exchange E-Commerce Protocol. In Proceedings of the 2019 ACM/SPEC International Conference on Performance Engineering (ICPE ’19), 5-12, 2019.

Copies of all these papers are available in the module pages in Canvas.

For the exercise in Part C you will produce a report which covers the following:

1. Description of the system you are going to model.
2. Implementation of your model in the PEPA Eclipse Plug-in.
3. Identification of suitable metrics and parameter values, justifying your choices.
4. Use of the tool to derive results based on these values and measures.
5. Presentation of graphs of the results, highlighting any interesting or noteworthy features.
6. Discussion of how your model and/or analysis could be extended to consider different features of the system.