Cloud AppProfiler: Telco Cloud Applications Tracing and Monitoring (PowerPoint PPT Presentation)


SLIDE 1

Cloud AppProfiler: Telco Cloud Applications Tracing and Monitoring

CTPD Project

By: Sarra KHAZRI / Pr Mohamed CHERIET

Montreal, December 11, 2013

SLIDE 2

Outline

  • Issue
  • Objectives
  • Review of Literature
  • Proposed Solution
  • Cloud Applications Tracing Challenges
  • Results
  • Future work
  • Demo
SLIDE 3

Issue

Poor performance can be caused by a lack of proper resources:

  • limited bandwidth
  • limited disk space
  • limited memory
  • limited CPU
  • limited network connections
  • high latency

Performance issues in the system can bring service delivery to a halt.

SLIDE 4

Issue

Poor performance causes companies to:

  • lose customers
  • deal with service outages
  • see bottom-line revenue shrink
  • suffer reduced employee productivity
  • absorb general lost productivity.

SLIDE 5

Cloud Applications Tracing Challenges

[Diagram: Challenges — Virtualization, Performance metrics, Migration, Scalability]

SLIDE 6

Cloud Applications Tracing Challenges

  • Virtualization: monitoring the hypervisor layer is not something traditional systems management tools were designed to handle.
  • End User Response Profiling: end-user response time is difficult to monitor for cloud applications for two reasons: cloud applications operate across the open public network, and end users are often distributed across the globe.
  • Performance metrics: a variety of metrics need to be calculated.
  • Cloud Scalability: scale is very large, and it is neither predictable nor easy to measure.

SLIDE 7

Cloud Applications Tracing Challenges

A cloud monitoring framework must be:

  • Scalable: so that monitoring can cope with a large number of probes.
  • Elastic: so that virtual resources created and destroyed by expanding and contracting networks are monitored correctly.
  • Migration-aware: so that any virtual resource that moves from one physical host to another is monitored correctly.
  • Adaptive: so that the framework can adapt to varying computational and network loads in order not to be invasive.
  • Automatic: so that the framework can keep running without intervention and configuration.

SLIDE 8
Objectives

The main objective is to design and develop a new model to trace and monitor applications in the cloud.

We seek through this solution to achieve the following objectives:

  • Collecting data from applications running in the cloud using a monitoring agent.
  • Storing the data and calculating application performance metrics.
  • Visualizing the metrics in graphs and charts.
  • Analyzing application performance metrics and displaying warnings and alerts when problems occur.
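The four objectives above (collect, store, compute, alert) can be sketched as a minimal agent loop. This is an illustrative sketch, not the project's implementation; the class name, sample schema, and 5% loss threshold are all assumptions.

```python
# Minimal sketch of the monitoring pipeline: collect -> store -> compute -> alert.
# All names and the threshold value are illustrative assumptions.
from collections import deque

class AppMonitor:
    def __init__(self, loss_threshold_pct=5.0):
        self.samples = deque(maxlen=1000)        # storage step: bounded buffer
        self.loss_threshold_pct = loss_threshold_pct

    def collect(self, sample):
        """Store one raw sample, e.g. {'lost': 5, 'expected': 204}."""
        self.samples.append(sample)

    def loss_pct(self):
        """Metric-calculation step: aggregate packet loss over stored samples."""
        lost = sum(s["lost"] for s in self.samples)
        expected = sum(s["expected"] for s in self.samples)
        return 100.0 * lost / expected if expected else 0.0

    def alerts(self):
        """Analysis step: emit a warning when a metric crosses its threshold."""
        out = []
        if self.loss_pct() > self.loss_threshold_pct:
            out.append(f"WARNING: packet loss {self.loss_pct():.1f}% "
                       f"exceeds {self.loss_threshold_pct}%")
        return out
```

Visualization (the third objective) would sit on top of this, feeding `loss_pct()` readings to the charting layer.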

SLIDE 9

State of the Art

Paid solutions:

  • AppDynamics
  • ManageEngine Applications Manager

Free solution:

  • The Lattice Monitoring Framework [2010]

SLIDE 10

Proposed Solution

Architecture

SLIDE 11

Proposed Solution

Modules Implemented

SLIDE 12

Methodology

Performance Analysis of Cloud-Based Streaming Applications

Calculation of the performance metrics:

  • End-to-end delay
  • Jitter
  • Packet loss
  • Throughput
  • Number of requests accepted and refused
  • Application uptime
  • Application CPU and memory utilization

SLIDE 13

Methodology

Performance Analysis of Cloud-Based Streaming Applications (audio and video)

Performance metrics:

  • End-to-end delay
  • Jitter
  • Packet loss
  • Throughput
  • Application throughput
  • Application availability
  • Application resource utilization

SLIDE 14

Methodology

Performance Analysis of Cloud-Based Streaming Applications

Calculation of the performance metrics — processing pipeline:

  • Data collection: software probes (wireshark, tshark)
  • Management and analysis
  • Visualization: graphs & charts (jQuery library & Highcharts)
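The data-collection stage can be sketched by driving tshark from Python and parsing its field output. The interface name (`eth0`), the capture filter, and the chosen field list are illustrative assumptions, not the project's actual configuration.

```python
# Sketch of the collection step: run tshark and parse its tab-separated output.
# Interface, filter, and fields below are assumptions for illustration.
import subprocess

TSHARK_CMD = [
    "tshark", "-i", "eth0", "-f", "udp",   # live capture of UDP traffic
    "-T", "fields",                        # one tab-separated line per packet
    "-e", "frame.time_epoch",
    "-e", "ip.src", "-e", "ip.dst",
    "-e", "udp.srcport", "-e", "udp.dstport",
]

def parse_fields_line(line):
    """Split one 'tshark -T fields' output line into a record."""
    ts, src, dst, sport, dport = line.rstrip("\n").split("\t")
    return {"time": float(ts), "src": src, "dst": dst,
            "sport": int(sport), "dport": int(dport)}

def collect(max_packets=100):
    """Yield parsed records from a live capture (requires capture privileges)."""
    proc = subprocess.Popen(TSHARK_CMD, stdout=subprocess.PIPE, text=True)
    try:
        for i, line in enumerate(proc.stdout):
            if i >= max_packets:
                break
            yield parse_fields_line(line)
    finally:
        proc.terminate()
```

The parsed records would then feed the management/analysis stage, with the computed metrics handed to the Highcharts front end.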

SLIDE 15

Methodology

Performance Analysis of Cloud-Based Streaming Applications

Calculation of the performance metrics — sample RTCP trace:

08:13:43.880866 host1.49609>host2.49609: sr @4984.5664062500 44854910 162p 41472b
08:13:46.627595 host2.49609>host1.49609: rr 10l 61209s 227j @4984.5664062500+2.7451171875
08:13:49.145088 host1.49609>host2.49609: sr @4989.4814453125 44894326 315p 80640b
08:13:53.194153 host2.49609>host1.49609: rr 15l 61413s 353j @4989.4814453125+4.0458984375

Sender report (sr) fields: timestamp, source node, UDP source port, destination node, UDP destination port, NTP timestamp, sender's packet count (p), sender's octet count (b).
Receiver report (rr) fields: cumulative number of packets lost (l), highest sequence number received (s), interarrival jitter (j), last RTCP SR timestamp received (@) plus delay since last SR (+).
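A receiver-report line in this tcpdump-style format can be pulled apart with a small regular expression. This parser is an illustrative sketch written against the sample lines above; field names in the returned record are my own labels.

```python
# Hedged sketch: parse one tcpdump-style RTCP receiver-report (rr) line
# of the shape shown in the trace above. Record keys are illustrative.
import re

RR = re.compile(
    r"(?P<ts>\d+:\d+:\d+\.\d+)\s*(?P<src>\S+?)>(?P<dst>\S+?):\s*rr\s*"
    r"(?P<lost>\d+)l\s*(?P<seq>\d+)s\s*(?P<jitter>\d+)j\s*"
    r"@(?P<lsr>[\d.]+)\+(?P<dlsr>[\d.]+)"
)

def parse_rr(line):
    """Return the rr fields as a dict, or None if the line does not match."""
    m = RR.search(line)
    if not m:
        return None
    d = m.groupdict()
    return {
        "timestamp": d["ts"],
        "src": d["src"], "dst": d["dst"],
        "cumulative_lost": int(d["lost"]),       # 'l' field
        "highest_seq": int(d["seq"]),            # 's' field
        "interarrival_jitter": int(d["jitter"]), # 'j' field
        "last_sr": float(d["lsr"]),              # '@' timestamp
        "dlsr": float(d["dlsr"]),                # '+' delay since last SR
    }
```

Two consecutive parsed records give exactly the deltas (lost packets, sequence numbers, jitter) that the formulas on the next slide consume.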

SLIDE 16

Methodology

Calculation of the performance metrics:

  • Delay (seconds) = t2 − (t1 + DLSR)
  • Jitter (seconds) = interarrival jitter / sampling rate of the media codec
  • Packet loss (%) = [(cumulative number of lost packets i − cumulative number of lost packets i−1) / (highest sequence number i − highest sequence number i−1)] × 100
  • Throughput (kbps) = X × Y × 8 / (Z × 1000), where:
      X = RTP payload + RTP header (12) + UDP (8) + IP (20) + Frame Relay (6) (bytes/packet)
      Y = (highest sequence number i − highest sequence number i−1) − (cumulative number of lost packets i − cumulative number of lost packets i−1) (packets received in the interval)
      Z = timestamp i − timestamp i−1 (seconds)
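The formulas above translate directly into code. This is a sketch under the assumptions stated on the slide (all arguments come from consecutive RTCP reports; the 8000 Hz codec rate in the comment is an example value, e.g. G.711, not a fixed constant of the method).

```python
# Hedged sketch of the slide's metric formulas. Function and argument names
# are illustrative; inputs come from two consecutive RTCP reports.

def delay_s(t1, t2, dlsr):
    """One-way delay estimate: t2 - (t1 + DLSR), all in seconds."""
    return t2 - (t1 + dlsr)

def jitter_s(interarrival_jitter, codec_rate_hz):
    """RTCP jitter is in timestamp units; divide by the codec sampling rate
    (e.g. 8000 Hz for G.711) to get seconds."""
    return interarrival_jitter / codec_rate_hz

def packet_loss_pct(lost_i, lost_prev, seq_i, seq_prev):
    """Packets lost in the interval over packets expected in the interval."""
    expected = seq_i - seq_prev
    return (lost_i - lost_prev) / expected * 100

def throughput_kbps(pkt_size_bytes, pkts_received, interval_s):
    """X * Y * 8 / (Z * 1000): bytes/packet * packets * 8 bits, over seconds."""
    return pkt_size_bytes * pkts_received * 8 / (interval_s * 1000)
```

With the sample trace, the second rr report gives packet_loss_pct(15, 10, 61413, 61209), i.e. 5 packets lost out of 204 expected in that interval.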

SLIDE 17

Results

SLIDE 18

Results

Export Graph

SLIDE 19

Future Work

  • Integration of the Application Profiler into the Smart Cloud Profiler:
      • contribute to the tracing of telecommunications applications in the Ecolo-TIC project: IMS apps;
      • provide an automatic cloud application tracing system.

SLIDE 20

Demo

SLIDE 21

References

  • Jin Shao; Hao Wei; Qianxiang Wang; Hong Mei, "A Runtime Model Based Monitoring Approach for Cloud," Cloud Computing (CLOUD), 2010 IEEE 3rd International Conference on, pp. 313-320, 5-10 July 2010.
  • Haibo Mi; Huaimin Wang; Hua Cai; Yangfan Zhou; Lyu, M.R.; Zhenbang Chen, "P-Tracer: Path-Based Performance Profiling in Cloud Computing Systems," Computer Software and Applications Conference (COMPSAC), 2012 IEEE 36th Annual, pp. 509-514, 16-20 July 2012.
  • Haibo Mi; Huaimin Wang; Gang Yin; Hua Cai; Qi Zhou; Tingtao Sun, "Performance problems diagnosis in cloud computing systems by mining request trace logs," Network Operations and Management Symposium (NOMS), 2012 IEEE, pp. 893-899, 16-20 April 2012.
  • De Chaves, S.A.; Uriarte, R.B.; Westphall, C.B., "Toward an architecture for monitoring private clouds," Communications Magazine, IEEE, vol. 49, no. 12, pp. 130-137, December 2011.
  • http://www.infosys.com/engineering-services/features-opinions/Documents/cloud-performance-monitoring.pdf
  • http://www.cloudtweaks.com/2012/08/how-performance-issues-impact-cloud-adoption/
  • http://www.priv.gc.ca/resource/fs-fi/02_05_d_51_cc_e.pdf
  • http://csrc.nist.gov/publications/nistpubs/800-145/SP800-145.pdf
  • http://www.us-cert.gov/sites/default/files/publications/CloudComputingHuthCebula.pdf
  • http://www.toolsjournal.com/testing-articles/item/803-cloud-application-performance-monitoring-challenges-and-solutions
  • http://www.unc.edu/courses/2010spring/law/357c/001/cloudcomputing/examples.html
  • Vijayakumar, Smita; Qian Zhu; Gagan Agrawal, "Dynamic resource provisioning for data streaming applications in a cloud environment," Cloud Computing Technology and Science (CloudCom), 2010 IEEE Second International Conference on, 2010.

SLIDE 22

Thank You