Center for Research in
Intelligent Storage
Improving Data Access Performance
of Applications in
IT Infrastructure
Hao Wen Advisor: David Du
April 24th, 2019 Department of Computer Science and Engineering, University of Minnesota, USA
Virtualized Servers Virtualized Network Virtualized Storage Datacenter servers Datacenter network Datacenter storage
Virtual Machines Containers
Inexpensive servers Inexpensive switches Inexpensive storage
What does virtualization bring?
Mobility (Move applications) Flexibility (Deploy & Scale applications)
The ability to customize services and control all resources
Firewall Encryption Encryption Analytics
Hyper-converged Infrastructure
Users have various storage requirements (SLA/SLO): resources (VM, container, storage, network) and services (encryption, analytics, backup).
App in VMs: ability to control all resources, resource allocation. Storage Function Virtualization: encryption, backup, analytics.
App in Containers: systematic control over client, network, and storage for apps in networked storage. Network Function Virtualization: encryption, firewall, DNS.
VM: Hardware -> Hypervisor -> {VM: App + OS} x N. Emulation of a computer system.
Container: Hardware -> OS -> Docker -> {Container: App} x N. Unit of software that packages up code and all its dependencies into a single object.
Clients reach a Storage Server over the Internet through Network Attached Storage (NAS) or a Storage Area Network (SAN).
Contributions:
(1) A model to identify and meet storage requirements of applications in VMs. [ICPP2015, IEEE TCC]
(2) A system to meet storage requirements of applications deployed in the Kubernetes environment based on Docker containers. [Under submission]
(3) Coordinated control along the I/O path to ensure latency SLO for applications in a networked storage environment. [MASCOTS 2018]
With VMs:
Desktops can be moved into the data center.
Users can access their data from anywhere at any time.
Virtual Desktop Infrastructure (VDI)1,2,3, a prevalent VM application, manages desktops in the data center and presents a desktop to users as if it were running locally.
1Citrix virtual desktop handbook 7.x. https://support.citrix.com/article/CTX221865. 2 Desktop virtualisation. https://www.microsoft.com/en-in/cloud-platform/desktop-virtualization. 3 Horizon 7. https://www.vmware.com/products/horizon.html.
Hardware -> Hypervisor -> {VM: Desktop OS + Desktop apps}. Virtual desktops are cloned from master images (win7, win10, win7 + web server) into virtual disks (Replica, Primary, Persistent on NAS) placed on HDD, SSD, or HDD+SSD. Clone types: Floating Linked Clone, Dedicated Linked Clone, Full Clone.
Users deploy a VDI system in a data center. How can the administrator describe the storage requirements of VDI, and identify what capability a storage appliance needs to satisfy those requirements? Challenges: the storage requirements vary across the stages of a virtual desktop's life cycle.
4 Vmware virtual san design and sizing guide for horizon view virtual desktop infrastructures. https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/whitepaper/products/vsan/vmw-tmd-virt-san-dsn-szing-guid-horizon-view-white-paper.pdf.
5 Sizing and best practices for deploying vmware view 5.1 on vmware vsphere 5.0 u1 with dell equallogic storage. https://downloads.dell.com/manuals/all-products/esuprt_solutions_int/esuprt_solutions_int_solutions_resources/s-solution-resources_white-papers71_en-us.pdf.
Existing practice: vendors publish sizing guides4 for the storage requirements of virtual desktops, typically rules of thumb based on the number of instances5. Such rule-of-thumb requirements overlook the characteristics of the VM storage requirements.
Key: We need a model!
Our approach: characterize the unique storage access patterns of virtual desktops; model homogeneous and heterogeneous configurations of VDI; identify bottlenecks on specific target virtual disks at a specific time; derive the minimum storage configuration to satisfy the storage requirements of VDI.
[Figure: Floating Linked Clone data flow. Master images and replicas live in the data store; FLCs run on each hypervisor over storage arrays 1..N (SSD, hybrid, HDD). On a first login and again on a second login, the user profile and user data are downloaded from the NAS remote repository, and OS data is loaded from the replica.]
[Figure: Dedicated Linked Clone data flow. A DLC keeps a persistent primary disk; on a second login the user profile and user data are read from the cached primary disk, and the primary disk syncs to the NAS during the active stage.]
Answer at time t, how much data will be read from each virtual disk and how much data will be written to each virtual disk.
VMs arrive over time and move through stages: boot, login, active. The model aggregates the VMs in different stages into the data accessed at time t, for each virtual disk.
It is built in three steps: a model of a single VM, a model of multiple VMs of the same type, and a model of multiple VMs of different types.
Validation: deploy virtual desktops in a VDI cluster driven by VMware View Planner, and compare the throughput requirement calculated from the model of a single VM with the direct measurement.
Table: VDI IOPS requirements from VMware (storage sized for light vs. heavy users) vs. 5.29 IOPS measured from traces. Table: requirements of a Floating Linked Clone. The model yields more fine-grained QoS requirements of a VDI system.
Table: Specifications of 4 HP 3PAR storage systems vs. the storage requirements of a company with 5000 FLCs.
Throughput: Replica read 3-3.3 GB/s; Primary Disk read 350 MB/s, write 600 MB/s; NAS read 70 MB/s.
Capacity: 66 TB. IOPS: 105,000.
An orchestrator is essential to deploy and manage applications in containers across multiple hosts.
[Verma et al. EuroSys '15; Burns et al. Queue 14, 1] Kubernetes is the most popular container orchestration platform according to surveys from the Cloud Native Computing Foundation (CNCF)8,9. In this research, we focus on the Kubernetes environment based on Docker.
7 Kubernetes concepts. https://kubernetes.io/docs/concepts/overview/what-is-kubernetes/. 8 Survey Shows Kubernetes Leading as Orchestration Platform. https://www.cncf.io/blog/2017/06/28/survey-shows-kubernetes-leading-orchestration-platform/. 9 CNCF Survey: Use of Cloud Native Technologies in Production Has Grown Over 200%. https://www.cncf.io/blog/2018/08/29/cncf-survey-use-of-cloud-native-technologies-in-production-has-grown-over-200-percent.
Pod: the basic unit of application scheduling.
kubectl create -f app.yaml
Create pods and allocate storage.
The k8s scheduler considers CPU, Mem, and affinities to apps/nodes, but not storage resources. Storage allocation is static, error-prone, and not resource efficient.
Gold (SSD) Silver (Hybrid) Bronze (HDD) Storage Cluster
Admins create SCs; users choose SCs. Limitations:
Users' storage requirements keep changing.
Lots of SCs -> hard to maintain.
No support for advanced storage requirements, e.g., rate limiting, caching, etc.
How can we make k8s better meet users’ storage requirements & all other requirements, and at the same time save resources?
REX-Ray10 makes storage from multiple providers available and discoverable (manual provisioning of PVs). Pesto, a component of vSphere (the VMware hypervisor), automatically models and estimates storage performance and recommends VM disk placement and migration.
10 REX-Ray. https://rexray.readthedocs.io/en/stable/
We propose K8sES (k8s Enhanced Storage), a system that can dynamically allocate storage to applications in Kubernetes based on users’ storage requirements.
It also balances utilization between storage and non-storage resources.
k8sES-scheduler
kubectl create -f app.yaml
kube-apiserver etcd kube-controller- manager Migrator Discovery Host
Driver
Host kubelet kubelet kube-proxy kube-proxy
Driver
pod pod
... Managed Cluster K8sES Master
Monitor Storage Status
k8sES-scheduler: selects both a host and storage for a pod. Discovery: discovers the available storage resources in the cluster. Monitor: monitors the running of each pod and the storage resource usage. kubelet: receives the storage decision from the k8sES-scheduler and calls the Driver to carve out storage resources. Migrator: selects a pod and its data to migrate.
ssbench12 + OpenStack Swift; HTTP + Nginx13. Synthetic workloads with uniformly distributed I/O throughput; synthetic applications with various requests.
12 Swiftstack benchmark suite (ssbench). https://github.com/swiftstack/ssbench. 13 Nginx. https://www.nginx.com/.
We deploy pods E1, E2, and F in sequence, each requiring 10 GB, 20 MB/s, non-sharing storage + 1 CPU core + 1 GB Mem.
[Figure: Throughput (MB/s) of applications over their lifetime; I/O throughput of pod B on Worker4 vs. Worker3.]
[Figure: Cloud IT infrastructure: computation services (e.g., OpenStack for VMs, Kubernetes for containers), cloud network services, and storage services (SAN), all reachable over the Internet.]
storage servers, disks, etc.
storage [Zhe et al. NSDI ’15].
network congestion, but may waste resources in underloaded storage.
control on network between clients and servers.
forward I/Os from overloaded servers onto less loaded servers, but does not consider the status of the network.
workload priorities and rate limits. It assumes the system has full visibility and control over all workloads. The priorities are static and at the granularity of a workload.
JoiNS coordinates control along the whole I/O path from client to storage to ensure latency SLO.
It schedules I/Os across different components based on the status of each component,
and differentiates I/Os based on the asymmetry property in read and write.
We implement JoiNS in a networked storage testbed, and demonstrate the effectiveness of JoiNS in ensuring latency SLO.
Storage Driver
NIC Kernel APP
...
Flow Table NIC
Storage Driver
...
Client Network Storage
APP
...
Status Monitor
Client Enforcer: Flow Table, Execute Actions
...
Network Enforcer
Storage Enforcer (Kernel)
Controller: Time Estimator, Policy Enforcement, Regulator
The controller collects the status data of each network and storage node, estimates the time needed for each I/O request, determines whether to control I/Os, and refines the estimation.
Enforcers admit I/Os, mark I/O requests in packet headers and storage commands, apply differentiated scheduling, and mark I/O responses.
Control information sent to storage is incorporated in SCSI commands.
Requests and responses are treated differently on each path.
Request path (client to storage): a read request is small (48 B), while a write request carries data (1024 KB).
Response path (storage to client): a write notification is small (48 B), while read data is large (1024 KB).
Testbed: 1 client; 2 network nodes serving as SDN switches with 1 Gb/s links; 1 storage proxy; 1 storage server with one HDD backend.
MSR block traces and synthetic traces
JoiNS: our mechanism.
Legacy: FIFO in network and storage.
Pri_all: prioritize all read requests and write responses regardless of congestion level.
PM: PriorityMeister (rate limiters + static priorities to workloads) [25].
[Figure: Request latency (ms) of workloads A-E running at the same time, at the 50th, 90th, 99th, 99.9th, and 99.99th percentiles, comparing Legacy, JoiNS, PM, and Pri_all.]
Takeaways: only prioritize an I/O when the system is close to congestion for that I/O; only prioritize read requests and write responses.
Virtual Machine environment: a model of VDI storage requirements that helps meet the storage requirements.
Containers: K8sES meets users' storage requirements as well as other requirements in k8s, and improves the storage utilization efficiency.
Networked Storage: JoiNS coordinates the I/O path to meet latency SLO in a networked storage environment.
Published
applications with guaranteed quality of service. In Parallel Processing (ICPP), 2016 45th International Conference
deduplication systems using adaptive look-ahead window assisted chunk caching. In 16th USENIX Conference on File and Storage Technologies (FAST 18), pages 309-324, Oakland, CA, 2018. USENIX Association.
synchronized trace-driven replayer for network-storage system evaluation. Performance Evaluation, 130, 86- 100.
Management Design for Interlaced Magnetic Recording", HotStorage'18.
Integrated Control for Networked Storage", In Proceedings of the 26th IEEE International Symposium on the Modeling, Analysis, and Simulation of Computer and Telecommunication Systems MASCOTS'18.
deduplication-based file system. ACM Trans. Storage,15(1):4:1-4:26, February 2019.
to Identify Storage Requirements. IEEE Transactions on Cloud Computing.
On-going
Wu, Jim Diehl. K8sES: Kubernetes with Enhanced Storage Service-Level Objectives. [Under Revision]
ZoneAlloy: Elastic Data and Space Management for Hybrid SMR Drives. [Under Submission]
[1] LXC. https://help.ubuntu.com/lts/serverguide/lxc.html.
[2] D. Bernstein. Containers and cloud: From LXC to Docker to Kubernetes. IEEE Cloud Computing 1, 3 (2014), 81-84.
[3] A. Verma, L. Pedrosa, M. Korupolu, D. Oppenheimer, E. Tune, and J. Wilkes. Large-scale cluster management at Google with Borg. In Proceedings of the European Conference on Computer Systems (EuroSys), 2015.
[4] B. Burns, B. Grant, D. Oppenheimer, E. Brewer, and J. Wilkes. Borg, Omega, and Kubernetes. Queue 14, 1 (2016), 10.
A. Gulati, G. Shanmuganathan, I. Ahmad, et al. Pesto: Online storage performance management in virtualized datacenters. In Proceedings of the 2nd ACM Symposium on Cloud Computing. ACM, 2011: 19.
[5] Joe Beda. Containers at scale. https://speakerdeck.com/jbeda/containers-at-scale?slide=2.
[6] Zhe Wu, Curtis Yu, and Harsha V. Madhyastha. CosTLO: Cost-effective redundancy for lower latency variance on cloud storage services. In Proceedings of the 12th USENIX Conference on Networked Systems Design and Implementation, pages 543-557, 2015.
[7] Sally Floyd and Van Jacobson. Random early detection gateways for congestion avoidance. IEEE/ACM Transactions on Networking (ToN), 1(4):397-413, 1993.
[8] Sally Floyd. TCP and explicit congestion notification. ACM SIGCOMM Computer Communication Review, 24(5):8-23, 1994.
[9] Eno Thereska, Hitesh Ballani, Greg O'Shea, Thomas Karagiannis, Antony Rowstron, Tom Talpey, Richard Black, and Timothy Zhu. IOFlow: A software-defined storage architecture. In Proceedings of the Twenty-Fourth ACM Symposium on Operating Systems Principles, pages 182-196, 2013.
[10] Ioan Stefanovici, Bianca Schroeder, Greg O'Shea, and Eno Thereska. sRoute: Treating the storage stack like a network. In 14th USENIX Conference on File and Storage Technologies (FAST 16), pages 197-212, 2016.
[11] T. Zhu, A. Tumanov, M. A. Kozuch, et al. PriorityMeister: Tail latency QoS for shared networked storage. In Proceedings of the ACM Symposium on Cloud Computing. ACM, 2014: 1-14.
Answer at time t, how much data will be read from and how much data will be written to each virtual disk.
(1) Model of a single VM
Target: the virtual disk that IOs will reach. Stage: the stage in the VM life cycle.
RWper_{stage,target}: read ratio or write ratio during different stages on different targets.
S^j_{stage,target}: significant IO sizes.
PSize^j_{stage,target}: percentage of each significant IO size.
E_{stage,target}(t): the expected number of IOs at time t.

Data accessed during dt:
\sum_j E_{stage,target}(t) \cdot dt \cdot RWper_{stage,target} \cdot S^j_{stage,target} \cdot PSize^j_{stage,target}
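The single-VM term above can be sketched in code. The helper below and every number in it are illustrative, not taken from the thesis:

```python
# Sketch of the single-VM term: expected bytes accessed on one
# (stage, target) pair during a small interval dt.
def data_accessed(E_t, dt, rw_per, sizes, psize):
    """E_t: expected number of IOs at time t for this (stage, target).
    rw_per: read (or write) ratio in this stage on this target.
    sizes[j]: significant IO sizes in bytes.
    psize[j]: fraction of IOs having size sizes[j] (sums to 1)."""
    return sum(E_t * dt * rw_per * s * p for s, p in zip(sizes, psize))

# Hypothetical numbers: 200 IOs/s expected, a 1 s interval, 70% reads,
# significant IO sizes of 4 KB (80% of IOs) and 64 KB (20% of IOs).
read_bytes = data_accessed(200, 1.0, 0.7, [4096, 65536], [0.8, 0.2])
```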
49
Center for Research in
Intelligent Storage
(2) Model of multiple VMs of the same type

\int_{x=t_1}^{t_2} N(x) \cdot \Big[ \sum_j E_{stage,target}(t) \cdot dt \cdot RWper_{stage,target} \cdot S^j_{stage,target} \cdot PSize^j_{stage,target} \Big] \, dx

N(x): the number of VMs arriving at time x (x < t), i.e., the VM arrival rate.
E_{stage,target}(t): for each group of N(x) VMs that arrive at time x, the expected number of IOs at time t for that particular group of VMs.
[t_1, t_2]: all VMs that are now at this stage arrived during the time interval [t_1, t_2].
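The integral over arrivals can be approximated as a discrete sum; a minimal sketch, where N and the per-VM term are hypothetical stand-ins for the quantities defined above:

```python
# Discrete approximation of the same-type aggregation: sum over arrival
# times x in [t1, t2] of N(x) times the per-VM data modeled for a VM
# that arrived at x and is evaluated at time t.
def aggregate_data(N, per_vm_bytes, t1, t2, t):
    return sum(N(x) * per_vm_bytes(x, t) for x in range(t1, t2 + 1))

# Example: a constant arrival rate of 5 VMs/s over 10 seconds, each VM
# modeled as accessing 1 MB at the evaluation time t.
total = aggregate_data(lambda x: 5, lambda x, t: 1 << 20, 0, 9, t=10)
```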
50
Center for Research in
Intelligent Storage
(3) Model of multiple VMs of different types
OS = {OS1, OS2, ..., OSn}; VD = {FLC, DLC, FC}; APP = {app1, app2, ..., appn}; VM = OS x VD x APP.
Take the weighted average over all VM types to get the overall size of data accessed.
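The cross product of types and the weighted average can be sketched as follows; the OS and app names are illustrative examples, not the thesis configuration:

```python
from itertools import product

# The VM type space is the cross product OS x VD x APP; overall data
# accessed is a weighted average over types.
os_list = ["win7", "win10"]
vd_list = ["FLC", "DLC", "FC"]
app_list = ["office", "web"]
vm_types = list(product(os_list, vd_list, app_list))

def overall_bytes(weights, per_type_bytes):
    """weights[t]: fraction of desktops of type t (sums to 1).
    per_type_bytes[t]: modeled bytes accessed for type t."""
    return sum(weights[t] * per_type_bytes[t] for t in vm_types)
```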
[Figure: Read and write size (MB) over time (s) on the Replica, Primary Disk, and NAS.]
Comparison between the model and the experimental results from two Hewlett Packard Enterprise (HPE) systems, with clones arriving at the same time. We set the model parameters to match the total number of virtual desktops from the HPE setup.
Model results: peak IOPS 141; peak read IOPS 8.8x peak write IOPS; peak IOPS happens during the boot stage; average IOPS during the active stage: 5.29.
In the HPE systems, a benchmark tool generates VDI workloads. Results: peak IOPS per virtual desktop: 139; peak IOPS happens at the boot stage; peak read IOPS 9x the peak write IOPS; average IOPS during the active stage: 6.26.
kubectl create -f <manifest>
Goal: keep this interface but also enable users to specify various storage requirements. Currently the admin must create a suitable SC first. Why don't we enable users to put their requests directly in the manifest?
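As an illustration of the idea, a manifest could carry the storage requests directly, next to CPU and memory. The storageRequirements field names below are hypothetical, not the actual K8sES schema:

```python
# Hypothetical pod manifest fragment with storage requirements placed
# alongside CPU and memory (field names are illustrative only).
app_manifest = {
    "resources": {"requests": {"cpu": "1", "memory": "1Gi"}},
    "storageRequirements": {
        "capacity": "10Gi",     # requested capacity
        "bandwidth": "20Mi",    # requested throughput per second
        "sharing": False,       # dedicated, non-shared storage
    },
}

def parse_quantity(s):
    """Parse a simplified 'NGi'/'NMi' quantity into bytes."""
    units = {"Gi": 1 << 30, "Mi": 1 << 20}
    return int(s[:-2]) * units[s[-2:]]

capacity_bytes = parse_quantity(app_manifest["storageRequirements"]["capacity"])
```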
Storage is requested through PVCs bound to a StorageClass (SC), which has limited storage support. The SC is created by the admin; the PVC and the pod are created by the user.
(1) SC is static and cannot be used to efficiently schedule storage resources. (2) It is hard to decide a proper number of SCs that just satisfies users' requirements without wasting resources. (3) SCs do not support advanced storage requirements, e.g., rate limiting, caching, etc. (4) Not user friendly and error prone.
Users deploy their stateful applications in containers in k8s and have service-level storage requirements. How can we meet users' storage requirements along with all other requirements, and at the same time save resources?
Challenges:
(1) Meeting storage requirements while saving resources in k8s (the issues of fewer SCs vs. more SCs).
(2) Coordinating storage with other k8s-specific requirements, e.g., node affinity, pod affinity, etc. How can we integrate the intelligent storage allocation into the current pod scheduling process?
Predicate -> Priority -> Select
Predicates (e.g., Mem > 1 GB) produce a {host list} and a {storage list}, combined into {host: {storage list}}.
Priority functions for storage:
least_storage_usage = (10 x (Size_total - Size_req)/Size_total + 10 x (BW_total - BW_req)/BW_total) / 2
usage_leveling = 10 - 10 x |CPU_usage + Mem_usage - Size_usage - BW_usage|
The storage score is added to the score of the host.
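A sketch of the two storage priority functions as described; the abs() in usage_leveling is an assumption (the slide notation lost its grouping), and all inputs are assumed to be in consistent units or fractions in [0, 1]:

```python
# Storage priority functions, scaled to 0-10 like k8s priority scores.
def least_storage_usage(size_total, size_req, bw_total, bw_req):
    """Prefer storage that keeps the most free capacity and bandwidth
    after serving this request."""
    return (10 * (size_total - size_req) / size_total
            + 10 * (bw_total - bw_req) / bw_total) / 2

def usage_leveling(cpu_usage, mem_usage, size_usage, bw_usage):
    """Prefer placements that keep storage and non-storage utilization
    balanced. abs() is an assumption, not confirmed by the slide."""
    return 10 - 10 * abs(cpu_usage + mem_usage - size_usage - bw_usage)
```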
Compared with VMs, servers and storage see a higher application consolidation with containers. We balance the usage between storage and non-storage resources.
Thin provisioning: allocate a portion (ς) of the requested capacity initially and increase it (by μ) when utilization reaches a threshold (ι).
Monitor the average throughput of device j over a time interval υ (e.g., six hours): TP_j. Literal bandwidth: BW^j_total. Requested bandwidth: BW^j_req.
Amplification factor β_j: 1/β_j = TP_j / BW^j_req (capped at, e.g., 120%). The k8sES-scheduler schedules pods as if storage j has a bandwidth of β_j x BW^j_total.
Migration works at the granularities of pods and devices: migrate pods and their data to other hosts/storage in case of SLO violation, or software failure on nodes and storage.
Baselines: n PVs (divide each SC into n PVs evenly); k8sES-no-leveling (k8sES without balancing the usage between storage and non-storage resources).
[Figure: Number of app instances successfully deployed for Apps 1-4 under 1 PV, 2 PVs, k8sES-no-leveling, and k8sES.]
distributed.
network stack.
The controller collects t^r_rq and t^r_rt for each request r and estimates the total I/O time:
t^r_est = t^r_rq + t^r_rt + t^r_s + ε
t_est < γD: not congested; issue.
γD < t_est < D: close to congested; control.
D < t_est: fully congested; throttle.
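The three-way decision can be sketched as follows, assuming D is the latency SLO deadline and γ < 1 a safety factor; gamma = 0.8 and eps = 0 are illustrative values, not the thesis parameters:

```python
# Congestion decision for one I/O: compare the estimated total time
# against gamma*D (safety margin) and D (the latency SLO deadline).
def classify(t_rq, t_rt, t_s, deadline, gamma=0.8, eps=0.0):
    """t_est = t_rq + t_rt + t_s + eps."""
    t_est = t_rq + t_rt + t_s + eps
    if t_est < gamma * deadline:
        return "issue"      # not congested: issue the I/O
    if t_est < deadline:
        return "control"    # close to congested: control the I/O
    return "throttle"       # fully congested: throttle
```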
Mainframe (1980s): Terminal Access
Multiple Distributed Servers (1990s): Desktop Applications
Large Individual Servers (1990s, 2000s): Client-Server Applications
Multiple Distributed Servers (2000s): Web Applications
High-density Server Farms (2000s): Internet Applications
Virtualized and Cloud (2010s): Cloud Applications
Client architecture: applications reach computation, network, and storage services through the Internet cloud.
Computation: powerful units, large scale, virtualized (VM), containerized, distributed.
Network: large (10K-100K switches), on the I/O path, software defined.
Storage: heterogeneous (HDD, SSD, SMR), high capacity.
What's the impact on data access performance?
Virtualized and cloud infrastructure increases management complexity and affects data access performance.