THESIS
2019
xvi, 102 pages : illustrations ; 30 cm
Abstract
Migrating applications to clouds has been a trend over the past decade or so. From a
cloud operator's perspective, existing works have focused on how resources should be
provisioned to users with a guaranteed level of quality-of-service (QoS). However, most of
these works are for generic applications, where the distinct features of specific applications
are not considered.
In this thesis, we take advantage of applications' distinct features in the resource
provisioning procedure to improve the total revenue of the operator and provide better
QoS to users. We first focus on two types of applications that are widely deployed in
clouds, namely, applications with bursty workloads and stream data analytics. In the first technical chapter, we investigate how burstable instances, an instance type that was
recently introduced by leading cloud operators, can match the time-varying workloads of
applications. We present the first unified framework to model, analyze, and optimize the
operation of burstable instances.
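As a concrete illustration only, and not the framework developed in this thesis, the sketch below simulates a token-bucket style CPU-credit mechanism in the spirit of burstable offerings such as AWS T2/T3 instances; the baseline rate, credit cap, and workload trace are all illustrative assumptions.

```python
# Minimal sketch (not the thesis's model): a token-bucket style CPU-credit
# simulation in the spirit of burstable instances such as AWS T2/T3.
# All parameter values below are illustrative assumptions.

def simulate_burstable(demand, baseline=0.2, credit_cap=4.8, dt=1.0):
    """Simulate one burstable instance over a workload trace.

    demand     -- requested CPU utilization per step (0.0-1.0)
    baseline   -- fraction of a core earned as credits per unit time
    credit_cap -- maximum credits that can be banked
    Returns the delivered utilization per step.
    """
    credits = credit_cap                    # start with a full credit balance
    delivered = []
    for d in demand:
        credits = min(credits + baseline * dt, credit_cap)  # earn capped credits
        if d <= baseline:
            served = d                      # baseline covers the demand
        else:
            extra = min(d - baseline, credits / dt)  # burst using banked credits
            credits -= extra * dt
            served = baseline + extra
        delivered.append(served)
    return delivered

# Example: a bursty workload that idles, then spikes above the baseline.
trace = [0.05] * 10 + [1.0] * 5 + [0.05] * 10
print(simulate_burstable(trace))
```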
In the second technical chapter, we consider a heterogeneous cloud-based cluster for
stream data analytics, which is shared by multiple analytics jobs. An efficient resource
allocation scheme is proposed to achieve max-min fairness in the throughput utilities of the jobs.
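As background on the fairness objective only, and not the allocation scheme proposed in this thesis, the following sketch implements the textbook progressive-filling procedure for a max-min fair split of a single shared resource among jobs with demand caps; the capacity and demand values are illustrative assumptions.

```python
# Illustrative sketch of max-min fairness via progressive filling; this is a
# textbook procedure, not the allocation scheme proposed in the thesis.
# Jobs are modeled only by a demand cap on a single shared resource.

def max_min_fair(capacity, demands):
    """Return a max-min fair split of `capacity` across jobs with `demands`."""
    alloc = [0.0] * len(demands)
    active = set(range(len(demands)))       # jobs that can still absorb resource
    remaining = capacity
    while active and remaining > 1e-9:
        share = remaining / len(active)     # equal share among unsatisfied jobs
        for i in sorted(active):
            give = min(share, demands[i] - alloc[i])
            alloc[i] += give
            remaining -= give
            if demands[i] - alloc[i] < 1e-9:
                active.discard(i)           # job satisfied; leftovers go to others
    return alloc

# Example: three jobs sharing 10 units of throughput.
print(max_min_fair(10.0, [2.0, 6.0, 8.0]))  # -> [2.0, 4.0, 4.0]
```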
Finally, we move to Internet of Things (IoT) applications in the third technical chapter.
In view of the stringent delay requirements that many IoT applications impose on communication between the computation facilities and end users, fog computing has recently been proposed, in which computation tasks can be offloaded extensively along the cloud-to-things continuum. In this chapter, we derive an offloading scheme for a heterogeneous fog computing
network shared by multiple tasks that have heterogeneous delay requirements,
where lexicographic max-min fairness is enforced in the offloading procedure.
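As background on the ordering criterion only, and not the offloading scheme derived in this thesis, the sketch below compares two candidate allocation vectors under the lexicographic max-min (leximin) order; the example vectors are illustrative assumptions.

```python
# Illustrative sketch: comparing two candidate allocations under the
# lexicographic max-min order. This shows only the ordering criterion, not
# the fog offloading scheme derived in the thesis.

def leximin_better(a, b):
    """Return True if vector `a` is leximin-preferred to `b`: sort both in
    ascending order and prefer the one whose first differing component is
    larger (i.e., it does better by its worst-off element, then the next)."""
    for x, y in zip(sorted(a), sorted(b)):
        if abs(x - y) > 1e-9:
            return x > y
    return False  # the sorted vectors are identical

# Example: both allocations have the same total, but the first raises the
# worst-off task's share, so it is leximin-preferred.
print(leximin_better([3.0, 3.0, 4.0], [1.0, 4.0, 5.0]))  # True
print(leximin_better([1.0, 4.0, 5.0], [3.0, 3.0, 4.0]))  # False
```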