THESIS
2021
1 online resource (xiv, 130 pages) : illustrations (some color)
Abstract
With the rapid growth of data volume and application complexity, applications running in cloud data centers fall into two categories: data-intensive batch jobs that strive for fast completion, and customer-facing online services that pursue low response latencies. In this dissertation, we aim to separately identify the key factors in scheduling each of the two workloads, and to optimize their performance with tailored scheduling designs.
For data-parallel batch jobs, communication is often the bottleneck: a collection of concurrent flows, termed a coflow, transfers intermediate data between computation stages (e.g., the shuffle phase of a MapReduce job). Scheduling coflows in a shared cluster is hard, as efficiency (minimizing average coflow completion times, CCTs) and fairness (predictable networking performance) conflict with each other. In this regard, we make the following contributions.
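This tension is visible even in a toy example. Below is a minimal Python sketch (with hypothetical flow sizes and a hypothetical shared bandwidth, not data from the dissertation): a coflow finishes only when its slowest flow does, so the CCT is the maximum per-flow finish time, and an equal-share allocation can yield a much larger CCT than a size-aware allocation of the exact same bandwidth.

```python
# Minimal sketch: a coflow completes only when its last flow finishes.
# Flow sizes and the shared bandwidth below are hypothetical illustrations.

def coflow_completion_time(flow_sizes, flow_rates):
    """CCT = finish time of the last flow (size / allocated rate)."""
    return max(size / rate for size, rate in zip(flow_sizes, flow_rates))

sizes = [100.0, 400.0, 200.0]                # MB, a shuffle-like coflow
total_bw = 30.0                              # MB/s shared by the three flows

fair_rates = [total_bw / 3] * 3              # equal (fair) per-flow sharing
prop_rates = [total_bw * s / sum(sizes) for s in sizes]  # size-proportional

print(coflow_completion_time(sizes, fair_rates))  # 40.0 s
print(coflow_completion_time(sizes, prop_rates))  # ~23.3 s, same total bandwidth
```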
First, we present Utopia, a coflow scheduling mechanism that minimizes the average CCT while ensuring predictable performance with isolation guarantees. Utopia achieves the best of both worlds by preferentially scheduling coflows in ascending order of the CCTs they would attain under fair-sharing alternatives, while providing provable network isolation in the long run. Second, for non-clairvoyant coflow scheduling, where coflow sizes are unavailable in advance (e.g., multi-stage applications with pipelines), we present non-clairvoyant DRF (NC-DRF), another scheduling policy that provides predictable coflow completions. NC-DRF enforces fair-sharing scheduling based on the number of flows a coflow has on each link, and outperforms alternatives by being aware of coflow-level communication patterns. Trace-driven simulations and EC2 deployments empirically confirm that both Utopia and NC-DRF outperform existing alternatives and achieve long-term isolation guarantees.
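To make the NC-DRF idea concrete, here is a minimal Python sketch of one way to read it: a coflow's demand vector is its flow count on each link, and every coflow receives the same dominant share, computed here in closed form under a single-bottleneck simplification. The link names, capacities, and flow counts are hypothetical, and the actual NC-DRF policy may differ in detail.

```python
# Sketch of a DRF-style allocation where a coflow's per-link demand is its
# number of flows on that link (one reading of NC-DRF; hypothetical inputs).

def ncdrf_allocation(demands, capacities):
    """demands: {coflow: {link: num_flows}}; capacities: {link: bandwidth}.
    Gives every coflow an equal dominant share f (single-bottleneck
    simplification); returns {coflow: {link: allocated bandwidth}}."""
    # Dominant share per unit of progress: the coflow's most contended link.
    dominant = {c: max(n / capacities[l] for l, n in d.items())
                for c, d in demands.items()}
    # Per-link load if every coflow ran at dominant share 1.
    load = {l: sum(d.get(l, 0) / dominant[c] for c, d in demands.items())
            for l in capacities}
    # Largest equal dominant share that fits on every loaded link.
    f = min(capacities[l] / load[l] for l in load if load[l] > 0)
    return {c: {l: f * n / dominant[c] for l, n in d.items()}
            for c, d in demands.items()}

demands = {"coflowA": {"link1": 4, "link2": 1},
           "coflowB": {"link1": 1, "link2": 3}}
print(ncdrf_allocation(demands, {"link1": 10.0, "link2": 10.0}))
# coflowA gets 7.5 on link1, where its 4 flows dominate;
# both coflows end up with an equal dominant share of 0.75.
```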
Online cloud services, on the other hand, are deployed as long-running applications (LRAs) in containers, where container placement is of paramount importance. Placing LRA containers is known to be difficult, as they often exhibit complex performance interference (e.g., resource contention and I/O dependencies) that is hard to express quantitatively. We show that optimal LRA placement can be automatically learned using deep reinforcement learning (RL) techniques. We first present Metis, a general-purpose RL-based scheduler that scales LRA scheduling to large clusters where tens of thousands of LRA containers run on thousands of machines. To this end, Metis employs novel hierarchical learning techniques that decompose a complex container placement problem into a hierarchy of subproblems with significantly reduced state and action spaces. We show that many subproblems have similar structures and can hence be solved by training a unified RL agent offline.
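As an illustration of the decomposition (not Metis's actual agent or state encoding), the Python sketch below splits placement into a top level that spreads containers across machine groups and a single shared low-level routine, reused for every group, standing in for the unified RL agent. The group sizes, capacities, and the greedy stand-in policy are all hypothetical.

```python
# Sketch of hierarchical decomposition: one shared low-level policy is
# reused inside every group, since the subproblems have similar structure.

def place_in_group(containers, machines):
    """Shared low-level policy (stub): put each container on the machine
    with the most free slots. In Metis this role is played by an RL agent."""
    placement = {}
    for c in containers:
        m = max(machines, key=machines.get)  # machine with most free slots
        placement[c] = m
        machines[m] -= 1                     # one slot per container
    return placement

def hierarchical_place(containers, groups):
    """Top level: spread containers across groups, then solve each smaller
    subproblem with the same low-level policy."""
    placement = {}
    per_group = [containers[i::len(groups)] for i in range(len(groups))]
    for batch, machines in zip(per_group, groups.values()):
        placement.update(place_in_group(batch, machines))
    return placement

# 2 groups x 3 machines, each with 4 free container slots (hypothetical).
groups = {g: {f"{g}-m{i}": 4 for i in range(3)} for g in ("groupA", "groupB")}
print(hierarchical_place([f"c{i}" for i in range(10)], groups))
```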
We then propose George, another LRA scheduler, which achieves high-quality container performance subject to operational constraints such as fault tolerance, disaster avoidance, and incremental deployment. We design a projection-based proximal policy optimization (PPPO) algorithm, in combination with integer linear programming, to intelligently schedule LRA containers under operational constraints. To reduce the training time, we apply transfer learning, taking advantage of the similarity among LRA scheduling events. We prove theoretically that our proposed algorithm is effective, stable, and safe. Both Metis and George are implemented as plug-in services in Docker Swarm. Large-scale EC2 deployments confirm that they improve container performance and scale well, requiring less than one hour of scheduling time in a large cluster with 700 machines.
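For a concrete, if simplified, picture of the projection step, the sketch below (an assumed formulation, not George's actual one) uses the PuLP ILP package to snap a policy's preferred placement onto a feasible set, here with a per-machine capacity limit and a fault-tolerance constraint that replicas of one service must land on distinct machines.

```python
# Assumed illustration of projecting an RL policy's preferences onto the
# feasible placements via an ILP. Requires PuLP (pip install pulp); the
# constraint set and preference scores below are hypothetical.
import pulp

def project(pref, containers, machines, service_of, capacity):
    """Maximize agreement with the policy's preference scores subject to
    operational constraints; returns {container: machine}."""
    prob = pulp.LpProblem("projection", pulp.LpMaximize)
    x = {(c, m): pulp.LpVariable(f"x_{c}_{m}", cat="Binary")
         for c in containers for m in machines}
    prob += pulp.lpSum(pref[c][m] * x[c, m] for c in containers for m in machines)
    for c in containers:                     # every container placed exactly once
        prob += pulp.lpSum(x[c, m] for m in machines) == 1
    for m in machines:                       # machine capacity limit
        prob += pulp.lpSum(x[c, m] for c in containers) <= capacity
    for s in set(service_of.values()):       # replicas on distinct machines
        for m in machines:
            prob += pulp.lpSum(x[c, m] for c in containers
                               if service_of[c] == s) <= 1
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return {c: m for c in containers for m in machines
            if x[c, m].value() > 0.5}

pref = {"c0": {"m0": 0.9, "m1": 0.1}, "c1": {"m0": 0.8, "m1": 0.2}}
print(project(pref, ["c0", "c1"], ["m0", "m1"],
              {"c0": "svc", "c1": "svc"}, capacity=1))
# {'c0': 'm0', 'c1': 'm1'}: anti-affinity overrides c1's preference for m0.
```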