THESIS
2012
xii, 102 p. : ill. ; 30 cm
Abstract
Massive data centers are being built around the world to provide various cloud computing services. As a result, data center networking has recently become a hot research topic in both academia and industry. A fundamental challenge in this area is the design of the data center network that interconnects the massive number of servers and provides an efficient and robust platform. In response to this challenge, the research community has begun exploring novel interconnection network topologies. One approach is to use commodity electronic switches or servers to scale out the network, as in Portland, VL2, DCell, BCube, and FiConn. The other approach is to exploit optical devices to build high-capacity switches, as in OSA, Helios, HyPaC, PETASW, Data Vortex, and OSMOSIS. Understandably, this research is still in its infancy. For the first approach, the solutions proposed so far either scale too slowly, suffer from performance bottlenecks, depend on server location, offer poor availability, or are too complex and expensive to construct. For the second approach, where the entire interconnection network can be regarded as a “giant” switch, performance relies heavily on well-designed packet buffers that support multiple queues, provide large capacity, and deliver short response times.
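To make the scaling behavior of the first, switch/server-centric approach concrete, the short sketch below computes how many servers the published DCell and BCube constructions can host as a function of the switch port count n and the recursion level k. The formulas follow the original DCell and BCube papers; the function names and example values are illustrative only and are not part of this thesis.

```python
# Illustrative server counts for two server-centric topologies (per their original papers).
# DCell_0 and BCube_0: n servers attached to one n-port commodity switch.
# DCell_k: built from (t_{k-1} + 1) copies of DCell_{k-1}, so t_k = t_{k-1} * (t_{k-1} + 1).
# BCube_k: built from n copies of BCube_{k-1} plus n^k extra n-port switches, so t_k = n^(k+1).

def dcell_servers(n: int, k: int) -> int:
    t = n
    for _ in range(k):
        t = t * (t + 1)
    return t

def bcube_servers(n: int, k: int) -> int:
    return n ** (k + 1)

if __name__ == "__main__":
    # Example (hypothetical parameters): 8-port switches, two levels of recursion.
    print(dcell_servers(8, 2))   # 5256 servers (doubly exponential growth)
    print(bcube_servers(8, 2))   # 512 servers (exponential growth)
```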
In this thesis, five algorithms/architectures are presented, addressing these two issues respectively. Using commodity switches only, we propose two cost-effective and gracefully scalable Data Center Interconnection networks (DCIs), HyperBCube and FlatNet, which yield robust performance and support low-time-complexity routing based on simple network topologies. Targeting scalable packet buffers, we further propose three packet buffer architectures, along with their memory management algorithms, based on distributed, parallel, and hierarchical memory structures respectively. Both mathematical analysis and simulation results indicate that the proposed packet buffer architectures significantly outperform traditional packet buffer architectures in terms of time complexity, access delay, and performance guarantees.
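As a rough illustration of the hierarchical direction mentioned above, the sketch below shows a common hybrid SRAM/DRAM packet buffer organization: each queue keeps small head and tail segments in fast SRAM and moves the bulk of its packets to slow, high-capacity DRAM in large blocks. This is a generic, textbook-style organization rather than the specific architectures or memory management algorithms proposed in the thesis; the class, field, and parameter names are hypothetical.

```python
from collections import deque

BLOCK = 4  # packets moved between SRAM and DRAM per transfer (hypothetical value)

class HybridQueue:
    """One logical queue of a hybrid SRAM/DRAM packet buffer (illustrative sketch only).

    Arriving packets enter a small tail cache in SRAM; once BLOCK packets accumulate,
    they are written to DRAM as a single block. Departures are served from a head
    cache in SRAM, which is refilled from DRAM one block at a time."""

    def __init__(self):
        self.tail_sram = deque()   # most recently arrived packets
        self.dram = deque()        # bulk storage, accessed only in blocks
        self.head_sram = deque()   # oldest packets, ready for fast dequeue

    def enqueue(self, pkt):
        self.tail_sram.append(pkt)
        if len(self.tail_sram) >= BLOCK:
            # One wide DRAM write amortizes the slow DRAM access over BLOCK packets.
            self.dram.extend(self.tail_sram.popleft() for _ in range(BLOCK))

    def dequeue(self):
        if not self.head_sram:
            if self.dram:
                # Refill the head cache with one block from DRAM.
                self.head_sram.extend(self.dram.popleft()
                                      for _ in range(min(BLOCK, len(self.dram))))
            elif self.tail_sram:
                # Queue is short: serve directly from the tail cache, bypassing DRAM.
                self.head_sram.append(self.tail_sram.popleft())
        return self.head_sram.popleft() if self.head_sram else None
```

In a real line card the head and tail caches would be sized by the memory management algorithm, for example to bound worst-case head-cache misses under many active queues; that trade-off between SRAM size, DRAM block size, and access delay is the design space the thesis explores.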
Keywords: Data Center, Interconnection Network, Router Memory, SRAM/DRAM.