THESIS
1997
xiii, 137 leaves : ill. ; 30 cm
Abstract
The recent growth of distributed real-time multimedia applications requires the network to guarantee quality of service (QoS) in terms of delay, loss rate, jitter, and throughput. To guarantee end-to-end QoS, resources (bandwidth and buffer) are reserved at each server node along the path of each connection. Previous work focuses either on restricted resource allocation schemes for specific scheduling disciplines or on analysis of the corresponding resource requirements. Little work addresses the general problem of resource allocation for deterministic QoS guarantees, or solutions that optimize utilization of the remaining resources. Based on Cruz's service curve approach, we developed a general theory of optimal resource allocation that guarantees worst-case network delays and aims at high overall network utilization.
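The service-curve framework mentioned above yields closed-form worst-case delay bounds. As a minimal sketch (illustrative notation, not the thesis's own): a flow constrained by a token-bucket arrival curve alpha(t) = sigma + rho*t, served under a rate-latency service curve beta(t) = R*max(0, t - T), experiences a worst-case delay equal to the maximum horizontal distance between the two curves, which is T + sigma/R whenever R >= rho.

```python
def worst_case_delay(sigma: float, rho: float, R: float, T: float) -> float:
    """Worst-case queueing delay for a (sigma, rho) token-bucket flow
    at a rate-latency (R, T) server; requires R >= rho for stability."""
    if R < rho:
        raise ValueError("unstable: service rate below sustained arrival rate")
    return T + sigma / R

# Example: 10 kb burst, 1 Mb/s sustained rate, 2 Mb/s server rate,
# 1 ms server latency  ->  bound of 1 ms + 10e3/2e6 s = 6 ms.
print(worst_case_delay(sigma=10e3, rho=1e6, R=2e6, T=1e-3))
```

The bound is tight in the standard network-calculus sense: a greedy source emitting its full burst at time zero attains it.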
Two versions of the optimal resource allocation problem were solved. The first considers bandwidth only. A feasibility test was derived to check whether a resource allocation exists that satisfies both the resource constraint and the delay requirement. Instantaneous available bandwidth (IAB), defined as the peak rate of the remaining end-to-end bandwidth, was used to determine the optimal allocation. An optimal resource allocation method was developed and studied, and was shown to use minimal resources.
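A hedged sketch of such a bandwidth-only feasibility test (the function names and the particular IAB formula are illustrative assumptions, not taken from the thesis): the IAB of a path is the tightest leftover capacity across its hops, and concatenating rate-latency servers (R_i, T_i) gives an end-to-end service curve with rate min R_i and latency sum T_i, so a (sigma, rho) flow reserved rate r at every hop sees worst-case end-to-end delay sum(T_i) + sigma/r.

```python
def iab(capacities, allocated):
    """Instantaneous available bandwidth: peak rate of the remaining
    end-to-end bandwidth, i.e. the smallest leftover capacity per hop."""
    return min(c - a for c, a in zip(capacities, allocated))

def feasible(capacities, allocated, latencies, sigma, rho, deadline):
    """Admission test: can a (sigma, rho) flow meet its deterministic
    end-to-end delay requirement on this path?"""
    r = iab(capacities, allocated)   # best rate reservable at every hop
    if r < rho:                      # must at least cover the sustained rate
        return False
    return sum(latencies) + sigma / r <= deadline
```

For example, on a 3-hop path with leftover capacities of 8, 4, and 9 Mb/s, the IAB is 4 Mb/s, and a 10 kb-burst flow with a 10 ms deadline passes the test while one with a 4 ms deadline does not.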
The second problem considers both bandwidth and buffer resources. Each server node has limited buffer space to hold backlogged packets. Traffic regulation was studied as a way to minimize the buffer requirement, and an optimal regulator design was developed. It was shown that minimal allocation can be achieved by independent traffic regulation at each server node. By bounding the burstiness of the output traffic at each node, a feasibility test was derived. Finally, a general optimal allocation method was developed and studied.
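The interplay of buffering and per-node regulation can be sketched with the standard network-calculus bounds; using them hop-by-hop as a stand-in for the thesis's regulator design is an assumption. A (sigma, rho) flow at a rate-latency (R, T) server with R >= rho needs buffer B = sigma + rho*T (the maximum vertical gap between arrival and service curves), and its output burstiness grows to that same value. Without reshaping, burstiness, and hence buffer need, accumulates along the path; an independent (sigma, rho) shaper at each node resets it.

```python
def buffer_bound(sigma, rho, T):
    """Backlog bound for a (sigma, rho) flow at a rate-latency server
    with latency T (assuming its rate covers rho)."""
    return sigma + rho * T

def path_buffers(sigma, rho, latencies, reshape):
    """Per-hop buffer requirements along a path of rate-latency servers.
    With reshape=True, a per-node regulator restores (sigma, rho)
    burstiness after every hop; otherwise burstiness accumulates."""
    buffers, burst = [], sigma
    for T in latencies:
        buffers.append(buffer_bound(burst, rho, T))
        burst = sigma if reshape else burst + rho * T  # output burstiness
    return buffers
```

With a 10 kb burst, 1 Mb/s rate, and three hops of 1 ms latency each, per-node reshaping keeps every hop's buffer at 11 kb, whereas without it the requirement grows hop by hop to 11, 12, and 13 kb, which illustrates why independent regulation at each node can achieve minimal allocation.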