Date/Time: February 25, 2014, 3:00PM
Location: DBH 3011
Alexander V. Veidenbaum (Chair)
Abstract: Designing an effective cache management policy for a multi-level cache hierarchy has long been an active research topic aimed at bridging the gap between fast microprocessors and long memory latency. It is becoming critically important in modern microprocessors due to the emergence of new hardware architectures, new technologies, and new applications. However, it has been shown that this is not an easy task because of the complexity of the inputs that computer architects must take into account.
A bottom-up approach has been used in designing such a policy. Rather than tackling the problem with all possible inputs at once, a management policy is broken down into smaller problems, each targeting a smaller number of inputs. Sub-policies, such as replacement, bypass, migration, and partitioning, are studied for a specific event or configuration. Solutions have been proposed for these individual policies and for their combinations. In either case, the new policies must be shown to work well with existing baseline policies.
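The decomposition described above can be made concrete with a small sketch: a single cache set whose miss handling is composed from independent replacement and bypass sub-policies, each replaceable on its own. This is an illustrative sketch only; the class and method names are hypothetical and do not correspond to the dissertation's actual designs.

```python
# Illustrative sketch of the bottom-up decomposition: one cache set whose
# behavior is composed from separate sub-policies. All names here are
# hypothetical, for illustration only.

class CacheSet:
    def __init__(self, ways):
        self.ways = ways
        self.lines = []  # ordered by recency; most-recently-used tag last

    def replacement_victim(self):
        # Replacement sub-policy: LRU evicts the least-recently-used line.
        return self.lines[0]

    def should_bypass(self, tag):
        # Bypass sub-policy: placeholder that never bypasses. A smarter
        # sub-policy could be swapped in without touching access().
        return False

    def access(self, tag):
        if tag in self.lines:               # hit: update recency order
            self.lines.remove(tag)
            self.lines.append(tag)
            return True
        if not self.should_bypass(tag):     # miss: consult bypass policy
            if len(self.lines) == self.ways:
                self.lines.remove(self.replacement_victim())
            self.lines.append(tag)
        return False
```

For example, with a 2-way set, the access sequence A, B, A, C evicts B (the LRU line), leaving A and C resident. Studying each sub-policy in isolation, as here, is exactly what makes the bottom-up approach tractable.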
In this dissertation, using the bottom-up approach, we propose new management policies for a multi-level cache hierarchy. Using new ideas about what makes a policy effective, or by adapting existing methods to new hardware architectures, we further optimize the current state-of-the-art policies. Specifically, we propose three new sets of management policies. First, new migration policies are proposed for an L0/L1 cache hierarchy in embedded processors, along with two new cache designs that enhance the operation of the L1 caches. Second, we propose a combined replacement, bypass, and partitioning policy for a last-level cache that balances cache reuse against pollution. The new policy is shown to reduce the pollution caused by keeping cache lines in the cache too long, a problem not addressed by prior work. Third, we propose a coordinated bypass policy for a multi-level cache hierarchy based on new classifications of cache lines and their reuse probabilities. This policy works well with any existing policy and architecture that allows bypass.
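The third contribution hinges on estimating, per class of cache line, how likely a line is to be reused, and bypassing lines whose estimate is low. A minimal sketch of such a decision follows; the class names, probabilities, and threshold are invented for illustration and are not the dissertation's actual classification.

```python
# Hypothetical sketch of a reuse-probability-driven bypass decision.
# The classes, probabilities, and threshold below are illustrative only.

reuse_probability = {
    "streaming": 0.05,   # lines touched once and never again
    "thrashing": 0.30,   # working set larger than the cache
    "friendly":  0.90,   # lines with strong temporal locality
}

def should_bypass(line_class, threshold=0.5):
    # Bypass (do not insert) lines whose estimated reuse probability falls
    # below the threshold, preserving capacity for reusable lines.
    return reuse_probability.get(line_class, 0.0) < threshold
```

Under this sketch, streaming lines would bypass the cache while locality-friendly lines are inserted normally; a coordinated policy would apply such decisions consistently across the levels of the hierarchy.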
The new cache management policies are shown either to improve on existing policies or to optimize the design with an acceptable performance loss, and to work well with the baseline policies. We also identify the hardware architectures and application classes to which each new policy applies. The hardware design is described as well and is shown to be feasible with low overhead.