Abstract
Advances in semiconductor technology have paved the way for an ever-increasing density of transistors per chip; the amount of information storable on a given area of silicon has roughly doubled every year since the technology was invented. As a result, processor performance has improved with each generation, but so has each chip's energy dissipation, creating a strong incentive to design low-power circuits. Low power matters in portable devices because the weight and size of the device are largely determined by the battery capacity required, which in turn depends on the power dissipated by the circuit. The cost of supplying power and the associated cooling, reliability issues, and expensive packaging have made low power a concern in non-portable applications such as desktops and servers as well. Although most power dissipation in CMOS CPUs is dynamic power (a function of the device's operating frequency and its switching capacitance), leakage power (a function of the number of on-chip transistors) is becoming increasingly significant, since leakage current flows in every powered transistor irrespective of signal transitions. Much of the leakage energy comes from on-chip memories: caches occupy a large fraction of the CPU die area and contain a large number of transistors, so reducing leakage in the cache yields a significant reduction in the processor's overall leakage energy. This paper proposes an architectural approach for reducing leakage energy in caches.
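For reference, the distinction drawn above can be summarized with the standard first-order CMOS power model; the form below is the usual textbook expression, not a formula taken from this paper:

```latex
% Dynamic (switching) power vs. static (leakage) power, first-order model:
%   alpha  - switching activity factor
%   C_sw   - switched capacitance
%   V_dd   - supply voltage
%   f      - clock frequency
%   I_leak - per-transistor leakage current (flows whenever the transistor is powered)
P_{\text{total}} \;=\;
  \underbrace{\alpha\, C_{\text{sw}}\, V_{dd}^{2}\, f}_{\text{dynamic}}
  \;+\;
  \underbrace{V_{dd}\sum_{i=1}^{N_{\text{transistors}}} I_{\text{leak},i}}_{\text{leakage}}
```

The dynamic term scales with switching activity and frequency, while the leakage term scales with the number of powered transistors, which is why large cache arrays dominate leakage energy.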
Various approaches have been suggested at both the architectural and circuit levels to reduce leakage energy. One approach counts the total number of misses in a cache and upsizes or downsizes the cache depending on whether the miss count exceeds or falls below a preset threshold; the cache dynamically resizes to the application's required size, and the unused sections are shut off. Another method, called cache decay, turns off cache lines when they hold data that is unlikely to be reused. A line is shut off during its dead time, i.e., the interval after its last access and before its eviction: if a specified number of cycles elapse and the data is still unused, that cache line is turned off. A third approach, selective cache ways, disables a portion of the cache ways. This application-sensitive method enables all the ways (a way being one of the n sections of an n-way set-associative cache) when high performance is required, and enables only a subset of the ways when cache demands are not high.
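As an illustration of the decay idea described above, the sketch below keeps a coarse per-line idle counter that is reset on each access and gates the line off once it crosses a decay interval. The class name, the decay interval, and the tick granularity are assumptions chosen for illustration, not parameters of the cited scheme.

```python
# Minimal sketch of a time-based cache-decay policy (illustrative only;
# the interval, tick granularity, and structure are assumed, not taken
# from the cited work).

DECAY_INTERVAL_TICKS = 8   # hypothetical: idle ticks before a line is gated off


class DecayLine:
    def __init__(self):
        self.valid = False      # line holds live data
        self.powered = False    # supply to this line is gated on/off
        self.idle_ticks = 0     # coarse count of ticks since last access

    def access(self):
        """Reset the idle counter; power the line back up if it was gated off."""
        self.powered = True
        self.valid = True
        self.idle_ticks = 0

    def tick(self):
        """Called once per coarse interval (e.g., every few thousand cycles)."""
        if not self.powered:
            return
        self.idle_ticks += 1
        if self.idle_ticks >= DECAY_INTERVAL_TICKS:
            # Line is assumed dead: turn it off to cut its leakage.
            self.powered = False
            self.valid = False  # data is lost; a later access must refetch it


# Usage: a line left idle for DECAY_INTERVAL_TICKS ticks is shut off.
line = DecayLine()
line.access()
for _ in range(DECAY_INTERVAL_TICKS):
    line.tick()
assert not line.powered
```

The trade-off this illustrates is that a decay interval set too short turns off lines that would still have been reused, adding misses, while one set too long leaves dead lines leaking.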
This paper is organized as follows: Section 1 reviews work related to this problem, Section 2 describes our approach, which applies a time-based decay policy to a partitioned level-2 cache architecture, and Section 3 presents the conclusion.