
Cache Design and Optimization Techniques

Rishabh Sancheti, K. B. Ramesh

Abstract


In recent technological developments, computers are widely employed as control and monitoring tools to aid system development, yet many computer applications suffer from low speed and poor performance. This paper presents a design procedure that shows how to improve both. Many factors affect a computer's performance, such as processor speed, RAM size, and weaknesses in the processor's cache memory system; these are among the most influential factors and the main causes of performance deterioration. Most cache memories are designed outside the processor unit, which degrades the data transfer rate to and from the processor and increases data access time. A C++ program was used as a simulation tool for performance evaluation to clearly show the effect of the cache memory. The simulation results demonstrate the significant positive impact of additional cache memory on both processor speed and overall computer performance when the cache is designed inside the processor unit, and conversely show negative results when the cache is designed outside it. Processor speed is improving at a much faster rate than main-memory access latency, and the impact of this growing gap can be mitigated by making optimal use of cache memory. The purpose of this paper is to explain methods for improving cache performance in terms of miss rate, hit rate, latency, efficiency, and cost.

 

Keywords: Cache design, latency, LFU, LRU




