Cache memory

This is a basic concept in computer science.

Cache is a very fast, small memory placed between the CPU and main memory. It is used to reduce the average memory access time: data from frequently accessed main-memory addresses is kept in the cache, where the CPU can read it much faster than it can read main memory. There are different levels of cache (e.g. L1, L2 and L3).[1]
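
To make "reduce the average memory access time" concrete, here is a small worked example in Python. It is only a sketch: the latency and hit-rate figures are illustrative assumptions, not measurements of any real CPU.

  # Average memory access time (AMAT) = hit time + miss rate * miss penalty.
  # All numbers below are assumed for illustration, not real hardware figures.
  cache_hit_time_ns = 1      # assumed cache read latency
  main_memory_time_ns = 100  # assumed main-memory read latency
  hit_rate = 0.95            # assumed fraction of accesses found in the cache

  miss_rate = 1 - hit_rate
  amat_ns = cache_hit_time_ns + miss_rate * main_memory_time_ns

  print(f"with cache:    {amat_ns:.1f} ns per access")         # 6.0 ns
  print(f"without cache: {main_memory_time_ns} ns per access") # 100 ns

Even with these assumed numbers and a modest 95% hit rate, the average access is far closer to cache speed than to main-memory speed.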


The steps to access data through the cache are (a minimal sketch in Python follows the list):

  • A request for data at a given address is made by the CPU
  • The cache is checked for that data
  • If the data is found in the cache, it is returned to the CPU (this is called a cache hit)
  • If the data is not found in the cache (a cache miss), it is fetched from main memory, returned to the CPU, and typically copied into the cache for future accesses
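
Here is a minimal sketch of those steps in Python. The names (main_memory, cache, read) are made up for illustration; a real cache is hardware that stores fixed-size blocks, not a Python dictionary.

  # Toy model of the hit/miss steps above (illustrative names only).
  main_memory = {address: address * 2 for address in range(1000)}  # stand-in for RAM
  cache = {}  # stand-in for the much smaller, faster cache

  def read(address):
      if address in cache:          # check the cache first
          return cache[address]     # cache hit: return directly to the CPU
      value = main_memory[address]  # cache miss: go to main memory
      cache[address] = value        # keep a copy for future accesses
      return value

  read(42)  # miss: fetched from main_memory, then cached
  read(42)  # hit: served from the cache

Note that this sketch never evicts anything; a real cache is small, so it must also decide which old entry to throw out when it fills up.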

There is a wonderful analogy I found [https://www.quora.com/Computer-Architecture-What-is-the-L1-L2-L3-cache-of-a-microprocessor-and-how-does-it-affect-the-performance-of-it here]. If you are confused about cache memory, I suggest you read the top part of this story.

Cache memory is fast because:

  • In the case of a CPU cache, it is faster because it's on the same die as the processor. In other words, the requested data doesn't have to be bussed over to the processor; it's already there.
  • In the case of the cache on a hard drive, it's faster because it's in solid state memory, and not still on the rotating platters.
  • In the case of the cache on a web site, it's faster because the data has already been retrieved from the database (which, in some cases, could be located anywhere in the world); a sketch of this kind of cache follows the list.
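
To illustrate the web-site case, here is a hedged sketch that memoizes a slow lookup with Python's functools.lru_cache. fetch_from_database is a hypothetical stand-in for a real query, and the half-second delay is an assumed latency.

  import time
  from functools import lru_cache

  def fetch_from_database(key):
      # Hypothetical stand-in for a query to a (possibly remote) database.
      time.sleep(0.5)  # assumed network + query latency
      return f"row for {key}"

  @lru_cache(maxsize=128)  # keep up to 128 recent results in memory
  def fetch_cached(key):
      return fetch_from_database(key)

  fetch_cached("user:42")  # slow: goes to the "database"
  fetch_cached("user:42")  # fast: answered from the in-process cache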

So it's mostly about locality: the cache eliminates, or greatly shortens, the slower data-transfer step.

Locality is a fancy way of saying data that is "close together," either in time or space. Caching with a smaller, faster (but generally more expensive) memory works because, typically, a relatively small portion of the overall data is what is accessed most often.[2]
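
That claim can be sketched with a short simulation. The access pattern below (80% of requests go to 10% of the addresses) is an assumption chosen to mimic locality, not measured data.

  import random
  from collections import OrderedDict

  random.seed(0)
  ADDRESSES = 1000
  CACHE_SIZE = 100       # the cache holds only 10% of the data
  cache = OrderedDict()  # used here as a simple LRU cache
  hits = 0
  N = 100_000

  for _ in range(N):
      # Assumed skewed pattern: 80% of accesses go to the first 100 addresses.
      if random.random() < 0.8:
          address = random.randrange(ADDRESSES // 10)
      else:
          address = random.randrange(ADDRESSES)
      if address in cache:
          hits += 1
          cache.move_to_end(address)     # mark as recently used
      else:
          cache[address] = True
          if len(cache) > CACHE_SIZE:
              cache.popitem(last=False)  # evict the least recently used

  print(f"hit rate: {hits / N:.0%}")  # roughly 80%, despite 10% capacity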

Do you understand this topic?

  • Explain the use of cache memory

Do you have an advanced understanding of this topic?

  • List the differences between L1, L2, and L3 cache memories

References

  2. http://programmers.stackexchange.com/questions/234253/why-is-cpu-cache-memory-so-fast