We examined early digital computer memory in Computer History – Core Memory, and mentioned that today’s standard RAM (Random Access Memory) is chip memory. This fits the commonly cited form of Moore’s Law (Gordon Moore was a co-founder of Intel), which states that the number of components that can be placed on an integrated circuit at minimum cost per component doubles roughly every 18 months to two years. Early core memory had cycle times measured in microseconds; today we talk in nanoseconds.

You may be familiar with the term cache as applied to PCs. It is one of the performance characteristics mentioned when talking about the latest CPU or hard drive. You can have L1 or L2 cache on the processor and disk cache of various sizes. Some programs also maintain a cache, also known as a buffer, for example when data is written to a CD recorder. The first CD recording programs suffered from buffer underruns: when the buffer ran empty mid-burn, the end result was a good supply of coasters!

Mainframe systems have used cache for many years. The concept became popular in the 1970s as a way to speed up memory access time. This was the time when core memory was being phased out and replaced with integrated circuits, or chips. Although the chips were much more efficient in terms of physical space, they had their own reliability and heat generation issues. Chips of one design were faster, but hotter and more expensive; chips of another design were cheaper, but slower. Speed has always been one of the biggest factors in computer sales, and design engineers have always looked for ways to improve performance.

The concept of cache is based on the fact that a computer is inherently a sequential processing machine. Of course, one of the great advantages of a computer program is that it can “branch” or “jump” out of sequence, the subject of another article in this series. However, one instruction still follows the next often enough to make a buffer, or cache, a useful addition to the computer.

The basic idea of the cache is to predict what data the CPU will need from memory next. Consider a program made up of a series of instructions, each stored in a location in memory, say from address 100 onwards. The instruction at location 100 is read from memory and executed by the CPU, then the next instruction is read from location 101 and executed, then 102, 103, and so on.
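This principle can be sketched in a few lines of C. It is only a toy model, not any real hardware: the fetch() helper, the four-word line size, and the stand-in “instructions” are all invented for illustration. On a miss, a whole line is refilled from slow memory, so the sequential fetches that follow are served from the fast cache.

```c
#include <stdio.h>

#define CACHE_LINE 4          /* words fetched from memory per block */

/* Toy "main memory": instruction words stored from address 100 onwards. */
static int memory[256];

static int cache[CACHE_LINE]; /* one cache line */
static int cache_base = -1;   /* memory address of the first cached word */

/* Fetch a word, refilling the cache line on a miss. */
static int fetch(int addr)
{
    if (addr < cache_base || addr >= cache_base + CACHE_LINE) {
        cache_base = addr - (addr % CACHE_LINE);   /* align to line start */
        for (int i = 0; i < CACHE_LINE; i++)
            cache[i] = memory[cache_base + i];     /* slow memory read */
        printf("miss at %d: refill line %d..%d\n",
               addr, cache_base, cache_base + CACHE_LINE - 1);
    }
    return cache[addr - cache_base];               /* fast cache read */
}

int main(void)
{
    for (int a = 100; a < 110; a++)
        memory[a] = a * 10;                        /* stand-in "instructions" */

    /* Sequential execution: addresses 100, 101, 102, ... mostly hit the cache. */
    for (int a = 100; a < 110; a++)
        printf("execute instruction %d from %d\n", fetch(a), a);
    return 0;
}
```

Running it shows one miss per four sequential addresses; the other three fetches never touch the slow memory at all, which is exactly the saving a cache exploits.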

If the memory in question is main memory, it may take 1 microsecond to read an instruction. If the processor takes, say, 100 nanoseconds to execute the instruction, it then has to wait 900 nanoseconds for the next instruction (1 microsecond = 1000 nanoseconds). The effective instruction time of the CPU is therefore 1 microsecond, or one instruction per microsecond. (The times and speeds given are typical but do not refer to any specific hardware; they simply illustrate the principles involved.)
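The arithmetic can be checked with a few lines of C; the figures are the illustrative ones from the paragraph above, not measurements of real hardware.

```c
#include <stdio.h>

int main(void)
{
    /* Illustrative figures from the text, not any specific hardware. */
    const double fetch_ns   = 1000.0;  /* main-memory read: 1 microsecond    */
    const double execute_ns =  100.0;  /* CPU execution time per instruction */

    double wait_ns  = fetch_ns - execute_ns;  /* CPU idle time per instruction */
    double cycle_ns = execute_ns + wait_ns;   /* effective instruction time    */

    printf("CPU waits %.0f ns per instruction\n", wait_ns);
    printf("effective instruction time: %.0f ns (%.1f million per second)\n",
           cycle_ns, 1000.0 / cycle_ns);
    return 0;
}
```

With these numbers the processor spends nine tenths of its time waiting: it could in principle execute ten million instructions per second, but the slow memory holds it to one million.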
