Monday, January 24, 2011

MEMORY ORGANIZATION AND MEMORY HIERARCHY

MAIN MEMORY

The main memory is the central storage unit in a computer system. It is a relatively large and fast memory used to store programs and data during the computer operation. The principal technology used for the main memory is based on semiconductor integrated circuits. Integrated circuit RAM chips are available in two possible operating modes, static and dynamic.
The static RAM consists essentially of internal flip-flops that store the binary information. The stored information remains valid as long as power is applied to the unit.
The dynamic RAM stores the binary information in the form of electric charges that are applied to capacitors. The capacitors are provided inside the chip by MOS transistors. The stored charge on the capacitors tends to discharge with time, so the capacitors must be periodically recharged by refreshing the dynamic memory. Refreshing is done by cycling through the words every few milliseconds to restore the decaying charge. The dynamic RAM offers reduced power consumption and larger storage capacity in a single memory chip.
The static RAM is easier to use and has shorter read and write cycles.
Most of the main memory in a general-purpose computer is made up of RAM integrated circuit chips, but a portion of the memory may be constructed with ROM chips. Originally, RAM was used to refer to a random-access memory, but now it is used to designate a read/write memory to distinguish it from a read-only memory, although ROM is also random access. RAM is used for storing the bulk of the programs and data that are subject to change. ROM is used for storing programs that are permanently resident in the computer and for tables of constants that do not change in value once the production of the computer is completed.
The ROM portion of main memory is needed for storing an initial program called a bootstrap loader. The bootstrap loader is a program whose function is to start the computer software operating when power is turned on. Since RAM is volatile, its contents are destroyed when power is turned off. The contents of ROM remain unchanged after power is turned off and on again. The startup of a computer consists of turning the power on and starting the execution of an initial program. Thus, when power is turned on, the hardware of the computer sets the program counter to the first address of the bootstrap loader. The bootstrap program loads a portion of the operating system from disk to main memory and control is then transferred to the operating system, which prepares the computer for general use.
RAM and ROM chips are available in a variety of sizes. If the memory needed for the computer is larger than the capacity of one chip, it is necessary to combine a number of chips to form the required memory size.
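As one illustrative sketch (the chip sizes below are made-up examples, not from the text), combining chips amounts to stacking them to widen the word and banking them to add words, with the high-order address bits selecting a chip:

```python
# Sketch: how many RAM chips are needed to build a larger memory, and
# how the address splits between chip selection and the word inside a chip.

def chips_needed(total_words, total_bits, chip_words, chip_bits):
    """Chips required to build a total_words x total_bits memory
    from chip_words x chip_bits chips."""
    per_word = -(-total_bits // chip_bits)   # chips stacked to widen each word
    banks = -(-total_words // chip_words)    # banks stacked to add more words
    return per_word * banks

# Example: a 1024 x 8 memory built from 128 x 8 chips.
n = chips_needed(1024, 8, 128, 8)
print(n)  # 8 chips: the 3 high-order address bits select the chip,
          # the 7 low-order bits address a word inside the selected chip
```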

AUXILIARY MEMORY

The auxiliary memory devices used in a computer system are magnetic disks, magnetic tapes, magnetic drums and, more recently, optical disks. The important characteristics of any device are its access mode, access time, transfer rate, capacity and cost.
The average time required to reach a storage location in memory and obtain its contents is the access time. In electromechanical devices the access time is the sum of the seek time (the time required to position the read/write heads on the desired location) and the transfer time (the time required to transfer data to or from the device). Auxiliary storage is logically divided into records or blocks. A record or block consists of a number of words. Input/output with auxiliary memory is always done in terms of entire blocks.
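The seek-plus-transfer breakdown above can be put into a small worked example (the seek time, transfer rate and block size here are illustrative numbers, not from the text):

```python
# Sketch: access time of an electromechanical device as
# seek time + transfer time for one block.

def access_time_ms(seek_ms, transfer_rate_bytes_per_ms, block_bytes):
    """Total time to position the heads and move one block of data."""
    transfer_ms = block_bytes / transfer_rate_bytes_per_ms
    return seek_ms + transfer_ms

# e.g. 10 ms average seek, 1000 bytes/ms transfer rate, 512-byte block
t = access_time_ms(10, 1000, 512)
print(t)  # 10.512 ms
```

Note that for a small block the seek time dominates, which is why auxiliary memory transfers whole blocks rather than individual words.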

ASSOCIATIVE MEMORY

Many data processing applications require that the time needed to find an item stored in memory be reduced to a minimum for efficiency. This can be done if stored data can be identified for access by the content of the data rather than by its address. A memory unit addressed by its contents is called an associative memory or content-addressable memory (CAM). This type of memory is accessed simultaneously and in parallel on the basis of data content. When a word is to be written into an associative memory, no address is specified; the memory is capable of finding an empty, unused location in which to store the word. When a word is to be read from the memory, the content, or part of the content, of the word is specified; the memory hardware locates all words that match the specified content and marks them for reading.
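The match-on-content behaviour can be modelled in software. A real CAM compares every stored word in parallel in hardware; the sequential loop below only imitates the outcome, and the mask/key interface is an illustrative assumption:

```python
# Software model of a CAM read: a key and a mask specify part of a word,
# and the indices of all matching words are returned (hardware would
# instead set a match bit per word, in parallel).

def cam_match(words, key, mask):
    """Return indices of all words whose bits under `mask` equal `key`."""
    return [i for i, w in enumerate(words) if (w & mask) == (key & mask)]

mem = [0b1010_1100, 0b1010_0011, 0b0110_1100]
# match on the high nibble only
hits = cam_match(mem, 0b1010_0000, 0b1111_0000)
print(hits)  # [0, 1]
```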

CACHE MEMORY

Analysis of a large number of typical programs has shown that the references to memory at any given interval of time tend to be confined to a few localized areas of memory. This phenomenon is known as the property of locality of reference. The reason for this property may be understood by considering that a typical computer program flows in a straight-line fashion, with program loops and subroutine calls encountered frequently.
When a program loop is executed, the CPU repeatedly refers to the set of instructions in memory that constitute the loop. Every time a given subroutine is called, its set of instructions is fetched from memory. Thus, loops and subroutines tend to localize the references to memory for fetching instructions. To a lesser degree, memory references to data also tend to be localized. Table-lookup procedures repeatedly refer to the portion of memory where the table is stored. Iterative procedures refer to common memory locations, and arrays of numbers are confined within a local portion of memory. The result of all these observations is the locality of reference property, which states that over a short interval of time, the addresses generated by a typical program refer to a few localized areas of memory repeatedly, while the remainder of memory is accessed relatively infrequently.
If the active portions of the program and data are placed in a fast small memory, the average memory access time can be reduced, thus reducing the total execution time of the program. Such a fast small memory is referred to as a cache memory. It is placed between the CPU and main memory as shown in the figure. The cache is the fastest component in the memory hierarchy and approaches the speed of CPU components.
The fundamental idea of cache organization is that by keeping the most frequently accessed instructions and data in the fast cache memory, the average memory access time will approach the access time of the cache. Although the cache is only a small fraction of the size of main memory, a large fraction of memory requests will be found in the fast cache memory because of the locality of reference property of programs.
The basic operation of the cache is as follows. When the CPU needs to access memory, the cache is examined. If the word is found in the cache, it is read from the fast memory. If the word addressed by the CPU is not found in the cache, the main memory is accessed to read the word. A block of words containing the one just accessed is then transferred from main memory to cache memory. The block size may vary from one word (the one just accessed) to about 16 words adjacent to the one just accessed. In this manner, some data are transferred to the cache so that future references to memory find the required words in the fast cache memory.
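This hit/miss-with-block-transfer behaviour can be sketched in a few lines. The block size and the dictionary-of-blocks structure below are illustrative assumptions, not a description of real cache hardware:

```python
# Sketch of the basic cache operation: on a hit the word comes from the
# cache; on a miss the whole block containing the word is copied from
# main memory into the cache before the word is returned.

BLOCK_SIZE = 4  # words per block (the text says 1 to about 16)

def read(addr, cache, main_memory):
    """Return (word, 'hit'|'miss') for a CPU read at address addr."""
    block_no = addr // BLOCK_SIZE
    if block_no in cache:                         # hit: fast cache access
        return cache[block_no][addr % BLOCK_SIZE], "hit"
    start = block_no * BLOCK_SIZE                 # miss: fetch whole block
    cache[block_no] = main_memory[start:start + BLOCK_SIZE]
    return cache[block_no][addr % BLOCK_SIZE], "miss"

main = list(range(100, 132))   # toy main memory contents
cache = {}
print(read(5, cache, main))    # (105, 'miss')
print(read(6, cache, main))    # (106, 'hit') - same block as word 5
```

The second read hits because the miss on address 5 brought in the whole block, which is exactly how locality of reference pays off.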
The performance of cache memory is frequently measured in terms of a quantity called the hit ratio. When the CPU refers to memory and finds the word in the cache, it is said to produce a hit. If the word is not found in the cache, it is in main memory and it counts as a miss. The ratio of the number of hits divided by the total number of CPU references to memory (hits plus misses) is the hit ratio. The hit ratio is best measured experimentally by running representative programs in the computer and measuring the number of hits and misses during a given interval of time. Hit ratios of 0.9 and higher have been reported. This high ratio verifies the validity of the locality of reference property.
The average memory access time of a computer system can be improved considerably by use of a cache. If the hit ratio is high enough that most of the time the CPU accesses the cache instead of main memory, the average access time is closer to the access time of the fast cache memory. For example, a computer with a cache access time of 100 ns, a main memory access time of 1000 ns, and a hit ratio of 0.9 produces an average access time of 0.9 × 100 + 0.1 × (100 + 1000) = 200 ns, where on a miss the main memory access follows the cache lookup. This is a considerable improvement over a similar computer without a cache memory, whose access time is 1000 ns.
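The 200 ns figure can be reproduced directly (this assumes, as the example implies, that a miss pays for the cache lookup plus the main memory access):

```python
# Reproducing the worked example: average access time with a cache,
# where a miss costs the cache lookup plus the main memory access.

def avg_access_ns(hit_ratio, cache_ns, main_ns):
    return hit_ratio * cache_ns + (1 - hit_ratio) * (cache_ns + main_ns)

print(avg_access_ns(0.9, 100, 1000))  # 200.0 ns, matching the text
```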
The basic characteristic of cache memory is its fast access time. Therefore, very little or no time must be wasted when searching for words in the cache. The transformation of data from main memory to cache memory is referred to as a mapping process. Three types of mapping procedures are of practical interest when considering the organization of cache memory:
1. Associative mapping
2. Direct mapping
3. Set-associative mapping
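The three schemes are only listed above. As one hedged illustration of the second, direct mapping splits each address into an index (which selects a cache line) and a tag (which is compared against the tag stored in that line); the field width below is an assumed example, not from the text:

```python
# Sketch of address splitting under direct mapping: low-order bits index
# a cache line, the remaining high-order bits are the tag.

INDEX_BITS = 9                 # assume a 512-line cache for illustration

def split_address(addr):
    index = addr & ((1 << INDEX_BITS) - 1)   # selects the cache line
    tag = addr >> INDEX_BITS                 # compared with the stored tag
    return tag, index

print(split_address(0x0AF3))   # (5, 243): tag 0x5, index 0xF3
```

A reference hits only when the tag stored in the indexed line equals the tag of the address, which is what distinguishes direct mapping from the associative schemes.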