
Thursday, June 2, 2011

new Q

Briefly explain the following:
1) Micro Operation: The operations executed on data stored in registers are called micro operations. A micro operation is an elementary operation performed on the information stored in one or more registers; the result of the operation may replace the previous binary information of a register or may be transferred to another register. The micro operations most often encountered in digital computers are classified into four categories (a small sketch follows the list below):
1. Register transfer microoperations transfer binary information from one register to another.
2. Arithmetic microoperations perform arithmetic operations on numeric data stored in registers.
3. Logic microoperations perform bit manipulation operations on nonnumeric data stored in registers.
4. Shift microoperations perform shift operations on data stored in registers.
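The following is a minimal C sketch of the four categories, not a description of any particular machine: the "registers" R1, R2 and R3 are just 8-bit C variables chosen for the demo.

#include <stdint.h>
#include <stdio.h>

/* Illustrative sketch of the four microoperation categories, modeled on
 * 8-bit "registers".  R1, R2 and R3 are plain C variables used for the
 * demo, not any real machine's register set. */
int main(void) {
    uint8_t R1 = 0x3C, R2 = 0x0F, R3;

    R3 = R1;                 /* register transfer: R3 <- R1                 */
    uint8_t sum  = R1 + R2;  /* arithmetic: add the contents of R1 and R2   */
    uint8_t mask = R1 & R2;  /* logic: bitwise AND (bit manipulation)       */
    uint8_t shl  = R1 << 1;  /* shift: shift the contents of R1 left by one */

    printf("transfer=0x%02X sum=0x%02X and=0x%02X shift=0x%02X\n",
           R3, sum, mask, shl);
    return 0;
}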
2) Page Replacement: A virtual memory system is a combination of hardware and software techniques. The memory management software handles all the software operations needed for the efficient utilization of memory space. It must decide:
1) Which page in main memory ought to be removed to make room for a new page.
2) When a new page is to be transferred from auxiliary memory to main memory.
3) Where the page is to be placed in main memory.
3) Page Fault: When a program starts execution, one or more pages are transferred into main memory and the page table is set to indicate their position. The program is executed from main memory until it attempts to reference a page that is still in auxiliary memory. This condition is called a page fault. When a page fault occurs, the execution of the present program is suspended until the required page is brought into main memory.
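Both ideas can be seen together in a minimal demand-paging sketch in C: each memory reference either hits a resident page or causes a page fault, and when all frames are occupied a FIFO victim is replaced. The frame count and the reference string are assumptions made purely for the demo.

#include <stdio.h>

#define FRAMES 3   /* number of main-memory frames (assumption for the demo) */

/* Minimal demand-paging sketch: a page fault occurs whenever the referenced
 * page is not resident; a FIFO victim is chosen when all frames are full. */
int main(void) {
    int frames[FRAMES];
    int next_victim = 0, loaded = 0, faults = 0;
    int refs[] = {1, 2, 3, 2, 4, 1, 5, 2};   /* hypothetical reference string */
    int n = sizeof refs / sizeof refs[0];

    for (int i = 0; i < n; i++) {
        int page = refs[i], hit = 0;
        for (int j = 0; j < loaded; j++)
            if (frames[j] == page) { hit = 1; break; }
        if (!hit) {                            /* page fault: page not in memory */
            faults++;
            if (loaded < FRAMES) {
                frames[loaded++] = page;       /* a free frame is still available */
            } else {
                frames[next_victim] = page;    /* replace the FIFO victim         */
                next_victim = (next_victim + 1) % FRAMES;
            }
        }
    }
    printf("page faults: %d out of %d references\n", faults, n);
    return 0;
}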
4) Replacement Algorithms:
• When a new block is to be brought into the cache and all the positions that it may occupy are full, the cache controller must decide which of the old blocks to overwrite. In a direct-mapped cache the position of each block is predetermined, so no replacement strategy is required; in associative and set-associative caches, however, there is some flexibility.
• In general, the aim is to keep in the cache the blocks that are likely to be referenced again, but determining which blocks are about to be referenced is difficult. Since programs usually reside in localized areas of memory for reasonable periods of time, there is a high probability that blocks that have been referenced recently will be referenced again.
• When a block is to be overwritten, it is sensible to overwrite the one that has gone the longest time without being referenced. This block is called the least recently used (LRU) block, and the technique is called the least recently used replacement algorithm.
• To implement the LRU algorithm, the cache controller must track references to all blocks as computation progresses.
For example, suppose it is required to track the LRU block of a four-block set in a set-associative cache. A 2-bit counter can be used for each block. When a hit occurs, the counter of the referenced block is set to 0; counters with values originally lower than the referenced one are incremented by one, and all others remain unchanged. (A code sketch of this counter scheme follows at the end of this item.)
• When a miss occurs and the set is not full, the counter associated with the new block loaded from the main memory is set to 0, and the values of all other counters are increased by one. When a miss occurs and the set is full, the block with the counter value 3 is removed, the new block is put in its place, and its counter is set to 0. The other three block counters are incremented by one. It can be easily verified that the counter values of occupied blocks are always distinct.
• The Least Recently Used algorithm has been used extensively. Although it performs well for many access patterns, it can lead to poor performance in some cases. Performance of the Least Recently Used algorithm can be improved by introducing a small amount of randomness in deciding which block to replace.
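Below is a minimal C sketch of the 2-bit counter scheme described in the example above, for a single four-block set. The block tags and the reference sequence are hypothetical, and a real controller would implement the same bookkeeping in hardware.

#include <stdio.h>

#define WAYS 4

/* One four-block set with a 2-bit LRU counter per block (0 = most recently
 * used, 3 = least recently used).  Tag values are hypothetical. */
typedef struct {
    int tag[WAYS];
    int ctr[WAYS];
    int used;        /* number of occupied blocks in the set */
} cache_set;

static void reference(cache_set *s, int tag) {
    /* Hit: reset the referenced block's counter, bump counters that were lower. */
    for (int i = 0; i < s->used; i++) {
        if (s->tag[i] == tag) {
            int old = s->ctr[i];
            for (int j = 0; j < s->used; j++)
                if (s->ctr[j] < old) s->ctr[j]++;
            s->ctr[i] = 0;
            return;
        }
    }
    /* Miss, set not full: load into a free block, increment all others. */
    if (s->used < WAYS) {
        for (int j = 0; j < s->used; j++) s->ctr[j]++;
        s->tag[s->used] = tag;
        s->ctr[s->used] = 0;
        s->used++;
        return;
    }
    /* Miss, set full: replace the block whose counter is 3 (the LRU block). */
    for (int i = 0; i < WAYS; i++) {
        if (s->ctr[i] == 3) { s->tag[i] = tag; s->ctr[i] = 0; }
        else                { s->ctr[i]++; }
    }
}

int main(void) {
    cache_set s = { .used = 0 };
    int refs[] = {10, 20, 30, 40, 20, 50};   /* hypothetical block tags */
    for (int i = 0; i < (int)(sizeof refs / sizeof refs[0]); i++)
        reference(&s, refs[i]);
    for (int i = 0; i < s.used; i++)
        printf("block tag %d  counter %d\n", s.tag[i], s.ctr[i]);
    return 0;
}

After the run, the counter values of the occupied blocks are all distinct, as the example above claims.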
5. Advantages of Using Virtual Memory
• Virtual memory is the separation of user logical memory from physical memory. This separation allows an extremely large virtual memory to be provided for programmers when only a smaller physical memory is available.
• Virtual memory makes the task of programming much easier, because the programmer no longer needs to worry about the amount of physical memory available or about what code can be placed in overlays, but can concentrate instead on the problem to be programmed.
• On systems which support virtual memory, overlays have virtually disappeared.
• Virtual memory is commonly implemented by demand paging. It can also be implemented in a segmentation system; several systems provide a paged segmentation scheme, where segments are broken into pages. Demand segmentation can also be used to provide virtual memory.
• To free the user from the need to carry out storage reallocation and to permit the efficient sharing of the available memory space by different users.
• To make the program independent of the configuration and capacity of the physical memory for execution.
• To achieve the very low cost per bit and low access time that are possible with a memory hierarchy.
6. Vectored Interrupts :
There are two possible implementations of vectored interrupts, depending on the type of microprocessor.
1) Vector address is Fixed
• Here, the interrupt requests coming from the I/O devices are stored in an interrupt register by setting the corresponding bits.
• The interrupt mask register is user programmable and is used to disable/mask the corresponding interrupt requests.
• When multiple interrupts are forwarded to the inputs of the priority encoder, only the highest-priority interrupt input is encoded and the corresponding code is generated. This code is inserted at predefined bit positions in the program counter (PC).
• This defines the vector location for the interrupt; the interrupt request is then sent to the processor from the priority encoder. (A sketch of this selection logic follows at the end of this item.)
2) Vector address given by the I/O device
• The system is implemented as a multi-level interrupt scheme in which a separate interrupt request line (IRQ) and acknowledge line (ACK) are used for each I/O device.
• When multiple interrupt requests come from multiple I/O devices, the priority control logic resolves the priority and sends a common interrupt request to the CPU.
• The CPU sends an interrupt acknowledge to the priority control logic, which in turn gives the acknowledgement to the highest-priority interrupting device.
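A minimal C sketch of the fixed-vector-address case: pending requests sit in an interrupt register, a mask register disables some of them, and a priority encoder selects the highest-priority unmasked line. The eight request lines, the vector base address 0x0040 and the 4-byte vector spacing are assumptions made for the demo, not the layout of any particular processor.

#include <stdint.h>
#include <stdio.h>

#define VECTOR_BASE 0x0040u   /* assumed base of the vector locations */

/* Select the highest-priority pending request that is not masked. */
static int priority_encode(uint8_t pending, uint8_t mask) {
    uint8_t active = pending & ~mask;        /* masked requests are ignored */
    for (int line = 7; line >= 0; line--)    /* line 7 = highest priority   */
        if (active & (1u << line))
            return line;
    return -1;                               /* no interrupt pending        */
}

int main(void) {
    uint8_t interrupt_reg = 0x29;   /* requests pending on lines 0, 3 and 5 */
    uint8_t mask_reg      = 0x20;   /* line 5 masked by the programmer      */

    int line = priority_encode(interrupt_reg, mask_reg);
    if (line >= 0) {
        /* The encoder's code selects a predefined vector location; the
         * processor loads this address into the PC to enter the routine. */
        unsigned vector = VECTOR_BASE + 4u * (unsigned)line;
        printf("servicing line %d, vector address 0x%04X\n", line, vector);
    }
    return 0;
}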
7. Bus Arbitration: Bus systems with more than one potential bus master need a bus arbitration mechanism to allocate the bus to a bus master. Although the CPU is the bus master most of the time, the DMA controller acts as the bus master during certain I/O transfers. In principle, bus arbitration can be done either statically or dynamically.
Static bus arbitration: In static bus arbitration, bus allocation among the masters is done in a predetermined way. For example, we might use a round-robin allocation that rotates the bus among the masters. The main advantage of a static mechanism is that it is easy to implement. However, since bus allocation follows a predetermined pattern rather than the actual need, a master may be given the bus even if it does not need it. This kind of allocation leads to inefficient use of the bus. Consequently, most implementations use dynamic bus arbitration, which uses a demand-driven allocation scheme.
Dynamic bus arbitration: In dynamic bus arbitration, bus allocation is done in response to a request from a bus master. To implement dynamic arbitration, each master should have bus request and bus grant lines. A bus master uses the bus request line to let others know that it needs the bus to perform a bus transaction. Before it can initiate the bus transaction, it should receive permission to use the bus via the bus grant line. Dynamic arbitration consists of bus allocation and release policies.
Bus arbitration can be implemented in one of two basic ways: centralized or distributed.
In a centralized scheme, a central arbiter receives bus requests from all masters. The arbiter, using the bus allocation policy in effect, determines which bus request should be granted; this decision is conveyed through the bus grant lines. Once the transaction is over, the master holding the bus releases it; the release policy determines the actual release mechanism.
In a distributed implementation, the arbitration hardware is distributed among the masters, and a distributed algorithm is used to determine which master should get the bus.
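The following is a minimal C sketch of a centralized, demand-driven arbiter that uses rotating (round-robin) priority for fairness. The number of masters and the per-cycle request patterns are assumptions for the demo, not a description of any particular bus.

#include <stdio.h>

#define MASTERS 4

/* Demand-driven grant decision: the bus is given only to a master that is
 * asserting its request line, starting the search just after the master
 * that was granted last (rotating priority). */
static int grant(const int request[MASTERS], int last_granted) {
    for (int i = 1; i <= MASTERS; i++) {
        int candidate = (last_granted + i) % MASTERS;
        if (request[candidate])           /* skip masters that are idle */
            return candidate;
    }
    return -1;                            /* no master is requesting the bus */
}

int main(void) {
    /* One row per bus cycle: which masters assert their request lines. */
    int cycles[][MASTERS] = {
        {1, 0, 1, 0},
        {1, 1, 0, 0},
        {0, 0, 0, 1},
    };
    int last = MASTERS - 1;   /* so master 0 has the highest priority initially */

    for (int c = 0; c < 3; c++) {
        int g = grant(cycles[c], last);
        printf("cycle %d: bus granted to master %d\n", c, g);
        if (g >= 0) last = g;
    }
    return 0;
}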
8. Bus Design Issues :
• Bus width: Bus width refers to the data and address bus widths. System performance improves with a wider data bus, as we can move more bytes in parallel, and we increase the addressing capacity of the system by adding more address lines.
• Bus type: There are two types of buses: dedicated and multiplexed.
• Bus operations: Bus systems support several types of operations to transfer data, including read, write, block transfer, read-modify-write, and interrupt.
• Bus Arbitration : Bus arbitration can be done in one of two ways : Centralized or distributed.
• Bus Timing : As mentioned in the last section, buses can be designed as either synchronous or asynchronous.
9. Explain BIOS and CMOS
BIOS means Basic Input/Output Services. To communicate with the different devices of the computer and the different parts of the motherboard, the processor goes to the ROM chip to access the proper program. Understand that there are many programs on the ROM chip: communicating with the basic hardware requires hundreds of little programs (2 to 30 lines of code each). These hundreds of little programs stored on the ROM chip are collectively called the BIOS; each tiny program is called a service.
CMOS means Complementary Metal-Oxide Semiconductor. The BIOS information that can change is stored in a special RAM chip called the CMOS chip. Because this data must be changed when you make certain hardware changes, you need to be able to access and update the data on the CMOS chip. This is done through the CMOS setup program, a special program that allows you to make changes to the data on the CMOS chip; the setup program itself is stored in the system ROM. There are many ways to start the CMOS setup program, depending on the brand of BIOS you have on your computer.
10. Interrupt Nesting
A device raises an interrupt request, and the processor interrupts the program currently being executed. Interrupts should be disabled during the execution of an interrupt-service routine, to ensure that a request from one device does not cause more than one interruption. The same arrangement is often used when several devices are involved, in which case execution of a given interrupt-service routine, once started, always continues to completion before the processor accepts an interrupt request from a second device. To allow nesting, devices are organized in a priority structure, so that a request from a higher-priority device can interrupt the service routine of a lower-priority one.
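A minimal C sketch of the priority-based nesting idea, assuming hypothetical priority levels: a request arriving while an interrupt-service routine is running is accepted only if its priority is strictly higher than the level of the routine being executed; otherwise it is held pending until that routine completes.

#include <stdio.h>

static int current_level = 0;   /* 0 = normal program execution, no ISR active */

static void service(int level);

/* A request nests (preempts the current routine) only if it has a strictly
 * higher priority than the processor's current level. */
static void request(int level) {
    if (level > current_level)
        service(level);
    else
        printf("request at priority %d held pending (current level %d)\n",
               level, current_level);
}

static void service(int level) {
    int saved = current_level;
    current_level = level;      /* raise the processor priority for this ISR */
    printf("entering service routine at priority %d\n", level);
    if (level == 2) {           /* demo: two requests arrive during the level-2 routine */
        request(1);             /* held pending: not higher than the active routine     */
        request(3);             /* accepted: higher priority, so it nests               */
    }
    current_level = saved;      /* restore the level on return from interrupt */
    printf("leaving service routine at priority %d\n", level);
}

int main(void) {
    request(2);   /* accepted: the processor was running the main program */
    return 0;
}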