
Saturday, May 7, 2011

q1.f

Explain in detail virtual memory and the address translation scheme.
A simple method for translating virtual addresses into physical addresses is to assume that all programs and data are composed of fixed-length units called pages, each of which consists of a block of words that occupy contiguous locations in the main memory. Pages commonly range from 2K to 16K bytes in length. They constitute the basic unit of information that is moved between the main memory and the disk whenever the translation mechanism determines that a move is required. Pages should not be too small, because the access time of a magnetic disk is much longer than the access time of the main memory. The reason for this is that it takes a considerable amount of time to locate the data on the disk, but once located, the data can be transferred at a rate of several megabytes per second. On the other hand, if pages are too large, it is possible that a substantial portion of a page may not be used, yet this unnecessary data will occupy valuable space in the main memory.
The cache bridges the speed gap between the processor and the main memory and is implemented in hardware. The virtual-memory mechanism bridges the size and speed gaps between the main memory and secondary storage and is usually implemented in part by software techniques. Conceptually, cache techniques and virtual memory techniques are very similar. They differ mainly in the details of the implementation.
A virtual-memory address translation method can be based on the concept of fixed-length pages. Each virtual address generated by the processor, whether it is for an instruction fetch or an operand fetch/store operation, is interpreted as a virtual page number (the high-order bits) followed by an offset (the low-order bits) that specifies the location of a particular byte (or word) within a page. Information about the main memory location of each page is kept in a page table. This information includes the main memory address where the page is stored and the current status of the page. An area in the main memory that can hold one page is called a page frame. The starting address of the page table is kept in a page table base register. By adding the virtual page number to the contents of this register, the address of the corresponding entry in the page table is obtained. The contents of this location give the starting address of the page if that page currently resides in the main memory.
Each entry in the page table also includes some control bits that describe the status of the page while it is in the main memory. One bit indicates the validity of the page, that is, whether the page is actually loaded in the main memory; this bit allows the operating system to invalidate the page without actually removing it. Another bit indicates whether the page has been modified during its residency in the memory. As in cache memories, this information is needed to determine whether the page should be written back to the disk before it is removed from the main memory to make room for another page. Other control bits indicate various restrictions that may be imposed on accessing the page. For example, a program may be given full read and write permission, or it may be restricted to read accesses only.
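As a concrete illustration, here is a minimal C sketch of this translation scheme. It assumes a 4 KB page, a single-level table, and the names used below; none of these specifics come from the answer above, they are only illustrative.

#include <stdint.h>
#include <stdio.h>

#define PAGE_BITS 12                      /* assume 4 KB pages            */
#define PAGE_SIZE (1u << PAGE_BITS)
#define NUM_PAGES 1024                    /* assume a small virtual space */

/* One page-table entry: frame address plus the control bits described above. */
typedef struct {
    uint32_t frame_base;    /* main memory address where the page is stored */
    unsigned valid : 1;     /* is the page actually loaded in main memory?  */
    unsigned dirty : 1;     /* modified during its residency in memory?     */
    unsigned read_only : 1; /* access-restriction bit                       */
} pte_t;

static pte_t page_table[NUM_PAGES];  /* the page table base register would point here */

/* Split the virtual address into page number and offset, then index the table. */
int translate(uint32_t vaddr, uint32_t *paddr)
{
    uint32_t vpn    = vaddr >> PAGE_BITS;      /* high-order bits */
    uint32_t offset = vaddr & (PAGE_SIZE - 1); /* low-order bits  */

    if (vpn >= NUM_PAGES || !page_table[vpn].valid)
        return -1;               /* page fault: the OS must bring the page in */

    *paddr = page_table[vpn].frame_base + offset;
    return 0;
}

int main(void)
{
    page_table[3] = (pte_t){ .frame_base = 0x8000, .valid = 1 };
    uint32_t pa;
    if (translate((3u << PAGE_BITS) | 0x10, &pa) == 0)
        printf("physical address = 0x%X\n", pa);   /* prints 0x8010 */
    return 0;
}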

q1.e

3. Explain any two mapping procedures in cache memory
Direct Mapping: Direct mapping uses ordinary random-access memory for the cache, since associative memories are not economical because of the match logic added to each cell. The following figure demonstrates direct mapping.
The 15-bit CPU address is divided into two fields. The nine least significant bits form the index field and the remaining six bits form the tag field. As the figure shows, main memory needs an address that includes both the tag and the index bits, while the number of bits in the index field is equal to the number of address bits required to access the cache memory.
In general, there are 2^k words in cache memory and 2^n words in main memory. The n-bit memory address is divided into the index and tag fields: k bits for the index field and (n - k) bits for the tag field. The direct-mapped cache uses the n-bit address to access main memory and the k-bit index to access the cache. Each cache word consists of the data word and its associated tag; when a new word is first brought into the cache, the tag bits are stored alongside the data bits.
When the CPU generates a memory request, the index field is used as the address to access the cache. The tag field of the CPU address is compared with the tag in the word read from the cache. If the two tags match, a hit is produced and the required data word is in the cache. If there is no match, there is a miss and the desired data word is read from the main memory.
It is then stored in the cache together with the new tag, replacing the previous value. The disadvantage of direct mapping is that the hit ratio can drop if two or more words whose addresses have the same index but different tags are accessed repeatedly. However, this possibility is minimized by the fact that such words are relatively far apart in the address range (multiples of 512 locations in this example).

Consider a numerical example to see how direct mapping operates. The word 1220 is stored at address 00000 in the main memory, and the same word is stored in the cache as index = 000, tag = 00, data = 1220. Suppose the CPU wants to access the word stored at address 02000 (word = 5670). The index address is 000, so it is used to access the cache. The two tags are compared: the cache tag is 00 but the address tag is 02, which does not produce a match. Therefore the main memory is accessed and the data word 5670 is transferred to the CPU. The cache word at index address 000 is then replaced with a tag of 02 and data of 5670.

The above figure shows the direct-mapped cache organization.
Direct mapping is expressed as i = j modulo m
where i = cache line number, j = main memory block number, and m = number of lines in the cache. A minimal lookup sketch follows.
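The index/tag arithmetic described above can be sketched in C as follows. The 9-bit index and 6-bit tag match the 15-bit example; the structure and function names are hypothetical, a sketch rather than a definitive implementation.

#include <stdint.h>

#define INDEX_BITS  9
#define CACHE_WORDS (1u << INDEX_BITS)   /* 512 cache words */

typedef struct {
    uint16_t tag;    /* 6-bit tag stored alongside the data */
    uint16_t data;   /* 12-bit data word                    */
    int      valid;
} line_t;

static line_t cache[CACHE_WORDS];

/* Direct-mapped lookup for a 15-bit CPU address: returns 1 on a hit. */
int cache_lookup(uint16_t addr, uint16_t *data)
{
    uint16_t index = addr & (CACHE_WORDS - 1); /* nine low-order bits */
    uint16_t tag   = addr >> INDEX_BITS;       /* six high-order bits */

    if (cache[index].valid && cache[index].tag == tag) {
        *data = cache[index].data;             /* tags match: hit     */
        return 1;
    }
    /* Miss: the word is read from main memory and then stored at      */
    /* cache[index] together with the new tag, replacing the old line. */
    return 0;
}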
Set-Associative Mapping: The main drawback of direct mapping is that two words with the same index address but different tag values cannot reside in the cache memory at the same time.
Therefore an improved cache organization, called set-associative mapping, is used to store two or more words of memory under the same index address. Each data word is stored together with its tag, and the number of tag-data pairs in one word of cache is called a set.
For example, consider a set-associative cache organization with a set size of two, as shown in the figure. Each tag requires six bits and each data word has 12 bits, so the cache word length becomes 2 × (6 + 12) = 36 bits. An index address of nine bits can accommodate 512 words of cache, which in turn can accommodate 1024 words of main memory, since each word of cache contains two data words.
In general, a set-associative cache of set size k will accommodate k words of main memory under the same index address.


The octal numbers listed in the figure are with reference to the main memory contents. The words stored at addresses 01000 and 02000 of main memory are stored in cache memory at index address 000. Similarly, the words stored at addresses 02777 and 00777 are stored at index address 777 of cache memory.
As the CPU generates a memory request, the index address is used to access the cache. The tag field of the CPU address is then compared with both tags of the cache to see if a match occurs. The comparison is done by an associative search of the tags in the set similar to an associative memory search. Thus the name “Set associative”.
As the set size increases, the hit ratio also increases, because more words with the same index address but different tags can be stored in the cache. However, an increase in set size also requires more complex comparison logic.
The relationships are
m = v × k
i = j modulo v
where i = cache set number, j = main memory block number, m = number of lines in the cache, v = number of sets, and k = number of lines in each set. A two-way lookup sketch follows.
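A set-associative lookup differs from the direct-mapped sketch above only in that all k tags of the selected set are compared. A minimal C sketch for the two-way example (names and sizes again hypothetical):

#include <stdint.h>

#define SET_BITS 9
#define NUM_SETS (1u << SET_BITS)   /* v = 512 sets                        */
#define WAYS     2                  /* set size k = 2, so m = v * k lines  */

typedef struct { uint16_t tag, data; int valid; } way_t;
static way_t cache[NUM_SETS][WAYS];

int cache_lookup(uint16_t addr, uint16_t *data)
{
    uint16_t set = addr & (NUM_SETS - 1);  /* i = j modulo v */
    uint16_t tag = addr >> SET_BITS;

    for (int w = 0; w < WAYS; w++)         /* associative search within the set */
        if (cache[set][w].valid && cache[set][w].tag == tag) {
            *data = cache[set][w].data;    /* hit */
            return 1;
        }
    return 0;                              /* miss */
}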

Friday, May 6, 2011

q1.d

f) Associative Mapping: The active portions of the program and data are placed in a fast, small memory to reduce the average memory access time; such a fast, small memory is referred to as a cache memory. Fast access time is the principal feature of cache memory. The process of transferring data from main memory to cache is referred to as the mapping function. Associative mapping gives the fastest and most flexible cache organization. The figure below demonstrates associative mapping. The associative memory stores both the content (data) and the address of the memory word. Let us consider that three words are currently stored in the cache.

The 15-bit address value is shown as a five-digit octal number and its corresponding 12-bit word as a four-digit octal number. A CPU address is placed in the argument register and the associative memory is searched for a matching address. If the address is found, the corresponding 12-bit data word is read and sent to the CPU. If the address is not found, the main memory is accessed for the word, and the address-data pair is then transferred to the associative cache memory.
If the cache is full, room must be made for the new address-data pair by displacing one that is already present. The decision about which pair to replace is determined by the replacement algorithm; a simple policy is to replace pairs first-in-first-out (FIFO) whenever a new word is requested from main memory.
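A fully associative search with FIFO replacement can be sketched in C as follows; the cache size and all names are illustrative assumptions, not part of the answer above.

#include <stdint.h>

#define LINES 64                  /* illustrative cache size */

typedef struct { uint16_t addr; uint16_t data; int valid; } pair_t;
static pair_t cache[LINES];
static int oldest = 0;            /* FIFO pointer to the oldest pair */

/* Any word can go anywhere, so every stored address must be searched. */
int cache_lookup(uint16_t addr, uint16_t *data)
{
    for (int i = 0; i < LINES; i++)
        if (cache[i].valid && cache[i].addr == addr) {
            *data = cache[i].data;     /* matching address found: hit */
            return 1;
        }
    return 0;                          /* miss: fetch from main memory */
}

/* On a miss, the incoming address-data pair displaces the oldest one. */
void cache_insert(uint16_t addr, uint16_t data)
{
    cache[oldest] = (pair_t){ addr, data, 1 };
    oldest = (oldest + 1) % LINES;     /* advance the FIFO pointer */
}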

q1.b

2Q. Explain the single and multiple bus organization of the CPU.
A group of lines that serves as a connecting path for several devices is called a bus. In addition to the lines that carry the data, the bus must have lines for address and control purposes. The simplest way to interconnect functional units is to use a single bus, to which all units are connected. Because the bus can be used for only one transfer at a time, only two units can actively use the bus at any given instant. Bus control lines are used to arbitrate multiple requests for use of the bus. The main virtue of the single-bus structure is its low cost and its flexibility for attaching peripheral devices. Systems that contain multiple buses achieve more concurrency in operations by allowing two or more transfers to be carried out at the same time; this leads to better performance, but at an increased cost. The devices connected to a bus vary widely in their speed of operation. Some electromechanical devices, such as keyboards and printers, are relatively slow; others, like magnetic or optical disks, are considerably faster. Memory and processor units operate at electronic speeds, making them the fastest parts of a computer. Because all these devices must communicate with each other over a bus, an efficient transfer mechanism that is not constrained by the slow devices, and that can be used to smooth out the differences in timing among processors, memories, and external devices, is necessary.
A common approach is to include buffer registers with the devices to hold the information during transfers. To illustrate this technique, consider the transfer of an encoded character from a processor to a character printer. The processor sends the character over the bus to the printer buffer. Since the buffer is an electronic register, this transfer requires relatively little time. Once the buffer is loaded, the printer can start printing without further intervention by the processor. The bus and the processor are no longer needed and can be released for other activity. The printer continues printing the character in its buffer and is not available for further transfers until this process is completed. Thus, buffer registers smooth out timing differences among processors, memories and I/O devices. They prevent a high-speed processor from being locked to a slow I/O device during a sequence of data transfers. This allows the processor to switch rapidly from one device to another, interweaving its processing activity with data transfers involving several I/O devices.
Multiple Bus Organization: A three-bus structure can be used to connect the registers and the ALU of a processor. All general-purpose registers are combined into a single block called the register file. In VLSI technology, the most efficient way to implement a number of registers is in the form of an array of memory cells similar to those used in the implementation of random-access memories. The register file in the figure is said to have three ports. There are two outputs, allowing the contents of two different registers to be accessed simultaneously and placed on buses A and B. The third port allows the data on bus C to be loaded into a third register during the same clock cycle. Buses A and B are used to transfer the source operands to the A and B inputs of the ALU, where an arithmetic or logic operation may be performed. The result is transferred to the destination over bus C. If needed, the ALU may simply pass one of its two input operands unmodified to bus C. We will call the ALU control signals for such an operation R=A or R=B. The three-bus arrangement obviates the need for registers Y and Z.
A second feature is the introduction of the incrementer unit, which is used to increment the PC by 4. Using the incrementer eliminates the need to add 4 to the PC using the main ALU, as was done in the single-bus organization. The source for the constant 4 at the ALU input multiplexer is still useful: it can be used to increment other addresses, such as the memory addresses in LoadMultiple and StoreMultiple instructions.
In step 1, the contents of the PC are passed through the ALU, using the R=B control signal, and loaded into the MAR to start a memory read operation. At the same time the PC is incremented by 4. Note that the value loaded into the MAR is the original contents of the PC; the incremented value is loaded into the PC at the end of the clock cycle and will not affect the contents of the MAR. In step 2, the processor waits for MFC and loads the data received into the MDR, then transfers them to the IR in step 3. Finally, the execution phase of the instruction requires only one control step to complete, step 4. By providing more paths for data transfer, a significant reduction in the number of clock cycles needed to execute an instruction is achieved.
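For concreteness, a typical control sequence for this three-bus organization, following the standard textbook treatment for an instruction such as Add R4, R5, R6 (the signal names are taken from that treatment and are assumptions here, since the figure itself is not reproduced):
1. PCout, R=B, MARin, Read, IncPC
2. WMFC
3. MDRoutB, R=B, IRin
4. R4outA, R5outB, SelectA, Add, R6in, End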

q1.a

g) Virtual Memory : Virtual memory is a concept which gives a computer system the ability to address a storage space much larger than its primary storage. Virtual memory is implemented with a paging system. The paging system divides the program into a number of small parts, each part referred to as a page, and this division is done on logical addresses, so that the programmer does not have to make any decisions about how to divide the program. Paging is entirely transparent to the programmer. The hardware translates each memory reference, picking up the page base address to use, and, using the protection bits, checks the validity of each page referenced. A page must be in memory when the running program references it. If the referenced page is not present in memory, an interrupt is generated; the operating system then figures out which page is missing, fetches it from disk, changes the page table to indicate that the page is now in memory, and restarts the program. Swapping, which is handled entirely by the system, allows a number of programs to run at the same time; overlays allow the user to move small parts of a program in and out of memory. Virtual memory combines the good features of swapping and overlays. As with overlays, only part of the program is in memory at any instant, and the rest resides on disk. The user sees a large linear virtual address space. Only some parts of the virtual address space are present in physical memory; the rest is "virtual" and is kept on the disk until needed. The disk contains an image of the entire virtual address space, even the parts that are currently in physical memory. The virtual memory modules of the operating system maintain this illusion by moving pages from disk to physical memory when they are needed.
h) Cache Memory : A special very-high-speed memory called a cache is sometimes used to increase the speed of processing by making current programs and data available to the CPU at a rapid rate. The cache memory is employed in computer systems to compensate for the speed differential between main memory access time and processor logic. CPU logic is usually faster than main memory access time, with the result that processing speed is limited primarily by the speed of main memory. A technique used to compensate for the mismatch in operating speeds is to employ an extremely fast, small cache between the CPU and main memory whose access time is close to the processor logic clock cycle time. The cache is used for storing segments of programs currently being executed in the CPU and temporary data frequently needed in the present calculations. By making programs and data available at a rapid rate, it is possible to increase the performance rate of the computer. While the I/O processor manages data transfers between auxiliary memory and main memory, the cache organization is concerned with the transfer of information between main memory and the CPU. Thus each is involved with a different level in the memory hierarchy system. The reason for having two or three levels of memory hierarchy is economics. As the storage capacity of the memory increases, the cost per bit for storing binary information decreases and the access time of the memory becomes longer. The auxiliary memory has a large storage capacity, is relatively inexpensive, but has low access speed compared to main memory. The cache memory is very small, relatively expensive, and has very high access speed. Thus, as the memory access speed increases, so does its relative cost. The overall goal of using a memory hierarchy is to obtain the highest possible average access speed while minimizing the total cost of the entire memory system. Auxiliary and cache memories are used for different purposes: the cache holds those parts of the program and data that are most heavily used, while the auxiliary memory holds those parts that are not presently used by the CPU. Moreover, the CPU has direct access to both cache and main memory but not to auxiliary memory.
i) Interrupt : Interrupts are used for any infrequent or exceptional event that causes a CPU to temporarily transfer control from its current program to another program, the interrupt handler, which services the event. Interrupts are the primary means by which I/O devices obtain service. I/O interrupts are external requests to the CPU to initiate or terminate an I/O operation. Interrupts are also produced by hardware or software error-detection circuits that invoke error-handling routines within the operating system. A power supply failure, for instance, generates an interrupt that requests execution of an interrupt handler designed to save critical data about the system's state. Interrupts generated internally by the CPU are called traps; for example, an operating system will interrupt a user program that exceeds its allotted time. The basic method of interrupting the CPU is by activating a control line, with the generic name INTERRUPT REQUEST, that connects the interrupt source to the CPU. An interrupt indicator is stored in a CPU register that is tested periodically, usually at the end of every instruction cycle. On recognizing the presence of an interrupt, the CPU must execute a specific interrupt-servicing program. A problem is caused by the presence of two or more interrupt requests at the same time: priorities must be assigned to the interrupts, and the interrupt with the highest priority is selected for service. When an interrupt occurs, the following steps are taken by the CPU:
1. The CPU identifies the source of the interrupt by polling the I/O devices (a minimal sketch of this step follows the list).
2. The CPU obtains the memory address of the required interrupt handler; this address can be provided by the interrupting device along with its interrupt request.
3. The program counter (PC) and other CPU status information are saved in memory.
4. The program counter is loaded with the address of the interrupt handler. Execution proceeds until a return instruction is encountered, which transfers control back to the interrupted program.
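Here is that polling sketch in C; the device registers and handler names are purely hypothetical, intended only to picture the dispatch logic.

#include <stdint.h>

#define NUM_DEVICES 4

typedef void (*handler_t)(void);

volatile int irq_pending[NUM_DEVICES];       /* set by hardware on a request       */
static handler_t handler_table[NUM_DEVICES]; /* handler addresses, filled by the OS */

/* Tested at the end of every instruction cycle when INTERRUPT REQUEST is
   active: poll the devices in priority order (device 0 highest) and dispatch. */
void service_interrupt(void)
{
    for (int dev = 0; dev < NUM_DEVICES; dev++) {
        if (irq_pending[dev] && handler_table[dev]) {
            /* the PC and other CPU status would be saved in memory here */
            handler_table[dev]();   /* transfer control to the handler   */
            irq_pending[dev] = 0;
            return;                 /* highest-priority request served first */
        }
    }
}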
j) Half duplex and full duplex transmission : A half-duplex transmission system is one that is capable of transmitting in both directions, but data can be transmitted in only one direction at a time. A pair of wires is needed for this mode. A common situation is for one modem to act as the transmitter and the other as the receiver. When transmission in one direction is completed, the roles of the modems are reversed to enable transmission in the reverse direction. The time required to switch a half-duplex line from one direction to the other is called the turnaround time. A full-duplex transmission system can send and receive data in both directions simultaneously. This can be achieved by means of a four-wire link, with a different pair of wires dedicated to each direction of transmission. Alternatively, a two-wire circuit can support full-duplex communication if the frequency spectrum is subdivided into two non-overlapping frequency bands to create separate receive and transmit channels in the same physical pair of wires.
The communication lines, modems, and other equipment used in the transmission of information between two or more stations is called a data link. The orderly transfer of information in a data link is accomplished by means of a protocol.

q1

Question paper -1
1. Briefly explain the following:
a) Zero address machine instruction : A computer organized around a stack does not use an address field for arithmetic instructions. However, the PUSH and POP instructions require one address field to specify the operand that moves to/from the top of the stack. Arithmetic expressions are evaluated by using the top two operands (that is, the top of the stack and the location below it), and the result is stored on the top of the stack. This implies that the stack depth is reduced by one whenever an arithmetic operation is evaluated. The zero-address instructions are so called because of the use of zero addresses for arithmetic instructions.
Example : X = (A + B) * (C + D)
PUSH A    TOS ← A
PUSH B    TOS ← B
ADD       TOS ← (A + B)
PUSH C    TOS ← C
PUSH D    TOS ← D
ADD       TOS ← (C + D)
MUL       TOS ← (C + D) * (A + B)
POP X     M[X] ← TOS
b) Microprogramming : A sequence of microinstructions constitutes a microprogram. The control unit initiates a series of sequential steps of microoperations; during any given time, certain microoperations are initiated while others remain idle. The control variables at any given time can be represented by a string of 1s and 0s called a control word. As such, control words can be programmed to perform various operations on the components of the system. A control unit whose binary control variables are stored in memory is called a microprogrammed control unit. Each word in control memory contains within it a microinstruction, which specifies one or more microoperations for the system.
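To illustrate, a control word can be pictured as a string of 1s and 0s in which each bit position drives one control variable. The field layout below is a made-up example in C, not a real machine:

#include <stdint.h>

/* Hypothetical control-word bits: each 1 initiates one microoperation. */
enum {
    CW_PC_OUT  = 1u << 0,   /* gate the PC onto the bus  */
    CW_MAR_IN  = 1u << 1,   /* load the MAR from the bus */
    CW_READ    = 1u << 2,   /* start a memory read       */
    CW_ALU_ADD = 1u << 3,   /* select the ADD function   */
    CW_R1_IN   = 1u << 4    /* load register R1          */
};

/* One microinstruction from control memory: the control word 00111 initiates
   PC_OUT, MAR_IN and READ together while the other control lines stay idle. */
uint32_t microinstruction = CW_PC_OUT | CW_MAR_IN | CW_READ;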
c) Bus arbitration: The device that is allowed to initiate data transfers on the bus at any given time is called the bus master. When the current master relinquishes control of the bus, another device can acquire this status. Bus arbitration is the process by which the next device to become the bus master is selected and bus mastership is transferred to it. The selection of the bus master must take into account the needs of various devices by establishing a priority system for gaining access to the bus.
There are two approaches to bus arbitration: Centralized and distributed. In Centralized arbitration, a single bus arbiter performs the required arbitration. In distributed arbitration, all devices participate in the selection of the next bus master.
d) Compiler : A compiler is a program which takes one language (the source program) as input and translates it into an equivalent program in another language (the target program). During this process of translation, if some errors are encountered, the compiler displays them as error messages. The basic model of a compiler is represented as follows:
Input (source program) → Compiler → Output (target program)

e) Hit rate and miss penalty: The performance of cache memory is measured in terms of a quantity called the hit ratio. The cache is a fast intermediate memory that acts as a buffer between the CPU and main memory, providing memory space with greater speed than main memory while remaining economical. When the CPU finds a referenced word in the cache, it is said to produce a "hit". If the word is not found in the cache and must be fetched from main memory, it counts as a "miss", and the extra time needed to service the miss is the miss penalty. The ratio of the number of hits divided by the total CPU references to memory (hits plus misses) is the hit ratio. The average memory-access time of a computer system can be reduced considerably by use of a cache memory: when the hit ratio is high, the CPU accesses the cache instead of main memory most of the time, and the average access time is closer to the access time of the fast cache memory. A short worked example follows.
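The following short C computation ties the hit ratio to the average access time; the 0.9 hit ratio and the cycle times are illustrative assumptions, not values from the answer above.

#include <stdio.h>

int main(void)
{
    double h      = 0.9;    /* assumed hit ratio                       */
    double t_hit  = 10.0;   /* assumed cache access time, ns           */
    double t_miss = 100.0;  /* assumed miss penalty (main memory), ns  */

    /* average access time = hit time weighted by h, plus miss penalty
       weighted by the miss rate (1 - h)                               */
    double t_avg = h * t_hit + (1.0 - h) * t_miss;
    printf("average access time = %.1f ns\n", t_avg);   /* 19.0 ns */
    return 0;
}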

Tuesday, May 3, 2011

assembly language

List and Explain Assembler directives
The following directives are commonly used in assembly language programming practice with the Microsoft Macro Assembler (MASM) or Turbo Assembler.
DB: Define Byte The DB directive is used to reserve a byte or bytes of memory locations in the available memory. While preparing the EXE file, this directive directs the assembler to allocate the specified number of memory bytes to the said data type, which may be a constant, variable, string, etc.
The following examples show how the DB directive is used for different purposes.
Example:
RANKS DB 01H, 02H, 03H, 04H
This statement directs the assembler to reserve four memory locations for a list named RANKS and initialize them with the above specified values.
MESSAGE DB ‘good morning’
This makes the assembler reserve a number of bytes of memory equal to the number of characters in the string named MESSAGE, and initialize those locations with the ASCII equivalents of those characters.
VALUE DB 50H DUP (?)
This statement directs the assembler to reserve 50H memory bytes and leave them uninitialized for the variable named VALUE.
DW: Define word
The DW directive serves the same purpose as the DB directive, but it makes the assembler reserve the specified number of memory words (16-bit) instead of bytes.
Example :
WORDS DW 1234H, 4567H, 78ABH, 645CH
The DW makes the assembler reserve four words in memory (8 bytes), and initialize the words with the values specified in the statement. The lower bytes are stored at the lower memory addresses, while the upper bytes are stored at the higher addresses, during initialization. Another option of the DW directive is explained with the DUP operator.
WDATA DW 5 DUP (6666H)
This statement reserves five words, that is 10-bytes of memory for a word label WDATA and initializes all word locations with 6666H.
DQ: Define Quadword : The DQ directive directs the assembler to reserve 4 words (8 bytes) of memory for the specified variable and initialize them with the specified values.
DT: Define ten bytes : The DT directive directs the assembler to define the specified variable requiring 10-bytes for its storage and initialize the 10-bytes with the specified values.
ASSUME: Assume Logical Segment Name : The ASSUME directive is used to inform the assembler of the names of the logical segments to be assumed for the different segments used in the program. In an assembly language program, each segment is given a name; for example, the code segment may be given the name CODE, the data segment the name DATA, etc. The statement ASSUME CS: CODE directs the assembler that the machine codes are available in a segment named CODE, and thus the CS register is to be loaded with the address (segment) allotted by the operating system for the label CODE, while loading. The ASSUME statement is a must at the start of each assembly language program; without it, a message 'CODE/DATA EMITTED WITHOUT SEGMENT' may be issued by the assembler.
END: End of program : The END directive marks the end of an assembly language program. When the assembler comes across END directive, it ignores the source lines available later on.
ENDP: End of Procedure: Subroutines are called procedures in assembly language programming. They may be independent program modules which return particular results or values to the calling programs. The ENDP directive is used to mark the end of a procedure. A procedure is usually assigned a name, that is, a label.
To mark the end of a particular procedure, the name of the procedure, that is its label, may appear as a prefix with the directive ENDP. The statements appearing in the same module but after the ENDP directive are not included in that procedure. The following illustration explains the use of ENDP.
STAR PROC
.
.
.
STAR ENDP
ENDS: End of Segment
The ENDS directive marks the end of a logical segment. The logical segments are assigned names using the SEGMENT directive; the name appears with the ENDS directive as a prefix to mark the end of that particular segment. Whatever the contents of the segment, they should appear in the program before ENDS. Any statement appearing after ENDS will be excluded from the segment. The following structure explains the above-mentioned details more clearly.
DATA SEGMENT
..
..
DATA ENDS
ASSUME CS: CODE, DS: DATA
CODE SEGMENT
..
..
CODE ENDS
END
EVEN: Align on Even Memory Address
The assembler, while starting the assembling procedure of any program, initializes a location counter and goes on updating it as the assembly proceeds. It goes on assigning the available addresses, that is the contents of the location counter, sequentially to the program variables, constants and modules as per their requirements, in the sequence in which they appear in the program. The EVEN directive updates the location counter to the next even address, if the current location counter contents are not even, and assigns the following routine, variable or constant to that address. The following structure explains the directive.
EVEN
ROOT PROC
..
..
ROOT ENDP
EQU : Equate
The EQU directive is used to assign a value or a symbol to a label. It is used to reduce the recurrence of numerical values or constants in the program code. Using the EQU directive, even an instruction mnemonic can be assigned a label, which can then be used in the program in place of that mnemonic.
The following example shows the syntax.
Example :
LABEL EQU 0500H
ADDITION EQU ADD
The first statement assigns the constant 0500H to the label LABEL, while the second statement assigns the label ADDITION to the mnemonic ADD.
EXTRN : External and PUBLIC : Public
The EXTRN directive informs the assembler that the names, procedures and labels declared after this directive have already been defined in some other assembly language module. In the other module, where the names, procedures and labels actually appear, they must be declared public, using the PUBLIC directive.
MODULE1 SEGMENT
PUBLIC FACTORIAL FAR
MODULE1 ENDS
MODULE2 SEGMENT
EXTRN FACTORIAL: FAR
MODULE2 ENDS

GROUP: Group the Related Segments
The GROUP directive is used to form logical groups of segments with similar purpose or type. It is used to inform the assembler to form a logical group of the following segment names. Thus all such segments and labels can be addressed using the same segment base.
PROGRAM GROUP CODE, DATA, STACK
The above statement directs the loader/linker to prepare an EXE file such that the CODE, DATA and STACK segments lie within the 64K byte memory segment named PROGRAM. Now, in the ASSUME statement, one can use the label PROGRAM rather than CODE, DATA and STACK, as shown:
ASSUME CS: PROGRAM, DS: PROGRAM, SS: PROGRAM
LABEL: Label
The LABEL directive is used to assign a name to the current content of the location counter. When the assembly process starts the assembler initializes a location counter to keep track of memory locations assigned to the program. As the program assembly proceeds, the contents of the location counter are updated. During the assembly process, whenever the assembler comes across the LABEL directive, it assigns the declared label with the current contents of the location counter.
LENGTH: Byte length of a label
This directive is not available in MASM; it is used to refer to the length of a data array or a string.
MOV CX, LENGTH ARRAY
This statement, when assembled, will substitute the length of the array ARRAY in bytes, in the instruction.
LOCAL
The labels, variables, constants or procedures declared LOCAL in a module are to be used only by that particular module.
Example:
LOCAL a, b, DATA, ARRAY, ROUTINE
NAME: Local NAME of a Module
The NAME directive is used to assign a name to an assembly language program module. The module, may now be referred by its declared name.
OFFSET: Offset of a Label
When the assembler comes across the OFFSET operator along with a label, it first computes the 16-bit displacement (also called the offset) of the particular label, and replaces the string 'OFFSET LABEL' by the computed displacement. This operator is used with arrays, strings, labels and procedures to decide their offsets in their default segments.
The examples of OFFSET operator are as follows:
Example :
CODE SEGMENT
MOV SI, OFFSET LIST
CODE ENDS
DATA SEGMENT
LIST DB 10H
DATA ENDS
ORG: Origin
The ORG directive directs the assembler to start the memory allotment for the particular segment, block or code from the address declared in the ORG statement. The location counter is initialized to 0000 if the ORG statement is not written in the program. If an ORG 200H statement is present at the start of the code segment of a module, then the code will start from address 200H in the code segment.
PROC: Procedure
The PROC directive marks the start of a named procedure in the statement. The types NEAR or FAR specify the type of the procedure, that is, whether it is to be called by a main program located within 64K of physical memory or not.
Example :
RESULT PROC NEAR
ROUTINE PROC FAR
PTR: Pointer
The PTR operator is used to declare the type of a label, variable or memory operand. The operator PTR is prefixed by either BYTE or WORD.
PUBLIC:
The PUBLIC directive is used along with the EXTRN directive. It informs the assembler that the labels, variables, constants or procedures declared PUBLIC may be accessed by other modules. Before using such labels, variables, constants or procedures, the other modules must declare them external using the EXTRN directive.
SEG: Segment of a Label
The SEG operator is used to determine the segment address of the label, variable, or procedure and substitutes the segment base address in place of 'SEG label'.
Example:
MOV AX, SEG ARRAY ; move the segment address of the segment in which ARRAY appears into AX
MOV DS, AX ; and then copy it into DS
SEGMENT: Logical Segment
The SEGMENT directive marks the starting of a logical segment. The started segment is also assigned a name that is label, by the statement. In some cases, the segment may be assigned a type like PUBLIC or GLOBAL (can be accessed by any other modules)
Examples:
EXE.CODE SEGMENT GLOBAL
; start of segment named EXE.CODE
; that can be accessed by any other module
EXE.CODE ENDS ; end of EXE.CODE logical segment
SHORT
The SHORT operator indicates to the assembler that only one byte is required to code the displacement for a jump (that is, the displacement is within -128 to +127 bytes of the address of the byte next to the jump opcode).
Example:
JMP SHORT LABEL
TYPE:
The TYPE operator directs the assembler to decide the data type of the specified label and replace the 'TYPE' label by the decided data type. For a word-type variable the data type is 2, for a double-word type it is 4, and for a byte type it is 1.
GLOBAL
The labels, variables, constants or procedures declared GLOBAL may be used by other modules of the program.
Example:
ROUTINE PROC GLOBAL
‘+’ & ‘-‘ Operators:
These operators represent arithmetic addition and subtraction respectively, and they are typically used to add or subtract displacements (8- or 16-bit) to base or index registers, etc.
Example:
MOV AL, [SI + 2]
MOV DX, [BX -5]
MOV BX, [OFFSET LABEL + 10H]
MOV AX, [BX + 9]
FAR PTR:
The FAR PTR directive indicates to the assembler that the label following FAR PTR is not available within the segment, and that the address of the label is 32 bits, that is, a 2-byte offset followed by a 2-byte segment address.
Example:
JMP FAR PTR LABEL
CALL FAR PTR ROUTINE
NEAR PTR
The NEAR PTR directive indicates that the label following NEAR PTR is in the same segment and needs only a 16-bit, that is 2-byte, offset to address it.
Example:
JMP NEAR PTR LABEL
CALL NEAR PTR ROUTINE