
Polymorphism

Polymorphism is a feature of OOP that allows one interface to be used for a general class of actions. The specific action is determined by the exact nature of the situation. Through polymorphism, it is possible to design a generic interface to a group of related activities. This helps reduce complexity by allowing the same interface to be used to specify a general class of actions. It is the compiler’s job to select the specific action as it applies to each situation.

The programmer does not need to make the selection manually; the programmer needs only to remember and use the general interface. In this way, polymorphism promotes extensibility.
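As a minimal sketch (the class and method names here are purely illustrative, not from the text), one general interface, the abstract method area(), covers a whole class of actions, and the specific implementation is selected automatically for each object:

```java
// One general interface (Shape.area) for a group of related activities;
// the specific action is selected automatically for each concrete type.
abstract class Shape {
    abstract double area();
}

class Circle extends Shape {
    double radius;
    Circle(double radius) { this.radius = radius; }
    double area() { return Math.PI * radius * radius; }
}

class Rectangle extends Shape {
    double width, height;
    Rectangle(double width, double height) { this.width = width; this.height = height; }
    double area() { return width * height; }
}

public class PolymorphismDemo {
    public static void main(String[] args) {
        Shape[] shapes = { new Circle(1.0), new Rectangle(2.0, 3.0) };
        for (Shape s : shapes) {
            // Same call, different behavior depending on the actual object.
            System.out.println(s.area());
        }
    }
}
```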




Features of object-oriented programming over procedure-oriented programming, and Java's platform independence

Java makes platform independence possible by translating a program into bytecode instead of machine code. Bytecode is a highly optimized set of instructions designed to be executed by the Java run-time system or JVM (Java Virtual Machine). Translating a Java program into bytecode makes it much easier to run a program in a wide variety of environments because only the JVM needs to be implemented for each platform. Once the run-time package exists for a given system, any Java program can run on it.

Although the details of the JVM will differ from platform to platform, all understand the same Java bytecode. If a Java program were compiled to native code, then different versions of the same program would have to exist for each type of CPU. Thus, the execution of bytecode by the JVM is the easiest way to create truly portable programs.
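For example, a single source file (the name Hello.java here is just a placeholder) is compiled once to bytecode and can then be run unchanged on any platform that provides a JVM:

```java
// Hello.java
// Compile once:   javac Hello.java   -> produces Hello.class (bytecode, not native machine code)
// Run anywhere:   java Hello         -> the local platform's JVM executes the same bytecode
public class Hello {
    public static void main(String[] args) {
        System.out.println("The same .class file runs on any JVM");
    }
}
```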




The three principles of OOP: short definitions

Encapsulation
Inheritance
Polymorphism

Encapsulation is the mechanism that binds together code and the data it manipulates and keeps both safe from outside interference and misuse. 
Inheritance is the process by which one object acquires the properties of another object. By use of inheritance, an object would need only to define those qualities that make it unique within its class. It can inherit its general attributes from its parent. Thus, it is the inheritance mechanism that makes it possible for one object to be a specific instance of a more general case.
Polymorphism is a feature that allows one interface to be used for a general class of actions. The specific action is determined by the exact nature of the situation.
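A brief sketch (the class names are only illustrative) showing the three principles together: private data is encapsulated behind methods, a subclass inherits the general attributes of its parent, and an overridden method behaves polymorphically:

```java
class Animal {
    private String name;                // Encapsulation: data is hidden behind methods
    Animal(String name) { this.name = name; }
    String getName() { return name; }
    String sound() { return "..."; }    // General behavior defined by the parent
}

class Dog extends Animal {              // Inheritance: Dog acquires Animal's properties
    Dog(String name) { super(name); }
    @Override
    String sound() { return "Woof"; }   // Polymorphism: specific action for Dog
}

public class OopDemo {
    public static void main(String[] args) {
        Animal a = new Dog("Rex");      // One interface (Animal), specific behavior (Dog)
        System.out.println(a.getName() + " says " + a.sound());
    }
}
```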




DEADLOCKS and METHODS FOR HANDLING DEADLOCKS in OS

In a multiprogramming environment, several processes may compete for a finite number of resources. A process requests resources; if the resources are not available at that time, the process enters a waiting state.
Definition
Sometimes, a waiting process is never again able to change state, because the resources it has requested are held by other waiting processes. This situation is called a deadlock. A deadlock can be defined as the permanent blocking of a set of processes that either compete for system resources or communicate with each other: a situation in which two or more processes are unable to proceed because each is waiting for one of the others to do something. A set of processes is deadlocked (is in a deadlocked state) when each process in the set is blocked waiting for an event that can only be triggered (caused) by another blocked process in the set. All deadlocks involve conflicting needs for resources by two or more processes; this is what we call the set of processes in the deadlock.
A common example is the traffic deadlock. A figure shows a situation in which four cars have arrived at a four-way stop intersection at approximately the same time.
If all four cars ignore the rules and proceed (cautiously) into the intersection at the same time, then each car seizes (occupies/reserves) one resource (one quadrant) but cannot proceed because the required second resource has already been seized by another car. This is an actual deadlock.
Resemblance of deadlock to the Kansas legislature’s law
Perhaps the best illustration of a deadlock can be drawn from a law passed by the Kansas legislature early in the 20th century. It said, in part: “When two trains approach each other at a crossing, both shall come to a full stop and neither shall start up again until the other has gone.”
Situations in which deadlocks become more common
Deadlock problems can only become more common, given current trends, including larger numbers of processes, multithreaded programs, many more resources within a system, and an emphasis on long-lived file and database servers rather than batch systems.
METHODS FOR HANDLING DEADLOCKS
Generally speaking, we can deal with the deadlock problem in one of three ways. We can use a protocol to prevent or avoid deadlocks, ensuring that the system will never enter a deadlocked state. We can allow the system to enter a deadlocked state, detect it, and recover. We can ignore the problem altogether and pretend that deadlocks never occur in the system. The third solution is the one used by most operating systems, including Linux and Windows.
These three ways result in the following general methods of handling deadlocks:
Deadlock prevention provides a set of methods to ensure that at least one of the necessary conditions (mutual exclusion, hold and wait, no preemption, circular wait) cannot hold.
Deadlock avoidance requires that the operating system be given additional information in advance concerning which resources a process will request and use during its lifetime. With this additional knowledge, the operating system can decide for each request whether or not the process should wait.
To decide whether the current request can be satisfied or must be delayed, the system must consider the resources currently available, the resources currently allocated to each process, and the future requests and releases of each process.
To ensure that deadlocks never occur, the system can use either a deadlock prevention or a deadlock avoidance scheme.
Deadlock Detection: If a system does not employ either a deadlock-prevention or a deadlock-avoidance algorithm, then a deadlock situation may occur. In this environment, the system may provide an algorithm that examines the state of the system to determine whether a deadlock has occurred, together with an algorithm to recover from the deadlock.
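As an illustrative sketch (not taken from the text), two Java threads that acquire the same two locks in opposite order can block each other forever, exhibiting hold-and-wait and circular wait; making every thread acquire the locks in one fixed order is a simple prevention of the circular-wait condition:

```java
public class DeadlockDemo {
    private static final Object resourceA = new Object();
    private static final Object resourceB = new Object();

    public static void main(String[] args) {
        // Thread t1 holds resourceA and then waits for resourceB.
        Thread t1 = new Thread(() -> {
            synchronized (resourceA) {
                pause(100);
                synchronized (resourceB) { System.out.println("t1 acquired both"); }
            }
        });
        // Thread t2 holds resourceB and then waits for resourceA: circular wait -> deadlock.
        // Prevention: if t2 also locked resourceA before resourceB, circular wait could never hold.
        Thread t2 = new Thread(() -> {
            synchronized (resourceB) {
                pause(100);
                synchronized (resourceA) { System.out.println("t2 acquired both"); }
            }
        });
        t1.start();
        t2.start();
    }

    private static void pause(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }
}
```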




PAGE REPLACEMENT ALGORITHMS in OS

In general, we want a page replacement algorithm with the lowest page-fault rate.
We evaluate an algorithm by running it on a particular string of memory references and computing the number of page faults on that string.
To determine the number of page faults for a particular reference string and page replacement algorithm, we also need to know the number of page frames available.
Obviously, as the number of frames available increases, the number of page faults decreases.


Figure: Expected relationship between the number of frames allocated to a process and the number of page faults caused by it.

FIFO Page Replacement
The simplest page-replacement algorithm is a FIFO algorithm. 
When a page must be replaced, the oldest page is chosen.
We can create a FIFO queue to hold all pages in memory. We replace the page at the head of the queue. When a page is brought into memory, we insert it at the tail of the queue.
Example: consider the reference string: 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1
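A small simulation of FIFO replacement for this reference string (a sketch, assuming 3 available frames) counts the page faults:

```java
import java.util.ArrayDeque;
import java.util.HashSet;
import java.util.Queue;
import java.util.Set;

public class FifoPageReplacement {
    public static void main(String[] args) {
        int[] refs = {7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1};
        int frames = 3;                            // assumed number of available frames
        Queue<Integer> fifo = new ArrayDeque<>();  // oldest page sits at the head
        Set<Integer> inMemory = new HashSet<>();
        int faults = 0;

        for (int page : refs) {
            if (!inMemory.contains(page)) {        // page fault: the page is not in memory
                faults++;
                if (fifo.size() == frames) {       // all frames full: replace the oldest page
                    inMemory.remove(fifo.poll());
                }
                fifo.add(page);                    // newly loaded page goes to the tail
                inMemory.add(page);
            }
        }
        System.out.println("Page faults: " + faults); // prints 15 for this string with 3 frames
    }
}
```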





BASIC ELEMENTS OF COMPUTER
A computer consists of the processor, memory, and I/O components, with one or more modules of each type. These components are interconnected in some fashion to achieve the main function of the computer, which is to execute programs. Thus, there are four main structural elements:

1.      Processor: Controls the operation of the computer and performs its data processing functions. When there is only one processor, it is often referred to as the central processing unit (CPU).
2.      Main memory: Stores data and programs. This memory is typically volatile; that is, when the computer is shut down, the contents of the memory are lost. In contrast, the contents of disk memory are retained even when the computer system is shut down. Main memory is also referred to as real memory or primary memory.
3.      I/O modules: Move data between the computer and its external environment. The external environment consists of a variety of devices, including secondary memory devices (e.g., disks), communications equipment, and terminals. An I/O module transfers data from external devices to processor and memory, and vice versa. It contains internal buffers for temporarily holding data until they can be sent on.
4.       System bus: Provides for communication among processors, main memory, and I/O modules.
The figure depicts these top-level components. One of the processor’s functions is to exchange data with memory. For this purpose, it typically makes use of two internal (to the processor) registers: the MAR and the MBR.
MAR: The memory address register (MAR) specifies the address in memory for the next read or write.
MBR: The memory buffer register (MBR) contains the data to be written into memory or receives the data read from memory.
The I/O address register (I/OAR) specifies a particular I/O device.
An I/O buffer register (I/OBR) is used for the exchange of data between an I/O module and the processor.
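The following toy sketch (purely illustrative; real memory access is done in hardware, not Java) shows the roles the MAR and MBR play in a read and a write:

```java
public class MemoryAccessDemo {
    static int[] mainMemory = new int[256]; // toy main memory of 256 words
    static int mar;                         // memory address register: address of the next read or write
    static int mbr;                         // memory buffer register: data read from or written to memory

    static void read(int address) {
        mar = address;                      // processor places the address in the MAR
        mbr = mainMemory[mar];              // memory delivers the addressed word into the MBR
    }

    static void write(int address, int value) {
        mar = address;                      // address to be written
        mbr = value;                        // data to be written
        mainMemory[mar] = mbr;              // memory stores the contents of the MBR
    }

    public static void main(String[] args) {
        write(10, 42);
        read(10);
        System.out.println("Word at address 10 = " + mbr); // 42
    }
}
```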



Cache memory

Although cache memory is invisible to the OS, it interacts with other memory management hardware. Furthermore, many of the principles used in virtual memory schemes are also applied in cache memory.
On all instruction cycles, the processor accesses memory at least once, to fetch the instruction, and often one or more additional times, to fetch operands and/or store results. The rate at which the processor can execute instructions is clearly limited by the memory cycle time (the time it takes to read one word from or write one word to memory). This limitation has been a significant problem because of the persistent mismatch between processor and main memory speeds: over the years, processor speed has consistently increased more rapidly than memory access speed. We are faced with a trade-off among speed, cost, and size. Ideally, main memory should be built with the same technology as that of the processor registers, giving memory cycle times comparable to processor cycle times. This has always been too expensive a strategy. The solution is to exploit the principle of locality by providing a small, fast memory between the processor and main memory, namely the cache.

Figure: Cache and main memory (three-level cache organization)
Cache Design



Key elements of cache design are briefly summarized here. We will see that similar design issues must be addressed in dealing with virtual memory and disk cache design. They fall into the following categories:
• Cache size
• Block size
• Mapping function
• Replacement algorithm
• Write policy
• Number of cache levels
We have already dealt with the issue of cache size. It turns out that reasonably small caches can have a significant impact on performance.

Another size issue is that of block size: the unit of data exchanged between cache and main memory. As the block size increases from very small to larger sizes, the hit ratio will at first increase because of the principle of locality: the high probability that data in the vicinity of a referenced word are likely to be referenced in the near future. As the block size increases, more useful data are brought into the cache. The hit ratio will begin to decrease, however, as the block becomes even bigger and the probability of using the newly fetched data becomes less than the probability of reusing the data that have to be moved out of the cache to make room for the new block.

When a new block of data is read into the cache, the mapping function determines which cache location the block will occupy. Two constraints affect the design of the mapping function. First, when one block is read in, another may have to be replaced. We would like to do this in such a way as to minimize the probability that we will replace a block that will be needed in the near future (i.e., we should avoid evicting a block that will soon be referenced again). The more flexible the mapping function, the more scope we have to design a replacement algorithm to maximize the hit ratio. Second, the more flexible the mapping function, the more complex is the circuitry required to search the cache to determine if a given block is in the cache.
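For instance, the least flexible mapping is direct mapping, in which each main-memory block can occupy only one cache line; a sketch of that calculation (the numbers are arbitrary examples):

```java
// Direct mapping: line index = block number mod number of cache lines.
// Simple circuitry to search, but no freedom in choosing which block to replace.
public class DirectMappedIndex {
    public static void main(String[] args) {
        int numLines = 8;                       // assumed number of cache lines
        int blockNumber = 27;                   // main-memory block being referenced
        int line = blockNumber % numLines;      // the only line this block may occupy
        int tag = blockNumber / numLines;       // tag distinguishes blocks that map to the same line
        System.out.println("block " + blockNumber + " -> line " + line + ", tag " + tag);
    }
}
```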

The replacement algorithm chooses, within the constraints of the mapping function, which block to replace when a new block is to be loaded into the cache and the cache already has all slots filled with other blocks. We would like to replace the block that is least likely to be needed again in the near future. Although it is impossible to identify such a block, a reasonably effective strategy is to replace the block that has been in the cache longest with no reference to it. This policy is referred to as the least-recently-used (LRU) algorithm. Hardware mechanisms are needed to identify the least-recently-used block.
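A compact way to sketch LRU behavior (not hardware, just an analogy in Java) is a LinkedHashMap in access order that evicts the eldest entry once a fixed capacity, standing in for the number of cache slots, is exceeded:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LruCacheSketch<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;          // number of cache slots

    public LruCacheSketch(int capacity) {
        super(16, 0.75f, true);          // accessOrder = true: least recently used entry comes first
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity;        // evict the block unreferenced for the longest time
    }

    public static void main(String[] args) {
        LruCacheSketch<Integer, String> cache = new LruCacheSketch<>(2);
        cache.put(1, "block 1");
        cache.put(2, "block 2");
        cache.get(1);                    // touching block 1 makes block 2 the least recently used
        cache.put(3, "block 3");         // evicts block 2
        System.out.println(cache.keySet()); // [1, 3]
    }
}
```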

If the contents of a block in the cache are altered, then it is necessary to write it back to main memory before replacing it. The write policy dictates when the memory write operation takes place. At one extreme, the writing can occur every time the block is updated. At the other extreme, the writing occurs only when the block is replaced.
Finally, it is now commonplace to have multiple levels of cache, labeled L1 (the cache closest to the processor), L2, and in many cases a third level, L3.



THREADS IN OPERATING SYSTEM


A thread is a basic unit of CPU utilization; it comprises a thread ID, a program counter, a register set, and a stack.
It shares with other threads belonging to the same process its code section, data section, and other operating-system resources, such as open files and signals.
Threads share the memory and the resources of the process to which they belong by default.
Single-threaded process
A traditional (or heavyweight) process has a single thread of control.



When a process is running a word-processor program, for example, a single thread of instructions is being executed. This single thread of control allows the process to perform only one task at a time.
A single-threaded process has one program counter specifying the next instruction to execute.
The execution of such a process must be sequential.  The CPU executes one instruction of the process after another until the process completes.
Multithreaded process
If a process has multiple threads of control, it can perform more than one task at a time.
A multithreaded process has multiple program counters, each pointing to the next instruction to execute for a given thread.
                           

Most modern operating systems have extended the process concept to allow a process to have multiple threads of execution and thus to perform more than one task at a time. This feature is especially beneficial on multicore systems, where multiple threads can run in parallel. Most software applications that run on modern computers are multithreaded.
Example:
1.      An application typically is implemented as a separate process with several threads of control. A web browser might have one thread display images or text while another thread retrieves data from the network, for example.
2.      A word processor may have a thread for displaying graphics, another thread for responding to keystrokes from the user, and a third thread for performing spelling and grammar checking in the background.
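A minimal sketch of a multithreaded Java program (the tasks are only placeholders): two threads belonging to the same process run different tasks while sharing the process's data:

```java
public class MultithreadDemo {
    // Shared data: every thread of this process sees the same field.
    static volatile int sharedCounter = 0;

    public static void main(String[] args) throws InterruptedException {
        Thread display = new Thread(() -> {
            for (int i = 0; i < 3; i++) {
                System.out.println("display thread sees counter = " + sharedCounter);
            }
        });
        Thread worker = new Thread(() -> {
            for (int i = 0; i < 3; i++) {
                sharedCounter++;         // e.g., background work such as spell checking
            }
        });
        display.start();                 // both threads run concurrently within one process
        worker.start();
        display.join();
        worker.join();
        System.out.println("final counter = " + sharedCounter);
    }
}
```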



PROCESS STATE


The state of a process is defined in part by the current activity of that process.
As a process executes, it changes state.
A process may be in one of the following states:
1.      New. The process is being created.
2.      Running. Instructions are being executed.
3.      Waiting. The process is waiting for some event to occur (such as an I/O completion or reception of a signal).
4.      Ready. The process is waiting to be assigned to a processor.
5.      Terminated. The process has finished execution.


Although the names of these states vary across operating systems, the states that they represent are found on all systems.
It is important to realize that only one process can be running on any processor at any instant. Many processes may be ready and waiting, however.
The state diagram corresponding to these states is presented below.


Figure: Diagram of process states
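The states and their usual transitions can be sketched as a small enum (an illustrative sketch; the transition set shown is the one normally drawn in such diagrams):

```java
public enum ProcessState {
    NEW, READY, RUNNING, WAITING, TERMINATED;

    // Typical transitions: NEW -> READY (admitted), READY -> RUNNING (scheduler dispatch),
    // RUNNING -> READY (interrupt), RUNNING -> WAITING (I/O or event wait),
    // WAITING -> READY (I/O or event completion), RUNNING -> TERMINATED (exit).
    boolean canMoveTo(ProcessState next) {
        switch (this) {
            case NEW:     return next == READY;
            case READY:   return next == RUNNING;
            case RUNNING: return next == READY || next == WAITING || next == TERMINATED;
            case WAITING: return next == READY;
            default:      return false;
        }
    }
}
```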

  
SCHEDULING QUEUES

Job queues
As processes enter the system, they are put into a job queue, which consists of all processes in the system.

Ready queues
·         The processes that are residing in main memory and are ready and waiting to execute are kept on a list called the ready queue.
·         This queue is generally stored as a linked list.
·         A ready-queue header contains pointers to the first and final PCBs in the list. Each PCB includes a pointer field that points to the next PCB in the ready queue.
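The ready queue described above might be sketched as a linked list of PCBs with head and tail pointers (the field names are only illustrative):

```java
class PCB {
    int pid;        // process identifier
    PCB next;       // pointer field to the next PCB in the ready queue
    PCB(int pid) { this.pid = pid; }
}

class ReadyQueue {
    private PCB head;   // first PCB in the list
    private PCB tail;   // final PCB in the list

    void enqueue(PCB pcb) {             // a process that becomes ready joins the tail
        if (tail == null) { head = tail = pcb; }
        else { tail.next = pcb; tail = pcb; }
    }

    PCB dequeue() {                     // the scheduler removes the first ready process
        if (head == null) return null;
        PCB pcb = head;
        head = head.next;
        if (head == null) tail = null;
        pcb.next = null;
        return pcb;
    }
}
```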
Device queue
·         Suppose the process makes an I/O request to a shared device, such as a disk. Since there are many processes in the system, the disk may be busy with the I/O request of some other process. The process therefore may have to wait for the disk.
·         The list of processes waiting for a particular I/O device is called a device queue.
·         Each device has its own device queue.

Figure: The ready queue and various I/O device queues

Queuing diagram of process scheduling
A common representation of process scheduling is a queueing diagram.


Each rectangular box represents a queue. Two types of queues are present: the ready queue and a set of device queues. The circles represent the resources that serve the queues, and the arrows indicate the flow of processes in the system.

