In a multiprogramming environment, several processes may compete for a finite number of resources. A process requests resources; if the resources are not available at that time, the process enters a waiting state.
Definition of deadlock
Sometimes a waiting process is never again able to change state, because the resources it has requested are held by other waiting processes. This situation is called a deadlock. Deadlock can be defined as the permanent blocking of a set of processes that either compete for system resources or communicate with each other: a situation in which two or more processes are unable to proceed because each is waiting for one of the others to do something. More formally, a set of processes is deadlocked (is in a deadlocked state) when each process in the set is blocked waiting for an event that can be triggered (caused) only by another blocked process in the set. All deadlocks involve conflicting needs for resources by two or more processes; these processes form the deadlocked set. A common example is the traffic deadlock. Consider a situation in which four cars have arrived at a four-way stop intersection at approximately the same time.
If all four cars ignore the rules and proceed (cautiously) into the intersection at the same time, then each car seizes one resource (one quadrant of the intersection) but cannot proceed, because the second resource it requires has already been seized by another car. This is an actual deadlock.
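The same situation is easy to reproduce in software. The following minimal sketch, assuming POSIX threads, uses two threads and two mutexes in place of the cars and quadrants; the names quadrant_a, quadrant_b, car_one, and car_two are purely illustrative, not part of any standard API. Each thread seizes one lock and then waits for the lock held by the other, so the program typically hangs at the join calls.

/* Two threads acquire two locks in opposite order, mirroring the two cars
 * that each hold one quadrant and wait for the other.
 * Compile with: gcc deadlock_demo.c -pthread */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t quadrant_a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t quadrant_b = PTHREAD_MUTEX_INITIALIZER;

static void *car_one(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&quadrant_a);   /* seize the first resource        */
    printf("car one holds quadrant A\n");
    sleep(1);                          /* let the other thread grab B     */
    pthread_mutex_lock(&quadrant_b);   /* blocks: B is held by car two    */
    pthread_mutex_unlock(&quadrant_b);
    pthread_mutex_unlock(&quadrant_a);
    return NULL;
}

static void *car_two(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&quadrant_b);   /* seize the first resource        */
    printf("car two holds quadrant B\n");
    sleep(1);
    pthread_mutex_lock(&quadrant_a);   /* blocks: A is held by car one    */
    pthread_mutex_unlock(&quadrant_a);
    pthread_mutex_unlock(&quadrant_b);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, car_one, NULL);
    pthread_create(&t2, NULL, car_two, NULL);
    pthread_join(t1, NULL);            /* the program typically hangs here */
    pthread_join(t2, NULL);
    return 0;
}

The sleep() calls merely widen the timing window; even without them, the deadlock can occur whenever the two threads happen to interleave unfavorably.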
An illustration from the Kansas legislature
Perhaps the best illustration of a deadlock can be drawn from a law passed by the Kansas legislature early in the 20th century. It said, in part: “When two trains approach each other at a crossing, both shall come to a full stop and neither shall start up again until the other has gone.”
Why deadlocks are becoming more common
Deadlock problems can only become more common, given current trends, including larger numbers of processes, multithreaded programs, many more resources within a system, and an emphasis on long-lived file and database servers rather than batch systems.
METHODS FOR HANDLING DEADLOCKS
Generally speaking, we can deal with the deadlock problem in one of three ways. We can use a protocol to prevent or avoid deadlocks, ensuring that the system will never enter a deadlocked state. We can allow the system to enter a deadlocked state, detect it, and recover. We can ignore the problem altogether and pretend that deadlocks never occur in the system. The third solution is the one used by most operating systems, including Linux and Windows.
These three ways result in the following general methods of handling deadlocks:
Deadlock prevention provides a set of methods to ensure that at least one of the four necessary conditions for deadlock (mutual exclusion, hold and wait, no preemption, circular wait) cannot hold.
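One widely used prevention technique attacks the circular-wait condition by requiring every thread to acquire locks in a single global order. The helper below is a sketch of that idea, assuming POSIX threads; lock_in_order() and unlock_both() are our own illustrative names, not a standard API. If both car threads in the earlier sketch called lock_in_order() instead of locking directly, neither could hold one quadrant while waiting for the other, so the cycle could never form.

#include <pthread.h>
#include <stdint.h>

/* Acquire two mutexes in one fixed global order (here: by address), so no
 * thread can ever hold the "later" lock while waiting for the "earlier" one.
 * This breaks the circular-wait condition. */
static void lock_in_order(pthread_mutex_t *m1, pthread_mutex_t *m2)
{
    if ((uintptr_t)m1 > (uintptr_t)m2) {
        pthread_mutex_t *tmp = m1;
        m1 = m2;
        m2 = tmp;
    }
    pthread_mutex_lock(m1);
    pthread_mutex_lock(m2);
}

static void unlock_both(pthread_mutex_t *m1, pthread_mutex_t *m2)
{
    pthread_mutex_unlock(m1);
    pthread_mutex_unlock(m2);
}

Ordering by address is only one choice; any fixed ranking of resources that all threads respect has the same effect.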
Deadlock avoidance requires that the operating system be given additional information in advance concerning which resources a process will request and use during its lifetime. With this additional knowledge, the operating system can decide for each request whether or not the process should wait.
To decide whether the current request can be satisfied or must be delayed, the system must consider the resources currently available, the resources currently allocated to each process, and the future requests and releases of each process.
To ensure that deadlocks never occur, the system can use either a deadlock prevention or a deadlock avoidance scheme.
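To make the avoidance idea concrete, the following sketch shows the kind of safety check performed by the banker's algorithm; the three-process, two-resource matrices are invented illustrative data, not a real system state. is_safe() asks whether, given the currently available resources, the current allocation, and each process's remaining maximum need, every process can still finish in some order. An avoidance algorithm grants a request only if the state that would result still passes this check.

#include <stdbool.h>
#include <stdio.h>

#define P 3   /* processes      */
#define R 2   /* resource types */

static bool is_safe(int avail[R], int alloc[P][R], int need[P][R])
{
    int work[R];
    bool finished[P] = { false };

    for (int r = 0; r < R; r++)
        work[r] = avail[r];

    /* At most P productive passes are needed: each one lets at least one
     * more process finish and release its resources, if any can. */
    for (int pass = 0; pass < P; pass++) {
        for (int p = 0; p < P; p++) {
            if (finished[p])
                continue;
            bool can_run = true;
            for (int r = 0; r < R; r++)
                if (need[p][r] > work[r]) { can_run = false; break; }
            if (can_run) {               /* p can finish; reclaim its resources */
                for (int r = 0; r < R; r++)
                    work[r] += alloc[p][r];
                finished[p] = true;
            }
        }
    }
    for (int p = 0; p < P; p++)
        if (!finished[p])
            return false;                /* some process can never finish: unsafe */
    return true;
}

int main(void)
{
    int avail[R]    = { 1, 1 };
    int alloc[P][R] = { {1, 0}, {0, 1}, {1, 1} };
    int need[P][R]  = { {1, 1}, {1, 0}, {0, 1} };  /* maximum claim minus allocation */

    printf("state is %s\n", is_safe(avail, alloc, need) ? "safe" : "unsafe");
    return 0;
}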
Deadlock detection: If a system does not employ either a deadlock-prevention or a deadlock-avoidance algorithm, then a deadlock may occur. In this environment, the system may provide an algorithm that examines its state to determine whether a deadlock has occurred, together with an algorithm to recover from the deadlock.
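As a sketch of how detection can work when each resource type has a single instance, the system state can be reduced to a wait-for graph: an edge from process p to process q means p is waiting for a resource held by q, and a cycle in the graph means deadlock. The code below is illustrative only; the three-process graph is invented data and the function names are our own.

#include <stdbool.h>
#include <stdio.h>

#define N 3   /* number of processes */

/* Depth-first search for a cycle reachable from process p. */
static bool has_cycle_from(int p, bool wait_for[N][N],
                           bool visited[N], bool on_stack[N])
{
    visited[p]  = true;
    on_stack[p] = true;
    for (int q = 0; q < N; q++) {
        if (!wait_for[p][q])
            continue;
        if (on_stack[q])
            return true;                 /* back edge: a cycle exists */
        if (!visited[q] && has_cycle_from(q, wait_for, visited, on_stack))
            return true;
    }
    on_stack[p] = false;
    return false;
}

static bool deadlocked(bool wait_for[N][N])
{
    bool visited[N]  = { false };
    bool on_stack[N] = { false };
    for (int p = 0; p < N; p++)
        if (!visited[p] && has_cycle_from(p, wait_for, visited, on_stack))
            return true;
    return false;
}

int main(void)
{
    /* P0 waits for P1, P1 waits for P2, P2 waits for P0: a cycle. */
    bool wait_for[N][N] = {
        { false, true,  false },
        { false, false, true  },
        { true,  false, false },
    };
    printf("deadlock %s\n", deadlocked(wait_for) ? "detected" : "not detected");
    return 0;
}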