ITT303 Module 3 notes KTU s5 IT
Process Synchronization

A cooperating process is one that can affect or be affected by other processes executing in the system. Cooperating processes may either directly share a logical address space (that is, both code and data) or be allowed to share data only through files. The former case is achieved through the use of lightweight processes or threads. Concurrent access to shared data may result in data inconsistency. A situation in which several processes access and manipulate the same data concurrently, and the outcome of the execution depends on the particular order in which the accesses take place, is called a race condition. To guard against race conditions we need to ensure that only one process at a time can manipulate the shared data (for example, a shared counter variable). To make such a guarantee, we require some form of synchronization of the processes. Such situations occur frequently in operating systems as different parts of the system manipulate resources, and we want the changes not to interfere with one another. A major portion of this module is concerned with process synchronization and coordination.

Interprocess Communication

Cooperating processes can communicate in a shared-memory environment. The scheme requires that these processes share a common buffer pool, and that the code for implementing the buffer be written explicitly by the application programmer. Another way to achieve the same effect is for the operating system to provide the means for cooperating processes to communicate with each other via an interprocess communication (IPC) facility. IPC provides a mechanism to allow processes to communicate and to synchronize their actions without sharing the same address space. IPC is particularly useful in a distributed environment where the communicating processes may reside on different computers connected by a network. IPC is best provided by a message-passing system, and message systems can be defined in many ways.

Co-operating processes

The concurrent processes executing in the operating system may be either independent processes or cooperating processes. A process is independent if it cannot affect or be affected by the other processes executing in the system. Clearly, any process that does not share any data with any other process is independent. On the other hand, a process is cooperating if it can affect or be affected by the other processes executing in the system. We may want to provide an environment that allows process cooperation for several reasons.
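The race condition described above can be demonstrated with two threads incrementing a shared counter with no synchronization. This is a minimal sketch; the thread count and iteration count are illustrative and not part of the notes.

    #include <pthread.h>
    #include <stdio.h>

    long counter = 0;                  /* shared data */

    void *worker(void *arg)
    {
        for (int i = 0; i < 1000000; i++)
            counter++;                 /* read-modify-write is not atomic, so updates can be lost */
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %ld\n", counter);   /* often less than 2000000 */
        return 0;
    }

The final value depends on how the two threads' increments interleave, which is exactly the ordering dependence that defines a race condition.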
The structure of process Pi in algorithm 2 is as shown below.

    do {
        flag[i] = true;
        while (flag[j]);
        critical section
        flag[i] = false;
        remainder section
    } while (1);

Algorithm 3

It combines the key ideas of algorithm 1 and algorithm 2. The structure of process Pi in algorithm 3 is as shown below.

    do {
        flag[i] = true;
        turn = j;
        while (flag[j] && turn == j);
        critical section
        flag[i] = false;
        remainder section
    } while (1);
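Algorithm 3 above is Peterson's two-process solution. The sketch below runs it with two threads using C11 atomics; the sequentially consistent atomic loads and stores stand in for the plain shared-variable accesses assumed by the algorithm, and the names and iteration count are illustrative.

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdio.h>

    atomic_bool flag[2];               /* flag[i]: process i wants to enter */
    atomic_int  turn;                  /* whose turn it is to yield         */
    long shared = 0;                   /* protected by the algorithm        */

    void *proc(void *arg)
    {
        int i = *(int *)arg, j = 1 - i;
        for (int k = 0; k < 100000; k++) {
            atomic_store(&flag[i], true);              /* entry section    */
            atomic_store(&turn, j);
            while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
                ;                                      /* busy-wait        */
            shared++;                                  /* critical section */
            atomic_store(&flag[i], false);             /* exit section     */
            /* remainder section */
        }
        return NULL;
    }

    int main(void)
    {
        int id0 = 0, id1 = 1;
        pthread_t t0, t1;
        pthread_create(&t0, NULL, proc, &id0);
        pthread_create(&t1, NULL, proc, &id1);
        pthread_join(t0, NULL);
        pthread_join(t1, NULL);
        printf("shared = %ld\n", shared);   /* 200000 if mutual exclusion held */
        return 0;
    }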
Synchronization hardware

As with other aspects of software, hardware features can make the programming task easier and improve system efficiency. In this section we present some simple hardware instructions that are available on many systems, and show how they can be used effectively in solving the critical section problem. The special hardware instructions used here are TestAndSet and Swap. The critical section problem could be solved simply in a uniprocessor environment if we could forbid interrupts from occurring while a shared variable is being modified. In that case we could be sure that the current sequence of instructions would be allowed to execute in order without preemption; no other instruction would run, so no unexpected modifications could be made to the shared variable. The mutual exclusion problem can be solved with the help of the TestAndSet and Swap instructions.

The implementation of the TestAndSet instruction is as follows:

    boolean TestAndSet(boolean &target) {
        boolean rv = target;
        target = true;
        return rv;
    }

Initially, the Boolean variable lock is set to false. Mutual exclusion using TestAndSet:

    do {
        while (TestAndSet(lock));
        critical section
        lock = false;
        remainder section
    } while (1);

The implementation of the Swap instruction is as follows:

    void Swap(boolean &a, boolean &b) {
        boolean temp = a;
        a = b;
        b = temp;
    }

Mutual exclusion using Swap, with lock initialized to false and key local to each process:

    do {
        key = true;
        while (key == true)
            Swap(lock, key);
        critical section
        lock = false;
        remainder section
    } while (1);
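Modern C exposes essentially the same hardware primitive. As an illustrative sketch (not part of the notes), C11's atomic_flag provides a built-in test-and-set that can be used exactly like the loop above:

    #include <stdatomic.h>

    atomic_flag lock = ATOMIC_FLAG_INIT;    /* clear: the lock is free */

    void enter_critical(void)
    {
        /* atomic_flag_test_and_set returns the previous value and sets the flag,
           mirroring the TestAndSet instruction above; spin while it was set. */
        while (atomic_flag_test_and_set(&lock))
            ;                               /* busy-wait */
    }

    void leave_critical(void)
    {
        atomic_flag_clear(&lock);           /* lock = false */
    }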
Deadlock

Several processes compete for a finite set of resources in a multiprogrammed environment. A process may request resources that are not readily available at the time of the request. Consider an example: there are three processes P1, P2 and P3, and resources R1, R2, R3 and R4 with 1, 2, 1 and 3 instances respectively. P1 is holding R2 and waiting for R1. P2 is holding R1 and R2, and is waiting for R3. P3 is holding R3. The resource allocation graph for a system in this situation is as shown below. If a resource allocation graph has no cycles (a closed loop in the direction of the edges), then the system is not in a state of deadlock. If, on the other hand, there are cycles, then a deadlock may exist. If there is only a single instance of each resource type, then a cycle in a resource allocation graph is a necessary and sufficient condition for the existence of a deadlock.
    [Figure: resource allocation graph for the example above, with processes P1, P2, P3 and resources R1, R2, R3, R4]
Here two cycles exist:

    P1 → R1 → P2 → R3 → P3 → R2 → P1
    P2 → R3 → P3 → R2 → P2

Processes P1, P2 and P3 are deadlocked and are in a circular wait. P2 is waiting for R3, held by P3. P3 is waiting for P1 or P2 to release R2, and likewise P1 is waiting for P2 to release R1. If there are multiple instances of resource types, then a cycle does not necessarily imply a deadlock; here a cycle is a necessary condition but not a sufficient condition for the existence of a deadlock.

Deadlock Handling

Different methods to deal with deadlocks include methods that ensure the system never enters a deadlocked state, methods that allow the system to enter a deadlock and then recover, and simply ignoring the problem of deadlocks. To ensure that deadlocks never occur, deadlock prevention or deadlock avoidance schemes are used. The four necessary conditions for a deadlock to occur are mutual exclusion, hold and wait, no preemption and circular wait. Deadlock prevention ensures that at least one of these four necessary conditions does not hold; to do this, the scheme enforces constraints on requests for resources. Deadlock avoidance requires the operating system to know in advance the resources needed by a process for its entire lifetime; based on this a priori information, it is decided whether a process making a request should wait or be granted the resources. If neither of these two schemes is used, the system may enter a deadlock, which must then be detected and recovered from, or the problem is simply ignored.
3. No preemption: When a process requests a resource, it is allocated the resource if it is available. If it is not, then a check is made to see whether the process holding the wanted resource is itself waiting for additional resources. If so, the wanted resource is preempted from the waiting process and allotted to the requesting process. If neither of the above holds, that is, the resource is neither available nor held by a waiting process, then the requesting process waits. During its waiting period, some of its own resources may also be preempted, in which case the process will be restarted only when all the new and the preempted resources are allocated to it. An alternative approach is as follows: if a process requests a resource that is not immediately available, then all resources it currently holds are preempted. The process restarts only when the new and the preempted resources are allocated to it, as in the previous case. Resources can be preempted only if their current state can be saved so that the process can be restarted later by restoring that state; examples are CPU registers and main memory. Resources such as printers cannot be preempted, as their states cannot be saved for later restoration.
4. Circular wait: Resource types are ordered, and processes must request resources in increasing order of enumeration. Each resource type is mapped to a unique integer that allows resources to be compared and establishes a precedence order among them. Thus F: R → N is a one-to-one function that maps resources to numbers, for example F(tape drive) = 1, F(disk drive) = 5, F(printer) = 10. To ensure that deadlocks do not occur, each process can request resources only in increasing order of these numbers. To start with, a process can request any resource, say Ri; thereafter it can request a resource Rj if and only if F(Rj) is greater than F(Ri). Alternately, if F(Rj) is less than F(Ri), then Rj can be allocated to the process only if the process first releases Ri. The mapping function F should be defined so that resources get numbers in the usual order of usage. A sketch of this ordering discipline is shown below.
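A minimal sketch of the same idea with locks in C: each resource is given a fixed number, and every thread acquires the locks it needs in increasing order of that number. The resource names and F values follow the example above; the function shown is hypothetical.

    #include <pthread.h>

    /* One lock per resource, numbered by the ordering function F. */
    pthread_mutex_t tape_drive = PTHREAD_MUTEX_INITIALIZER;   /* F = 1  */
    pthread_mutex_t disk_drive = PTHREAD_MUTEX_INITIALIZER;   /* F = 5  */
    pthread_mutex_t printer    = PTHREAD_MUTEX_INITIALIZER;   /* F = 10 */

    /* Any thread needing several resources takes them strictly in
       increasing F order, so a circular wait can never form. */
    void copy_tape_to_printer(void)
    {
        pthread_mutex_lock(&tape_drive);     /* F = 1 first  */
        pthread_mutex_lock(&printer);        /* then F = 10  */
        /* ... use the tape drive and the printer ... */
        pthread_mutex_unlock(&printer);
        pthread_mutex_unlock(&tape_drive);
    }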
Deadlock Avoidance

Deadlock prevention algorithms ensure that at least one of the four necessary conditions for deadlock, namely mutual exclusion, hold and wait, no preemption and circular wait, does not hold. The disadvantage of prevention algorithms is poor resource utilization and thus reduced system throughput. An alternative is to avoid deadlocks. In this case additional a priori information about the usage of resources by processes is required, and this information helps to decide whether a process should wait for a resource or not. The decision about a request is based on the resources currently available, the resources allocated to processes, and the future requests and releases of processes. A deadlock avoidance algorithm requires each process to make known in advance the maximum number of resources of each type that it may need; the maximum number of resources of each type available is also known. Using this a priori knowledge, a deadlock avoidance algorithm ensures that a circular wait condition never occurs.

SAFE STATE

A system is said to be in a safe state if it can allocate resources to each process up to its maximum demand, in some order, and still avoid a deadlock. A safe sequence of processes guarantees a safe state. A sequence of processes <P1, P2, ..., Pn> is safe for the current allocation if the resource requests of each Pi can be satisfied from the currently available resources together with the resources held by all Pj with j < i. If the state is safe, a process Pi requesting resources can wait until those Pj have completed. If no such safe sequence exists, the system is in an unsafe state. A safe state is not a deadlock state; conversely, a deadlock state is an unsafe state. But not all unsafe states are deadlock states, as shown below:
    [Figure: state spaces, with deadlock states forming a subset of the unsafe states]
Thus the system has gone from a safe state at time instant t0 into an unsafe state at instant t1. Granting the extra resource to P2 at instant t1 was a mistake; P2 should have been made to wait until the other processes finished and released their resources. Since an available resource may have to be withheld so that the system does not enter an unsafe state, resource utilization is lower when deadlock avoidance algorithms are used.

RESOURCE ALLOCATION GRAPH ALGORITHM

A resource allocation graph can be used to avoid deadlocks. If a resource allocation graph does not have a cycle, the system is not in deadlock, but if there is a cycle the system may be in a deadlock. If the resource allocation graph shows only resources that have a single instance, then a cycle does imply a deadlock. An algorithm for avoiding deadlocks where resources have single instances, based on the resource allocation graph, is described below. The resource allocation graph has request edges and assignment edges. Let there be another kind of edge called a claim edge. A directed claim edge Pi → Rj indicates that Pi may request the resource Rj some time later; in a resource allocation graph a claim edge is drawn as a dashed line. When the process later makes an actual request for the resource, the corresponding claim edge is converted to a request edge Pi → Rj. Similarly, when a process releases a resource after use, the assignment edge Rj → Pi is reconverted to a claim edge Pi → Rj. Thus a process must have all its claim edges declared before it starts executing. If a process Pi requests a resource Rj, the claim edge Pi → Rj is first converted to a request edge. The request of Pi can be granted only if converting the request edge into an assignment edge does not result in a cycle. If no cycle would result, the system remains in a safe state and the request can be granted. If a cycle would result, granting the request would put the system into an unsafe state, so the request should not be granted. This is illustrated below (Figure 5.5a, 5.5b).
Figure: Resource allocation graph showing safe and deadlock states.

Consider the resource allocation graph shown on the left above. Resource R2 is currently free. Allocating R2 to P2 on request would create a cycle, as shown on the right, so the system would be in an unsafe state; in this situation, if P1 then requests R2, a deadlock occurs.

BANKER'S ALGORITHM

The resource allocation graph algorithm is not applicable where resources have multiple instances. In such a case the Banker's algorithm is used. A new process entering the system must declare a priori the maximum number of instances of each resource type that it needs, subject to the maximum available of each type. As execution proceeds and requests are made, the system checks whether allocating the requested resources would leave the system in a safe state. Only if so are the allocations made; otherwise the process must wait for resources. The following are the data structures maintained to implement the Banker's algorithm:
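A sketch of these structures in C, with the process count and the number of resource types fixed for illustration:

    #define N 5    /* number of processes (illustrative)      */
    #define M 3    /* number of resource types (illustrative) */

    int Available[M];       /* Available[j]: free instances of resource type j            */
    int Max[N][M];          /* Max[i][j]: maximum demand of process Pi for type j         */
    int Allocation[N][M];   /* Allocation[i][j]: instances of type j currently held by Pi */
    int Need[N][M];         /* Need[i][j] = Max[i][j] - Allocation[i][j]                  */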
If the resulting state is safe, then process Pi is allocated the resources and the above changes are made permanent. If the new state is unsafe, then Pi must wait and the old values of the data structures are restored.

Illustration: n = 5 processes <P0, P1, P2, P3, P4>, m = 3 resource types <A, B, C>, and initially Available = <10, 5, 7>. At an instant t0 the data structures have the following values:

    [Table: Allocation, Max, Available and Need at t0, each given in terms of A, B, C]
To find a safe sequence and to prove that the system is in a safe state, use the safety algorithm as follows:

    Step   Work       Finish        Safe sequence
    0      3 3 2      F F F F F     < >
    1      5 3 2      F T F F F     <P1>
    2      7 4 3      F T F T F     <P1, P3>
    3      7 4 5      F T F T T     <P1, P3, P4>
    4      7 5 5      T T F T T     <P1, P3, P4, P0>
    5      10 5 7     T T T T T     <P1, P3, P4, P0, P2>

Now at an instant t1, Request1 = <1, 0, 2>. To actually allocate the requested resources, use the resource-request algorithm: Request1 ≤ Need1 and Request1 ≤ Available, so the request can be considered. If the request is fulfilled, the new values of the data structures are as follows:

    [Table: updated Allocation, Max, Available and Need after granting Request1]
Use the safety algorithm to see if the resulting state is safe:

    Step   Work       Finish        Safe sequence
    0      2 3 0      F F F F F     < >
    1      5 3 2      F T F F F     <P1>
    2      7 4 3      F T F T F     <P1, P3>
    3      7 4 5      F T F T T     <P1, P3, P4>
    4      7 5 5      T T F T T     <P1, P3, P4, P0>
    5      10 5 7     T T T T T     <P1, P3, P4, P0, P2>

Since the resulting state is safe, the request by P1 can be granted. Now at an instant t2, Request4 = <3, 3, 0>. Since Request4 > Available, this request cannot be granted. Also, Request0 = <0, 2, 0> at t2 cannot be granted, since the resulting state would be unsafe, as shown below:

    [Table: Allocation, Max, Available and Need if Request0 were granted]
Using the safety algorithm, the resulting state is unsafe, since Finish is false for all values of i and we cannot find a safe sequence.

    Step   Work       Finish        Safe sequence
    0      2 1 0      F F F F F     < >
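The two checks used throughout this illustration, the safety algorithm and the resource-request algorithm, can be sketched in C using the data structures above. This is an illustrative sketch, not the notes' own listing.

    /* Safety algorithm: returns 1 if the current state is safe, 0 otherwise. */
    int is_safe(void)
    {
        int Work[M];
        int Finish[N] = {0};

        for (int j = 0; j < M; j++)
            Work[j] = Available[j];

        for (;;) {
            int progressed = 0;
            for (int i = 0; i < N; i++) {
                if (Finish[i])
                    continue;
                int fits = 1;
                for (int j = 0; j < M; j++)
                    if (Need[i][j] > Work[j]) { fits = 0; break; }
                if (fits) {
                    /* Pretend Pi runs to completion and returns its resources. */
                    for (int j = 0; j < M; j++)
                        Work[j] += Allocation[i][j];
                    Finish[i] = 1;
                    progressed = 1;
                }
            }
            if (!progressed)
                break;
        }
        for (int i = 0; i < N; i++)
            if (!Finish[i])
                return 0;      /* no safe sequence exists */
        return 1;              /* the order in which Finish was set is a safe sequence */
    }

    /* Resource-request algorithm: process Pi requests req[0..M-1] instances.
       Returns 1 if the request is granted, 0 if Pi must wait (or has erred). */
    int request_resources(int i, const int req[M])
    {
        for (int j = 0; j < M; j++)
            if (req[j] > Need[i][j])
                return 0;      /* request exceeds the declared maximum */
        for (int j = 0; j < M; j++)
            if (req[j] > Available[j])
                return 0;      /* not available: Pi must wait */

        /* Tentatively allocate, then test safety. */
        for (int j = 0; j < M; j++) {
            Available[j]     -= req[j];
            Allocation[i][j] += req[j];
            Need[i][j]       -= req[j];
        }
        if (is_safe())
            return 1;          /* keep the allocation */

        /* Unsafe: restore the old values and make Pi wait. */
        for (int j = 0; j < M; j++) {
            Available[j]     += req[j];
            Allocation[i][j] -= req[j];
            Need[i][j]       += req[j];
        }
        return 0;
    }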
A wait-for graph is not applicable for detecting deadlocks where there are multiple instances of resources, because in that situation a cycle may or may not indicate a deadlock, so no decision can be made from the cycle alone. Where there are multiple instances of resources, an algorithm similar to the Banker's algorithm is used for deadlock detection. The data structures used are similar to those used in the Banker's algorithm and are given below:
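As a rough sketch of their usual form (names illustrative; a Request matrix of outstanding requests takes the place of the Max and Need matrices used for avoidance):

    int Available[M];        /* free instances of each resource type                      */
    int Allocation[N][M];    /* instances of each type currently held by each process     */
    int Request[N][M];       /* Request[i][j]: instances of type j that Pi is waiting for */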