ITT303 Module 3 notes KTU s5 IT, Study notes of Operating Systems

Typology: Study notes

2022/2023

Available from 01/21/2023

Uploaded by dejavu_x_
Process Synchronization
A co-operating process is one that can affect or be affected by other processes executing
in the system. Cooperating processes may either directly share a logical address space (that
is, both code and data) or be allowed to share data only through files. The former case is
achieved through the use of lightweight processes or threads.

Concurrent access to shared data may result in data inconsistency. A situation where several
processes access and manipulate the same data concurrently, and the outcome of the execution
depends on the particular order in which the accesses take place, is called a race condition.
To guard against race conditions we need to ensure that only one process at a time can be
manipulating the shared data (for example, a shared counter variable). To make such a
guarantee, we require some form of synchronization of the processes. Such situations occur
frequently in operating systems, as different parts of the system manipulate resources and
we want the changes not to interfere with one another. A major portion of this module is
concerned with process synchronization and coordination.
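The lost-update race described above can be made concrete by splitting each increment of a shared counter into an explicit load and store step. The sketch below is illustrative (the schedules and names are not from the notes); the result depends entirely on the interleaving.

```python
# Each "process" increments a shared counter in two steps: load the current
# value into a private register, then store register + 1 back.
def run(schedule):
    counter = 0
    regs = {}  # per-process private register
    for pid, op in schedule:
        if op == "load":
            regs[pid] = counter
        else:  # "store"
            counter = regs[pid] + 1
    return counter

# Serial order: P0 completes before P1, so both increments survive.
serial = [(0, "load"), (0, "store"), (1, "load"), (1, "store")]
# Race: both load the old value before either stores, so one update is lost.
raced = [(0, "load"), (1, "load"), (0, "store"), (1, "store")]
```

Here run(serial) yields 2 while run(raced) yields 1: the outcome depends on the order of accesses, which is exactly the race condition.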
Interprocess Communication
Cooperating processes can communicate in a shared-memory environment. The scheme
requires that these processes share a common buffer pool, and that the code for
implementing the buffer be written explicitly by the application programmer. Another way
to achieve the same effect is for the operating system to provide the means for cooperating
processes to communicate with each other via an interprocess communication (IPC) facility.
IPC provides a mechanism to allow processes to communicate and to synchronize their actions
without sharing the same address space. IPC is particularly useful in a distributed
environment, where the communicating processes may reside on different computers
connected by a network. IPC is best provided by a message-passing system, and message
systems can be defined in many ways.
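Message-passing IPC can be sketched with a queue standing in for the OS-provided message channel (the channel and message names below are illustrative assumptions): the two threads share no variables other than the channel itself.

```python
import queue
import threading

# The channel plays the role of the IPC facility provided by the system.
channel = queue.Queue()
received = []

def producer():
    for i in range(3):
        channel.put(f"msg-{i}")  # send
    channel.put(None)            # sentinel: no more messages

def consumer():
    while True:
        msg = channel.get()      # receive (blocks until a message arrives)
        if msg is None:
            break
        received.append(msg)

t = threading.Thread(target=consumer)
t.start()
producer()
t.join()
```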
Co-operating process
The concurrent processes executing in the operating system may be either independent
processes or cooperating processes. A process is independent if it cannot affect or be
affected by the other processes executing in the system. Clearly, any process that does not share any data with any other process is independent. On the other hand, a process is cooperating if it can affect or be affected by the other processes executing in the system. We may want to provide an environment that allows process cooperation for several reasons.

  1. Information sharing - Since several users may be interested in the same piece of information, we must provide an environment that allows concurrent access to these types of resources.
  2. Computational speedup - If we want a particular task to run faster, we must break it into subtasks, each of which executes in parallel with the others. Such a speedup can be achieved only if the computer has multiple processing elements.
  3. Modularity - We may want to construct the system in a modular fashion, dividing the system functions into separate processes or threads.
  4. Convenience - Even an individual user may have many tasks on which to work at one time. For instance, a user may be editing, printing and compiling in parallel.

The Critical-Section Problem
Consider a system consisting of n processes {P0, P1, ..., Pn-1}. Each process has a segment of code, called a critical section, in which the process may be changing common variables, updating a table, writing a file and so on. The important feature of the system is that, when one process is executing in its critical section, no other process is allowed to execute in its critical section. Thus the execution of critical sections by the processes is mutually exclusive in time. The critical-section problem is to design a protocol that the processes can use to co-operate. Each process must request permission to enter its critical section. The section of code implementing this request is the entry section. The critical section may be followed by an exit section. The remaining code is the remainder section. The general structure of a typical process Pi having a critical section is as shown below.

do {
    entry section
        critical section
    exit section
        remainder section
} while (1);

do {
    flag[i] = true;
    while (flag[j]);
        critical section
    flag[i] = false;
        remainder section
} while (1);

Algorithm 3
It combines the key ideas of algorithm 1 and algorithm 2. The structure of process Pi in algorithm 3 is as shown below.

do {
    flag[i] = true;
    turn = j;
    while (flag[j] && turn == j);
        critical section
    flag[i] = false;
        remainder section
} while (1);

Synchronization hardware
As with other aspects of software, hardware features can make the programming task easier and improve system efficiency. In this section, we present some simple hardware instructions that are available on many systems, and show how they can be used effectively in solving the critical-section problem. The special hardware instructions that we use are TestAndSet and Swap. The critical-section problem could be solved simply in a uniprocessor environment if we could forbid interrupts from occurring while a shared variable is being modified. In this manner, we could be sure that the current sequence of instructions would be allowed to execute in order without preemption. No other instructions would be run, so no unexpected modifications could be made to the shared variable. The mutual exclusion problem can be solved with the help of the TestAndSet and Swap instructions.
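Before turning to hardware support, the two-process protocol of algorithm 3 above (Peterson's algorithm) can be sketched as runnable code. This is a sketch, not a portable lock: it relies on CPython's global interpreter lock for memory visibility of the shared flags.

```python
import sys
import threading

sys.setswitchinterval(0.0005)  # switch threads often so the busy-waits stay short

flag = [False, False]  # flag[i]: process i wants to enter its critical section
turn = 0               # which process defers when both want to enter
counter = 0            # shared data protected by the protocol

def worker(i, n):
    global turn, counter
    j = 1 - i
    for _ in range(n):
        flag[i] = True               # entry section
        turn = j
        while flag[j] and turn == j:
            pass                     # busy wait
        counter += 1                 # critical section
        flag[i] = False              # exit section
        # remainder section

N = 1000
threads = [threading.Thread(target=worker, args=(i, N)) for i in (0, 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Because the protocol guarantees mutual exclusion, no increment is lost and counter ends at exactly 2 * N.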

The implementation of the TestAndSet instruction is as follows:

boolean TestAndSet(boolean &target) {
    boolean rv = target;
    target = true;
    return rv;
}

Mutual exclusion using TestAndSet (initially, the boolean variable lock is set to false):

do {
    while (TestAndSet(lock));
        critical section
    lock = false;
        remainder section
} while (1);

The implementation of the Swap instruction is as follows:

void Swap(boolean &a, boolean &b) {
    boolean temp = a;
    a = b;
    b = temp;
}

Mutual exclusion using Swap (lock is initialized to false; each process has a local boolean variable key):

do {
    key = true;
    while (key == true)
        Swap(lock, key);
        critical section
    lock = false;
        remainder section
} while (1);

Deadlock
Several processes compete for a finite set of resources in a multiprogrammed environment. A process may request resources that are not readily available at the time of the request.

There are 3 processes P1, P2 and P3. Resources R1, R2, R3 and R4 have 1, 2, 1 and 3 instances respectively. P1 is holding R2 and waiting for R1. P2 is holding R1 and R2 and is waiting for R3. P3 is holding R3. The resource allocation graph for a system in the above situation is as shown below. If a resource allocation graph has no cycles (a closed loop in the direction of the edges), then the system is not in a state of deadlock. If, on the other hand, there are cycles, then a deadlock may exist. If there are only single instances of each resource type, then a cycle in a resource allocation graph is a necessary and sufficient condition for the existence of a deadlock.

[Figure: two resource allocation graphs over processes P1, P2, P3 and resources R1, R2, R3, R4 - the first has no cycle; the second, with an added request edge from P3 to R2, contains cycles]

Here two cycles exist:
P1 → R1 → P2 → R3 → P3 → R2 → P1
P2 → R3 → P3 → R2 → P2

Processes P1, P2 and P3 are deadlocked and are in a circular wait. P2 is waiting for R3, held by P3. P3 is waiting for P1 or P2 to release R2. So also, P1 is waiting for P2 to release R1. If there are multiple instances of resource types, then a cycle does not necessarily imply a deadlock. Here a cycle is a necessary condition but not a sufficient condition for the existence of a deadlock.

Deadlock Handling
Different methods to deal with deadlocks include methods to ensure that the system will never enter a state of deadlock, methods that allow the system to enter a deadlock and then recover, and methods that simply ignore the problem of deadlocks. To ensure that deadlocks never occur, deadlock prevention / avoidance schemes are used. The four necessary conditions for deadlocks to occur are mutual exclusion, hold and wait, no preemption and circular wait. Deadlock prevention ensures that at least one of the four necessary conditions for deadlocks does not hold. To do this the scheme enforces constraints on requests for resources. Deadlock avoidance requires the operating system to know in advance the resources needed by a process for its entire lifetime. Based on this a priori information, the process making a request is either made to wait or not to wait in case the requested resource is not readily available. If none of the above two schemes are
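The cycle check on a resource allocation graph can be sketched as a depth-first search with a recursion stack; a back edge to a node on the stack closes a cycle. The adjacency lists below transcribe the graphs described above (request edges run P → R, assignment edges R → P).

```python
def has_cycle(graph):
    # DFS keeping the current path on a stack: reaching a node that is
    # already on the stack means we have closed a cycle.
    visited, on_stack = set(), set()

    def dfs(u):
        visited.add(u)
        on_stack.add(u)
        for v in graph.get(u, ()):
            if v in on_stack or (v not in visited and dfs(v)):
                return True
        on_stack.discard(u)
        return False

    return any(dfs(u) for u in graph if u not in visited)

# Graph with the request edge P3 -> R2: both cycles listed above are present.
deadlocked = {"P1": ["R1"], "R1": ["P2"], "P2": ["R3"],
              "R3": ["P3"], "P3": ["R2"], "R2": ["P1", "P2"]}
# Without P3 -> R2 there is no cycle, hence no deadlock.
no_cycle = {"P1": ["R1"], "R1": ["P2"], "P2": ["R3"],
            "R3": ["P3"], "P3": [], "R2": ["P1", "P2"]}
```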

When a process requests a resource, it is allocated the resource if it is available. If it is not, then a check is made to see if the process holding the wanted resource is also waiting for additional resources. If so, the wanted resource is preempted from the waiting process and allotted to the requesting process. If both of the above are not true, that is, the resource is neither available nor held by a waiting process, then the requesting process waits. During its waiting period, some of its resources could also be preempted, in which case the process will be restarted only when all the new and the preempted resources are allocated to it.

Another alternative approach could be as follows: if a process requests a resource which is not available immediately, then all other resources it currently holds are preempted. The process restarts only when the new and the preempted resources are allocated to it, as in the previous case. Resources can be preempted only if their current status can be saved, so that processes can be restarted later by restoring the previous states. Examples are CPU registers and main memory. But resources such as printers cannot be preempted, as their states cannot be saved for restoration later.

  4. Circular wait: Resource types need to be ordered, and processes requesting resources will do so in increasing order of enumeration. Each resource type is mapped to a unique integer that allows resources to be compared and the precedence order for the resources to be determined. Thus F: R → N is a 1:1 function that maps resources to numbers. For example: F(tape drive) = 1, F(disk drive) = 5, F(printer) = 10. To ensure that deadlocks do not occur, each process can request resources only in increasing order of these numbers. To start with, a process can request any resource, say Ri. Thereafter it can request a resource Rj if and only if F(Rj) is greater than F(Ri). Alternately, if F(Rj) is less than F(Ri), then Rj can be allocated to the process if and only if the process releases Ri. The mapping function F should be so defined that resources get numbers in the usual order of usage.

Deadlock Avoidance
Deadlock prevention algorithms ensure that at least one of the four necessary conditions for deadlocks, namely mutual exclusion, hold and wait, no preemption and circular wait, do

not hold. The disadvantage with prevention algorithms is poor resource utilization and thus reduced system throughput. An alternate method is to avoid deadlocks. In this case additional a priori information about the usage of resources by processes is required. This information helps to decide whether a process should wait for a resource or not. The decision about a request is based on all the resources available, the resources allocated to processes, and future requests and releases by processes. A deadlock avoidance algorithm requires each process to make known in advance the maximum number of resources of each type that it may need. Also known is the maximum number of resources of each type available. Using both of the above a priori knowledge, a deadlock avoidance algorithm ensures that a circular wait condition never occurs.

SAFE STATE
A system is said to be in a safe state if it can allocate resources up to the maximum available and is not in a state of deadlock. A safe sequence of processes always ensures a safe state. A sequence of processes <P1, P2, ..., Pn> is safe for the current allocation of resources to processes if resource requests from each Pi can be satisfied from the currently available resources and the resources held by all Pj where j < i. If the state is safe, then Pi, requesting resources, can wait till the Pj's have completed. If such a safe sequence does not exist, then the system is in an unsafe state. A safe state is not a deadlock state. Conversely, a deadlock state is an unsafe state. But not all unsafe states are deadlock states, as shown below:

[Figure: state spaces - the deadlock states are a subset of the unsafe states; the safe states lie outside the unsafe region]

Thus the system has gone from a safe state at time instant t0 into an unsafe state at an instant t1. The extra resource that was granted to P2 at the instant t1 was a mistake. P2 should have waited till the other processes finished and released their resources. Since available resources cannot always be allocated right away, as the system may enter an unsafe state, resource utilization is low if deadlock avoidance algorithms are used.

RESOURCE ALLOCATION GRAPH ALGORITHM
A resource allocation graph can be used to avoid deadlocks. If a resource allocation graph does not have a cycle, then the system is not in deadlock. But if there is a cycle, then the system may be in a deadlock. If the resource allocation graph shows only resources that have single instances, then a cycle does imply a deadlock. An algorithm for avoiding deadlocks where resources have single instances, based on a resource allocation graph, is as described below.

The resource allocation graph has request edges and assignment edges. Let there be another kind of edge called a claim edge. A directed claim edge Pi → Rj indicates that Pi may request the resource Rj some time later. In a resource allocation graph a dashed line represents a claim edge. Later, when a process makes an actual request for a resource, the corresponding claim edge is converted to a request edge Pi → Rj. Similarly, when a process releases a resource after use, the assignment edge Rj → Pi is reconverted to a claim edge Pi → Rj. Thus a process must be associated with all its claim edges before it starts executing.

If a process Pi requests the resource Rj, then the claim edge Pi → Rj is first converted to a request edge. The request of Pi can be granted only if converting the request edge to an assignment edge does not result in a cycle. If no cycle exists, the system remains in a safe state and the request can be granted. If a cycle would be formed, the system would enter an unsafe state, and so the request should not be granted. This is illustrated below (Figure 5.5a, 5.5b).

[Figure 5.5a, 5.5b: resource allocation graphs with processes P1, P2 and resources R1, R2 - before (left) and after (right) R2 is assigned to P2]

Figure: Resource allocation graph showing safe and deadlock states.

Consider the resource allocation graph shown on the left above. Resource R2 is currently free. Allocation of R2 to P2 on request will result in a cycle, as shown on the right. Therefore the system will be in an unsafe state. In this situation, if P1 requests R2, then a deadlock occurs.

BANKER'S ALGORITHM
The resource allocation graph algorithm is not applicable where resources have multiple instances. In such cases the Banker's algorithm is used. A new process entering the system must make known a priori the maximum instances of each resource that it needs, subject to the maximum available of each type. As execution proceeds and requests are made, the system checks whether the allocation of the requested resources leaves the system in a safe state. Only if so are the allocations made; otherwise processes must wait for resources. The following are the data structures maintained to implement the Banker's algorithm:

  1. n: Number of processes in the system.
  2. m: Number of resource types in the system.
  3. Available: a vector of length m. Each entry gives the number of instances of a resource type that are available at the instant. Available[j] = k means there are k instances of the jth resource type Rj.
  4. Max: an n x m matrix that defines the maximum demand of each process. Max[i][j] = k says the ith process Pi can request at most k instances of the jth resource type Rj.
  5. Allocation: an n x m matrix which at any instant defines the number of resources of each type currently allocated to each of the n processes. If Allocation[i][j] = k, then the ith process Pi is currently holding k instances of the jth resource type Rj.

If the resulting state is safe, then process Pi is allocated the resources and the above changes are made permanent. If the new state is unsafe, then Pi must wait and the old status of the data structures is restored.

Illustration:
n = 5: <P0, P1, P2, P3, P4>
m = 3: <A, B, C>
Initially Available = <10, 5, 7>

At an instant t0, the data structures have the following values:

        Allocation    Max        Available    Need
        A  B  C       A  B  C    A  B  C      A  B  C
P0      0  1  0       7  5  3    3  3  2      7  4  3
P1      2  0  0       3  2  2                 1  2  2
P2      3  0  2       9  0  2                 6  0  0
P3      2  1  1       2  2  2                 0  1  1
P4      0  0  2       4  3  3                 4  3  1

To find a safe sequence and to prove that the system is in a safe state, use the safety algorithm as follows:

Step    Work        Finish        Safe sequence
0       3  3  2     F F F F F     < >
1       5  3  2     F T F F F     <P1>
2       7  4  3     F T F T F     <P1, P3>
3       7  4  5     F T F T T     <P1, P3, P4>
4       7  5  5     T T F T T     <P1, P3, P4, P0>
5       10 5  7     T T T T T     <P1, P3, P4, P0, P2>

Now at an instant t1, Request1 = <1, 0, 2>. To actually allocate the requested resources, use the resource-request algorithm as follows: Request1 <= Need1 and Request1 <= Available, so the request can be considered. If the request is fulfilled, then the new values in the data structures are as follows:

        Allocation    Max        Available    Need
        A  B  C       A  B  C    A  B  C      A  B  C
P0      0  1  0       7  5  3    2  3  0      7  4  3
P1      3  0  2       3  2  2                 0  2  0
P2      3  0  2       9  0  2                 6  0  0
P3      2  1  1       2  2  2                 0  1  1
P4      0  0  2       4  3  3                 4  3  1
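The resource-request step can be sketched as below (the function and variable names are illustrative; a full Banker's implementation would run the safety algorithm before committing the tentative allocation):

```python
def request_resources(i, request, available, allocation, need):
    # A process may never exceed its declared maximum claim.
    if any(r > nd for r, nd in zip(request, need[i])):
        raise ValueError("process exceeded its maximum claim")
    # If the resources are not free, Pi must wait.
    if any(r > av for r, av in zip(request, available)):
        return False
    # Tentatively allocate; the safety algorithm would then check the new state.
    for j, r in enumerate(request):
        available[j] -= r
        allocation[i][j] += r
        need[i][j] -= r
    return True

# Figures from the illustration at instant t0.
available = [3, 3, 2]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
need = [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]

granted = request_resources(1, [1, 0, 2], available, allocation, need)
```

After granting Request1 = <1, 0, 2>, Available becomes <2, 3, 0>, Allocation1 becomes <3, 0, 2> and Need1 becomes <0, 2, 0>, matching the updated values above.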

Use the safety algorithm to see if the resulting state is safe:

Step    Work        Finish        Safe sequence
0       2  3  0     F F F F F     < >
1       5  3  2     F T F F F     <P1>
2       7  4  3     F T F T F     <P1, P3>
3       7  4  5     F T F T T     <P1, P3, P4>
4       7  5  5     T T F T T     <P1, P3, P4, P0>
5       10 5  7     T T T T T     <P1, P3, P4, P0, P2>

Since the resulting state is safe, the request by P1 can be granted. Now at an instant t2, Request4 = <3, 3, 0>. But since Request4 > Available, the request cannot be granted. Also Request0 = <0, 2, 0> at t2 cannot be granted, since the resulting state is unsafe as shown below:

        Allocation    Max        Available    Need
        A  B  C       A  B  C    A  B  C      A  B  C
P0      0  3  0       7  5  3    2  1  0      7  2  3
P1      3  0  2       3  2  2                 0  2  0
P2      3  0  2       9  0  2                 6  0  0
P3      2  1  1       2  2  2                 0  1  1
P4      0  0  2       4  3  3                 4  3  1

Using the safety algorithm, the resulting state is unsafe, since Finish remains false for all values of i and we cannot find a safe sequence.

Step    Work        Finish        Safe sequence
0       2  1  0     F F F F F     < >
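The safety algorithm itself can be sketched as below, using the figures from the illustration. Note that this sketch always picks the lowest-numbered eligible process, so it may produce a different (equally valid) safe sequence than the tables above.

```python
def safe_sequence(available, allocation, need):
    # Returns a safe sequence of process indices, or None if the state is unsafe.
    n = len(allocation)
    work = list(available)
    finish = [False] * n
    seq = []
    while len(seq) < n:
        for i in range(n):
            if not finish[i] and all(nd <= w for nd, w in zip(need[i], work)):
                work = [w + a for w, a in zip(work, allocation[i])]
                finish[i] = True
                seq.append(i)
                break
        else:
            return None  # no runnable process found: the state is unsafe
    return seq

# State at t0: safe.
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
need = [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]
safe = safe_sequence([3, 3, 2], allocation, need)

# State after hypothetically granting Request0 = <0, 2, 0> at t2: unsafe.
allocation2 = [[0, 3, 0], [3, 0, 2], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
need2 = [[7, 2, 3], [0, 2, 0], [6, 0, 0], [0, 1, 1], [4, 3, 1]]
unsafe = safe_sequence([2, 1, 0], allocation2, need2)
```

For the t0 state this returns [1, 3, 0, 2, 4], that is <P1, P3, P0, P2, P4>; the tables above list <P1, P3, P4, P0, P2>, another valid ordering. For the t2 state it returns None, confirming that Request0 must be refused.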

A wait-for graph is not applicable for detecting deadlocks where there exist multiple instances of resources, because in that case a cycle may or may not indicate a deadlock, and so a decision cannot be made. In situations where there are multiple instances of resources, an algorithm similar to the Banker's algorithm for deadlock avoidance is used. The data structures used are similar to those in the Banker's algorithm and are given below:

  1. n: Number of processes in the system.
  2. m: Number of resource types in the system.
  3. Available: a vector of length m. Each entry gives the number of instances of a resource type that are available at the instant.
  4. Allocation: an n x m matrix which at any instant defines the number of resources of each type currently allocated to each of the n processes.
  5. Request: an n x m matrix defining the current requests of each process. Request[i][j] = k means the ith process Pi is requesting k instances of the jth resource type Rj.

ALGORITHM

  1. Define a vector Work of length m and a vector Finish of length n.
  2. Initialize Work = Available and, for i = 1, 2, ..., n: if Allocationi != 0 then Finish[i] = false, else Finish[i] = true.
  3. Find an i such that
     a. Finish[i] = false and
     b. Requesti <= Work.
     If such an i does not exist, go to step 5.
  4. Work = Work + Allocationi; Finish[i] = true. Go to step 3.
  5. If Finish[i] = true for all i, then the system is not in deadlock. Else the system is in deadlock, with all processes corresponding to Finish[i] = false being deadlocked.

Illustration:
n = 5: <P0, P1, P2, P3, P4>
m = 3: <A, B, C>
Initially Available = <7, 2, 6>

At an instant t0, the data structures have the following values:

        Allocation    Request    Available
        A  B  C       A  B  C    A  B  C
P0      0  1  0       0  0  0    0  0  0
P1      2  0  0       2  0  2
P2      3  0  3       0  0  0
P3      2  1  1       1  0  0
P4      0  0  2       0  0  2

To prove that the system is not deadlocked, use the above algorithm as follows:

Step    Work        Finish        Safe sequence
0       0  0  0     F F F F F     < >
1       0  1  0     T F F F F     <P0>
2       3  1  3     T F T F F     <P0, P2>
3       5  2  4     T F T T F     <P0, P2, P3>
4       5  2  6     T F T T T     <P0, P2, P3, P4>
5       7  2  6     T T T T T     <P0, P2, P3, P4, P1>

Now at an instant t1, Request2 = <0, 0, 1> and the new values in the data structures are as follows:

        Allocation    Request    Available
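The detection algorithm can be sketched in the same style. As before, the scan takes the lowest-numbered eligible process, so the completion order may differ from the table above while the deadlock verdict is the same.

```python
def detect_deadlock(available, allocation, request):
    n = len(allocation)
    work = list(available)
    # A process holding no resources cannot contribute to a deadlock.
    finish = [all(a == 0 for a in allocation[i]) for i in range(n)]
    order = []
    progress = True
    while progress:
        progress = False
        for i in range(n):
            if not finish[i] and all(r <= w for r, w in zip(request[i], work)):
                work = [w + a for w, a in zip(work, allocation[i])]
                finish[i] = True
                order.append(i)
                progress = True
                break  # restart the scan from P0
    return [i for i in range(n) if not finish[i]], order

# Figures from the illustration at instant t0.
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 3], [2, 1, 1], [0, 0, 2]]
request = [[0, 0, 0], [2, 0, 2], [0, 0, 0], [1, 0, 0], [0, 0, 2]]
stuck, order = detect_deadlock([0, 0, 0], allocation, request)

# With the t1 request Request2 = <0, 0, 1>, only P0 can finish.
request_t1 = [row[:] for row in request]
request_t1[2] = [0, 0, 1]
stuck_t1, _ = detect_deadlock([0, 0, 0], allocation, request_t1)
```

At t0, stuck is empty: every process can finish, so the system is not deadlocked. With the t1 request added, only P0 completes and P1, P2, P3, P4 are reported deadlocked.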