Mutual Exclusion and Shared Memory Coordination: Techniques and Algorithms, Slides of Operating Systems

These slides cover process coordination using shared memory and busy waiting, focusing on mutual exclusion and the critical-section problem. They present algorithms such as Dekker's and Peterson's, as well as semaphores and their implementation, and discuss alternative methods for achieving mutual exclusion without shared memory.

Typology: Slides

2012/2013

Uploaded on 04/23/2013 by ashakiran


Shared Memory Coordination

• We will be looking at process coordination using shared memory and busy waiting.
  – So we don't send messages but read and write shared variables.
  – When we need to wait, we loop and don't context switch.
  – This can be wasteful of resources if we must wait a long time.

Shared Memory Coordination

– Context-switching primitives normally use busy waiting in their implementation.

• Mutual Exclusion
  – Consider adding one to a shared variable V.
  – When compiled on many machines, this becomes three instructions:
    • load r1 ← V
    • add r1 ← r1+1
    • store r1 → V

Mutual Exclusion

– The problem is that the 3-instruction sequence must be atomic, i.e. it cannot be interleaved with another execution of these instructions.
– That is, one execution excludes the possibility of another. So they must exclude each other, i.e. we must have mutual exclusion.
– This was a race condition.
  • Hard bugs to find, since non-deterministic.
– Can in general involve more than two processes.

Mutual Exclusion

– The portion of code that requires mutual exclusion is often called a critical section.

• One approach is to prevent context switching.
  – We can do this for the kernel of a uniprocessor.
    • Mask interrupts.
  – Not feasible for user-mode processes.
  – Not feasible for multiprocessors.

Mutual Exclusion

– Trivial solution:
  • Let the releasing-part be simply "halt".
– This shows we need to specify the problem better.
– Additional requirement:
  • Assume that if a process begins execution of its critical section and no other process enters the critical section, then the first process will eventually exit the critical section.

Mutual Exclusion

• Then the requirement is "If a process is executing its trying part, then some process will eventually enter the critical section."

• Software-only solutions to the CS problem.
  – We assume the existence of atomic loads and stores.
    • Only up to word length (i.e. not a whole page).
  – We start with the case of two processes.
  – Easy if we want the tasks to alternate in the CS and we know which one goes first.

Mutual Exclusion

– But always alternating does not satisfy the additional requirement above.
– Let the NCS for process 1 be an infinite loop (or a halt).
  • We will get to a point when process 2 is in its trying part but turn=1, and turn will not change.
  • So some process enters its trying part but neither process will enter the CS.

Mutual Exclusion

• The first solution that worked was discovered by a mathematician named Dekker.
  – Now we will use turn only to resolve disputes.

Dekker’s Algorithm

/* Variables are global and shared.  Turn is initially 1. */
for (;;) {                       // process 2 - an infinite loop to show it
                                 // enters the CS more than once
    p2wants = 1;
    while (p1wants == 1) {
        if (turn == 1) {
            p2wants = 0;
            while (turn == 1) { /* empty loop */ }
            p2wants = 1;
        }
    }
    critical_section();
    turn = 1;
    p2wants = 0;
    noncritical_section();
}

Mutual Exclusion

– The winner-to-be just loops, waiting for the loser to give up, and then goes into the CS.
– The loser-to-be:
  • Gives up.
  • Waits to see that the winner has finished.
  • Starts over (knowing it will win).
– Dijkstra extended Dekker's solution for > 2 processes.
– Others improved the fairness of Dijkstra's algorithm.

Peterson’s Algorithm

/* Variables are global and shared. */
for (;;) {                       // process 1 - an infinite loop to show it
                                 // enters the CS more than once
    p1wants = 1;
    turn = 2;
    while (p2wants && turn == 2) { /* empty loop */ }
    critical_section();
    p1wants = 0;
    noncritical_section();
}

Peterson’s Algorithm

/* Variables are global and shared. */
for (;;) {                       // process 2 - an infinite loop to show it
                                 // enters the CS more than once
    p2wants = 1;
    turn = 1;
    while (p1wants && turn == 1) { /* empty loop */ }
    critical_section();
    p2wants = 0;
    noncritical_section();
}

Semaphores

– Definition (not an implementation):
  • Let S be an enumerated type with values closed and open (like a gate).
  • P(S) is
        while S = closed
        S ← closed
  • The failed test and the assignment are a single atomic action.

Semaphores

P(S) is
  label:
    {[                  -- begin atomic part
    if S = open
        S ← closed
    else
    }]                  -- end atomic part
        goto label

V(S) is
    S ← open