Lecture 6, part 1: Synchronization and active waiting



Interaction of parallel activities, the resulting problems and solutions

Important questions:

Example: Shared data

A simple linked list implementation in C:

/* Data type for list elements */
struct element {
  char payload; /* the data to be stored */
  struct element *next; /* pointer to next list element */
};
/* Data type for list administration */
struct list {
  struct element *head; /* first element */
  struct element **tail; /* 'next' pointer in last element */
};
/* Function to add a new element to the end of the list */
void enqueue (struct list *list, struct element *item) {
  item->next = NULL;
  *list->tail = item;
  list->tail = &item->next;
}

Where does this problem occur?

The problem: race conditions

Synchronization

Critical section

Solution: lock variables

Implementing locks: the incorrect way

/* Lock variable (initial value is 0) */
typedef unsigned char Lock;
/* enter the critical section */
void acquire (Lock *lock) {
 while (*lock); /* note: empty loop body! */
 *lock = 1;
}
/* leave the critical section */
void release (Lock *lock) {
 *lock = 0;
}

This naïve lock implementation does not work!

A working solution: "bakery" algorithm

typedef struct { /* lock variables (initially all 0) */
  bool choosing[N]; int number[N];
} Lock;

void acquire (Lock *lock) { /* enter critical section */
  int j; int i = pid();
  lock->choosing[i] = true;
  lock->number[i] = max(lock->number[0], ..., lock->number[N-1]) + 1;
  lock->choosing[i] = false;
  for (j = 0; j < N; j++) {
    while (lock->choosing[j]);
    while (lock->number[j] != 0 &&
        (lock->number[j] < lock->number[i] ||
        (lock->number[j] == lock->number[i] && j < i)));
  }
}

void release (Lock *lock) { /* leave critical section */
  int i = pid(); lock->number[i] = 0;
}

Discussion: bakery algorithm

The bakery algorithm is a provably correct solution for the problem of critical sections, but:

Locks with atomic operations

Many CPUs support indivisible (atomic) read/modify/write cycles that can be used to implement lock algorithms

; Motorola 680x0: test-and-set
acquire   TAS lock
          BNE acquire

; Intel x86: atomic exchange
          mov ax, 1
acquire   xchg ax, lock
          cmp ax, 0
          jne acquire

; ARM: load/store exclusive
          MOV r1, #0xFF
acquire   LDREX r0, [LockAddr]
          CMP r0, #0
          STREXEQ r0, r1, [LockAddr]
          CMPEQ r0, #0
          BNE acquire

Discussion: active waiting

So far, our lock algorithms have a significant drawback: the actively waiting process occupies the CPU without making any progress (busy waiting).

Suppressing interrupts

What is the reason for a process switch inside a critical section?

/* enter critical section */
void acquire (Lock *lock) {
 asm ("cli");
}
/* leave critical section */
void release (Lock *lock) {
 asm ("sti");
}

cli and sti are used in Intel x86 processors to disable and enable the handling of interrupts.

Lecture 6, part 2: Passive waiting and monitors

Alternative: passive waiting

Semaphores

Example semaphore implementation

/* C++ implementation taken from the teaching OS OO-StuBS */
class Semaphore : public WaitingRoom {
  int counter;
public:
  Semaphore(int c) : counter(c) {}
  void wait() {
    if (counter == 0) {
      Customer *life = (Customer*)scheduler.active();
      enqueue(life);
      scheduler.block(life, this);
    }
    else
      counter--;
  }
  void signal() {
    Customer *customer = (Customer*)dequeue();
    if (customer)
      scheduler.wakeup(customer);
    else
      counter++;
  }
};

Using semaphores

Semaphore lock; /* = 1: use semaphore as lock variable */
/* Example code: enqueue */
void enqueue (struct list *list, struct element *item) {
  item->next = NULL;
  wait (&lock);
  *list->tail = item;
  list->tail = &item->next;
  signal (&lock);
}

Semaphores: simple interactions

/* shared memory */
Semaphore elem;
struct list l;
struct element e;

/* initialization */
elem = 0;
/* process 1 */
void producer() {
 enqueue(&l, &e);
 signal(&elem);
}
/* process 2 */
void consumer() {
 struct element *x;
 wait(&elem);
 x = dequeue(&l);
}
/* shared memory */
Semaphore resource;

/* initialization */
resource = N; /* N > 1 */

/* the rest: same as with mutual exclusion */

Semaphores: complex interactions

/* shared memory */
Semaphore mutex;
Semaphore wrt;
int readcount;

/* initialization */
mutex = 1;
wrt = 1;
readcount = 0;
/* writer */
wait(&wrt);
// … write data …
signal(&wrt);
/* reader */
wait(&mutex);
readcount++;
if (readcount == 1)
  wait(&wrt);
signal(&mutex);
// … read data …
wait(&mutex);
readcount--;
if (readcount == 0)
  signal(&wrt);
signal(&mutex);

Semaphores: discussion

Language support: monitors

Monitors: example code

/* A synchronized queue */
monitor SyncQueue {
  Queue queue;
  condition not_empty;
public:
  /* add an element */
  void enqueue(Element element) {
    queue.enqueue(element);
    not_empty.signal();
  }
  /* remove an element */
  Element dequeue() {
    while (queue.is_empty())
      not_empty.wait();
    return queue.dequeue();
  }
};

Signaling semantics in monitors

Monitors in Java

/* A synchronized queue */
class SyncQueue {
  private Queue queue;
  /* add element */
  public synchronized void enqueue(Element element) {
    queue.enqueue(element);
    notifyAll();
  }
  /* remove element */
  public synchronized Element dequeue() {
    while (queue.empty()) wait();
    return queue.dequeue();
  }
};

Conclusion