Converted all wait/wakeup runqueue lock/unlock paths to irq versions.

Irqs can now touch runqueues and do async wakeups. This required making
all wakeup, wait, and runqueue locking paths irq-safe.

All of this assumes that in an SMP setup we may have cross-cpu wakeups
and runqueue manipulation. If we later decide to wake up threads only
in the current container (and lock containers to cpus), we won't really
need spinlocks or irq disabling anymore. The current setup may be
marginally less responsive, but is more flexible.
Author: Bahadir Balban
Date: 2009-12-12 01:20:14 +02:00
parent b1614191b3
commit 32c0bb3a76
11 changed files with 155 additions and 72 deletions


@@ -0,0 +1,25 @@
#ifndef __L4LIB_ARCH_IRQ_H__
#define __L4LIB_ARCH_IRQ_H__
/*
* Destructive atomic-read.
*
* Write 0 to byte at @location as its contents are read back.
*/
static inline char l4_atomic_dest_readb(void *location)
{
	unsigned char zero = 0;
	unsigned char val;

	/*
	 * SWPB atomically loads the byte at [location] into val while
	 * storing zero there in the same operation. val is an output
	 * ("=&r", early-clobber so it doesn't alias the inputs), and
	 * location is passed as an address, not dereferenced.
	 */
	__asm__ __volatile__ (
		"swpb	%0, %1, [%2]	\n"
		: "=&r" (val)
		: "r" (zero), "r" (location)
		: "memory"
	);
	return val;
}
#endif