Important points:
----------------
1. Works fine for pb926 + qemu.
2. Scan code logic for keyboard is not complete.
We just have generic keys and shift working.
3. Mouse scancodes are collected but not decoded.
4. Right now we do enable_irq() just before we go back to waiting for
new irqs (see the sketch below). This is not correct, but we had
latency issues. It needs to be fixed immediately.
5. Also it seems like the notify_clot count should be an atomic
variable. Needs to be discussed.
- Userspace irq handling for timer.
- If no runnable task is left, the scheduler busy-loops in the user context
of the last runnable task until a new task becomes runnable.
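
A minimal sketch of the irq thread loop from point 4 above, to make the
ordering concrete. The names wait_for_irq(), do_irq_work() and the loop
itself are illustrative assumptions, not the actual code:

    /* Hypothetical primitives standing in for the real userspace irq api. */
    extern void wait_for_irq(int irq);
    extern void do_irq_work(int irq);
    extern void enable_irq(int irq);

    /* Main loop of a userspace irq handler thread, showing where the irq
     * is currently re-enabled (the point flagged above as incorrect). */
    void irq_thread_loop(int irq)
    {
        for (;;) {
            wait_for_irq(irq);      /* block until the next irq arrives */
            do_irq_work(irq);       /* service the device */
            /* Re-enabled only here, just before blocking again; this was
             * a workaround for latency issues and needs to be fixed. */
            enable_irq(irq);
        }
    }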
Irqs can now touch runqueues and do async wakeups. This required making
all wakeup, wait and runqueue locking irq-safe. All of this assumes that
in an SMP setup we may have cross-cpu wakeups and runqueue manipulation.
If we later decide that we only wake up threads in the current container
(and lock containers to cpus), we won't really need spinlocks or irq
disabling anymore. The current setup may be marginally less responsive,
but it is more flexible.
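
A rough sketch of the irq-safe wakeup path this implies. All names here
(struct runqueue, sched_enqueue_task and the spinlock helpers) are
illustrative assumptions rather than the kernel's actual symbols:

    struct runqueue {
        struct spinlock lock;       /* protects the runnable list */
        struct link task_list;      /* runnable tasks */
    };

    /* Wakeup path callable from both thread and irq context: irqs are
     * disabled around the runqueue spinlock so an irq handler doing an
     * async wakeup on the same cpu cannot deadlock against us, while the
     * spinlock itself covers cross-cpu wakeups on SMP. */
    void sched_enqueue_task(struct runqueue *rq, struct ktcb *task)
    {
        unsigned long irqflags;

        spin_lock_irq(&rq->lock, &irqflags);    /* lock + disable irqs */
        list_insert_tail(&task->rq_list, &rq->task_list);
        task->state = TASK_RUNNABLE;
        spin_unlock_irq(&rq->lock, irqflags);   /* unlock + restore irqs */
    }

If wakeups were restricted to the current container with containers locked
to cpus, the lock/irq-disable pair above could degrade to plain list
manipulation, as noted above.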
l4_capability_control works well for almost all system calls, using a
buffer pointer to the capability that it operates on. Only for
sharing/granting of capability lists is it yet to be decided how to
provide a grant target id.
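
As a usage illustration only (the request code and the struct capability
layout below are assumptions about the interface, not its actual
definitions), the common case is just a buffer pointer:

    /* Hypothetical: read the state of one of our caps into a buffer. */
    int read_own_cap(l4id_t capid, struct capability *buf)
    {
        buf->capid = capid;     /* tell the kernel which cap we mean */
        return l4_capability_control(CAP_CONTROL_READ_CAP, 0, buf);
    }

For sharing/granting a whole capability list, however, there is no
obvious field in such a buffer to carry the grant target id yet.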
The first one is related to resource recycling. The parent waiting for its
child to exit did not delete the child's ktcb. Now it does.
The second one is related to self-destruction. The added code wakes up all
waiters before the thread exits.
If there was more than one sleeper on a mutex, only one of them was woken
up. This caused the others to sleep forever. There is currently no
facility in the kernel to check whether there are still sleepers when a
thread is about to unlock a mutex. To work around this problem, we started
waking up all the threads instead of one. This brings another problem, the
thundering herd, but it also provides random fairness, which gives a
higher priority thread more opportunity to get the lock.
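
A simplified sketch of the wake-all unlock path (struct mutex_queue, its
fields and the helpers are illustrative names, not the actual kernel
functions):

    void mutex_unlock_wake_all(struct mutex_queue *mq)
    {
        struct ktcb *sleeper, *tmp;

        spin_lock(&mq->wqh_lock);
        mq->owner = 0;                  /* release the lock */
        /* We cannot cheaply tell how many sleepers remain, so wake them
         * all and let them re-contend: a thundering herd, but one that
         * gives a higher priority thread a fair shot at the lock. */
        list_for_each_entry_safe(sleeper, tmp, &mq->sleepers, wait_list) {
            list_del(&sleeper->wait_list);
            sched_resume_task(sleeper);
        }
        spin_unlock(&mq->wqh_lock);
    }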
It is not very straightforward to reach a capability list. We now use a
single function to find a capability by its id together with the list
that holds it, since the two are frequently needed together (e.g. cap
removal and destruction).
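
A sketch of what the combined lookup might look like (the function and
structure names are assumptions; the point is that the caller gets both
the capability and the list that owns it):

    struct capability *cap_find_by_id(struct ktcb *task, l4id_t capid,
                                      struct cap_list **owner)
    {
        struct cap_list *clist;
        struct capability *cap;

        /* Walk each cap list reachable from the task. */
        for (clist = task_first_cap_list(task); clist; clist = clist->next)
            list_for_each_entry(cap, &clist->caps, list)
                if (cap->capid == capid) {
                    *owner = clist;     /* hand back the owning list */
                    return cap;         /* ...and the cap itself */
                }
        return 0;
    }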
Implemented a protocol between a client and its pager to request and
obtain a capability to ipc to another client of the pager.
The pager first ensures that the request from its client is valid.
It then tries to use a greater capability that it possesses to produce
the new capability that the client requested. Once the kernel validates
the chosen capability and replicates/reduces it to the client's need,
it grants it to the client.
The pager handles client capability requests by using one of its own
capabilities to create a new one that suits the client's needs.
The current issue is that the pager can have multiple caps and may not
know which one is suitable for creating the one the client needs.
The kernel knows this very well, so the solution would be for the pager
to attempt to use capabilities that roughly match (i.e. by type) and
leave it to the kernel to decide whether a given capability is powerful
enough to suit the client's needs.
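
Put together, the pager side of this protocol might look roughly as
follows; the request structure, error codes and helper names are
illustrative assumptions:

    int pager_handle_cap_request(l4id_t client, struct cap_request *req)
    {
        struct capability *own;

        /* 1. Make sure the request from this client is valid. */
        if (!cap_request_valid(client, req))
            return -EPERM;

        /* 2. Try our own caps that roughly match by type; the kernel is
         *    the one that decides whether each is powerful enough. */
        list_for_each_entry(own, pager_cap_list(), list) {
            if (own->type != req->type)
                continue;
            /* 3. Ask the kernel to replicate/reduce this cap to the
             *    client's need and grant the result to the client. */
            if (cap_derive_and_grant(own, req, client) == 0)
                return 0;
        }
        return -1;      /* no suitable capability found */
    }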
Thread ids now contain their container ids in the top 2 nibbles.
Threads in other containers can be addressed by changing those
two nibbles. The addressing of inter-container threads is
subject to capabilities.
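
A sketch of the encoding, assuming 32-bit ids (the macro names and exact
bit positions are illustrative):

    /* Container id lives in the top 2 nibbles (bits 24-31) of a thread id. */
    #define TID_CID_SHIFT       24
    #define TID_CID_MASK        0xFF000000
    #define TID_TNO_MASK        0x00FFFFFF

    #define __cid(tid)          (((tid) & TID_CID_MASK) >> TID_CID_SHIFT)
    #define __tno(tid)          ((tid) & TID_TNO_MASK)

    /* Address the same thread number in another container; whether the
     * operation then succeeds is subject to capabilities. */
    #define TID_IN_CONTAINER(tid, cid) \
            (((tid) & TID_TNO_MASK) | ((unsigned int)(cid) << TID_CID_SHIFT))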
Task ids are now unsigned as the container ids will need to be encoded
in the id fields as well.
For requests that require even more comprehensive id input (such as
thread creation), the container id is also passed, so that threads
_could_ potentially be created in other containers as well.
Multi-threaded apps can now wait on children to be destroyed.
WAIT_ON is useful when a child exits with an exit code and the pager
of the child does not want to take the hassle of destroying it via an
ipc. It provides an alternative method of synchronous thread destruction,
where the child destroys itself directly rather than the parent issuing
a destroy on it explicitly.
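
From the parent's perspective this could look as follows; l4_wait_on()
and the structure initialisation are hypothetical stand-ins for the real
interface:

    /* Parent blocks until the child exits; the child destroys itself
     * directly, so no explicit destroy ipc is ever sent. */
    struct task_ids ids = { .tid = child_tid };
    int exit_code = l4_wait_on(&ids);   /* the WAIT_ON operation */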
Pagers kill all children but suspend themselves.
It is currently not straightforward for a pager to delete its own tcb and
quit. It would have to take all allocator locks without sleeping, remove
itself from the scheduler queue, and then delete itself and quit. This is
not so easy now, as some allocation locks are mutexes (the address space
lock, the ktcb/space allocators, etc.).
An easier approach would be to have a kernel thread or a superior thread
delete the pager.
Reiterating to simplify:
Working:
- Pager issues destroy, client also issues exit; they work in sync.
Missing:
- Pager killing itself
- Pager killing all children while killing itself
- Pager waiting on children
One issue is related to time distribution when a new child is created.
If the parent has one tick left, then both child and parent receive
zero ticks. When combined with the values
current_irq_nest_count = 1
voluntary_preempt = 0
this prevented the scheduler from being invoked.
The second issue is related to the overall time distribution. When a
thread runs out of time, its new timeslice is calculated with the
formula below:
new_timeslice = (thread_prio * SCHED_TICKS) / total_prio
When total_prio is the sum of the priorities of all threads in the
system, this can yield a zero-tick timeslice. In the new scheme,
total_prio is equal to the number of priority types in the system, so
this is fixed. Every thread gets a timeslice in proportion to its
priority, and there is no risk of ending up with zero ticks.
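
A quick made-up example of the difference: with SCHED_TICKS = 100 and a
priority-1 thread among threads whose priorities sum to 200, the old
scheme gave it (1 * 100) / 200 = 0 ticks, i.e. starvation. With
total_prio redefined as the number of priority types, say 4, the same
thread gets (1 * 100) / 4 = 25 ticks and a priority-3 thread gets 75, so
nobody ends up with a zero timeslice.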
We moved the initial list of a pager's caps from the ktcb to the task's
space, since the task is expected to trust its space.
Most references to task->cap_list had to change. Although a single
cap list only tells part of the story about a task's caps, the
TASK_CAP_LIST macro works for us to get the first private set of
caps that a task has.
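
For illustration (the field names are assumptions), the macro can be as
simple as:

    /* First private cap list of a task, now reached through its space
     * rather than through the ktcb. */
    #define TASK_CAP_LIST(task)     (&(task)->space->cap_list)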
Capability checking for thread_control, exregs, mutex, cap_control,
ipc, and map system calls.
The visualised model is implemented in code that compiles, but
actual functionality hasn't been tested.
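
A sketch of what the per-syscall check amounts to; the field names and
the access-bit scheme are illustrative assumptions:

    /* Return 0 if the caller holds a capability of the right type whose
     * access bits cover the requested operations on the target resource. */
    int cap_syscall_check(struct ktcb *caller, unsigned int cap_type,
                          unsigned int ops, l4id_t target)
    {
        struct capability *cap;

        list_for_each_entry(cap, TASK_CAP_LIST(caller), list) {
            if (cap->type != cap_type)
                continue;               /* wrong resource type */
            if ((cap->access & ops) != ops)
                continue;               /* not all requested ops allowed */
            if (cap->resid == target)
                return 0;               /* matching capability found */
        }
        return -EPERM;                  /* no matching capability */
    }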
Need to add:
- Dynamic assignment of initial resources matching with what's
defined in the configuration.
- A paged-thread-group, since that would be a logical unit of
separation from a capability point-of-view.
- Resource ids for various tasks. E.g.
- Memory capabilities don't have target resources.
- Thread capability assumes current container for THREAD_CREATE.
- Mutex syscall assumes current thread (this one may not need
any changing)
- cap_control syscall assumes the current thread. It may well be
that another thread's capability list is the one being manipulated.
Last but not least:
- A simple and easy-to-use userspace library for dynamic expansion
of resource domains as new resources are created such as threads.