- Updated sleeping paths such that a task is atomically put into
a runqueue and made RUNNABLE, or removed from a runqueue and made SLEEPING.
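  A minimal sketch of what "atomically" means here, assuming the usual kernel spinlock
  and list primitives; every name below is illustrative, not the actual kernel API:

    /* Sketch only: assumes struct ktcb has a state field and a runqueue
     * list node, and that a single runqueue_lock protects the queue.
     * The state change and the queue manipulation happen under one lock,
     * so a task is never seen RUNNABLE but dequeued, or SLEEPING but
     * still enqueued. */
    void sched_sleep_task(struct ktcb *task)
    {
        spin_lock(&runqueue_lock);
        list_del(&task->rq_list);        /* take it off the runqueue ... */
        task->state = TASK_SLEEPING;     /* ... and mark it sleeping, atomically */
        spin_unlock(&runqueue_lock);
    }

    void sched_wake_task(struct ktcb *task)
    {
        spin_lock(&runqueue_lock);
        task->state = TASK_RUNNABLE;     /* mark it runnable ... */
        list_add_tail(&task->rq_list, &runqueue);  /* ... and enqueue, atomically */
        spin_unlock(&runqueue_lock);
    }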
- Modified vma dropping sources to handle both copy_on_write() and exit() cases
in a common function.
- Added the first infrastructure for a pager to suspend a task and wait for
suspend completion from the scheduler.
A new scheduler replaces the old one.
- There are no longer sched_xxx_notify() calls that ask the scheduler to change task state.
- Tasks now have priorities and different timeslices.
- A one-second interval is distributed among processes.
- There are just runnable and expired queues.
- SCHED_GRANULARITY bounds the maximum time a task can run before being preempted.
- The scheduler can now detect a safe point and suspend a task.
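  Roughly, the runnable/expired design can be pictured like the sketch below;
  all names in it are illustrative rather than the actual code:

    /* Sketch of the two-queue scheme. A task's timeslice is derived from
     * its priority; when the slice runs out the task moves to the expired
     * queue, and when the runnable queue drains the two queues swap roles,
     * starting a new epoch. */
    struct runqueue {
        struct list_head runnable;
        struct list_head expired;
    };

    void sched_tick(struct runqueue *rq, struct ktcb *task)
    {
        if (--task->ticks_left > 0)
            return;

        /* Timeslice used up: recharge from priority and expire the task. */
        task->ticks_left = task->priority * TICKS_PER_PRIORITY;
        list_del(&task->rq_list);
        list_add_tail(&task->rq_list, &rq->expired);

        if (list_empty(&rq->runnable))
            list_swap(&rq->runnable, &rq->expired);   /* new epoch */

        need_resched = 1;
    }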
Interruptible blocking is implemented.
- Mutexes, waitqueues and ipc have been modified to be interruptible.
- Sleep information (e.g. which waitqueue the task sleeps on) is stored in the ktcb.
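  The bookkeeping meant here looks roughly like this; the field, flag and helper
  names are hypothetical:

    /* Sketch: the ktcb remembers where and how the task sleeps, so that an
     * interrupting party can find the waitqueue, unlink the task and wake
     * it with an error. */
    struct ktcb {
        /* ... */
        struct waitqueue_head *waiting_on;  /* which waitqueue, if any */
        unsigned int sleep_flags;           /* e.g. SLEEP_INTERRUPTIBLE, SLEEP_INTERRUPTED */
    };

    int wait_on_interruptible(struct waitqueue_head *wqh)
    {
        current->waiting_on = wqh;
        current->sleep_flags = SLEEP_INTERRUPTIBLE;
        sched_sleep_task(current);
        schedule();

        /* Woken either normally or because someone interrupted the sleep. */
        current->waiting_on = 0;
        return (current->sleep_flags & SLEEP_INTERRUPTED) ? -EINTR : 0;
    }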
- test0 now forks 16 tasks that each modify a global variable.
- The scheduler now gives each task 1/10th of a second. It also does not increase the
timeslice of a task that has already been scheduled.
- When memory is granted to the kernel, the distribution of this memory to memcaches
was calculated in a complicated way. This is now simplified.
All system calls can now simply return their final values, and those values are
sent to the client parties from a single location. Should have had this
simple cleanup a long time ago.
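  In effect, handlers no longer do their own reply ipc; a single exit point does it,
  along the lines of this sketch (all names are made up for illustration):

    /* Sketch of a single reply location: every handler simply returns its
     * final value and the dispatcher sends it back to the caller. */
    void handle_request(int sender, struct ipc_msg *msg)
    {
        int ret;

        switch (msg->tag) {
        case SYS_MMAP:  ret = do_mmap_request(sender, msg);  break;
        case SYS_OPEN:  ret = do_open_request(sender, msg);  break;
        default:        ret = -ENOSYS;                        break;
        }

        /* The one and only place that replies to the client. */
        ipc_return(sender, ret);
    }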
fs0 used to receive open() requests and notify the pager about them via a syscall ipc.
This caused deadlocks, because the normal request flow on all other calls is mm0 -> fs0.
The solution is to have mm0 ask fs0 to validate a file descriptor on the first
request that involves that descriptor. This delays validation of the fd until its
first use and avoids the deadlock. It also fits well with the lazy request handling
design philosophy.
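  On the mm0 side the fix amounts to something like the following sketch; the
  helper and field names are invented for illustration:

    /* Sketch: mm0 validates a file descriptor lazily, on its first use, by
     * asking fs0 at that point. The request flow stays mm0 -> fs0, so the
     * open()-time deadlock cannot occur. */
    struct vm_file *task_fd_to_file(struct task *task, int fd)
    {
        struct vm_file *f = task->fd_table[fd];

        if (!f) {
            /* First use of this fd: ask fs0 to validate and describe it. */
            f = fs0_validate_fd(task->tid, fd);
            if (!f)
                return 0;            /* fd is not valid at all */
            task->fd_table[fd] = f;  /* cache it for later requests */
        }
        return f;
    }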
- Fixed do_mmap() so that it returns the mapped address, and fixed various other bugs.
- A child seems to fork with the new setup, but with an incorrect return value.
Need to use and test exregs() for fork + clone.
- shmat() searches for an unmapped area if the input argument is invalid; do_mmap()
should do the same.
- Added mutex_trylock()
- Implemented most of exchange_registers()
- thread_control() now needs a lock for operations that can modify thread context.
- thread_start() no longer initialises scheduler flags; this is now done in thread_create().
TODO:
- Fork/clone'ed threads should retain their context in the tcb, not on the syscall stack.
- exchange_registers() calls in userspace need cleaning up.
Stopped working on self_spawn() - going to finish the clone() syscall first.
Arch-specific clone() library call that does the ipc() and the cloned child setup.
- Need to finish thread_create() so that it satisfies clone() requirements, i.e. setting up its stack.
Question: Does the pager (and thus the microkernel) have to explicitly set SP_USR?
Once the call is known to be successful, the library could set it.
For clone, file descriptor and vm area structures need to be
separate from the tcb and reached via a pointer so that they
can be shared among multiple tcbs.
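  Concretely, something along these lines; the layout below is a guess for
  illustration only:

    /* Sketch: per-address-space state lives outside the tcb and is
     * reference counted, so clone() shares it while fork() would copy it. */
    struct task_shared {
        int refcnt;
        struct file_descriptor *fd_table;
        struct list_head vm_area_list;
    };

    struct tcb {
        /* ... per-thread state ... */
        struct task_shared *shared;   /* shared on clone, copied on fork */
    };

    void tcb_share_on_clone(struct tcb *child, struct tcb *parent)
    {
        child->shared = parent->shared;
        child->shared->refcnt++;
    }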
- Added automatic utcb map/prefaulting of forked tasks for fs0
so that it does not need to explicitly request those tasks from mm0.
Eliminating fs0 requests to mm0 reduces deadlock possibilities.
- Replaced kmalloc with a public malloc implementation because of a bug in kmalloc.
- Fixed a kfree bug: default_release_pages() was trying to free page_array pages.
- Adding prefaulting of fs0 to avoid page fault deadlocks.
- Fixed a bug where a vmo's page_cache equivalent would simply drop a link to
the original vmo, even if that vmo could have more pages outside the page cache,
or was not a shadow vmo.
- Fixed a bug with page allocator where recursion would corrupt global variables.
- Now going to fix the page allocator, or re-write it as a simpler one that works.
Normally the common kernel entries in the 2nd-level page table need not be
copied to new tasks, since those entries are shared by all tasks.
On fork, we were copying them unnecessarily.
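  The intended copy, sketched with a made-up boundary constant:

    /* Sketch: copy only the entries that map userspace; the kernel entries
     * are identical in every task's table and need not be duplicated. */
    void copy_user_pgtable_entries(pte_t *child, pte_t *parent)
    {
        int i;

        for (i = 0; i < USER_PTE_ENTRIES; i++)
            child[i] = parent[i];
        /* Entries from USER_PTE_ENTRIES upwards map the kernel, are common
         * to all tasks and are set up once, so fork leaves them alone. */
    }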
The child needs a rewound function stack in order to reach registers r9-r12,
which hold the original userspace values. But we jump to return_from_syscall
without rewinding the stack. Therefore, to ease context restore, we now also
save r9-r12 on the stack upon syscall entry.
Added a list of links to vm objects so that the links pointing at an object
can be followed from the object itself.
More succinct handling of the case where a vm object is dropped. Depending on
the object's number of link references and shadow references, it is now either
merged, deleted or kept upon a drop.
Added an opener reference count for vm files. Files now have an opener count;
objects have shadow and link counts. The link count also indicates how many
tasks have mmap'ed that object.
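  Taken together, the bookkeeping looks roughly like this; the field and helper
  names are illustrative, not the actual code:

    /* Sketch of the three counts and the drop decision. */
    struct vm_object {
        int nlinks;                   /* links (mmap'ers) pointing at this object */
        int nshadows;                 /* shadow objects in front of this object */
        struct list_head link_list;   /* the links themselves, so they can be followed */
    };

    struct vm_file {
        int openers;                  /* tasks that currently have the file open */
        struct vm_object vm_obj;
    };

    void vm_object_drop_link(struct vm_object *obj)
    {
        if (--obj->nlinks > 0)
            return;                            /* still referenced: keep it */
        if (obj->nshadows == 1)
            vm_object_merge_with_shadow(obj);  /* single shadow left: merge them */
        else if (obj->nshadows == 0)
            vm_object_delete(obj);             /* nothing refers to it: delete */
        /* more than one shadow: keep the object around */
    }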
When allocating a new area, the alignment calculation for the new area
structure was wrong. Hopefully this is now fixed. The original function
with the bad alignment has been preserved.
When freeing an area and merging it with adjacent areas, the pointer to the
previous area was mistakenly used instead of the pointer to the next one.
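  The intended coalescing, as a sketch (names are illustrative):

    /* Sketch: when an area is freed it should coalesce with a free next
     * area via the *next* pointer; the bug was passing the previous one. */
    void area_free(struct mem_area *area)
    {
        area->free = 1;

        if (area->next && area->next->free)
            merge_areas(area, area->next);   /* absorb the next area */

        if (area->prev && area->prev->free)
            merge_areas(area->prev, area);   /* let the previous one absorb us */
    }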
modified: tasks/libmem/kmalloc/kmalloc.c
R/W shadow objects must be made read-only during a fork.
When searching for a writeable shadow object, it can only be the first
object on the vma's object chain; therefore it is sufficient to check only
the first object instead of traversing the whole chain.
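  So the fork-time fixup reduces to looking at the head of the chain, roughly as
  sketched below; the names and flags are illustrative:

    /* Sketch: only the first object on a vma's object chain can be a
     * writeable shadow, so fork checks just that one and makes it
     * read-only, forcing later writes to take the copy-on-write path. */
    void vma_make_shadow_readonly(struct vm_area *vma)
    {
        struct vm_object *first = link_to_object(vma->vm_obj_list.next);

        if ((first->flags & VM_OBJ_SHADOW) && (first->flags & VM_WRITE))
            first->flags &= ~VM_WRITE;
    }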