l4_unmap now returns -1 if the given range was only partially unmapped.
do_munmap() now only unmaps address ranges that correspond to existing
vmas. Trying to unmap regions with no corresponding vmas causes problems
in corner cases; e.g. mm0, which mmaps its own address space during
initialisation, would unmap its whole address space and fail to execute.
When sys_munmap() split a vma, the new vma had no copy of the objects
in the original vma. We now fix that with a vma_copy_links() function,
which can also be used as part of fork().
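A minimal sketch of what vma_copy_links() could look like, assuming
Linux-style list helpers and a per-vma list of object links (all field
and helper names here are illustrative, not the actual ones):

    struct vm_obj_link {
        struct list_head list;
        struct vm_object *obj;
    };

    /* Copy every object link of 'vma' onto 'new_vma', taking an extra
     * link reference on each underlying object. Error unwinding is
     * omitted for brevity. */
    int vma_copy_links(struct vm_area *new_vma, struct vm_area *vma)
    {
        struct vm_obj_link *link, *new_link;

        list_for_each_entry(link, &vma->vm_obj_list, list) {
            if (!(new_link = malloc(sizeof(*new_link))))
                return -ENOMEM;
            new_link->obj = link->obj;
            new_link->obj->nlinks++;    /* new vma references it too */
            list_add_tail(&new_link->list, &new_vma->vm_obj_list);
        }
        return 0;
    }

The same loop serves fork(), which must duplicate a whole address
space's worth of vmas.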
Still testing sys_munmap(). It now correctly spots and unmaps the
overlapping vma. The issue now is that if a split occurs, we forget to
add the same objects to the new vma.
do_munmap() currently shrinks, splits or destroys vmas and unmaps the
given virtual address range from the task. Unmapped pages may go
completely unused, but page reclamation will be done in another part of
the pager rather than directly in the munmap path.
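Roughly, the per-vma decision in do_munmap() reduces to comparing the
unmap range against each overlapping vma (a sketch; the helper names
are illustrative):

    /* 'vma' overlaps [start, end); decide its fate. */
    if (start <= vma->start && end >= vma->end) {
        /* Range covers the whole vma: unlink and destroy it. */
        list_del(&vma->list);
        vma_destroy(vma);
    } else if (start > vma->start && end < vma->end) {
        /* Range punches a hole in the middle: split into two vmas,
         * remembering to copy the object links to the new half. */
        new = vma_split(vma, start, end);
        vma_copy_links(new, vma);
    } else {
        /* Range clips one end: shrink the vma. */
        vma_shrink(vma, start, end);
    }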
- Fixed an important bug in shadow object handling.
When a shadow is dropped while references to it remain, both the
object in front and the dropped object become shadows of the original
object underneath. We had thought of this case but had not increased
the shadow count.
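Roughly, in a chain F -> S -> O, where F is the object in front, S the
shadow being dropped and O the original object underneath, the fixed
accounting looks like this (field names are illustrative):

    /* Drop shadow 's' from between front object 'f' and original 'o'. */
    if (s->refcount > 0) {
        /* 's' stays alive, so 'f' and 's' both shadow 'o' directly.
         * 'o' gains a shadow it did not have before; this is the
         * increment we had missed. */
        f->orig_obj = o;
        o->nshadows++;
    } else {
        /* No references left: 'f' simply replaces 's' in the chain. */
        f->orig_obj = o;
        vm_object_delete(s);
    }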
- Added a test mechanism that checks the number of objects, vmfiles,
shadows etc. by first counting them and then trying to reach the same
numbers by other means, i.e. per-object shadow counts.
It discovered a plethora of bugs.
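The idea of the check, roughly (list, flag and field names are
illustrative):

    /* Consistency check: the number of objects flagged as shadows on
     * the global list must equal the sum of per-object shadow counts. */
    void test_object_counts(void)
    {
        int by_count = 0, by_refs = 0;
        struct vm_object *obj;

        list_for_each_entry(obj, &global_object_list, list) {
            by_refs += obj->nshadows;
            if (obj->flags & VM_OBJ_SHADOW)
                by_count++;
        }
        if (by_count != by_refs)
            printf("Shadow counts inconsistent: %d vs %d\n",
                   by_count, by_refs);
    }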
- Added a new set of functions to register objects, files and tasks
globally with the pager; these functions introduce a refcount as well
as adding the structures to linked lists.
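One such helper pair, roughly (list and field names are illustrative):

    /* Register a vm object globally with the pager: take a reference
     * and put it on the global list. */
    void global_add_vm_object(struct vm_object *obj)
    {
        obj->refcount++;
        list_add(&obj->list, &global_vm_objects);
    }

    void global_remove_vm_object(struct vm_object *obj)
    {
        list_del(&obj->list);
        obj->refcount--;
    }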
- fork/exit now seems to work stably, i.e. no negative shadow counts etc.
- Added cleaner allocation of shm addresses by moving the allocation to do_mmap().
- Added deletion routine for all objects: shadow, vm_file of type vfs_file, shm_file, etc.
- Need to make sure objects get deleted properly after exit().
- Currently we allow a single, unique virtual address for each shm segment.
- Updated sleeping paths such that a task is atomically put into
a runqueue and made RUNNABLE, or removed from a runqueue and made SLEEPING.
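The invariant is that the runqueue link and the task state always change
under the same lock; roughly (lock and field names are illustrative):

    /* Wake up: queue insertion and state change are one atomic step. */
    spin_lock(&rq->lock);
    list_add_tail(&task->rq_list, &rq->task_list);
    task->state = TASK_RUNNABLE;
    spin_unlock(&rq->lock);

    /* Sleep: the converse, under the same lock. */
    spin_lock(&rq->lock);
    list_del(&task->rq_list);
    task->state = TASK_SLEEPING;
    spin_unlock(&rq->lock);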
- Modified the vma dropping code to handle both copy_on_write() and
exit() cases in a common function.
- Added the first infrastructure for the pager to suspend a task and
wait for suspend completion from the scheduler.
Now all system calls can simply return their final values, and the
results are sent to the clients from a single location. Should have had
this simple cleanup a long time ago.
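The server main loop now looks roughly like this (the IPC helper names
here are illustrative, not the real API):

    for (;;) {
        l4id_t sender = receive_request(&msg);

        /* Dispatch, e.g. to do_mmap(), sys_munmap()... Each handler
         * just returns its final value. */
        int ret = handle_request(sender, &msg);

        reply(sender, ret);     /* the single reply site */
    }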
fs0 used to receive open() requests and notify the pager about them via
syscall IPC. This caused deadlocks, because the normal request flow on
all other calls is mm0 -> fs0. The solution was to have mm0 ask fs0 to
validate a file descriptor on the first request that involves that
descriptor. This way we delay validation of the fd until its first use
and avoid the deadlock. It also fits well with the lazy request handling
design philosophy.
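Roughly, the first use of an fd in mm0 triggers the validation (a
sketch; the helper and field names are illustrative):

    /* Look up a task's fd, validating it with fs0 on first use. */
    struct vm_file *task_get_file(struct tcb *task, int fd)
    {
        if (!task->files[fd].valid) {
            /* First use: ask fs0 whether the fd is valid and which
             * file (vnum) it refers to. */
            if (vfs_validate_fd(task->tid, fd, &task->files[fd].vnum) < 0)
                return NULL;
            task->files[fd].valid = 1;
        }
        return find_file_by_vnum(task->files[fd].vnum);
    }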
- Fixed do_mmap() so that it returns the mapped address, and fixed
various bugs.
- A child seems to fork with the new setup, but with an incorrect
return value. Need to use and test exregs() for fork + clone.
- shmat() searches for an unmapped area if the input address is
invalid; do_mmap() should do this as well.
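The search do_mmap() should share is a simple first-fit walk over the
task's vma list, assumed sorted by address (a sketch with illustrative
names):

    /* Find 'npages' worth of unmapped virtual space in the task's
     * mappable region, first fit. Returns 0 if there is no room. */
    unsigned long find_unmapped_area(struct tcb *task, int npages)
    {
        unsigned long start = task->map_start;
        struct vm_area *vma;

        list_for_each_entry(vma, &task->vm_area_list, list) {
            if (vma->start - start >= npages * PAGE_SIZE)
                return start;   /* gap before this vma fits */
            start = vma->end;
        }
        if (task->map_end - start >= npages * PAGE_SIZE)
            return start;
        return 0;
    }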
- Added mutex_trylock()
- Implemented most of exchange_registers()
- thread_control() now needs a lock for operations that can modify thread context.
- thread_start() does not initialise scheduler flags; this is now done
in thread_create().
TODO:
- Fork/clone'd threads should retain their context in the tcb, not on
the syscall stack.
- exchange_registers() calls in userspace need cleaning up.
- For clone, the file descriptor and vm area structures need to be
separate from the tcb and reached via a pointer, so that they can be
shared among multiple tcbs.
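That suggests a layout roughly like the following, with the shared
state hanging off the tcb via pointers and user counts (all names are
illustrative, not the actual structures):

    #define TASK_FILES_MAX 32

    struct file_desc {          /* illustrative fd slot */
        unsigned long vnum;
        unsigned long cursor;
    };

    /* Containers with user counts, so clone() can share them while
     * fork() copies them. */
    struct task_files {
        int users;
        struct file_desc fd[TASK_FILES_MAX];
    };

    struct task_vm {
        int users;
        struct list_head vm_area_list;
    };

    struct tcb {
        /* ... */
        struct task_files *files;   /* shared on clone, copied on fork */
        struct task_vm *vm;
    };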
- Added automatic utcb mapping/prefaulting of forked tasks for fs0 so
that it does not need to explicitly request them from mm0. Eliminating
fs0 requests to mm0 reduces deadlock possibilities.
- Replaced kmalloc with a public malloc implementation because of a bug in kmalloc.
- Fixed a kfree bug: default_release_pages was trying to free
page_array pages.
- Adding prefaulting of fs0 to avoid page fault deadlocks.
- Fixed a bug where page_cache equivalence of a vmo would simply drop
the link to the original vmo, even if the vmo could have more pages
outside the page cache, or was not a shadow vmo.
- Fixed a bug with page allocator where recursion would corrupt global variables.
- Now going to fix the page allocator, or re-write a simpler one that
works.
Added a list of links for vm objects so that an object can follow the
links that point at it.
More succinct handling of the case where a vm object is dropped. Now,
depending on the object's number of link references and shadow
references, upon a drop it can either be merged, deleted or kept.
Added an opener reference count for vm files. Now files have an opener
count, and objects have shadow and link counts. The link count also
indicates how many tasks have mmap'ed that object.
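Putting the counts together, the drop decision could reduce to
something like this (a sketch of the policy; names are illustrative):

    /* Decide the fate of 'obj' when one of its links is dropped. */
    void vm_object_drop(struct vm_object *obj)
    {
        if (--obj->nlinks > 0)
            return;                 /* still mmap'ed somewhere: keep */

        if (obj->nshadows == 1)
            vm_object_merge(obj);   /* one shadow left: merge with it */
        else if (obj->nshadows == 0)
            vm_object_delete(obj);  /* nothing depends on it: delete */
        /* otherwise keep: several shadows still read through it */
    }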
Factored out mapping of the physical page as the final generic code
after all fault-specific handling is done.
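That is, each fault-specific handler resolves to a page, and one
generic tail does the mapping (a sketch; the map call and helpers are
illustrative):

    /* Fault-specific work (anonymous, file-backed, cow...) picks the
     * page; the actual mapping happens once, here. */
    struct page *page = do_fault_specific(vma, fault);
    if (!page)
        return -EFAULT;
    l4_map(page_to_phys(page), page_align(fault->address),
           1, fault->protection, fault->task->tid);
    return 0;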
Fixed the error that the zero page didn't have an owner (devzero).
Fixed the error that struct dirent did not have the record length
field as a u16, as expected by userspace.
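For reference, the layout userspace expects, roughly (the surrounding
fields are illustrative):

    struct dirent {
        unsigned long d_ino;    /* illustrative */
        unsigned long d_offset;
        u16 d_reclen;           /* record length, must be 16 bits */
        char d_name[];
    };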
Separated the vfs file as a specific file type; a vm file is not always
a vfs file.
Updated the README
sys_open was not returning a reply to the client; added that.
Added comments for future vfs additions.
Removed some commented out code.
Removed excessive printfs.
Fixed spid not being initialised for mm0.
Fixed some faults with fs0.
TODO:
- Need to store vfs files in a separate list.
- Need to define vnum as vfs-file-specific data, i.e. in the priv_data
field of vm_file.
- Need to then fix vfs_receive_sys_open.
- Fixed is_err(x); it was evaluating x twice, so passing a function
call as x would invoke the function twice.
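The usual macro pitfall; the fix is to evaluate the argument once into
a temporary (a sketch of the pattern, not necessarily the exact
definition used):

    /* Broken: 'x' expands twice, so is_err(f()) calls f() twice. */
    #define is_err_broken(x)                            \
        ((unsigned long)(x) >= (unsigned long)(-512) && \
         (unsigned long)(x) <= (unsigned long)(-1))

    /* Fixed: evaluate once (GCC statement expression). */
    #define is_err(x)                                   \
    ({                                                  \
        unsigned long __val = (unsigned long)(x);       \
        __val >= (unsigned long)(-512) &&               \
        __val <= (unsigned long)(-1);                   \
    })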
- Divided task initialisation into multiple parts.
- MM0 now creates a tcb for itself and maintains memory regions of its own.
- MM0's tcb is used for mmapping other tasks' regions. MM0 mmaps and
prefaults those regions itself, instead of using the typical
mmap()-then-fault approach of non-pager tasks.
For example, there is an internal shmget_shmat() path to map in other
tasks' shm utcbs. Those mappings are then prefaulted into mm0's address
space using the default fault handling path.
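A sketch of the prefault step, assuming each page of a mapping can be
pushed through the default fault path ahead of time (helper names are
illustrative):

    /* Prefault a region by faulting in each page ahead of time. */
    int prefault_region(struct tcb *task, struct vm_area *vma)
    {
        unsigned long addr;
        int err;

        for (addr = vma->start; addr < vma->end; addr += PAGE_SIZE)
            if ((err = prefault_page(task, addr, VM_READ | VM_WRITE)) < 0)
                return err;
        return 0;
    }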
- FS0 now reads task data into its utcb from mm0 via a syscall.
FS0 shmat()s to the utcbs of other tasks, e.g. mm0 and test0.
FS0 then crashes; that is to be fixed and is where this commit leaves
off.