Minor changes in README. Added fault debugging printfs that can be turned on/off.

Tasks boot fine up to the point of doing ipc using their utcbs.

UTCB PLAN:

- Push ipc registers into the private environment instead of a shared utcb,
  but map-in a shared utcb to pass long data on to server tasks.
- The shared utcb has a unique virtual address for every thread.
- A forked child does inherit its parent's utcb, but cannot use it to
  communicate with any server. It must explicitly obtain its own utcb for that.
- Clone could have a flag to explicitly not inherit the parent utcb, which is
  the right thing to do.
- MM0 serves a syscall for a task to obtain its own utcb.
- By this method, upon fork tasks don't need to map-in a utcb unless they want
  to pass long data.
Author: Bahadir Balban
Date:   2008-03-17 17:09:19 +00:00
parent 509e949983
commit d2aa9a552b
6 changed files with 86 additions and 50 deletions

README

@@ -101,8 +101,8 @@ irrelevant to a new problem, and embedded systems tend to raise new problems
 often. Codezero is written from scratch to solely target embedded systems and
 as such the source code is %100 relevant. It is small and free from legacy code.
-From a design perspective, due to these kernels having a monolithic design, they
-may have issues with dependability due to much of the code sharing the same
+From a design perspective, these kernels have a monolithic design, and as such
+they may have issues with dependability due to much of the code sharing the same
 address space. This is an important issue on embedded systems since their
 operation is more sensitive to disruptions. Being a microkernel design, Codezero
 aims to defeat this problem and increase dependability.
@@ -112,9 +112,9 @@ embedded devices. Most of them are proprietary, with their own users. Some of
 them are structurally too simplistic, and lack modern features such as paging.
 There ones that are well established, but Codezero will contrast them by
 providing an alternative that will follow the open source development principles
-more closely. Many embedded systems still use older development methods and the
-right open source methodology would prove favorable in the fast-paced nature of
-development.
+more closely. Many embedded software projects still use older development
+methods and the right open source methodology would prove favorable in the
+fast-paced nature of development.
 Finally, there are new ideas in literature that would improve systems software
 but aren't implemented either because they have no existing users or may break
@@ -128,13 +128,13 @@ opportunity to incorporate the latest ideas in OS technology.
 Can you summarise all this? Why should I use Codezero, again?
 Codezero is an operating system that targets embedded systems. It supports the
-most fundamental posix system calls. Different from other posix-like systems,
+most fundamental posix features. Different from other posix-like systems,
 it is based on a microkernel design. It supports modern features such as
 demand-paging, virtual filesystem support. It has a cleanly separated set of
-services, and it is small. Therefore it is a good candidate as systems software
-to be used on embedded systems. Currently it has little or no users, and yet it
-is about to become usable, therefore compared to systems with a saturated user
-base it is possible to tailor it rapidly towards the needs of any users who want
-to be the first to use it.
+services, and it is small. For these reasons it is a good candidate as systems
+software to be used on embedded systems. Currently it has little or no users,
+therefore compared to systems with a saturated user base it is possible to
+tailor it rapidly towards the needs of any users who want to be the first to
+incorporate it to their needs.


@@ -17,7 +17,7 @@
 #include INC_SUBARCH(mm.h)
 /* Abort debugging conditions */
-//#define DEBUG_ABORTS
+// #define DEBUG_ABORTS
 #if defined (DEBUG_ABORTS)
 #define dbg_abort(...) dprintk(__VA_ARGS__)
 #else


@@ -14,6 +14,13 @@
 #include <arch/mm.h>
 #include <lib/spinlock.h>
+// #define DEBUG_FAULT_HANDLING
+#ifdef DEBUG_FAULT_HANDLING
+#define dprintf(...) printf(__VA_ARGS__)
+#else
+#define dprintf(...)
+#endif
+
 /* Protection flags */
 #define VM_NONE (1 << 0)
 #define VM_READ (1 << 1)


@@ -3,6 +3,7 @@
  */
 #include <arch/mm.h>
 #include <task.h>
+#include <vm_area.h>
 /* Extracts generic protection flags from architecture-specific pte */
 unsigned int vm_prot_flags(pte_t pte)
@@ -24,6 +25,18 @@ unsigned int vm_prot_flags(pte_t pte)
 	return vm_prot_flags;
 }
+#if defined(DEBUG_FAULT_HANDLING)
+void print_fault_params(struct fault_data *fault)
+{
+	printf("%s: Handling %s fault (%s abort) from %d. fault @ 0x%x\n",
+	       __TASKNAME__, (fault->reason & VM_READ) ? "read" : "write",
+	       is_prefetch_abort(fault->kdata->fsr) ? "prefetch" : "data",
+	       fault->task->tid, fault->address);
+}
+#else
+void print_fault_params(struct fault_data *fault) { }
+#endif
 /*
  * PTE STATES:
@@ -52,9 +65,6 @@ void set_generic_fault_params(struct fault_data *fault)
 	else
 		BUG();
 	}
-	printf("%s: Handling %s fault (%s abort) from %d. fault @ 0x%x\n",
-	       __TASKNAME__, (fault->reason & VM_READ) ? "read" : "write",
-	       is_prefetch_abort(fault->kdata->fsr) ? "prefetch" : "data",
-	       fault->task->tid, fault->address);
+	print_fault_params(fault);
 }


@@ -19,13 +19,6 @@
 #include <shm.h>
 #include <file.h>
-#define DEBUG_FAULT_HANDLING
-#ifdef DEBUG_FAULT_HANDLING
-#define dprint(...) printf(__VA_ARGS__)
-#else
-#define dprint(...)
-#endif
 unsigned long fault_to_file_offset(struct fault_data *fault)
 {
 	/* Fault's offset in its vma */
@@ -284,8 +277,6 @@ int copy_on_write(struct fault_data *fault)
 		       __TASKNAME__, __FUNCTION__);
 		BUG();
 	}
-	printf("Top object:\n");
-	vm_object_print(vmo_link->obj);
 
 	/* Is the object read-only? Create a shadow object if so.
 	 *
@@ -298,7 +289,7 @@ int copy_on_write(struct fault_data *fault)
 	if (!(vmo_link->obj->flags & VM_WRITE)) {
 		if (!(shadow_link = vma_create_shadow()))
 			return -ENOMEM;
-		printf("%s: Created a shadow.\n", __TASKNAME__);
+		dprintf("%s: Created a shadow.\n", __TASKNAME__);
 		/* Initialise the shadow */
 		shadow = shadow_link->obj;
 		shadow->refcnt = 1;
@@ -322,7 +313,7 @@ int copy_on_write(struct fault_data *fault)
 		/* Shadow is the copier object */
 		copier_link = shadow_link;
 	} else {
-		printf("No shadows. Going to add to topmost r/w shadow object\n");
+		dprintf("No shadows. Going to add to topmost r/w shadow object\n");
 		/* No new shadows, the topmost r/w vmo is the copier object */
 		copier_link = vmo_link;
@@ -336,7 +327,7 @@ int copy_on_write(struct fault_data *fault)
 	}
 	/* Traverse the list of read-only vm objects and search for the page */
-	while (!(page = vmo_link->obj->pager->ops.page_in(vmo_link->obj,
+	while (IS_ERR(page = vmo_link->obj->pager->ops.page_in(vmo_link->obj,
 							  file_offset))) {
 		if (!(vmo_link = vma_next_link(&vmo_link->list,
 					       &vma->vm_obj_list))) {
@@ -372,8 +363,8 @@ int copy_on_write(struct fault_data *fault)
 		       (void *)page_align(fault->address), 1,
 		       (reason & VM_READ) ? MAP_USR_RO_FLAGS : MAP_USR_RW_FLAGS,
 		       fault->task->tid);
-	printf("%s: Mapped 0x%x as writable to tid %d.\n", __TASKNAME__,
-	       page_align(fault->address), fault->task->tid);
+	dprintf("%s: Mapped 0x%x as writable to tid %d.\n", __TASKNAME__,
+		page_align(fault->address), fault->task->tid);
 	vm_object_print(new_page->owner);
 
 	/*
@@ -419,22 +410,31 @@ int __do_page_fault(struct fault_data *fault)
 	struct vm_area *vma = fault->vma;
 	unsigned long file_offset;
 	struct vm_obj_link *vmo_link;
-	struct vm_object *vmo;
 	struct page *page;
 
 	/* Handle read */
 	if ((reason & VM_READ) && (pte_flags & VM_NONE)) {
 		file_offset = fault_to_file_offset(fault);
-		BUG_ON(!(vmo_link = vma_next_link(&vma->vm_obj_list,
-						  &vma->vm_obj_list)));
-		vmo = vmo_link->obj;
-		/* Get the page from its pager */
-		if (IS_ERR(page = vmo->pager->ops.page_in(vmo, file_offset))) {
-			printf("%s: Could not obtain faulty page.\n",
-			       __TASKNAME__);
-			BUG();
-		}
+		/* Get the first object, either original file or a shadow */
+		if (!(vmo_link = vma_next_link(&vma->vm_obj_list, &vma->vm_obj_list))) {
+			printf("%s:%s: No vm object in vma!\n",
+			       __TASKNAME__, __FUNCTION__);
+			BUG();
+		}
+		/* Traverse the list of read-only vm objects and search for the page */
+		while (IS_ERR(page = vmo_link->obj->pager->ops.page_in(vmo_link->obj,
+								       file_offset))) {
+			if (!(vmo_link = vma_next_link(&vmo_link->list,
+						       &vma->vm_obj_list))) {
+				printf("%s:%s: Traversed all shadows and the original "
+				       "file's vm_object, but could not find the "
+				       "faulty page in this vma.\n", __TASKNAME__,
+				       __FUNCTION__);
+				BUG();
+			}
+		}
 		BUG_ON(!page);
 		/* Map it to faulty task */
@@ -442,17 +442,17 @@ int __do_page_fault(struct fault_data *fault)
 			       (void *)page_align(fault->address), 1,
 			       (reason & VM_READ) ? MAP_USR_RO_FLAGS : MAP_USR_RW_FLAGS,
 			       fault->task->tid);
-		printf("%s: Mapped 0x%x as readable to tid %d.\n", __TASKNAME__,
-		       page_align(fault->address), fault->task->tid);
-		vm_object_print(vmo);
+		dprintf("%s: Mapped 0x%x as readable to tid %d.\n", __TASKNAME__,
+			page_align(fault->address), fault->task->tid);
+		vm_object_print(vmo_link->obj);
 	}
 
 	/* Handle write */
 	if ((reason & VM_WRITE) && (pte_flags & VM_READ)) {
 		/* Copy-on-write */
-		if (vma_flags & VMA_PRIVATE) {
+		if (vma_flags & VMA_PRIVATE)
 			copy_on_write(fault);
-		}
 
 	/* Regular files */
 	if ((vma_flags & VMA_SHARED) && !(vma_flags & VMA_ANONYMOUS)) {
 		/* No regular files are mapped yet */
@@ -460,10 +460,10 @@ int __do_page_fault(struct fault_data *fault)
 		file_offset = fault_to_file_offset(fault);
 		BUG_ON(!(vmo_link = vma_next_link(&vma->vm_obj_list,
 						  &vma->vm_obj_list)));
-		vmo = vmo_link->obj;
 		/* Get the page from its pager */
-		if (IS_ERR(page = vmo->pager->ops.page_in(vmo, file_offset))) {
+		if (IS_ERR(page = vmo_link->obj->pager->ops.page_in(vmo_link->obj,
+								    file_offset))) {
 			printf("%s: Could not obtain faulty page.\n",
 			       __TASKNAME__);
 			BUG();
@@ -475,11 +475,16 @@ int __do_page_fault(struct fault_data *fault)
 			       (void *)page_align(fault->address), 1,
 			       (reason & VM_READ) ? MAP_USR_RO_FLAGS : MAP_USR_RW_FLAGS,
 			       fault->task->tid);
-		printf("%s: Mapped 0x%x as writable to tid %d.\n", __TASKNAME__,
-		       page_align(fault->address), fault->task->tid);
-		vm_object_print(vmo);
+		dprintf("%s: Mapped 0x%x as writable to tid %d.\n", __TASKNAME__,
+			page_align(fault->address), fault->task->tid);
+		vm_object_print(vmo_link->obj);
 	}
 	/* FIXME: Just do fs files for now, anon shm objects later. */
+	/* Things to think about:
+	 * - Is utcb a shm memory really? Then each task must map it in via
+	 *   shmget(). FS0 must map all user tasks' utcb via shmget() as well.
+	 *   For example to pass on pathnames etc.
+	 */
 	BUG_ON((vma_flags & VMA_SHARED) && (vma_flags & VMA_ANONYMOUS));
 }


@@ -9,13 +9,23 @@
 #include <kmalloc/kmalloc.h>
+// #define DEBUG_FAULT_HANDLING
+#ifdef DEBUG_FAULT_HANDLING
+#define dprintf(...) printf(__VA_ARGS__)
+#else
+#define dprintf(...)
+#endif
+
+#if defined(DEBUG_FAULT_HANDLING)
 void print_cache_pages(struct vm_object *vmo)
 {
 	struct page *p;
 
-	printf("Pages:\n======\n");
+	if (!list_empty(&vmo->page_cache))
+		printf("Pages:\n======\n");
 	list_for_each_entry(p, &vmo->page_cache, list) {
-		printf("Page offset: 0x%x, virtual: 0x%x, refcnt: %d\n", p->offset,
-		       p->virtual, p->refcnt);
+		dprintf("Page offset: 0x%x, virtual: 0x%x, refcnt: %d\n", p->offset,
+			p->virtual, p->refcnt);
 	}
 }
@@ -38,6 +48,10 @@ void vm_object_print(struct vm_object *vmo)
 	print_cache_pages(vmo);
 	printf("\n");
 }
+#else
+void print_cache_pages(struct vm_object *vmo) { }
+void vm_object_print(struct vm_object *vmo) { }
+#endif
 
 /* Global list of in-memory vm objects. */
 LIST_HEAD(vm_object_list);