<p>Ok, to answer this question, please refer to my <a href="https://stackoverflow.com/questions/2293581/does-linux-time-division-processes-or-threads/2293724#2293724">answer</a> to another question here on SO, 'Does Linux time division processes or threads?'. Let's follow on from the structure of the imaginary kernel described there, using an x86 processor and C code. I have amended that structure to take semaphores into account:</p> <pre>
struct regs{
    int eax, ebx, ecx, edx, es, ds, gs, fs, cs, ip, flags;
    struct tss *task_sel;
};

struct mem_loc{
    int *phys_mem_begin;
    int *phys_mem_end;
    int *semaphores;
};

struct thread{
    struct regs *regs;
    struct mem_loc *memlocTracker;
    int parent_id;
    struct thread *next;
};

struct task{
    struct regs *regs;
    struct mem_loc *task_mem;
    int *filehandles;
    int priority;
    int num_threads;
    int quantum;
    int duration;
    int start_time, end_time;
    int parent_id;
    struct thread *task_thread;
    /* ... */
    struct task *next;
};

struct queue_ready{
    struct task *task_id;
    struct queue_ready *next;
};
</pre> <p>There, I think that looks better. Following on from my previous answer linked above, let's see what happens when queues are involved. Look at the <code>queue_ready</code> structure: the kernel keeps a linked list of <code>queue_ready</code> nodes, ordered by the <code>priority</code> field of the <code>task</code> structure itself; a task with priority 1, for example, is ready to be executed.
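</p>

<p>To make that priority ordering concrete, here is a minimal sketch of how a task might be inserted into the ready list so the lowest <code>priority</code> value sits at the head. The structures are pared down to just the fields needed, and <code>enqueue_ready</code> is a name of my own, not part of the kernel above:</p>

```c
#include <assert.h>
#include <stdlib.h>

/* Pared-down versions of the task and queue_ready structures above. */
struct task { int priority; };
struct queue_ready { struct task *task_id; struct queue_ready *next; };

/* Insert t so the list stays sorted by ascending priority value
   (priority 1 before priority 2, and so on). Returns the new head. */
struct queue_ready *enqueue_ready(struct queue_ready *head, struct task *t)
{
    struct queue_ready *node = malloc(sizeof *node);
    node->task_id = t;
    if (head == NULL || t->priority < head->task_id->priority) {
        node->next = head;              /* new most-urgent task */
        return node;
    }
    struct queue_ready *cur = head;
    while (cur->next != NULL && cur->next->task_id->priority <= t->priority)
        cur = cur->next;                /* walk past equally or more urgent tasks */
    node->next = cur->next;
    cur->next = node;
    return head;
}
```

<p>With that invariant in place, the scheduler can simply start at the head of the list and always meet the most urgent ready task first.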
</p> <p>Let's look at an imaginary scheduler written in C, as shown below. Ok, it may be crude and vulnerable to nit-picking, but put that aside to concentrate on the aspect of the question...</p> <pre>
static void scheduler(void){
    struct queue_ready *rdy_sched;

    while(1){
        for (rdy_sched = head_queue_ready; rdy_sched != NULL; rdy_sched = rdy_sched-&gt;next){
            if (time &gt;= rdy_sched-&gt;task_id-&gt;start_time + rdy_sched-&gt;task_id-&gt;quantum){
                /* Quantum used up - save state, ready to move on to the next task */
                save_cpu_task_state(rdy_sched-&gt;task_id-&gt;regs);
                save_mem_task_state(rdy_sched-&gt;task_id-&gt;task_mem);
            }else{
                struct regs *task_reg = rdy_sched-&gt;task_id-&gt;regs;
                struct mem_loc *mem_task = rdy_sched-&gt;task_id-&gt;task_mem;
                load_cpu_task_state(task_reg);
                load_mem_task_state(mem_task);
                jmp_and_exec_cpu_task(task_reg-&gt;cs, task_reg-&gt;ip);
                save_cpu_task_state(rdy_sched-&gt;task_id-&gt;regs);
                if (rdy_sched-&gt;task_id-&gt;num_threads &gt; 0){
                    struct thread *threads = rdy_sched-&gt;task_id-&gt;task_thread;
                    while (threads != NULL){
                        struct regs *thread_reg = threads-&gt;regs;
                        load_cpu_thread_state(thread_reg);
                        if (threads-&gt;memlocTracker-&gt;semaphores){
                            /* House keeping in relation to semaphores */
                            lock_memory(threads-&gt;memlocTracker);
                            jmp_and_exec_cpu_thread(thread_reg-&gt;cs, thread_reg-&gt;ip);
                            save_cpu_thread_state(threads-&gt;regs);
                            unlock_memory(threads-&gt;memlocTracker);
                        }else{
                            jmp_and_exec_cpu_thread(thread_reg-&gt;cs, thread_reg-&gt;ip);
                            save_cpu_thread_state(threads-&gt;regs);
                        }
                        threads = threads-&gt;next;
                    }
                }
            }
        }
    }
}
</pre> <p>The kernel's scheduler looks overly complex, but it isn't really. The only omission I have left out in the code is priorities... that can be ignored for now in the discussion of the OP's question.
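</p>

<p>The save and load helpers in that loop are treated as black boxes; in this toy model you can imagine them as little more than structure copies. A minimal sketch, assuming a single saved-state slot per switch (the <code>memcpy</code>-based bodies are my guess at their spirit, not how a real kernel does it):</p>

```c
#include <assert.h>
#include <string.h>

/* Pared-down register file from the struct regs above. */
struct regs { int eax, ebx, ecx, edx, es, ds, gs, fs, cs, ip, flags; };

/* One slot of saved state; in a real kernel this would live in the
   TSS or on the task's kernel stack, not in a single global. */
static struct regs saved_state;

void save_cpu_task_state(const struct regs *r)
{
    memcpy(&saved_state, r, sizeof saved_state);   /* snapshot the registers */
}

void load_cpu_task_state(struct regs *r)
{
    memcpy(r, &saved_state, sizeof saved_state);   /* restore the snapshot */
}
```

<p>A real x86 kernel would typically save this state on the task's kernel stack or via the TSS rather than copying structures around in C.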
</p> <p>Let's break this <code>scheduler</code> function down... but first, a quick peek at and explanation of the functions used within the <code>scheduler</code> function:</p> <ul> <li><code>load_cpu_task_state</code> and <code>load_cpu_thread_state</code> - these load the state of the CPU registers, for the task and the thread respectively.</li> <li><code>load_mem_task_state</code> and <code>save_mem_task_state</code> - these load and save, respectively, the memory state, perhaps to/from a page on disk specified by the <code>phys_mem_begin</code> and <code>phys_mem_end</code> fields of the <code>mem_loc</code> structure for the said task. For brevity, they load <em>all</em> memory owned by the said task, including that of its threads.</li> <li><code>jmp_and_exec_cpu_task</code> and <code>jmp_and_exec_cpu_thread</code> - these cause the kernel to issue the magic of jumping to the specified <code>cs</code> and <code>ip</code> registers of the said task's (or thread's) <code>regs</code> structure.</li> <li><code>save_cpu_task_state</code> and <code>save_cpu_thread_state</code> - these cause the kernel to save the state of the CPU registers, for the task and the thread respectively.</li> <li><code>lock_memory</code> and <code>unlock_memory</code> - these lock and unlock a memory region using the semaphores, which come into play shortly...</li> </ul> <p>Now, <code>scheduler</code> runs forever, iterating over the linked list of tasks. For each task, the check is required to see whether the task has run long enough to exceed its <code>quantum</code>, for example 20 ms. If so, the scheduler saves the CPU state and the memory state (perhaps to a paging file on disk), ready to go on to the next task in the list.</p> <p>If there is still time left, it loads the task's CPU registers and loads up the memory (perhaps from a page), jumps to where the last instruction pointer and code segment were (<code>ip</code> and <code>cs</code> respectively), and the task resumes.
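</p>

<p>That quantum check boils down to a single comparison; a tiny helper (the name and the millisecond unit are my choice, not from the code above) makes the policy explicit:</p>

```c
#include <assert.h>

/* Returns non-zero when the task has used up its time slice.
   'now', 'start_time' and 'quantum' share one unit, e.g. milliseconds,
   mirroring the start_time and quantum fields of struct task. */
int quantum_expired(int now, int start_time, int quantum)
{
    return now >= start_time + quantum;
}
```

<p>With a 20 ms quantum and a start time of 100 ms, the task is preempted at any tick from 120 ms onwards.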
</p> <p>If the said task has threads, the scheduler iterates over the linked list of threads, loading each thread's CPU registers, jumping into its code to resume execution, and then saving the CPU registers afterwards.</p> <p>Now, this is where things get a bit hairy, especially in this toy OS: how to manage semaphores! Uh huh, it looks like a big headache looming over the OS developer's head...</p> <p>I would imagine that if a section of memory is locked, the <code>semaphores</code> field would be non-NULL, having been acquired by the runtime manager in the user-land code that requested the semaphore. The scheduler would lock the memory to prevent another thread trampling the data, jump into the code for the said thread, unlock the memory when that thread's duration is completed, and save the state of the thread's CPU registers...</p> <p><strong><em>Conclusion:</em></strong></p> <p>This is my conclusion. As for facts to back me up, I cannot supply them, as this theory has been exhaustively lectured on and examined under the microscope in countless books, from universities in the US all the way around to universities in New Zealand; this is from my own observations and my understanding. As you can see from this code, there is real complexity and a burden placed on the developer in coding it, not to mention the constraints imposed and the design decisions - both hardware and software.</p> <p>Managing semaphores in this context is far more complicated. To answer the question: if there were more queues available - I have strived to keep it to the single queue <code>queue_ready</code> - the cost factor in terms of memory usage, disk usage and, above all, time usage grows, as the kernel would have to keep track of those mitigating factors that come into play. Another equally important factor is how many priorities there are going to be, so having a separate ready queue for different semaphores is, in my opinion, not a viable and feasible option.</p>
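<p>To illustrate the locking the runtime manager might perform, here is a toy binary-semaphore sketch. The single <code>semaphore</code> flag stands in for the <code>semaphores</code> pointer above (my simplification), and it is only safe because this imaginary scheduler is never preempted inside these functions; a real kernel would need an atomic test-and-set:</p>

```c
#include <assert.h>

/* Pared-down mem_loc with a simple binary semaphore flag. */
struct mem_loc {
    int *phys_mem_begin;
    int *phys_mem_end;
    int semaphore;           /* 0 = free, 1 = held */
};

/* Try to acquire the region; returns 1 on success, 0 if already held.
   Safe here only because this toy scheduler runs uninterrupted. */
int lock_memory(struct mem_loc *m)
{
    if (m->semaphore)
        return 0;            /* another thread holds the region */
    m->semaphore = 1;        /* acquire */
    return 1;
}

void unlock_memory(struct mem_loc *m)
{
    m->semaphore = 0;        /* release */
}
```

<p>This mirrors the <code>lock_memory</code>/<code>unlock_memory</code> bracket around <code>jmp_and_exec_cpu_thread</code> in the scheduler: the region is held only for the duration of the thread's slice, then released before the next thread runs.</p>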