In order to understand what goes on in the bowels of tasks and threads, let's look at this toy kernel code:

```c
/* Forward declaration: the TSS layout itself is not shown here. */
struct tss;

struct regs {
    int eax, ebx, ecx, edx, es, ds, gs, fs, cs, ip, flags;
    struct tss *task_sel;
};

struct thread {
    struct regs *regs;
    int parent_id;
    struct thread *next;
};

struct task {
    struct regs *regs;
    int *phys_mem_begin;
    int *phys_mem_end;
    int *filehandles;
    int priority;
    int *num_threads;
    int quantum;
    int duration;
    int start_time, end_time;
    int parent_id;
    struct thread *task_thread;
    /* ... */
    struct task *next;
};
```

Imagine the kernel allocates memory for that `task` structure, which forms a linked list. Look closely at the `quantum` field: that is the task's timeslice of processor time, based on the `priority` field. There will always be a task with id 0, which never sleeps and just idles, perhaps issuing NOPs (No OPerations). The scheduler spins around ad nauseam until infinity (that is, until the power gets unplugged). If the `quantum` field says the task runs for 20 ms, the kernel sets `start_time` to now and `end_time` to now + 20 ms. When that `end_time` is up, the kernel saves the state of the CPU registers into the task's `regs` pointer, goes on to the next task in the chain, loads the CPU registers from that task's `regs` pointer, jumps to its saved instruction pointer, and sets its quantum and duration. When the duration reaches zero, it goes on to the next task, effectively context switching. This is what gives the illusion that everything is running simultaneously on a single CPU. (A minimal sketch of such a scheduler tick is at the end of this answer.)

Now look at the `thread` struct, which is a linked list of threads within that `task` structure. The kernel allocates threads for the task, sets up a CPU state for each thread, and jumps into the threads. Now the kernel has to manage the threads as well as the tasks themselves, again context switching between a task and its threads (a thread-creation sketch is also at the end of this answer).

Move on to a multi-CPU machine. Provided the kernel has been set up to be scalable, the scheduler loads one `task` onto one CPU and another onto the other CPU (dual core), and both jump to where their instruction pointers are pointing. Now the kernel is genuinely running both tasks simultaneously on both CPUs. Scale up to 4-way and the same thing happens, with additional tasks loaded onto each CPU; scale up again to n-way, and you get the drift.

As you can see, this is why threads are not perceived as scalable: the kernel has, quite frankly, a mammoth job keeping track of which CPU is running what and, on top of that, which task is running which threads. That fundamentally explains why I think threads are not exactly scalable; threads consume a lot of resources.

If you really want to see what is happening, take a look at the source code for Linux, specifically the scheduler. No, hang on, forget about the 2.6.x kernel releases; look at the prehistoric version 0.99. Its scheduler is simpler to understand and easier to read. Sure, it's a bit old, but it is worth looking at. It will help you understand why threads are not scalable (and hopefully my answer too), and it shows how the toy OS uses time division based on processes. I have strived not to get into the technical aspects of modern-day CPUs, which can do more than what I have described here.

Hope this helps.
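To make the timeslice description above concrete, here is a minimal sketch of that scheduler tick against the toy structs. The helpers `now_ms()`, `save_cpu_state()`, `load_cpu_state()` and `quantum_for()` are assumptions standing in for the architecture-specific pieces; they are not part of any real kernel API.

```c
/* Hypothetical hooks; in a real kernel these would be architecture code. */
extern int  now_ms(void);                    /* current time in milliseconds    */
extern void save_cpu_state(struct regs *r);  /* dump the CPU registers into r   */
extern void load_cpu_state(struct regs *r);  /* restore registers, resume at ip */
extern int  quantum_for(int priority);       /* e.g. 20 ms for a normal task    */

static struct task *task_list;  /* head of the task chain; id 0 is the idle task */
static struct task *current;    /* task the CPU is executing right now           */

/* Called from the timer interrupt. */
void scheduler_tick(void)
{
    if (now_ms() < current->end_time)
        return;                              /* quantum not used up yet */

    save_cpu_state(current->regs);           /* snapshot eax..ip/flags  */

    current = current->next ? current->next  /* next task in the chain  */
                            : task_list;     /* or wrap back to task 0  */

    current->quantum    = quantum_for(current->priority);
    current->start_time = now_ms();
    current->end_time   = current->start_time + current->quantum;

    load_cpu_state(current->regs);           /* jump into the saved instruction */
}
```

On a real x86 the "jump into the saved instruction" part happens through the interrupt-return path or a TSS switch (the `task_sel` field hints at that), but the bookkeeping is the same idea.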
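And here is a sketch of what "the kernel allocates threads for that task" might look like with these structs. `kmalloc()`, the entry point and the flag value are assumptions for illustration; only the linked-list wiring comes from the structs above.

```c
extern void *kmalloc(unsigned long size);    /* hypothetical kernel allocator */

/* Create a thread that starts executing at entry_point inside task t. */
struct thread *spawn_thread(struct task *t, int entry_point)
{
    struct thread *th = kmalloc(sizeof(*th));

    th->regs        = kmalloc(sizeof(*th->regs));
    th->regs->ip    = entry_point;   /* where this thread begins         */
    th->regs->flags = 0x200;         /* interrupts enabled, for example  */
    /* segment registers, stack setup, etc. are omitted from this sketch */

    th->parent_id  = t->parent_id;
    th->next       = t->task_thread; /* push onto the task's thread list */
    t->task_thread = th;

    return th;
}
```

The scheduler then has to walk `t->task_thread` as well as the task chain itself, which is exactly the extra bookkeeping the answer is talking about.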
 
