Operating system/Multiprogramming
This is a lesson in the course Operating system, which is part of The School of Computer Science.
Objective
By the end of this lesson, you should be able to explain the motivation for multiprogramming, describe the basic CPU scheduling algorithms, and define the concept of a thread.
Introduction

Multiprogramming became possible when disks were introduced to the computing world. The concept of multiprogramming relies on the capability of a computer to store instructions (programs) for long-term use. The goal is to reduce CPU idle time by allowing a new job to take over the CPU whenever the currently running job needs to wait (e.g. for user I/O). It was also at this point that operating systems received a new responsibility: decision making. Before multiprogramming was introduced, the role of the operating system was simple and straightforward: load a program into memory and execute it on the CPU. With the advent of multiprogramming, operating systems faced new mechanics for program execution, as multiple jobs now needed to be loaded into memory at the same time and several options existed for allocating CPU time. Two types of scheduling were introduced to handle this decision making: job scheduling, the selection of jobs to load into memory, and CPU scheduling, the selection of a job already in memory to execute on the CPU. In a computer system, both of these decisions are made by the operating system.

Scheduling Algorithms

Scheduling Basics

Process scheduling is one of the most important functions of an operating system that supports multiprogramming. This function is heavily dependent on queues. Three types of queues are used in process scheduling:

* job queue - contains all processes in the system;
* ready queue - contains the processes that reside in main memory, ready and waiting to execute;
* device queues - one per I/O device, containing the processes waiting for that device.
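As a rough illustration of how these queues interact, here is a minimal sketch in Python; the process names and the flow shown are illustrative assumptions, not part of the lesson:

    from collections import deque

    # Illustrative sketch: processes move from the job queue into the
    # ready queue, run on the CPU, and may join a device queue for I/O.
    job_queue = deque(["P1", "P2", "P3"])   # all processes in the system
    ready_queue = deque()                   # processes in memory, ready to run
    device_queue = deque()                  # processes waiting for an I/O device

    # Job scheduling: load jobs from the job queue into memory.
    while job_queue:
        ready_queue.append(job_queue.popleft())

    # CPU scheduling: pick the next process from the ready queue.
    running = ready_queue.popleft()
    print("Running:", running)              # Running: P1

    # If the running process requests I/O, it joins a device queue.
    device_queue.append(running)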
Scheduling Statistics

To gauge the performance of a scheduling algorithm, several statistics are used:

* arrival time - the time at which the process enters the ready queue;
* burst time - the CPU time the process needs in order to complete;
* completion time - the time at which the process finishes executing;
* turnaround time - completion time minus arrival time, i.e. the total time the process spends in the system;
* waiting time - turnaround time minus burst time, i.e. the time the process spends in the ready queue;
* response time - the time from the process's arrival until it is first assigned the CPU.
The general goals of scheduling are:

* fairness - every process gets a fair share of the CPU;
* efficiency - the CPU is kept as busy as possible;
* throughput - the number of processes completed per unit of time is maximized;
* turnaround and waiting time - the time each process spends in the system and in the ready queue is minimized;
* response time - interactive users get the CPU quickly.
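As a worked example of the statistics above, the following minimal sketch (Python is assumed here; the process data are hypothetical) computes turnaround and waiting times for three processes served in arrival order:

    # Hypothetical processes: (name, arrival time, burst time).
    processes = [("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 1)]

    time = 0
    for name, arrival, burst in processes:   # run in arrival order
        start = max(time, arrival)           # CPU may be idle until arrival
        completion = start + burst
        turnaround = completion - arrival    # total time in the system
        waiting = turnaround - burst         # time spent in the ready queue
        print(name, "turnaround:", turnaround, "waiting:", waiting)
        time = completion

For P2, for instance, turnaround is 8 - 1 = 7 time units and waiting is 7 - 3 = 4.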
First Come First Serve

The name of this algorithm is self-explanatory. It is the simplest (but least efficient) algorithm used to schedule processes for execution: the first process to arrive in the ready queue is executed first. Since this is not a preemptive algorithm, an incoming process is not allowed to take over the CPU while another process is running; the running process continues until it is complete.

Shortest Job First

In this algorithm, if more than one process is ready to execute, the scheduler selects the process with the shortest burst time (time to finish) and assigns it to the CPU. This is also a non-preemptive algorithm, so once a process is assigned to the CPU, it is not replaced until it is completed.

Shortest Remaining Time First

This is similar to Shortest Job First, but this algorithm is preemptive. When a new process becomes ready, the scheduler checks whether its burst time is shorter than the remaining time of the currently running process; if it is, the new process preempts the current process, and the current process is returned to the ready queue. Sketches of all three policies appear after the Threads section below.

Threads

The concept of threading was introduced to address the issue of concurrent execution within a single process. Because the overhead of switching between multiple processes is expensive in terms of computing resources, programmers needed a way to perform concurrent execution without resorting to multiple processes. A thread is simply a lightweight process. Although each thread executes its own code path, all threads belonging to the same process share the same memory space and context. This allows more convenient communication and data sharing. More importantly, switching between threads takes much less time than a context switch between processes. Threads come in two types:

* user-level threads - created and managed by a thread library in user space, without kernel involvement;
* kernel-level threads - created and managed directly by the operating system kernel.
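The following is a minimal sketch, not a definitive implementation, of the two non-preemptive policies above. The workload data are hypothetical and Python is assumed for illustration:

    # Hypothetical workload: (name, arrival time, burst time).
    WORKLOAD = [("P1", 0, 7), ("P2", 1, 4), ("P3", 2, 2)]

    def run_non_preemptive(processes, pick_shortest):
        """Simulate FCFS (pick_shortest=False) or SJF (pick_shortest=True)."""
        pending = sorted(processes, key=lambda p: p[1])  # sort by arrival
        time = 0
        total_wait = 0
        while pending:
            ready = [p for p in pending if p[1] <= time]
            if not ready:                    # CPU idle: jump to next arrival
                time = pending[0][1]
                continue
            # FCFS takes the earliest arrival; SJF takes the shortest burst.
            chosen = min(ready, key=lambda p: p[2]) if pick_shortest else ready[0]
            pending.remove(chosen)
            total_wait += time - chosen[1]   # waiting time in the ready queue
            time += chosen[2]                # run to completion (no preemption)
        return total_wait / len(processes)

    print("FCFS average wait:", run_non_preemptive(WORKLOAD, False))
    print("SJF  average wait:", run_non_preemptive(WORKLOAD, True))

On this workload the averages come out to 5.0 for FCFS and about 4.33 for SJF, illustrating why SJF gives the better average waiting time among non-preemptive policies.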
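Shortest Remaining Time First must track remaining time and re-evaluate its choice on every arrival. Here is a minimal tick-by-tick sketch under the same assumptions (hypothetical data, Python for illustration):

    # Hypothetical workload: name -> (arrival time, burst time).
    WORKLOAD = {"P1": (0, 7), "P2": (2, 4), "P3": (4, 1)}

    remaining = {name: burst for name, (_, burst) in WORKLOAD.items()}
    time = 0
    schedule = []                        # which process ran at each time unit
    while any(remaining.values()):
        # Processes that have arrived and still have work left.
        ready = [n for n, (arrival, _) in WORKLOAD.items()
                 if arrival <= time and remaining[n] > 0]
        if not ready:
            schedule.append("idle")
        else:
            # Preemptive rule: always run the shortest remaining time.
            current = min(ready, key=lambda n: remaining[n])
            remaining[current] -= 1
            schedule.append(current)
        time += 1
    print(" ".join(schedule))

In the printed timeline, P1 is preempted at time 2 by P2, which is itself preempted at time 4 by P3, exactly the behavior described above.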
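To illustrate that threads within one process share the same memory space, here is a minimal sketch using Python's standard threading module; the counter, loop count, and number of threads are illustrative assumptions:

    import threading

    counter = 0                      # shared by all threads in this process
    lock = threading.Lock()          # protects the shared counter

    def worker():
        global counter
        for _ in range(100_000):
            with lock:               # shared memory requires synchronization
                counter += 1

    # All threads run in the same process and see the same 'counter'.
    threads = [threading.Thread(target=worker) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(counter)                   # 400000: one variable updated by all

If the four workers were separate processes instead, each would get its own copy of counter and communication would require explicit inter-process mechanisms; that is the sharing difference the lesson describes.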
Assignments