Scheduling

Resource Management[1]

II. Scheduling

Definition of scheduling

Scheduling, in the context of computing and operating systems, refers to the process of assigning and managing the execution of processes or tasks on system resources, such as the CPU (Central Processing Unit). The primary goal of scheduling is to ensure that resources are utilized efficiently, while maintaining a balance between fairness, responsiveness, and throughput.

Types of scheduling

Long-term scheduling

Long-term scheduling, also known as job scheduling or admission scheduling, is a process performed by the operating system to decide which processes should be admitted into the main memory (RAM) from the job pool or secondary storage. The job pool consists of processes that are waiting to be executed but are not yet loaded into the main memory.

Long-term scheduling is responsible for managing the degree of multiprogramming, which is the number of processes that can reside in memory concurrently. By controlling the number of processes in the main memory, the operating system can efficiently utilize system resources and maintain a balance between the system's load and its capacity.
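
To make the idea of a degree-of-multiprogramming limit concrete, the following minimal Python sketch shows a hypothetical admission policy: jobs wait in a job pool and are moved into main memory only while the number of resident processes stays below a fixed limit. The job names, the limit, and the admit_jobs helper are illustrative assumptions, not part of any particular operating system.

  from collections import deque

  # Hypothetical long-term (admission) scheduler sketch: jobs wait in a job
  # pool and are admitted into main memory only while the degree of
  # multiprogramming stays below MAX_RESIDENT. The job names are example data.
  MAX_RESIDENT = 3
  job_pool = deque(["J1", "J2", "J3", "J4", "J5"])
  resident = []  # processes currently loaded in main memory

  def admit_jobs():
      """Move jobs from the pool into memory while capacity remains."""
      while job_pool and len(resident) < MAX_RESIDENT:
          resident.append(job_pool.popleft())

  admit_jobs()
  print("resident:", resident)             # ['J1', 'J2', 'J3']
  print("still in pool:", list(job_pool))  # ['J4', 'J5']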

The long-term scheduler selects processes based on various criteria, such as priority, resource requirements, and submission time. It ensures that the system does not become overloaded or underutilized by either admitting or delaying the entrance of new processes into the main memory. Once a process is admitted into the main memory by the long-term scheduler, it becomes a part of the ready queue and is eligible for execution. The short-term scheduler, also known as the CPU scheduler, then takes over and assigns CPU time to the processes in the ready queue.

It is important to note that long-term scheduling is not performed as frequently as short-term scheduling. In some systems, especially time-sharing systems or real-time systems, the distinction between long-term and short-term scheduling might not be as clear, as the scheduling mechanisms are more tightly integrated.

Short-term scheduling

Short-term scheduling, also known as CPU scheduling or process scheduling, is a process performed by the operating system to determine which process in the ready queue should be executed next by the CPU. The primary objective of short-term scheduling is to ensure efficient utilization of the CPU, maintain a balance between fairness and responsiveness, and optimize system throughput.

The ready queue contains processes that are loaded in main memory, have all the necessary resources for execution, and are waiting for their turn to use the CPU. The short-term scheduler selects a process from the ready queue based on a scheduling algorithm, which considers factors such as process priority, execution time, and waiting time.

Medium-term scheduling

Medium-term scheduling, also known as process swapping or memory swapping, is a process performed by the operating system to maintain an optimal level of multiprogramming and system performance. It involves temporarily removing processes from the main memory (RAM) and placing them in secondary storage (usually the hard disk) when they are in a blocked or suspended state, and later bringing them back into the main memory when they are ready to resume execution.
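
The sketch below illustrates this swapping idea in a few lines of Python. The process table, the swap_space dictionary, and the helper functions are purely hypothetical; a real operating system tracks far more state and moves actual memory contents rather than dictionary entries.

  # Hypothetical medium-term scheduler sketch: blocked processes are swapped
  # out of main memory to secondary storage and swapped back in when they are
  # ready to resume. Process names and states are example data.
  memory = {"P1": "running", "P2": "blocked", "P3": "ready"}
  swap_space = {}

  def swap_out_blocked():
      for name, state in list(memory.items()):
          if state == "blocked":            # candidate for swapping out
              swap_space[name] = memory.pop(name)

  def swap_in(name):
      if name in swap_space:
          del swap_space[name]
          memory[name] = "ready"            # back in memory, eligible to run

  swap_out_blocked()
  print(memory, swap_space)  # P2 has moved to swap space
  swap_in("P2")
  print(memory, swap_space)  # P2 is back in memory, ready to resume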

Scheduling algorithms

First-Come-First-Served (FCFS)

First-Come, First-Served (FCFS) is a scheduling algorithm used in operating systems and other computing environments to manage the execution order of processes or tasks. As the name suggests, in the FCFS algorithm, processes are executed in the order in which they arrive in the ready queue. The process that arrives first gets executed first, followed by the process that arrives next, and so on.
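
To make the behaviour concrete, here is a minimal Python sketch (an illustration, not any particular operating system's implementation) that computes waiting and turnaround times under FCFS; the process names, arrival times, and burst times are made-up example data.

  # Minimal FCFS sketch: processes are served strictly in arrival order.
  processes = [  # (name, arrival_time, burst_time) -- hypothetical example data
      ("P1", 0, 5),
      ("P2", 1, 3),
      ("P3", 2, 8),
  ]

  def fcfs(procs):
      procs = sorted(procs, key=lambda p: p[1])  # order by arrival time
      clock = 0
      results = []
      for name, arrival, burst in procs:
          start = max(clock, arrival)   # the CPU may sit idle until the process arrives
          waiting = start - arrival     # time spent in the ready queue
          turnaround = waiting + burst  # total time from arrival to completion
          clock = start + burst
          results.append((name, waiting, turnaround))
      return results

  for name, waiting, turnaround in fcfs(processes):
      print(f"{name}: waiting={waiting}, turnaround={turnaround}")

Note that a long process arriving first can make every later process wait behind it (the convoy effect), which is the main drawback of FCFS.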

Shortest Job Next (SJN)

Shortest Job Next (SJN), also known as Shortest Job First (SJF), is a scheduling algorithm used in operating systems to manage the execution order of processes or tasks. In the SJN algorithm, processes are executed based on their estimated execution time, with the shortest job being executed first. The main goal of this algorithm is to minimize the average waiting time for all processes in the system. Whenever the CPU becomes free, the scheduler compares the burst times (estimated execution times) of the processes waiting in the ready queue and selects the one with the shortest burst time for execution. A preemptive variant, known as Shortest Remaining Time First (SRTF), re-evaluates this choice whenever a new process arrives and can interrupt the running process if the newcomer has a shorter remaining time.
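
The following Python sketch shows a non-preemptive version of SJN; the process data are hypothetical, and burst times are assumed to be known in advance, which in practice they rarely are.

  # Minimal non-preemptive SJN sketch: among the processes that have already
  # arrived, always run the one with the shortest burst time next.
  processes = [  # (name, arrival_time, burst_time) -- hypothetical example data
      ("P1", 0, 7),
      ("P2", 1, 4),
      ("P3", 2, 1),
  ]

  def sjn(procs):
      pending = sorted(procs, key=lambda p: p[1])  # keep jobs in arrival order
      clock = 0
      schedule = []
      while pending:
          ready = [p for p in pending if p[1] <= clock]
          if not ready:                   # CPU idles until the next arrival
              clock = pending[0][1]
              continue
          job = min(ready, key=lambda p: p[2])  # shortest burst among arrived jobs
          pending.remove(job)
          name, arrival, burst = job
          schedule.append((name, clock - arrival))  # waiting time for this job
          clock += burst
      return schedule

  for name, waiting in sjn(processes):
      print(f"{name}: waiting={waiting}")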

Round Robin (RR)

Round Robin (RR) is a scheduling algorithm used in operating systems and other computing environments to manage the execution order of processes or tasks. The Round Robin algorithm assigns a fixed, small unit of time, known as a time quantum or time slice, to each process in the ready queue. It then executes processes in a cyclic order, allowing each process to run for at most its allocated time quantum. In the Round Robin scheduling algorithm, the ready queue functions as a circular queue. The scheduler selects the first process in the queue and allows it to execute for up to its time quantum; if the process has not finished by the end of the quantum, it is preempted and moved to the end of the queue, while a process that completes within its quantum simply leaves the queue. This cycle repeats until all processes have completed execution.
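
A minimal Python sketch of this cycle is shown below; it assumes all processes are already in the ready queue at time 0, and the quantum and burst times are made-up example values.

  from collections import deque

  # Minimal Round Robin sketch with a fixed time quantum.
  QUANTUM = 4
  ready = deque([("P1", 10), ("P2", 5), ("P3", 8)])  # (name, remaining_burst) -- example data

  clock = 0
  while ready:
      name, remaining = ready.popleft()
      run = min(QUANTUM, remaining)        # run for one quantum or until completion
      clock += run
      remaining -= run
      if remaining > 0:
          ready.append((name, remaining))  # not finished: back to the end of the queue
      else:
          print(f"{name} finished at t={clock}")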

Priority scheduling

Priority scheduling is a scheduling algorithm used in operating systems and other computing environments to manage the execution order of processes or tasks based on their assigned priorities. In this algorithm, each process is assigned a priority value, which could be based on factors like the importance of the task, user privileges, or resource requirements. In priority scheduling, the scheduler selects the process with the highest priority from the ready queue for execution. If multiple processes have the same priority, the scheduler may use another method, such as First-Come, First-Served (FCFS), to break the tie. Once a process is completed or blocked (e.g., waiting for I/O), the scheduler picks the next highest priority process in the queue.
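
The sketch below illustrates non-preemptive priority scheduling in Python, using a heap so that the highest-priority process (here, the lowest number) is always selected next and ties fall back to arrival order; the priorities and burst times are hypothetical example data.

  import heapq

  # Minimal priority-scheduling sketch: a lower number means a higher priority,
  # and ties are broken by arrival order (FCFS). Example data only.
  arrivals = [("P1", 2, 6), ("P2", 1, 3), ("P3", 2, 2)]  # (name, priority, burst)

  ready = []
  for order, (name, priority, burst) in enumerate(arrivals):
      heapq.heappush(ready, (priority, order, name, burst))

  clock = 0
  while ready:
      priority, order, name, burst = heapq.heappop(ready)  # highest priority first
      clock += burst                                        # run to completion (non-preemptive)
      print(f"{name} (priority {priority}) finished at t={clock}")

In practice, low-priority processes risk starvation under this scheme; a common remedy is aging, in which the priority of a process is gradually raised the longer it waits.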


Multilevel Queue scheduling

Multilevel Queue scheduling is a scheduling algorithm used in operating systems to manage the execution order of processes or tasks by organizing them into multiple priority-based queues. In this algorithm, processes are classified into different groups based on their characteristics, such as process type, priority, or resource requirements. Each group is then assigned to a separate queue, with each queue having its own scheduling policy.

The scheduler selects processes for execution from these queues based on their priority or the scheduling policy of the corresponding queue. Typically, processes in higher-priority queues are executed before those in lower-priority queues. Within a queue, processes are scheduled according to the specific scheduling policy of that queue, such as First-Come, First-Served (FCFS), Round Robin (RR), or Priority scheduling.

Multilevel Queue scheduling can also include a feedback mechanism known as Multilevel Feedback Queue scheduling. This mechanism allows processes to move between queues based on their behavior or performance. For example, a process that uses a lot of CPU time might be demoted to a lower-priority queue, while a process that has been waiting for a long time might be promoted to a higher-priority queue.
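
A toy version of a multilevel feedback queue can be sketched in Python as follows; the two levels, their quanta, and the demotion rule (a process that uses its whole quantum drops one level) are simplifying assumptions for illustration.

  from collections import deque

  # Minimal multilevel feedback queue sketch: two FIFO queues, each with its
  # own time quantum. A process that uses its full quantum without finishing
  # is demoted to the lower-priority queue. Example data only.
  queues = [deque([("P1", 9), ("P2", 3)]),  # level 0: high priority, quantum 2
            deque()]                        # level 1: low priority, quantum 6
  quanta = [2, 6]

  clock = 0
  while any(queues):
      level = 0 if queues[0] else 1          # always serve the higher level first
      name, remaining = queues[level].popleft()
      run = min(quanta[level], remaining)
      clock += run
      remaining -= run
      if remaining == 0:
          print(f"{name} finished at t={clock}")
      else:
          # Used the full quantum without finishing: demote (or stay at the lowest level).
          queues[min(level + 1, len(queues) - 1)].append((name, remaining))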

Factors affecting scheduling decisions

Several factors affect scheduling decisions in an operating system, influencing the choice of scheduling algorithms and policies. These factors play a crucial role in determining the overall system performance, responsiveness, and user experience. Some of the key factors affecting scheduling decisions are:

  • Process priority: The importance of a process or task can dictate scheduling decisions. High-priority processes, such as critical system tasks or time-sensitive applications, should be executed before lower-priority processes to ensure smooth system operation.
  • Process type: The nature of a process can also affect scheduling decisions. For example, I/O-bound processes spend more time waiting for I/O operations to complete, while CPU-bound processes require more CPU time for computation. Scheduling algorithms should consider these differences to ensure efficient resource utilization and minimize wait times.
  • Burst time: The estimated execution time, or burst time, of a process can influence scheduling decisions. Shorter burst times can lead to shorter wait times and improved responsiveness, which is the main idea behind the Shortest Job Next (SJN) scheduling algorithm.
  • Arrival time: The order in which processes arrive in the ready queue can also impact scheduling decisions. In the First-Come, First-Served (FCFS) scheduling algorithm, processes are executed in the order they arrive, regardless of their priority or burst time.
  • Preemption: Some scheduling algorithms allow preemption, which means that a running process can be interrupted and moved back to the ready queue, allowing a higher-priority process to execute. Preemptive scheduling algorithms can improve system responsiveness but may introduce additional overhead due to context switching.
  • Time quantum: In time-sharing systems, the time quantum (time slice) allocated to each process is an important factor in scheduling decisions. A shorter time quantum can lead to better responsiveness but may increase context switching overhead, while a longer time quantum can reduce overhead but may decrease responsiveness.
  • System load: The number of processes in the system and their resource requirements can affect scheduling decisions. Schedulers should balance system load and resource availability to maintain optimal performance and avoid overloading the system.
  • Resource requirements: Processes may have specific resource requirements, such as memory, I/O devices, or CPU cores. Scheduling decisions should take these requirements into account to ensure that processes have the necessary resources to execute efficiently.
  • Fairness: Ensuring fair allocation of resources among all processes is essential to prevent starvation, where a process is indefinitely delayed due to other processes monopolizing resources. Scheduling algorithms should strive to balance fairness with other factors like priority and responsiveness.
  • User experience: The scheduling decisions should consider user experience, ensuring that interactive applications and processes remain responsive and provide a smooth user experience.

Standards

  • Outline OS resource management techniques: scheduling, policies, multitasking, virtual memory, paging, interrupt, polling.

References