Operating Systems management techniques

[[file:resource.png|right|frame|Resource Management<ref>http://www.flaticon.com/</ref>]]


We have learned what a system resource is.<ref>https://www.lifewire.com/what-is-a-system-resource-2626016</ref>
An operating system must manage limited resources. There is intense competition for these limited resources, so <strong>HOW</strong> does an operating system manage to balance them efficiently?


== OS resource management techniques ==  
 
Resource management is the dynamic allocation and de-allocation by an operating system of processor cores, memory pages, and various types of bandwidth to computations that compete for those resources. The objective is to allocate resources so as to optimize responsiveness subject to the finite resources available. <ref>https://www.bscmsrc.eu/media/events/barcelona-multicore-workshop-2010/burton-smith-abstract</ref>
<br />
{| style="width: 95%;" class="wikitable"
|-
! Technique  !!  Description
|-
| scheduling || The aim of CPU scheduling is to make the system efficient, fast and fair.<ref>https://www.studytonight.com/operating-system/cpu-scheduling</ref> Scheduling is the method by which work is assigned to the resources that complete the work.<ref>https://en.wikipedia.org/wiki/Scheduling_(computing)</ref> There are many different scheduling strategies. The main purposes of scheduling algorithms are to minimize resource starvation and to ensure fairness amongst the parties utilizing the resources.<ref>https://en.wikipedia.org/wiki/Scheduling_(computing)#Scheduling_disciplines</ref> Different scheduling approaches include:
* First Come First Serve (FCFS) scheduling
* Shortest Job First (SJF) scheduling
* Priority scheduling
* Round Robin (RR) scheduling
* Multilevel feedback queue scheduling
 
There is a [https://www.youtube.com/watch?v=THqcAa1bbFU video here which goes into some depth about scheduling]. You do not need to understand this for your IB test, but this material will help you have a deeper understanding of this content.
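
As a rough illustration of one of the approaches listed above, here is a minimal Python sketch of round-robin scheduling. The process names, burst times and quantum are made-up example values, and a real OS scheduler works on live processes rather than a simple list, but the turn-taking logic is the same idea.

<syntaxhighlight lang="python">
# Minimal round-robin scheduling sketch (illustrative values only).
from collections import deque

def round_robin(processes, quantum):
    """Simulate round-robin scheduling.

    processes: list of (name, burst_time) pairs in arrival order.
    quantum: maximum CPU time a process receives per turn.
    Returns a list of (name, start, end) CPU time slices.
    """
    ready = deque(processes)              # the ready queue (FIFO)
    timeline = []
    clock = 0
    while ready:
        name, remaining = ready.popleft()
        run_for = min(quantum, remaining)
        timeline.append((name, clock, clock + run_for))
        clock += run_for
        remaining -= run_for
        if remaining > 0:                 # not finished: back of the queue
            ready.append((name, remaining))
    return timeline

# Three hypothetical processes with burst times 5, 3 and 8; quantum of 2.
for name, start, end in round_robin([("P1", 5), ("P2", 3), ("P3", 8)], quantum=2):
    print(f"{name} runs from t={start} to t={end}")
</syntaxhighlight>

Changing how the next process is chosen (for example, always picking the shortest remaining job instead of taking turns) would turn this into one of the other scheduling strategies listed above.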
 
 
 
|-
| policies || Given a particular task, policy refers to what needs to be done (i.e. the activities to perform) and mechanism refers to how to do it (i.e. the implementation that enforces the policy).<ref>http://www.8bitavenue.com/2016/12/policy-vs-mechanism-in-operating-system/</ref> Put another way, the separation of mechanism and policy is a design principle in computer science. It states that mechanisms (those parts of a system implementation that control the authorization of operations and the allocation of resources) should not dictate (or overly restrict) the policies according to which decisions are made about which operations to authorize, and which resources to allocate.<ref>https://en.wikipedia.org/wiki/Separation_of_mechanism_and_policy</ref> Be sure to distinguish between policy and mechanism: policies are ways to choose which activities to perform, while mechanisms are the implementations that enforce policies.<ref>http://www.cs.unc.edu/~dewan/242/s07/notes/intro/node15.html</ref>
 
There is a [https://www.youtube.com/watch?v=fQin6dsYTdE video here which goes into some depth about scheduling policies]. You do not need to understand this for your IB test, but this material will help you have a deeper understanding of this content.
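
To make the policy/mechanism split concrete, here is a small Python sketch; the process data and function names are invented for illustration. The <code>dispatch_all</code> mechanism stays exactly the same while the policy function that chooses the next process can be swapped out.

<syntaxhighlight lang="python">
# Sketch of the policy/mechanism separation (all names and values are invented).

def fcfs_policy(ready_queue):
    """Policy: choose the process that arrived first (First Come First Serve)."""
    return min(ready_queue, key=lambda p: p["arrival"])

def sjf_policy(ready_queue):
    """Policy: choose the process with the shortest burst time (Shortest Job First)."""
    return min(ready_queue, key=lambda p: p["burst"])

def dispatch_all(ready_queue, policy):
    """Mechanism: repeatedly run whichever process the policy selects."""
    queue = list(ready_queue)             # work on a copy of the queue
    while queue:
        chosen = policy(queue)
        print(f"dispatching {chosen['name']}")
        queue.remove(chosen)

processes = [
    {"name": "P1", "arrival": 0, "burst": 7},
    {"name": "P2", "arrival": 1, "burst": 2},
    {"name": "P3", "arrival": 2, "burst": 4},
]

dispatch_all(processes, fcfs_policy)      # P1, P2, P3
dispatch_all(processes, sjf_policy)       # P2, P3, P1
</syntaxhighlight>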
 
|-
| multitasking || In computing, multitasking is the concurrent execution of multiple tasks (also known as processes) over a certain period of time.<ref>https://en.wikipedia.org/wiki/Computer_multitasking</ref> Multitasking operating systems allow more than one program to run at a time. They can support either preemptive multitasking, where the OS doles out time to applications (virtually all modern OSes), or cooperative multitasking, where the OS waits for the program to give back control (Windows 3.x, Mac OS 9 and earlier).<ref>https://stackoverflow.com/questions/79248/what-is-a-multitasking-operating-system</ref>
 
There [https://www.youtube.com/watch?v=t-zgY7zV9tk is a video here where you can learn more about multitasking]. It's a bit dry, but the content is excellent. You do not need to understand this for your IB test, but this material will help you have a deeper understanding of this content.
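
Here is a toy Python sketch of the cooperative style described above, using generators as "programs" that voluntarily hand control back. It is only an analogy for how a cooperative OS multitasks, not how a real kernel is written; the program names and step counts are invented.

<syntaxhighlight lang="python">
# Toy cooperative multitasking: each "program" is a generator that yields
# to hand control back voluntarily (names and step counts are invented).

def program(name, steps):
    for i in range(steps):
        print(f"{name}: doing step {i}")
        yield                              # give control back to the "OS"

def run_cooperatively(tasks):
    """A tiny scheduler: give each task one turn at a time until all finish."""
    while tasks:
        current = tasks.pop(0)
        try:
            next(current)                  # run the task up to its next yield
            tasks.append(current)          # it yielded, so schedule it again
        except StopIteration:
            pass                           # the task finished; drop it

run_cooperatively([program("editor", 3), program("music player", 2)])
</syntaxhighlight>

Notice that a task which never yields would hog the CPU forever; that weakness is exactly why modern operating systems use preemptive multitasking instead.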
|-
| virtual memory || In computing, virtual memory (also virtual storage) is a memory management technique that provides an "idealized abstraction of the storage resources that are actually available on a given machine" which "creates the illusion to users of a very large (main) memory".<ref>https://en.wikipedia.org/wiki/Virtual_memory</ref> This [https://www.youtube.com/watch?time_continue=1&v=qlH4-oHnBb8 video] is deliciously excellent.
 
The primary benefits of virtual memory include freeing applications (and programmers) from having to manage a shared memory space, increasing security due to memory isolation, and being able to conceptually use more memory than might be physically available, using the technique of paging. Indeed, almost every virtual memory implementation divides a virtual address space into blocks of contiguous virtual memory addresses, called pages, which are usually 4 KB in size.<ref>https://gabrieletolomei.wordpress.com/miscellanea/operating-systems/virtual-memory-paging-and-swapping/</ref>
 
Prior to virtual memory we used overlays, which you can learn about by [https://en.wikipedia.org/wiki/Overlay_(programming) clicking here].
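
Below is a minimal sketch of the address arithmetic behind virtual memory, assuming the 4 KB pages mentioned above. The page-table contents are made-up example values, not a real OS data structure.

<syntaxhighlight lang="python">
# Sketch of virtual-to-physical address translation with 4 KB pages.
# The page table below is a made-up example, not a real OS structure.

PAGE_SIZE = 4096                           # 4 KB pages, as mentioned above

def translate(virtual_address, page_table):
    """Split an address into page number + offset, then look up the frame."""
    page_number = virtual_address // PAGE_SIZE
    offset = virtual_address % PAGE_SIZE
    frame_number = page_table[page_number]  # a missing key would mean a page fault
    return frame_number * PAGE_SIZE + offset

page_table = {0: 5, 1: 9, 2: 1}            # virtual page -> physical frame

# Virtual address 4100 is page 1, offset 4, so it maps into frame 9.
print(translate(4100, page_table))         # prints 36868 (9 * 4096 + 4)
</syntaxhighlight>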
|-
| paging || This is related to virtual memory. In computer operating systems, paging is a memory management scheme by which a computer stores and retrieves data from secondary storage for use in main memory. In this scheme, the operating system retrieves data from secondary storage in same-size blocks called pages. Paging is an important part of virtual memory implementations in modern operating systems, using secondary storage to let programs exceed the size of available physical memory.<ref>https://en.wikipedia.org/wiki/Paging</ref> For a deeper (and excellent) look at paging, [https://www.geeksforgeeks.org/operating-system-paging/ please click here].
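
As a small illustration, the sketch below counts page faults for a demand-paging system that uses a simple first-in-first-out (FIFO) replacement policy; the reference string and number of frames are invented example values.

<syntaxhighlight lang="python">
# Count page faults for demand paging with FIFO replacement (example values only).
from collections import deque

def fifo_page_faults(reference_string, num_frames):
    """Return how many page faults occur with a FIFO replacement policy."""
    frames = deque()                       # pages currently resident in memory
    faults = 0
    for page in reference_string:
        if page not in frames:             # page fault: fetch from secondary storage
            faults += 1
            if len(frames) == num_frames:
                frames.popleft()           # evict the oldest resident page
            frames.append(page)
    return faults

# A classic reference string with 3 physical frames available.
print(fifo_page_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], num_frames=3))  # 9 faults
</syntaxhighlight>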
 
 
 
|-
| interrupt || In system programming, an interrupt is a signal to the processor emitted by hardware or software indicating an event that needs immediate attention. An interrupt alerts the processor to a high-priority condition requiring the interruption of the current code the processor is executing. The processor responds by suspending its current activities, saving its state, and executing a function called an interrupt handler (or an interrupt service routine, ISR) to deal with the event. This interruption is temporary, and, after the interrupt handler finishes, the processor resumes normal activities. There are two types of interrupts: hardware interrupts and software interrupts.<ref>https://en.wikipedia.org/wiki/Interrupt</ref>
 
Basically, the processor has a set of interrupt wires which are connected to a bunch of devices. When one of the devices has something to say, it turns its interrupt wire on, which triggers the processor (without the help of any software) to pause the execution of current instructions and start running a handler function.<ref>https://stackoverflow.com/questions/10388757/polling-v-s-interrupt</ref>
 
An interrupt is a hardware mechanism that enables the CPU to detect that a device needs its attention. The CPU has an interrupt-request line that it checks after executing each instruction. When the CPU senses a signal on the interrupt-request line, it stops its currently executing task and responds to the interrupt sent by the I/O device by passing control to the interrupt handler. The interrupt handler resolves the interrupt by servicing the device.
 
Although the CPU cannot know in advance when an interrupt will occur, it must respond to the interrupt whenever it does.
 
When the interrupt handler finishes, the CPU resumes the task it had suspended in order to respond to the interrupt. Software, hardware, a user action or an error in a program can all generate interrupts. The CPU's ability to handle interrupts is what makes multitasking possible, i.e. a user can perform a number of different tasks at the same time.
 
If more than one interrupt is sent to the CPU, the interrupt handler helps manage the interrupts that are waiting to be processed. Because the interrupt handler is triggered by the reception of an interrupt, it prioritizes the waiting interrupts and arranges them in a queue to be serviced by the CPU.<ref>https://techdifferences.com/difference-between-interrupt-and-polling-in-os.html</ref>
 
There [https://www.youtube.com/watch?v=jMnuQMYR3Ro is a video which helps you understand interrupts very well]. You do not need to understand this for your IB test, but this material will help you have a deeper understanding of this content.
This [https://www.youtube.com/watch?v=54BrU82ANww video is also superb to help you understand interrupts].
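
The toy Python simulation below mirrors the description above: after each "instruction" the CPU checks a pending-interrupt list, runs the matching interrupt service routine, and then resumes the program. The devices, handlers and instruction names are all invented for illustration and do not correspond to real hardware.

<syntaxhighlight lang="python">
# Toy interrupt simulation: after every "instruction" the CPU checks its
# interrupt-request line and, if needed, runs the interrupt service routine
# (ISR) before resuming. Devices, data and instructions are all invented.

pending_interrupts = []                    # stands in for the interrupt-request line

def keyboard_isr(data):
    print(f"  [ISR] keyboard interrupt handled: {data!r}")

def timer_isr(data):
    print(f"  [ISR] timer interrupt handled: {data!r}")

isr_table = {"keyboard": keyboard_isr, "timer": timer_isr}   # like an interrupt vector

def raise_interrupt(device, data):
    """Called by a 'device' to signal that it needs the CPU's attention."""
    pending_interrupts.append((device, data))

def run(instructions):
    for instruction in instructions:
        print(f"executing: {instruction}")
        if instruction == "read":          # pretend this I/O operation raises an interrupt
            raise_interrupt("keyboard", "key A pressed")
        while pending_interrupts:          # the check after each instruction
            device, data = pending_interrupts.pop(0)
            isr_table[device](data)        # service the device, then resume the program

run(["load", "read", "add", "store"])
</syntaxhighlight>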
 
|-
| polling || Polling, or polled operation, in computer science, refers to actively sampling the status of an external device by a client program as a synchronous activity.<ref>https://en.wikipedia.org/wiki/Polling_(computer_science)</ref>
 
* A polling cycle is the time in which each element is monitored once. The optimal polling cycle will vary according to several factors, including the desired speed of response and the overhead (e.g., processor time and bandwidth) of the polling.
 
* In roll call polling, the polling device or process queries each element on a list in a fixed sequence. Because it waits for a response from each element, a timing mechanism is necessary to prevent lock-ups caused by non-responding elements. Roll call polling can be inefficient if the overhead for the polling messages is high, there are numerous elements to be polled in each polling cycle and only a few elements are active.
 
* In hub polling, also referred to as token polling, each element polls the next element in some fixed sequence. This continues until the first element is reached, at which time the polling cycle starts all over again.<ref>https://en.wikipedia.org/wiki/Polling_(computer_science)</ref>
 
Key Differences Between Interrupt and Polling in OS:<ref>https://techdifferences.com/difference-between-interrupt-and-polling-in-os.html</ref>
# With an interrupt, the device notifies the CPU that it needs servicing, whereas with polling the CPU repeatedly checks whether a device needs servicing.
# An interrupt is a hardware mechanism: the CPU has an interrupt-request line which signals that an interrupt has occurred. Polling, on the other hand, is a protocol that keeps checking control bits to find out whether a device has something for the CPU to do.
# With interrupts, an interrupt handler services the interrupts generated by devices. With polling, the CPU itself services a device when it finds that the device requires attention.
# Interrupts are signalled by the interrupt-request line, whereas in polling the command-ready bit indicates that the device needs servicing.
# With interrupts, the CPU is only disturbed when a device actually interrupts it. With polling, the CPU wastes many cycles repeatedly checking the command-ready bit of every device.
# An interrupt can occur at any instant of time, whereas the CPU polls devices at regular intervals.
# Polling becomes inefficient when the CPU keeps polling devices and rarely finds one ready for servicing. Interrupts become inefficient when devices interrupt the CPU too frequently.
 
There [https://www.youtube.com/watch?v=LNPBr3WvuNg is a video here which may help you better understand polling].  You do not need to understand this for your IB test, but this material will help you have a deeper understanding of this content.
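
For contrast with the interrupt sketch above, here is a small Python illustration of polling: the CPU checks every device's status bit each cycle whether or not anything needs service, which is exactly where the wasted CPU cycles come from. The device names and the ready pattern are invented example values.

<syntaxhighlight lang="python">
# Toy polling loop: the CPU checks every device's status bit each cycle,
# whether or not any device needs service. Device names are invented.

devices = {"keyboard": False, "disk": False, "network": False}

def poll_once():
    """One polling cycle: check each device's 'command-ready' bit in a fixed order."""
    for name in devices:
        if devices[name]:
            print(f"  servicing {name}")
            devices[name] = False          # clear the bit once the device is serviced
        else:
            print(f"  {name}: nothing to do (a wasted check)")

for cycle in range(3):
    print(f"polling cycle {cycle}")
    if cycle == 1:
        devices["disk"] = True             # the disk sets its command-ready bit
    poll_once()
</syntaxhighlight>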
|}



== Standards ==

* Outline OS resource management techniques: scheduling, policies, multitasking, virtual memory, paging, interrupt, polling.

== References ==

<references />