Embedded Talks Ep 5: Multi-Level Feedback Queue Scheduling

In Episode 4, we discussed various scheduling algorithms and their trade-offs. However, a fundamental problem with these algorithms is that the operating system (OS) doesn’t know how long a job will take to execute. While round-robin scheduling improves system responsiveness, its turnaround time is often suboptimal.

This brings us to the core question: How can the OS learn and adapt as it runs?

The Multi-Level Feedback Queue (MLFQ) scheduling algorithm addresses this challenge by optimizing two key metrics:

  • Turnaround time: Minimizing the total time taken to complete a job.
  • Response time: Ensuring the system remains responsive to user interactions.

MLFQ achieves this by using multiple queues with different priority levels and applying a set of rules to dynamically adjust job priorities. Let’s dive deeper into how it works.


How MLFQ Works

MLFQ organizes jobs into multiple queues, each with a different priority level. Jobs in higher-priority queues are scheduled to run before those in lower-priority queues. The algorithm follows two basic rules:

  1. Rule 1: If Priority(A) > Priority(B), Job A runs, and Job B does not.
  2. Rule 2: If Priority(A) = Priority(B), Jobs A and B run in a round-robin fashion.

However, these rules alone can lead to a significant issue: low-priority jobs may never get CPU time, causing starvation. To address this, MLFQ introduces additional rules to dynamically adjust job priorities.
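Before adding those extra rules, it helps to see Rules 1 and 2 in code. The following Python sketch is a toy model (queue layout and job names are illustrative, not from any real kernel): it scans the queues from highest priority down, and rotates within a queue to get round-robin behavior.

```python
from collections import deque

# Toy model: queues[0] is the highest-priority queue. Jobs are plain
# strings here; a real scheduler would store task control blocks.
def pick_next(queues):
    """Rule 1: scan from the highest-priority queue down.
    Rule 2: within a queue, rotate (round-robin) among its jobs."""
    for q in queues:
        if q:
            job = q.popleft()   # take the job at the head...
            q.append(job)       # ...and requeue it at the tail (round-robin)
            return job
    return None  # all queues empty: the CPU idles

queues = [deque(["A", "B"]), deque(["C"])]
print(pick_next(queues))  # prints "A"; A and B then alternate, C waits
```

Note that as long as the top queue is non-empty, "C" in the lower queue is never returned; this is precisely the starvation problem described next.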


How MLFQ Adjusts Priorities

To ensure fairness and adaptability, MLFQ uses the following rules to change job priorities:

  1. Rule 3: When a job enters the system, it is placed in the highest-priority queue.
  2. Rule 4: If a job uses up its entire time slice (quantum) without yielding the CPU, it is moved down to a lower-priority queue.
  3. Rule 5: If a job yields the CPU before its time slice ends (e.g., by performing an I/O operation), it remains in the same priority queue.

These rules allow MLFQ to learn the behavior of jobs over time. CPU-bound jobs that consume their entire time slice are gradually demoted to lower-priority queues, while interactive jobs (which frequently yield the CPU) remain in higher-priority queues, ensuring better responsiveness.
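Rules 3–5 can be sketched as three small event handlers. This is a toy Python model under my own conventions (not from the article): levels are numbered from 0 (highest priority) downward, and `NUM_QUEUES` is an arbitrary illustrative constant.

```python
NUM_QUEUES = 3  # illustrative; real systems tune this (see trade-offs below)

def on_arrival(job):
    """Rule 3: a new job starts in the highest-priority queue (level 0)."""
    job["level"] = 0

def on_quantum_expired(job):
    """Rule 4: a job that burned its whole time slice drops one level,
    bottoming out at the lowest-priority queue."""
    job["level"] = min(job["level"] + 1, NUM_QUEUES - 1)

def on_yield(job):
    """Rule 5: a job that gave up the CPU early (e.g. blocked on I/O)
    keeps its current priority level."""
    pass  # priority unchanged

job = {"name": "A"}
on_arrival(job)           # level 0
on_quantum_expired(job)   # level 1
on_quantum_expired(job)   # level 2 (bottom queue)
on_quantum_expired(job)   # stays at level 2
print(job["level"])       # prints 2
```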


Example of MLFQ in Action

Consider the following set of jobs:

  • Job A: A long-running CPU-bound task.
  • Job B: An interactive task with short CPU bursts.
  • Job C: A background task with low priority.

  1. Initial State: All jobs enter the highest-priority queue.
  2. Job B (interactive) frequently yields the CPU, so it remains in the high-priority queue.
  3. Job A (CPU-bound) uses its entire time slice and is gradually demoted to lower-priority queues.
  4. Job C (low-priority) eventually gets CPU time when higher-priority queues are empty.

This approach ensures that interactive jobs like Job B receive quick responses, while CPU-bound jobs like Job A still make progress without starving low-priority jobs like Job C.
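The A-versus-B part of this scenario can be reproduced with a minimal quantum-by-quantum simulation. The assumptions here are mine, not the article's: Job A always burns its full quantum, Job B always yields early, and level 0 is the highest priority. Notice how A stops running entirely once demoted, which is exactly the starvation problem the following sections address.

```python
def run_quantum(levels, used_full, order):
    """Run one quantum: pick a job from the best (lowest-numbered) level,
    break ties round-robin, and demote it if it used its whole slice."""
    best = min(levels.values())
    candidates = [j for j in order if levels[j] == best]
    job = candidates[0]
    order.remove(job)
    order.append(job)            # rotate for round-robin among ties
    if used_full[job]:           # Rule 4: demote CPU hogs
        levels[job] += 1
    return job

levels = {"A": 0, "B": 0}
used_full = {"A": True, "B": False}   # A is CPU-bound, B is interactive
order = ["A", "B"]
trace = [run_quantum(levels, used_full, order) for _ in range(6)]
print(trace)  # prints ['A', 'B', 'B', 'B', 'B', 'B']
```

After its first quantum, A sits at level 1 while B stays at level 0, so B monopolizes the CPU from then on.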


Challenges with MLFQ

While MLFQ is a powerful scheduling algorithm, it has two major limitations:

  1. Starvation of Long-Running Jobs: If the system has a steady supply of interactive or short-running jobs keeping the high-priority queues busy, long-running jobs can languish in the lower-priority queues and never receive CPU time.
  2. Gaming the Scheduler: A malicious process could artificially yield the CPU before its time slice ends, tricking the scheduler into keeping it in a high-priority queue indefinitely.


Solving Starvation: Priority Boosting

To address starvation, MLFQ periodically boosts the priority of all jobs in the system. This is achieved through Rule 6:

  • Rule 6: After a fixed time period S, move all jobs to the highest-priority queue.

This ensures that:

  1. Long-running jobs are guaranteed CPU time periodically.
  2. If a CPU-bound job becomes interactive, it is treated fairly after the priority boost.
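A minimal sketch of Rule 6, assuming a tick counter and an illustrative boost period S of 100 ticks (levels are again numbered with 0 as the highest priority):

```python
BOOST_PERIOD_S = 100  # illustrative value of S, in ticks

def maybe_boost(levels, now, last_boost):
    """Rule 6: every S ticks, move every job back to the top queue.
    Returns the timestamp of the most recent boost."""
    if now - last_boost >= BOOST_PERIOD_S:
        for job in levels:
            levels[job] = 0     # everyone back to the highest priority
        return now              # boost happened: restart the timer
    return last_boost           # not yet time: nothing changes

levels = {"A": 2, "B": 0, "C": 3}
last = maybe_boost(levels, now=100, last_boost=0)
print(levels)  # prints {'A': 0, 'B': 0, 'C': 0}
```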


Preventing Gaming of the Scheduler

To mitigate the risk of gaming the scheduler, modern implementations of MLFQ refine the basic rules:

  • Better Accounting: Instead of resetting a job’s allowance each time it yields, the scheduler tracks the total CPU time the job consumes at its current priority level. Once that allotment is exhausted, the job is demoted regardless of how often it yielded, so sleeping just before the quantum expires no longer pays off.
  • Aging: Gradually increasing the priority of jobs that have been waiting for a long time, which complements the periodic priority boost in keeping the queues fair.
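Aging can be sketched as a periodic pass over the job set. The threshold value and job layout here are illustrative assumptions, with level 0 again being the highest priority:

```python
AGING_THRESHOLD = 50  # illustrative: promote a job after this many ticks waiting

def age_jobs(jobs, now):
    """Aging: promote (move up one level) any job that has waited too long.
    Convention as before: level 0 is the highest priority."""
    for job in jobs:
        if now - job["last_ran"] >= AGING_THRESHOLD and job["level"] > 0:
            job["level"] -= 1
            job["last_ran"] = now  # restart the wait clock after a promotion

jobs = [{"name": "A", "level": 2, "last_ran": 0}]
age_jobs(jobs, now=50)
print(jobs[0]["level"])  # prints 1
```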


Trade-offs and Tuning

MLFQ introduces several parameters that need to be carefully tuned for optimal performance:

  • Number of Queues: Too many queues can increase overhead, while too few can reduce flexibility.
  • Time Slice Length: Shorter time slices improve responsiveness but increase context-switching overhead.
  • Priority Boost Interval (S): A shorter interval reduces starvation but may increase the overhead of priority adjustments.


Conclusion

The Multi-Level Feedback Queue (MLFQ) scheduling algorithm is a sophisticated approach to balancing turnaround time and response time in operating systems. By dynamically adjusting job priorities and periodically boosting low-priority jobs, MLFQ ensures that both interactive and CPU-bound tasks are handled efficiently. However, it requires careful tuning to avoid issues like starvation and gaming of the scheduler.

MLFQ is widely used in modern operating systems, demonstrating its effectiveness in real-world scenarios. By understanding its principles and trade-offs, developers and system designers can make informed decisions when implementing or optimizing scheduling algorithms.
