CPU Scheduling & Concurrency in Operating Systems | Best Pdf for 2025 – 2026 Exam

CPU Scheduling: The process of deciding which process in the ready queue should be allocated the CPU. The goals are to maximize CPU utilization and throughput while minimizing turnaround time, waiting time, and response time, and to ensure fairness among processes.

Concurrency: The ability of an operating system to execute multiple processes seemingly at the same time by managing shared resources safely and efficiently, improving resource utilization and responsiveness, enabling parallelism (on multi-core systems), and promoting modularity.

1. Scheduling Concepts

CPU Scheduling:

  • Determines which process will be assigned to the CPU when it becomes available.

Need for Scheduling:

  • Efficient CPU utilization.
  • Minimizing response & turnaround time.
  • Fairness among processes.

2. Performance Criteria (Scheduling Metrics)

Criteria        | Description
----------------|------------
CPU Utilization | % of time the CPU is busy executing processes (high utilization is desired).
Throughput      | Number of processes completed per unit time.
Turnaround Time | Total time from process submission to completion (Finish Time – Arrival Time).
Waiting Time    | Total time a process spends in the ready queue (Turnaround Time – CPU Burst Time).
Response Time   | Time from request submission to the first response.
Fairness        | Ensures no process is starved of CPU time.

3. Process Concept

Process:

  • A program in execution.

Process States:

  • New → Ready → Running → Waiting → Terminated.
  • New
    • The process is being created.
  • Ready
    • The process is ready to run but waiting for CPU time.
  • Running
    • The process is currently executing on the CPU.
  • Waiting (Blocked)
    • The process is waiting for some event or resource (like I/O).
  • Terminated (Exit)
    • The process has finished execution and is being removed from the system.

4. Process Transition Diagram

            +-------+     +-------+     +---------+     +------------+
            |  New  |---->| Ready |---->| Running |---->| Terminated |
            +-------+     +-------+     +---------+     +------------+
                              ^              |
                              |              v
                              |         +---------+
                              +---------| Waiting |
                                        +---------+

(Running → Waiting on an I/O request; Waiting → Ready when the event completes.)

5. Schedulers (in Operating Systems)

A scheduler is a part of the operating system that manages process execution by selecting processes from a queue and allocating CPU time or resources.

Type                  | Function
----------------------|---------
Long-term Scheduler   | Controls admission of processes to the system (controls the degree of multiprogramming).
Short-term Scheduler  | Selects which ready process runs next; the dispatcher then hands it the CPU.
Medium-term Scheduler | Suspends and resumes processes (swapping).

6. Process Control Block (PCB)

Definition:

A Process Control Block (PCB) is a data structure used by the operating system to store all information about a process.

Contents of PCB:

  1. Process ID (PID): Unique identification number for the process.
  2. Process State: Current state (Ready, Running, Waiting, etc.).
  3. Program Counter: Address of the next instruction to execute.
  4. CPU Registers: Values of CPU registers (used during context switching).
  5. Memory Management Info: Details of memory assigned to the process (page tables, segment tables).
  6. Accounting Information: CPU usage, execution time, job priority.
  7. I/O Status Information: List of I/O devices allocated to the process, files opened, etc.
  8. Scheduling Information: Process priority, scheduling queue pointers.

Diagram of PCB:

+------------------------+
| Process ID (PID)        |
+------------------------+
| Process State           |
+------------------------+
| Program Counter         |
+------------------------+
| CPU Registers           |
+------------------------+
| Memory Management Info  |
+------------------------+
| Accounting Information  |
+------------------------+
| I/O Status Information  |
+------------------------+
| Scheduling Information  |
+------------------------+

7. Threads and Their Management

A thread is the smallest unit of execution within a process. It is also called a lightweight process because multiple threads can exist within a single process, sharing resources but running independently.

Benefits of Threads:

  • Faster to create and context-switch than full processes.
  • Efficient resource sharing (threads share code, data, and open files).
  • Better CPU utilization.
  • Enables multitasking within a process.

Types of Threads:

  1. User-Level Threads (ULT)
    • Managed by the user program, OS is unaware.
  2. Kernel-Level Threads (KLT)
    • Managed directly by the operating system.

Multithreading:

The ability of a CPU (or a single process) to execute multiple threads simultaneously.


Thread Management Activities:

  1. Thread Creation
    • Initiating a new thread within a process.
  2. Thread Scheduling
    • Deciding which thread runs next.
  3. Thread Synchronization
    • Coordinating threads to prevent conflicts (e.g., using locks, semaphores).
  4. Thread Termination
    • Ending a thread’s execution properly.

Example:

  • In a web browser, one thread loads images while another thread downloads data.

Diagram:

Process
 ├── Thread 1
 ├── Thread 2
 └── Thread 3
(All share same memory & resources)

8. CPU Scheduling Algorithms

“CPU Scheduling Algorithms decide the order in which processes are executed by the CPU.”


Types of CPU Scheduling Algorithms:

1. First-Come, First-Served (FCFS)

  • Processes are executed in the order they arrive.
  • Non-preemptive.
  • Simple but can lead to long waiting times.

2. Shortest Job Next (SJN) / Shortest Job First (SJF)

  • Process with the shortest CPU burst time is executed first.
  • Can be preemptive or non-preemptive (the preemptive variant is called Shortest Remaining Time First).
  • Minimizes average waiting time.

3. Priority Scheduling

  • Processes are executed based on priority values.
  • Higher priority = gets CPU first.
  • Can be preemptive or non-preemptive.

4. Round Robin (RR)

  • Each process gets a fixed time slot (time quantum) to execute.
  • Preemptive.
  • Suitable for time-sharing systems.

5. Multilevel Queue Scheduling

  • Processes are divided into different queues based on priority/type.
  • Each queue has its own scheduling algorithm.

6. Multilevel Feedback Queue Scheduling

  • Processes can move between queues based on their behavior and CPU needs.
  • Flexible and dynamic.

Comparison Table:

Algorithm           | Type           | Key Feature
--------------------|----------------|------------
FCFS                | Non-preemptive | Simple; based on arrival order
SJF/SJN             | Both           | Executes shortest job first
Priority Scheduling | Both           | Executes highest-priority process first
Round Robin         | Preemptive     | Fixed time slice for each process
Multilevel Queue    | Both           | Multiple queues with different policies
Feedback Queue      | Preemptive     | Processes can move between queues

9. Concurrent Processes

Concurrent processes are processes that are executing at the same time or overlapping in execution but are not necessarily running simultaneously on different CPUs. They share the CPU and are interleaved (processes are executed in overlapping time periods).

Key Characteristics of Concurrent Processes:

  1. Independence: Processes may run independently of one another; they need to communicate or synchronize only when they share data.
  2. Interleaving: CPU time is shared among processes, but their execution is not necessarily synchronized.
  3. Shared Resources: Concurrent processes can share resources like memory, files, or peripherals.

Types of Concurrent Execution:

  1. Parallel Execution: Processes run at the same time on multiple CPUs or cores (true simultaneous execution).
  2. Interleaved Execution: Processes share the CPU by switching back and forth so quickly that it appears as if they are running at the same time.

Examples of Concurrent Processes:

  • Web Server: Handles multiple client requests simultaneously. Each request is a concurrent process that can execute independently.
  • Multitasking OS: Running several applications (e.g., browser, email, music player) at the same time.

Issues in Concurrent Processes:

  1. Synchronization: Ensuring that processes work together correctly when accessing shared resources.
  2. Deadlock: A situation where two or more processes are waiting for each other to release resources, causing all of them to be stuck.
  3. Race Condition: When two or more processes try to modify shared data at the same time, leading to unpredictable results.

How to Manage Concurrent Processes:

  1. Locks: Mechanisms like semaphores or mutexes to ensure that only one process can access shared resources at a time.
  2. Semaphores: A signaling mechanism to manage access to shared resources between processes.
  3. Monitors: High-level synchronization tools that control access to shared resources.

10. Mutual Exclusion & Critical Section Problem

Critical Section:

The critical section is a part of a process where it accesses shared resources (like variables, memory, or devices). Only one process can be inside the critical section at a time to ensure data integrity.


The Critical Section Problem:

The Critical Section Problem arises when multiple processes need to access shared resources. The challenge is to design a solution that:

  1. Prevents race conditions (when multiple processes try to modify shared resources simultaneously).
  2. Ensures mutual exclusion (only one process accesses the critical section at a time).
  3. Guarantees progress (if no process is in the critical section, any process can enter).
  4. Prevents deadlock (no process should wait indefinitely to enter the critical section).

Conditions for a Solution to the Critical Section Problem:

  1. Mutual Exclusion: Only one process can be in the critical section at a time.
  2. Progress: If no process is in the critical section and some processes want to enter, the decision about which enters next cannot be postponed indefinitely.
  3. Bounded Waiting: A process should not have to wait indefinitely to enter the critical section.

Solutions to the Critical Section Problem:

  1. Locks: A lock is a mechanism to ensure mutual exclusion. A process must acquire a lock before entering its critical section and release it when done.
    • Example: Mutex (Mutual Exclusion).
  2. Semaphores: A synchronization tool that uses counters to manage access to shared resources.
    • Binary Semaphore: Acts as a lock (1 or 0).
    • Counting Semaphore: Allows counting of multiple resources.
  3. Monitors: A higher-level synchronization tool that automatically manages mutual exclusion. Only one process can execute a monitor function at a time.
  4. Peterson’s Algorithm (for two processes): A classic solution to the two-process critical section problem that uses two shared variables to enforce mutual exclusion.

Example:

Imagine two processes trying to access a printer. If both processes print at the same time, their outputs will mix. Mutual exclusion ensures that only one process prints at a time, avoiding the problem.

11. Dekker’s Solution (Software Solution to the Critical Section Problem)

Dekker’s Algorithm is a software solution to the Critical Section Problem that ensures mutual exclusion for two processes. It is attributed to the Dutch mathematician Th. J. Dekker (and was popularized by Dijkstra); it avoids race conditions by ensuring that the two processes never enter their critical sections simultaneously.

Key Features of Dekker’s Solution:

  1. Mutual Exclusion: Ensures that only one process can be in the critical section at a time.
  2. Progress: If no process is in the critical section, the algorithm ensures that one of the waiting processes will be allowed in.
  3. Bounded Waiting: No process will wait indefinitely to enter the critical section.

Dekker’s Algorithm for Two Processes:

Dekker’s solution uses two flags and a turn variable to control access to the critical section. Each process has a flag indicating whether it wants to enter the critical section. The turn variable indicates which process has the right to enter if both processes want to enter at the same time.


Variables Used:

  • flag[0], flag[1]: Flags for Process 0 and Process 1. These indicate whether a process wants to enter the critical section.
    • flag[0] = true means Process 0 wants to enter.
    • flag[1] = true means Process 1 wants to enter.
  • turn: A variable used to give priority to one process when both processes want to enter the critical section at the same time.
    • turn = 0 means Process 0 gets priority.
    • turn = 1 means Process 1 gets priority.

Steps in Dekker’s Algorithm:

  1. Want to Enter Section:
    • Process sets its own flag to true indicating it wants to enter the critical section.
  2. Wait for the Other Process:
    • Each process checks if the other process wants to enter the critical section (flag[1] for Process 0, flag[0] for Process 1).
    • If the other process wants to enter, the process must wait until the other process is not interested or the turn is in its favor.
  3. Enter Critical Section:
    • Once the other process either does not want to enter or the turn variable allows it, the process enters the critical section.
  4. Exit Critical Section:
    • After completing the critical section, the process sets its flag to false, allowing the other process to enter.
Dekker’s Algorithm (Code Sketch for Two Processes):

// Shared Variables
int flag[2] = {0, 0};  // flag[i] = 1 means process i wants to enter
int turn = 0;  // Turn variable (0 or 1)

// Process 0
while (true) {
    flag[0] = 1;  // Indicate Process 0 wants to enter critical section
    while (flag[1] == 1) {  // If Process 1 also wants to enter
        if (turn == 1) {  // If it's Process 1's turn, wait
            flag[0] = 0;  // Stop trying to enter
            while (turn == 1);  // Wait for Process 1 to finish
            flag[0] = 1;  // Start trying again
        }
    }
    // Critical Section Code here...
    turn = 1;  // Give turn to Process 1
    flag[0] = 0;  // Exit Critical Section
}

// Process 1 runs the mirror image: swap flag[0]/flag[1], test turn == 0
// instead of turn == 1, and set turn = 0 on exit.

12. Peterson’s Solution (Software Solution)

Peterson’s Algorithm is a software solution for the Critical Section Problem designed for two processes. It ensures mutual exclusion, progress, and bounded waiting using two shared variables. It was published by Gary L. Peterson in 1981.

Key Features of Peterson’s Algorithm:

  1. Mutual Exclusion: Ensures only one process enters the critical section at a time.
  2. Progress: If no process is in the critical section, the algorithm guarantees that one process will enter the critical section.
  3. Bounded Waiting: Prevents any process from waiting indefinitely to enter the critical section.

Variables Used in Peterson’s Algorithm:

  1. flag[2]: A shared array of two boolean values (flag[0] for Process 0 and flag[1] for Process 1).
    • flag[i] = true means Process i wants to enter the critical section.
  2. turn: A shared integer variable that determines whose turn it is to enter the critical section.
    • turn = 0 means it is Process 0’s turn to enter.
    • turn = 1 means it is Process 1’s turn to enter.

Steps in Peterson’s Algorithm:

  1. Want to Enter the Critical Section:
    • Each process sets its own flag to true, indicating it wants to enter the critical section.
  2. Wait for the Other Process:
    • Each process enters a waiting loop where it checks if the other process wants to enter the critical section. If the other process also wants to enter, the process checks whose turn it is.
    • If it’s not the process’s turn, it waits for the other process to finish.
  3. Enter the Critical Section:
    • If the other process does not want to enter, or it’s the process’s turn, it enters the critical section.
  4. Exit the Critical Section:
    • After completing the critical section, the process sets its flag to false, allowing the other process to enter.

Peterson’s Algorithm (Code Example for Two Processes):

// Shared variables
bool flag[2] = {false, false};  // flag[i] = true means process i wants to enter
int turn = 0;                   // turn variable

// Process 0
while (true) {
    flag[0] = true;             // Indicate Process 0 wants to enter critical section
    turn = 1;                   // Give turn to Process 1
    while (flag[1] == true && turn == 1);  // Wait if Process 1 is also trying
    // Critical Section Code here...
    flag[0] = false;            // Exit Critical Section
}

// Process 1
while (true) {
    flag[1] = true;             // Indicate Process 1 wants to enter critical section
    turn = 0;                   // Give turn to Process 0
    while (flag[0] == true && turn == 0);  // Wait if Process 0 is also trying
    // Critical Section Code here...
    flag[1] = false;            // Exit Critical Section
}

Explanation of the Algorithm:

  1. Process 0:
    • flag[0] = true: Process 0 wants to enter the critical section.
    • turn = 1: Process 0 gives priority to Process 1.
    • while (flag[1] && turn == 1): If Process 1 also wants to enter, Process 0 waits. It waits until either Process 1 doesn’t want to enter or it’s Process 0’s turn.
  2. Process 1:
    • flag[1] = true: Process 1 wants to enter the critical section.
    • turn = 0: Process 1 gives priority to Process 0.
    • while (flag[0] && turn == 0): If Process 0 also wants to enter, Process 1 waits until either Process 0 doesn’t want to enter or it’s Process 1’s turn.
  3. Critical Section:
    • After the waiting loop, the process enters the critical section.
  4. Exit Critical Section:
    • Once the critical section is done, the process sets its flag to false, allowing the other process to enter.

Diagram of Peterson’s Algorithm:

Process 0: |--- Wants to Enter ---|--- Critical Section ---|--- Exit Critical Section ---|
Process 1:                |--- Wants to Enter ---|--- Critical Section ---|--- Exit Critical Section ---|

Advantages of Peterson’s Algorithm:

  1. Mutual Exclusion: Ensures only one process enters the critical section at a time.
  2. Fairness: Prevents starvation and ensures both processes get access to the critical section in a fair manner.
  3. Simple and Elegant: The algorithm uses only two shared variables (flag[] and turn), making it easy to implement.

Disadvantages of Peterson’s Algorithm:

  1. Busy Waiting: Both processes continuously check flags and turn, wasting CPU cycles.
  2. Limited to Two Processes: The algorithm is specifically designed for two processes. For more than two processes, it needs to be extended or a different solution must be used.
  3. Overhead: Though it’s simple, using busy waiting can be inefficient in systems where processes are long-running or where there are many processes competing for the critical section.

Use Cases of Peterson’s Algorithm:

  • Two-Process Mutual Exclusion: Ideal for situations where only two processes need to synchronize and share a critical section.
  • Educational Tool: Peterson’s algorithm is widely used in textbooks and courses to demonstrate basic synchronization techniques.
