Unit-2 Process- Operating System | BCA 4th Sem
Unit-2
Process
Meaning of Process
- The word “process” can have different meanings depending on the context. In everyday use it refers to a series of actions or steps taken to achieve a particular goal or result. In an operating system, a process is a program in execution, together with its current activity, such as its program counter, register contents, and allocated memory.
Process Life Cycle
- The process life cycle refers to the various stages that a process goes through, from its inception to completion. It is often used in project management or software development to ensure that processes are executed efficiently and effectively.
Read more: https://pencilchampions.com/unit-1-introduction-of-operating-system-bca/
The process life cycle typically consists of four main stages: initiation, planning, execution, and closure. Let’s dive into each stage:
- Initiation: This is the starting point of the process life cycle. It involves identifying the need for a process and defining its objectives. During this stage, the goals, scope, and stakeholders of the process are identified. It’s important to have a clear understanding of what needs to be achieved and why.
- Planning: Once the process is initiated, the planning stage begins. This involves creating a detailed roadmap for how the process will be executed. Key activities in this stage include defining the tasks, allocating resources, setting timelines, and identifying potential risks. Effective planning ensures that everyone involved has a clear understanding of their roles and responsibilities.
- Execution: With the plan in place, the process moves into the execution stage. This is where the actual work takes place to achieve the desired outcome. Tasks are carried out, resources are utilized, and progress is monitored. Communication and coordination among team members are crucial during this stage to ensure smooth execution and address any issues that may arise.
- Closure: The closure stage marks the end of the process life cycle. It involves evaluating the outcomes against the initial objectives and determining whether they have been met. Lessons learned from the process are captured and documented for future reference. This stage also includes celebrating successes and acknowledging the efforts of those involved.
- Throughout the process life cycle, it’s important to have a feedback loop in place. This allows for continuous improvement and adjustment as needed. Feedback can come from stakeholders, team members, or even external sources. By incorporating feedback, processes can be refined and optimized for better results in the future.
- Remember, the process life cycle is a dynamic and iterative process. It may involve multiple iterations, especially in complex projects or situations where changes are likely to occur. Flexibility and adaptability are key to successfully navigating the process life cycle.
Wikipedia: https://en.wikipedia.org/wiki/Process
Process Control Block
- The Process Control Block (PCB) is an important concept in operating systems. It is a data structure that contains information about a specific process running on a computer. The PCB holds details such as the process’s current state, program counter, register values, memory allocation, and other relevant information. The operating system uses the PCB to manage and control the execution of processes, allowing it to switch between different processes efficiently. The PCB plays a crucial role in multitasking and process scheduling. It helps the operating system keep track of all the processes and ensure that they are executed properly.
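- To make this concrete, here is a minimal sketch of a PCB modeled as a C struct. The field names and sizes are illustrative assumptions, not a real kernel’s layout; real operating systems store many more fields.

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative process states */
typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state_t;

/* A simplified Process Control Block; real kernels store many more fields. */
typedef struct pcb {
    int          pid;             /* unique process identifier */
    proc_state_t state;           /* current state of the process */
    uint64_t     program_counter; /* address of the next instruction */
    uint64_t     registers[16];   /* saved general-purpose register values */
    void        *memory_base;     /* base of the process's address space */
    size_t       memory_limit;    /* size of allocated memory */
    int          priority;        /* scheduling priority */
    struct pcb  *next;            /* link for scheduling queues */
} pcb_t;
```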
Meaning of process scheduling
- Process scheduling is a vital aspect of operating systems that involves managing and organizing the execution of multiple processes on a computer system. It determines the order in which processes are executed, allocates system resources, and ensures efficient utilization of the CPU (Central Processing Unit).
- In a multitasking operating system, there are typically more processes ready for execution than there are available CPUs. Process scheduling algorithms help the operating system decide which process should run next and for how long. Let’s explore the concept of process scheduling in more detail:
- Objectives of Process Scheduling:
The primary objectives of process scheduling are:
- Maximizing CPU utilization: The goal is to keep the CPU busy as much as possible to ensure efficient resource utilization.
- Fairness: Providing fair access to CPU time for all processes, preventing starvation or excessive dominance by certain processes.
- Responsiveness: Ensuring that interactive processes receive quick responses to user input.
- Throughput: Maximizing the number of processes completed per unit of time.
- Turnaround time: Minimizing the time it takes for a process to complete its execution.
- Waiting time: Minimizing the time processes spend waiting in the ready queue (a worked example follows this list).
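- As a quick worked example of these last two metrics, with assumed numbers: if a process arrives at time 0, needs a CPU burst of 6 units, and completes at time 10, its turnaround time is completion time minus arrival time, 10 - 0 = 10, and its waiting time is turnaround time minus burst time, 10 - 6 = 4.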
- Scheduling Policies:
- There are different scheduling policies or algorithms that operating systems use to determine the order of process execution. Some common scheduling algorithms include:
- First-Come, First-Served (FCFS): Processes are executed in the order they arrive (a simulation sketch follows this list).
- Shortest Job Next (SJN): The process with the shortest burst time is executed next.
- Round Robin (RR): Each process is given a fixed time slice (quantum) of CPU time, and processes are executed in a circular manner.
- Priority Scheduling: Processes are assigned priorities, and the highest priority process is executed first.
- Multilevel Queue Scheduling: Processes are divided into multiple priority queues, each with its own scheduling algorithm.
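- Here is a minimal sketch of FCFS in C. It assumes three processes that all arrive at time 0, with example burst times, and prints the waiting and turnaround times defined under the objectives above. It is a teaching simulation, not a real scheduler.

```c
#include <stdio.h>

/* Minimal FCFS simulation: processes are served strictly in arrival order.
   Burst times below are assumed example values. All arrive at time 0. */
int main(void) {
    int burst[] = {6, 8, 3};              /* CPU burst of each process */
    int n = sizeof burst / sizeof burst[0];
    int wait = 0, total_wait = 0, total_turnaround = 0;

    for (int i = 0; i < n; i++) {
        int turnaround = wait + burst[i]; /* waiting plus own burst */
        printf("P%d: waiting=%d turnaround=%d\n", i + 1, wait, turnaround);
        total_wait += wait;
        total_turnaround += turnaround;
        wait += burst[i];                 /* the next process waits this long */
    }
    printf("avg waiting=%.2f avg turnaround=%.2f\n",
           (double)total_wait / n, (double)total_turnaround / n);
    return 0;
}
```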
- Context Switching:
- Context switching is an essential part of process scheduling. When the operating system switches from executing one process to another, it needs to save the current process’s state and load the state of the next process. This involves saving and restoring the process’s program counter, register values, and other relevant information. Context switching introduces some overhead, so scheduling algorithms aim to minimize the frequency of context switches.
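- As a rough illustration of the bookkeeping involved, the sketch below saves the outgoing process’s registers and program counter into its PCB and restores the incoming one’s. It reuses the illustrative pcb_t struct sketched earlier; a real kernel performs this step in architecture-specific assembly.

```c
/* Conceptual context switch: save the outgoing process's CPU state into its
   PCB and load the incoming process's saved state. Reuses pcb_t from the
   sketch above; real switches happen in kernel assembly, not plain C. */
void context_switch(pcb_t *out, pcb_t *in,
                    uint64_t *cpu_regs, uint64_t *cpu_pc) {
    for (int i = 0; i < 16; i++)
        out->registers[i] = cpu_regs[i];   /* save register values */
    out->program_counter = *cpu_pc;        /* save program counter */
    out->state = READY;

    for (int i = 0; i < 16; i++)
        cpu_regs[i] = in->registers[i];    /* restore register values */
    *cpu_pc = in->program_counter;         /* restore program counter */
    in->state = RUNNING;
}
```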
Process scheduling queue
- In operating systems, process scheduling queues play a crucial role in managing the execution of processes on a computer system. These queues are used to organize and prioritize processes based on their characteristics and scheduling policies. Let’s explore the concept of process scheduling queues in more detail:
- Ready Queue:
- The ready queue is where all the processes that are ready for execution are placed. When a process is created or becomes ready to run, it is added to the ready queue. The scheduling algorithm determines the order in which processes are selected from the ready queue for execution. Different scheduling policies, such as First-Come, First-Served (FCFS) or Round Robin (RR), determine the order of execution.
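- A ready queue is commonly implemented as a FIFO linked list of PCBs. The sketch below reuses the illustrative pcb_t from earlier and shows the two operations a dispatcher needs; the function names are assumptions for illustration.

```c
/* A FIFO ready queue built from the illustrative pcb_t's `next` link. */
typedef struct { pcb_t *head, *tail; } ready_queue_t;

/* Add a process that has become ready to the back of the queue. */
void rq_enqueue(ready_queue_t *q, pcb_t *p) {
    p->next = NULL;
    if (q->tail) q->tail->next = p;
    else         q->head = p;
    q->tail = p;
}

/* Remove the process at the front; this is what FCFS dispatch does. */
pcb_t *rq_dequeue(ready_queue_t *q) {
    pcb_t *p = q->head;
    if (p) {
        q->head = p->next;
        if (!q->head) q->tail = NULL;
    }
    return p;
}
```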
- Job Queue:
- The job queue, also known as the admission queue, is where all the processes reside when they enter the system. It holds the processes that are waiting to be admitted for execution. The long-term scheduler selects processes from the job queue and moves them to the ready queue based on various factors, such as system load, memory availability, and scheduling policies.
- Device Queue:
- Device queues are used to manage processes that are waiting for input/output (I/O) operations to complete. Each I/O device typically has its own device queue. When a process initiates an I/O operation, it is moved from the ready queue to the device queue associated with the specific I/O device. Once the I/O operation is finished, the process is moved back to the ready queue.
- Priority Queue:
- In priority scheduling algorithms, processes are assigned different priorities. The priority queue holds processes based on their priority levels. The highest priority process is selected for execution first. If multiple processes have the same priority, a secondary scheduling algorithm, such as FCFS or RR, may be used to determine the order of execution within the priority level.
- Multi-Level Queue:
- A multi-level queue is a combination of multiple ready queues, each with its own scheduling algorithm. Processes are assigned to different queues based on their characteristics, such as priority, type, or resource requirements. Each queue may have different scheduling policies, allowing for efficient management of processes with varying priorities or requirements.
- Feedback Queue:
- In feedback scheduling algorithms, processes are assigned to multiple queues based on their behavior and execution history. Each queue has a different priority level, and processes move between queues based on their performance.
Meaning of Scheduler
- Schedulers play a crucial role in managing the execution of processes in an operating system. There are different types of schedulers that handle different aspects of process scheduling. Here are a few common types of schedulers:
- Long-Term Scheduler (Admission Scheduler):
- The long-term scheduler, also known as the admission scheduler, is responsible for selecting processes from the job queue and admitting them to the system. It determines which processes are brought into main memory for execution. The long-term scheduler considers factors such as system load, memory availability, and scheduling policies to make admission decisions.
- Short-Term Scheduler (CPU Scheduler):
- The short-term scheduler, also known as the CPU scheduler, selects processes from the ready queue for execution on the CPU. It determines the order in which processes are allocated CPU time. The short-term scheduler uses scheduling algorithms like First-Come, First-Served (FCFS), Round Robin (RR), Shortest Job Next (SJN), or Priority Scheduling to make scheduling decisions.
- Medium-Term Scheduler:
- The medium-term scheduler, also known as the swapping scheduler, is responsible for managing the movement of processes between main memory and secondary storage (disk). It decides when to swap out a process from memory to free up space and when to bring a process back into memory. The medium-term scheduler helps in managing memory efficiently by swapping out less frequently used or idle processes.
- I/O Scheduler:
- The I/O scheduler manages the order in which I/O requests from processes are serviced. It determines the sequence in which I/O operations are performed to optimize disk or device utilization. Different I/O scheduling algorithms, such as First-Come, First-Served (FCFS), Shortest Seek Time First (SSTF), or Deadline Scheduling, can be used by the I/O scheduler.
Long-term Scheduler
- The long-term scheduler, also known as the admission scheduler, is an important component of an operating system. Its main job is to select processes from the job queue and admit them into the system for execution.
Now, let’s dive a bit deeper into the long-term scheduler and understand its role in managing the execution of processes.
- The long-term scheduler plays a crucial role in maintaining system efficiency by controlling the number of processes in the system. It decides which processes are brought into main memory from the job queue, based on various factors such as system load, memory availability, and scheduling policies.
- One of the primary goals of the long-term scheduler is to ensure that the system is not overloaded with too many processes. By carefully admitting processes into the system, it helps in balancing the workload and preventing resource contention.
- When a new process is submitted to the operating system, it enters the job queue, which acts as a holding area for incoming processes. The long-term scheduler periodically selects processes from the job queue and admits them into main memory for execution. It considers factors such as the current system load, available memory, and scheduling policies to make informed admission decisions.
- The long-term scheduler uses various strategies to select processes from the job queue. One common strategy is to prioritize processes based on their priority levels or other criteria specified by the scheduling policies. This ensures that important or high-priority processes are given preference for admission.
- Once a process is selected for admission, it is loaded into main memory, and its state is set to “Ready.” The process then enters the ready queue, where the short-term scheduler (also known as the CPU scheduler) selects processes for execution on the CPU.
- By controlling the number of processes in main memory, the long-term scheduler helps in managing system resources efficiently. It prevents memory congestion and ensures that there is enough memory available for other critical system processes.
- Another important aspect of the long-term scheduler is its ability to handle process priorities. Different processes may have different priorities based on their importance or urgency. The long-term scheduler takes these priorities into account when selecting processes for admission. This ensures that processes with higher priorities get executed in a timely manner.
- The long-term scheduler also plays a role in optimizing system performance. It considers factors such as the CPU utilization, I/O device availability, and memory usage to make informed decisions about process admission. By admitting processes strategically, it helps in achieving better overall system performance.
Short-term Scheduler
- The short-term scheduler, also known as the CPU scheduler, is an essential component of an operating system. Its main job is to decide which process from the ready queue gets to use the CPU next.
Now, let’s dig a little deeper into the short-term scheduler and understand its role in managing the CPU and executing processes efficiently.
- The short-term scheduler plays a crucial role in determining the execution order of processes in the system. It selects the most suitable process from the ready queue and allocates the CPU to it for a specific time slice, which is known as a time quantum.
- The primary goal of the short-term scheduler is to optimize CPU utilization, minimize response time, and ensure fair allocation of CPU time among processes.
- When a process enters the ready queue after being admitted by the long-term scheduler, the short-term scheduler comes into action. It evaluates various factors to make informed decisions about process selection. These factors include process priority, CPU burst time, and scheduling policies.
- One common scheduling policy used by the short-term scheduler is the round-robin scheduling algorithm. In this algorithm, each process is assigned a fixed time quantum to execute on the CPU. If a process does not complete its execution within its time quantum, it is preempted and moved to the back of the ready queue, and the short-term scheduler selects the next process in the queue to use the CPU. This gives every process a fair chance to run (a minimal simulation sketch appears at the end of this section).
- Another popular scheduling policy used by the short-term scheduler is the priority scheduling algorithm. In this algorithm, each process is assigned a priority value. The process with the highest priority gets the CPU first. This ensures that high-priority processes are executed promptly.
- The short-term scheduler also handles process synchronization and manages the context switching overhead. Context switching refers to the process of saving the current state of a process and loading the state of another process onto the CPU. This allows multiple processes to share the CPU efficiently.
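- Here is the minimal round-robin simulation referenced above, written in C with an assumed quantum of 4 time units and example burst times. Finished processes leave the queue; unfinished ones cycle to the back, as described above.

```c
#include <stdio.h>

/* Minimal round-robin simulation with an assumed quantum of 4 time units.
   Remaining burst times are example values, not real measurements. */
int main(void) {
    int remaining[] = {5, 9, 3};    /* remaining CPU burst per process */
    int n = sizeof remaining / sizeof remaining[0];
    int quantum = 4, time = 0, done = 0;

    while (done < n) {
        for (int i = 0; i < n; i++) {        /* cycle through the "queue" */
            if (remaining[i] <= 0) continue; /* already finished */
            int slice = remaining[i] < quantum ? remaining[i] : quantum;
            time += slice;                   /* run for one time slice */
            remaining[i] -= slice;
            if (remaining[i] == 0) {
                printf("P%d finishes at time %d\n", i + 1, time);
                done++;
            }
        }
    }
    return 0;
}
```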
Medium-term Scheduler
- The medium-term scheduler, also known as the swapping scheduler, is a part of the operating system that manages the movement of processes between main memory and secondary storage.
- The main goal of the medium-term scheduler is to improve the overall system performance by controlling the number of processes in main memory. It helps in maintaining a balance between the number of processes and the available resources.
- When a process is not actively using the CPU, the medium-term scheduler steps in and decides whether to keep the process in main memory or move it to secondary storage, such as the hard disk. This decision is based on factors like process priority, memory requirements, and the current state of the system.
- One of the key functions of the medium-term scheduler is to free up memory space by swapping out less frequently used or inactive processes. This helps in avoiding memory congestion and ensures that only the most essential processes are kept in main memory.
- When a process is swapped out, its entire contents, including its code, data, and stack, are transferred from main memory to secondary storage. This frees up memory space for other processes to be loaded into main memory.
- On the other hand, when a process needs to be brought back into main memory, the medium-term scheduler selects a suitable process from secondary storage and swaps it back in. This process is known as swapping in.
- The medium-term scheduler also plays a vital role in managing memory demands and preventing excessive swapping. It monitors the memory usage and makes decisions to balance the number of processes in main memory. If there is a shortage of memory, it can choose to swap out processes to create more space.
- The medium-term scheduler helps in improving the efficiency of process execution. By swapping out processes that are not actively using the CPU, it ensures that the available CPU time is utilized effectively by keeping only the most relevant processes in main memory.
- The medium-term scheduler helps in optimizing memory usage and maintaining a balanced system state. It ensures that processes are efficiently managed between main memory and secondary storage, leading to better system performance and resource utilization.
Operating system scheduling algorithms
- Scheduling algorithms are an essential part of an operating system as they determine the order in which processes are executed on the CPU. There are several popular scheduling algorithms, each with its own characteristics and goals. Here are a few commonly used ones:
- First-Come, First-Served (FCFS): This algorithm schedules processes in the order they arrive. It’s simple and easy to understand, but it may lead to poor performance if long-running processes block shorter ones.
- Shortest Job Next (SJN): Also known as Shortest Job First (SJF), this algorithm schedules processes based on their burst time, or the amount of time they require to complete. It is provably optimal in terms of minimizing the average waiting time when all jobs are available at once (see the sketch after this list).
- Round Robin (RR): In this algorithm, each process is assigned a fixed time quantum or time slice. Processes are executed in a cyclic manner, and if a process doesn’t finish within its time quantum, it’s moved to the back of the queue. RR ensures fairness and prevents starvation, but it may not be efficient for long-running processes.
- Priority Scheduling: This algorithm assigns a priority value to each process, and the CPU is allocated to the process with the highest priority. It can be either preemptive (allowing higher-priority processes to interrupt lower-priority ones) or non-preemptive (allowing a process to complete its execution before selecting the next one).
- Multilevel Queue Scheduling: This algorithm divides processes into multiple queues based on priority or other criteria. Each queue has its own scheduling algorithm, such as FCFS or RR. Processes move between queues based on predefined rules, allowing for different levels of priority and scheduling policies.
- Multilevel Feedback Queue Scheduling: Similar to multilevel queue scheduling, this algorithm allows processes to move between different queues based on their behavior and resource requirements. However, it also allows processes to move up or down the queues dynamically based on their performance, which helps in adapting to changing workload characteristics.
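- As a sketch of Shortest Job Next, the C program below assumes all processes arrive at time 0 with example burst times, runs them in ascending burst order, and prints the resulting average waiting time. It is a simplified teaching example, not a production scheduler.

```c
#include <stdio.h>

/* Minimal non-preemptive Shortest Job Next: with all processes assumed to
   arrive at time 0, running them in ascending burst order minimizes the
   average waiting time. Burst values are illustrative. */
int main(void) {
    int burst[] = {8, 3, 6};
    int n = sizeof burst / sizeof burst[0];

    /* Sort bursts ascending; a simple insertion sort stands in for
       picking the shortest remaining job each time. */
    for (int i = 1; i < n; i++) {
        int key = burst[i], j = i - 1;
        while (j >= 0 && burst[j] > key) { burst[j + 1] = burst[j]; j--; }
        burst[j + 1] = key;
    }

    int wait = 0, total_wait = 0;
    for (int i = 0; i < n; i++) {
        total_wait += wait;  /* shorter jobs run first, so waits stay small */
        wait += burst[i];
    }
    printf("average waiting time = %.2f\n", (double)total_wait / n);
    return 0;
}
```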
Introduction of process synchronization
- Process synchronization is an important concept in operating systems that ensures proper coordination and communication between multiple processes. It helps prevent conflicts and race conditions that can arise when multiple processes access shared resources simultaneously.
- The main goal of process synchronization is to maintain the integrity and consistency of shared data. It involves implementing mechanisms that allow processes to cooperate and coordinate their actions, ensuring that they don’t interfere with each other or access shared resources in an inconsistent or incorrect manner.
There are various techniques for process synchronization, including:
- Mutex: A mutex (short for mutual exclusion) is a synchronization object that allows only one process or thread to access a shared resource at a time. It provides exclusive access, ensuring that only one process can execute a critical section of code (a minimal example follows this list).
- Semaphores: Semaphores are integer variables used for process synchronization. They can be used to control access to shared resources by allowing a specified number of processes to access them simultaneously.
- Monitors: Monitors are high-level synchronization constructs that encapsulate shared data and the procedures that operate on them. They provide a structured and safe way to handle process synchronization by allowing only one process to execute a monitor procedure at a time.
- Condition Variables: Condition variables are synchronization primitives used to coordinate the execution of processes based on certain conditions. They allow processes to wait until a specific condition is met before proceeding.
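- To make the mutex idea concrete, here is a minimal POSIX threads example in C (compile with -pthread). Two threads increment a shared counter inside a critical section guarded by pthread_mutex_lock/unlock; without the mutex, updates could be lost to a race condition.

```c
#include <pthread.h>
#include <stdio.h>

/* Two threads increment a shared counter; the mutex ensures only one
   thread is inside the critical section at a time. */
static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);    /* enter critical section */
        counter++;                    /* shared-resource access */
        pthread_mutex_unlock(&lock);  /* leave critical section */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter); /* always 200000 with the mutex */
    return 0;
}
```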
Classical problems of synchronization with semaphore solution
- The Producer-Consumer Problem: This problem involves two types of processes – producers that generate data and place it in a shared buffer, and consumers that retrieve and consume the data from the buffer. The challenge is to ensure that the buffer is accessed correctly, with producers not overwriting data and consumers not accessing empty slots. This problem can be solved using semaphores. We can use two semaphores: one to keep track of the number of empty slots in the buffer (initialized to the buffer size) and another to keep track of the number of filled slots (initialized to 0). Producers will wait on the empty semaphore before adding data to the buffer, and signal the filled semaphore after adding data. Consumers will wait on the filled semaphore before retrieving data from the buffer, and signal the empty semaphore after consuming data.
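- Below is a minimal sketch of this semaphore solution using POSIX semaphores and threads in C (Linux; compile with -pthread). The buffer size and item count are assumed example values, and a mutex additionally protects the buffer indices, a common refinement of the two-semaphore scheme described above.

```c
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define BUF_SIZE 5
#define ITEMS    10

int buffer[BUF_SIZE];
int in = 0, out = 0;

sem_t empty_slots;   /* counts free slots, starts at BUF_SIZE */
sem_t filled_slots;  /* counts filled slots, starts at 0 */
pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;

void *producer(void *arg) {
    (void)arg;
    for (int item = 0; item < ITEMS; item++) {
        sem_wait(&empty_slots);         /* wait for a free slot */
        pthread_mutex_lock(&mutex);
        buffer[in] = item;              /* place item in buffer */
        in = (in + 1) % BUF_SIZE;
        pthread_mutex_unlock(&mutex);
        sem_post(&filled_slots);        /* signal: one more item available */
    }
    return NULL;
}

void *consumer(void *arg) {
    (void)arg;
    for (int i = 0; i < ITEMS; i++) {
        sem_wait(&filled_slots);        /* wait for an item */
        pthread_mutex_lock(&mutex);
        int item = buffer[out];         /* take item from buffer */
        out = (out + 1) % BUF_SIZE;
        pthread_mutex_unlock(&mutex);
        sem_post(&empty_slots);         /* signal: one more free slot */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void) {
    pthread_t p, c;
    sem_init(&empty_slots, 0, BUF_SIZE);
    sem_init(&filled_slots, 0, 0);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}
```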
- The Dining Philosophers Problem: In this problem, there are a group of philosophers sitting around a table, and each philosopher alternates between thinking and eating. There are only a limited number of forks available for the philosophers to use, with one fork placed between each pair of adjacent philosophers. The challenge is to prevent deadlock and ensure that the philosophers can eat without conflicts. This problem can also be solved using semaphores. We can use a semaphore for each fork. Philosophers will attempt to acquire the forks on their left and right before starting to eat. If both forks are available, they can proceed to eat. Otherwise, they will wait until the necessary forks become available.
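- Here is a minimal sketch in C using one pthread mutex per fork as a binary semaphore. Note that the naive “left fork then right fork” order can deadlock if every philosopher grabs their left fork at once; this sketch breaks the circular wait by having the last philosopher pick up the right fork first, which is one of several standard fixes.

```c
#include <pthread.h>
#include <stdio.h>

#define N 5
pthread_mutex_t fork_sem[N];   /* one binary "semaphore" (mutex) per fork */

void *philosopher(void *arg) {
    int id = *(int *)arg;
    int left = id, right = (id + 1) % N;
    /* Asymmetric ordering avoids the circular wait that causes deadlock:
       the last philosopher picks up the right fork first. */
    int first  = (id == N - 1) ? right : left;
    int second = (id == N - 1) ? left : right;

    for (int round = 0; round < 3; round++) {
        /* think, then try to acquire both forks */
        pthread_mutex_lock(&fork_sem[first]);
        pthread_mutex_lock(&fork_sem[second]);
        printf("philosopher %d eats (round %d)\n", id, round);
        pthread_mutex_unlock(&fork_sem[second]);
        pthread_mutex_unlock(&fork_sem[first]);
    }
    return NULL;
}

int main(void) {
    pthread_t t[N];
    int ids[N];
    for (int i = 0; i < N; i++) pthread_mutex_init(&fork_sem[i], NULL);
    for (int i = 0; i < N; i++) {
        ids[i] = i;
        pthread_create(&t[i], NULL, philosopher, &ids[i]);
    }
    for (int i = 0; i < N; i++) pthread_join(t[i], NULL);
    return 0;
}
```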