Tuesday, 19 March 2013



Deadlock detection

If the system does not employ either a deadlock-prevention or a deadlock-avoidance algorithm, a deadlock situation may occur. In this environment, the system must provide:

* An algorithm that examines the state of the system to determine whether there has been a deadlock. 

* An algorithm to recover from deadlock.

If every resource has only a single instance, then we can define a deadlock-detection algorithm that uses a variant of the resource-allocation graph, called the wait-for graph.

As before, a deadlock exists in the system if and only if the wait-for graph contains a cycle. To detect deadlocks, the system needs to maintain the wait-for graph and periodically invoke an algorithm that searches for cycles in the graph. An algorithm to detect a cycle in a graph requires on the order of n² operations, where n is the number of vertices in the graph.
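The following is a minimal sketch of such a cycle check, assuming four processes and an adjacency-matrix wait-for graph (the process count and the edge layout are illustrative, not from the original text):

```c
#include <stdbool.h>
#include <stdio.h>

#define N 4  /* number of processes (vertices in the wait-for graph) */

/* wait_for[i][j] = true means process i is waiting for process j */
static bool wait_for[N][N];
static int color[N];   /* 0 = unvisited, 1 = on DFS stack, 2 = done */

/* Depth-first search: an edge back to a vertex on the stack is a cycle. */
static bool dfs(int u) {
    color[u] = 1;
    for (int v = 0; v < N; v++) {
        if (!wait_for[u][v]) continue;
        if (color[v] == 1) return true;           /* cycle => deadlock */
        if (color[v] == 0 && dfs(v)) return true;
    }
    color[u] = 2;
    return false;
}

static bool deadlocked(void) {
    for (int i = 0; i < N; i++) color[i] = 0;
    for (int i = 0; i < N; i++)
        if (color[i] == 0 && dfs(i)) return true;
    return false;
}

int main(void) {
    /* P0 -> P1 -> P2 -> P0 form a cycle; P3 waits only for P2. */
    wait_for[0][1] = wait_for[1][2] = wait_for[2][0] = wait_for[3][2] = true;
    printf("deadlock: %s\n", deadlocked() ? "yes" : "no");
    return 0;
}
```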



Deadlock avoidance

 
The caches guarantee that all incoming replies (REPLYs and COHE_REPLYs) are accepted even if they require further messages to be sent out. If a REPLY leads to a retry (either because of an RAR or a NackUpgradeConflictReply), the MSHR originally allocated by the REQUEST will be used as a resource from which to send the message to the cache ports. If a REPLY leads to a writeback or replacement message, its write-back buffer entry or MSHR can be used as a resource from which to send the message. In all cases, the resources needed to send any possible messages on REPLY are reserved at the time of the REQUEST.
Additionally, the tag arrays of the caches have a special port reserved for COHE messages to ensure that these messages do not interact with REQUESTs or REPLYs to cause a deadlock. Such a port would likely be considered excessive in a real system; an alternative is to have the caches break deadlocks caused by COHEs by sending these messages back to the directory marked with s.reply fields set to RAR (request a retry).




VIRTUAL MEMORY
1. The basic concept of virtual memory:
Memory management traditionally means putting all the parts of a process that will run into memory before the process runs. To that end, every part of the process must have its own place in physical memory.
But not all parts of a process will actually be run, for example:
• Statements or options that are exercised only in certain circumstances. Example: error messages appear only if an error occurs while the program is running.
• Functions that are rarely used.
• Data structures allocated larger than necessary, such as arrays, lists, and tables.
With a large-capacity memory, these things would not be a problem. However, memory is very limited, and loading unused parts reduces the utilization of the physical memory space. The concept of virtual memory is used as a solution to this problem.
Virtual memory is a technique that separates logical memory from physical memory. It hides the physical aspects of memory from the user by presenting memory as a practically unlimited virtual address space of bytes, of which only some parts reside in physical memory at any time. In contrast to the limits of physical memory, virtual memory can hold large programs, exceeding the capacity of the available physical memory.
A principle of virtual memory to remember is: "The maximum speed of process execution in virtual memory can equal, but never exceed, the execution speed of the same process in a system without virtual memory."
The concept of virtual memory was first proposed by Fotheringham in 1961 on the Atlas computer system at the University of Manchester, UK (Hariyanto, Bambang: 2001).
As noted above, only a portion of the program is placed in physical memory. This provides several advantages:
• Less process I/O is needed (I/O traffic is lower). For example, the program no longer needs to be read from disk and loaded into memory in its entirety every time it is accessed.
• Space becomes more flexible because less physical memory is used. For example, the whole 10 MB of a program need not reside in physical memory; error-message code is loaded only if an error occurs.
• Response improves because the I/O and memory load declines.
• More users can be served. The memory space still available allows the computer to accept more requests from users.
The main idea of virtual memory is that the combined size of the program, data, and stack may exceed the amount of available physical memory. The operating system keeps the parts of the process that are currently in use in physical memory (main memory) and the rest on disk. Parts of memory that are not needed are removed from physical memory (swapped out) and replaced (swapped in) by the parts on disk that are needed.
Virtual memory is implemented in multiprogramming systems. For example, ten programs of 2 MB each can run in a memory of 4 MB. Each program is allocated 256 KB, and its parts are swapped in to physical memory as needed and swapped out when no longer needed. In this way, the multiprogramming system becomes more efficient.
Virtual memory can be done in two ways:
• Demand paging (granting of page requests)
• Demand segmentation. Example: IBM OS/2. Demand-segmentation algorithms are more complex, so demand segmentation is rarely implemented.

Demand Paging
Demand paging, or the granting of page requests, is the most commonly used implementation of virtual memory. Demand paging is in principle similar to paging, except that a page is not brought into physical memory until it is really needed. Hardware support is required to determine the location of a page when it is needed.
Since demand paging is an implementation of virtual memory, its benefits are the same as the benefits of virtual memory, namely:
• Less I/O is needed.
• Less memory is required.
• Faster response.
• Able to serve more users.

Problems in Demand Paging
There are three possible cases that may occur when a page is checked:
• The page exists and is already in memory: valid status ("1").
• The page exists but is still on disk, not yet in memory (the process must wait until it is brought in): invalid status ("0").
• The page does not exist, either in memory or on disk (invalid reference -> abort).
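As a rough sketch of how these three cases might be distinguished, consider the following hypothetical page-table entry; the field names and fault-handling details are illustrative, not taken from any real kernel:

```c
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical page-table entry: a valid bit plus an "exists on disk"
 * flag standing in for the OS's internal bookkeeping. */
struct pte {
    bool valid;     /* case 1: page is in physical memory          */
    bool on_disk;   /* case 2: page exists but resides on disk     */
    int  frame;     /* frame number, meaningful only when valid    */
};

int access_page(struct pte *pt, int page) {
    struct pte *e = &pt[page];
    if (e->valid)                 /* case 1: already in memory */
        return e->frame;
    if (e->on_disk) {             /* case 2: page fault, bring it in */
        e->frame = 0;             /* placeholder: allocate a frame,
                                     read the page from disk */
        e->valid = true;
        return e->frame;
    }
    /* case 3: invalid reference -> abort the process */
    fprintf(stderr, "invalid reference to page %d: abort\n", page);
    abort();
}

int main(void) {
    struct pte table[2] = {
        { .valid = true,  .on_disk = false, .frame = 5 },  /* case 1 */
        { .valid = false, .on_disk = true,  .frame = -1 }, /* case 2 */
    };
    printf("page 0 -> frame %d\n", access_page(table, 0));
    printf("page 1 -> frame %d\n", access_page(table, 1));
    return 0;
}
```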

2. Memory management in Linux and Windows:

* In Linux:
As in Solaris 2, Linux uses a variation of the clock algorithm. A Linux kernel thread (kswapd) runs periodically (or is woken when memory usage is excessive). If the number of free pages is below the upper limit on free pages, the thread attempts to free three pages; if it is below the lower limit, the thread attempts to free six pages and then sleeps for a while before running again. When it runs, it checks mem_map, a list of all the pages in memory. Each page has an age byte that is initialized to 3. Each time the page is accessed, the age is increased (up to a maximum of 20); each time kswapd checks the page, the age is decreased. If a page's age reaches 0, it can be swapped out. When kswapd tries to free pages, it first frees pages from the cache; if that fails, it shrinks the file-system cache; and if all else fails, it stops a process. Memory allocation in Linux uses two primary allocators: the buddy algorithm and the slab allocator. In the buddy algorithm, each time an allocation routine is invoked, it checks the list of memory blocks of the requested size; if a block is found it is allocated, otherwise the list at the next level is examined. If a free block is found there, it is divided in two: one half is allocated and the other half is moved to the list below.
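A toy sketch of the buddy algorithm's split step described above, tracking only the count of free blocks per size rather than real page addresses. This is a simplification under stated assumptions: a real allocator keeps lists of block addresses and also merges buddies when blocks are freed.

```c
#include <stdio.h>

#define MAX_ORDER 4   /* block sizes: 1, 2, 4, 8, 16 pages */

/* free_count[k] = number of free blocks of 2^k pages (a stand-in for
 * the kernel's real free lists of block addresses). */
static int free_count[MAX_ORDER + 1] = {0, 0, 0, 0, 1}; /* one 16-page block */

/* Allocate a block of 2^order pages, splitting larger blocks as needed. */
static int buddy_alloc(int order) {
    int k = order;
    while (k <= MAX_ORDER && free_count[k] == 0)
        k++;                      /* examine the list at the next level */
    if (k > MAX_ORDER)
        return -1;                /* no free block large enough */
    while (k > order) {           /* split: one half stays free below */
        free_count[k]--;
        free_count[k - 1] += 2;
        k--;
    }
    free_count[order]--;          /* the other half is allocated */
    return 0;
}

int main(void) {
    printf("alloc 1 page: %d\n", buddy_alloc(0)); /* splits 16->8->4->2->1 */
    printf("free 8-page blocks: %d\n", free_count[3]);
    return 0;
}
```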
* In Windows (Windows NT):
Windows NT implements virtual memory using demand paging with clustering. Clustering handles a page fault by bringing in not only the faulting page but also the pages around it. When a process is first created, it is given a working-set minimum: the minimum number of pages the process is guaranteed to have in memory. If enough memory is available, a process may be given up to its working-set maximum number of pages. The virtual-memory manager maintains a list of free page frames; a threshold value associated with this list indicates whether the available memory is sufficient. If a process has reached its working-set maximum and incurs a page fault, it must choose a replacement page using a local page-replacement policy.
When the amount of free memory falls below the threshold, the virtual-memory manager uses a tactic known as automatic working-set trimming to bring the free memory back above the threshold. It works by evaluating the number of pages allocated to each process: if a process has been allocated more pages than its working-set minimum, the virtual-memory manager trims it down to its working-set minimum. If free memory becomes available again, a process running at its working-set minimum can get extra pages.

Monday, 18 March 2013



Shortest Job First Algorithm

Shortest Job First (SJF) scheduling is priority-based and non-preemptive. Non-preemptive here means that once the processor has been allotted to a process, the processor cannot be taken away by another process until that process has finished executing. This scheduling assumes that the completion time of each process is known in advance. The mechanism is to schedule the process with the shortest completion time first, giving high efficiency and low turnaround time: the time spent from when a program (job) enters the system until the process leaves the system is short. The Shortest Job First (SJF) scheduling algorithm can be said to be optimal, with a minimal average waiting time.

For example, consider four processes with the following CPU bursts in milliseconds.


[Figure: processes with CPU bursts in milliseconds]

The Gantt chart for scheduling these processes with the SJF algorithm (non-preemptive) is as follows:

[Figure: Gantt chart for the SJF (non-preemptive) algorithm]
 
The waiting time is 0 for P1, 6 for P2, 3 for P3, and 7 for P4, so the average waiting time is (0 + 6 + 3 + 7) / 4 = 4 milliseconds.

Another example for the Shortest Job First (SJF) algorithm: suppose there are four processes (jobs) A, B, C, and D with running times of 8, 4, 4, and 4 minutes respectively. If the processes are executed in that order, the turnaround time for A is 8 minutes, for B 12, for C 16, and for D 20. If the four processes use shortest-job-first scheduling, the turnaround time for B is 4, for C 8, for D 12, and for A 20. The sketch below reproduces this calculation.
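A small sketch reproducing the SJF numbers for jobs A, B, C, and D above. All jobs arrive at time 0; ties among equal bursts may be served in any order, which does not change the turnaround values:

```c
#include <stdio.h>
#include <stdlib.h>

/* The four jobs A, B, C, D from the example, all arriving at time 0. */
struct job { char name; int burst; };

static int by_burst(const void *a, const void *b) {
    return ((const struct job *)a)->burst - ((const struct job *)b)->burst;
}

int main(void) {
    struct job jobs[] = {{'A', 8}, {'B', 4}, {'C', 4}, {'D', 4}};
    int n = 4, clock = 0;

    qsort(jobs, n, sizeof jobs[0], by_burst);  /* SJF: shortest job first */
    for (int i = 0; i < n; i++) {
        clock += jobs[i].burst;                /* this job finishes here  */
        printf("%c: turnaround = %d minutes\n", jobs[i].name, clock);
    }
    return 0;
}
```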

Since SJF always aims for the smallest average response time, it is very good for interactive processes. Interactive processes generally follow a pattern: wait for a command, run the command, wait for the next command, run it, and so on. The problem in using this algorithm is that the length of a job is not known when the job enters the system; its length must be estimated based on previous behavior. Moreover, processes do not all arrive together, so the schedule must be adjusted dynamically.

Another issue with this algorithm is that under certain conditions a job may never get executed. For example, suppose process A, with a running time of 1 hour, arrives at time 0, but at every minute thereafter a short process with a running time of 2 minutes arrives. The result can be that A, the long process, never gets the processor.
 




First Come First Served Algorithm

The First Come First Served (FCFS) algorithm is also known in Indonesian as Pertama Tiba Pertama Dilayani (PTPD). FCFS is scheduling without priorities and without preemption (see the post on CPU scheduling); processes are therefore served purely in the order of the queue.

In FCFS, the process that arrives first is served first. If processes arrive at the same time, they are served in the order they appear in the queue, regardless of whether they are short or long processes. To be serviced by the processor, a process at the back of the queue must wait until all the processes in front of it have finished.


Name of process   Arrival   Burst time
A                 0          7
B                 0         10
C                 0          2
D                 0          4
E                 0          8

Case I: five processes in the queue, all with arrival time = 0

Name of process   Arrival   Burst time
A                 0          7
B                 1         10
C                 8          2
D                 2          4
E                 5          8

Case II: five processes in the queue with different arrival times

We can enter these processes directly into the processor's process table. In Case I, B can only run after A has finished; C can only run after B has finished; and so on, giving the values shown in the following table.


Name of process   Arrival   Burst   Start   Finish   Waiting   Turnaround
A                 0          7       0       7        0         7
B                 0         10       7      17        7        17
C                 0          2      17      19       17        19
D                 0          4      19      23       19        23
E                 0          8      23      31       23        31
Total                                                66        97
Mean                                                 13.2      19.4

Processor performance with the FCFS algorithm for Case I

It appears here that the average turnaround time is 19.4 time units. This value is quite large compared with the length of each process. Here is an illustration for the second case.


Name of process   Arrival   Burst   Start   Finish   Waiting   Turnaround
A                 0          7       0       7        0         7
B                 1         10       7      17        6        16
D                 2          4      17      21       15        19
E                 5          8      21      29       16        24
C                 8          2      29      31       21        23
Total                                                58        89
Mean                                                 11.6      17.8

Processor performance with the FCFS algorithm for Case II

In the Case II figure earlier in this post, the processes were not yet ordered by arrival time; after ordering by arrival time, the queue becomes A, B, D, E, C. The average turnaround time here is 17.8 time units. This is still large compared with the lengths of the processes, but smaller than the average turnaround time for Case I. The difference from the first example lies in the calculation of waiting and turnaround times: in the first example all processes arrive at time 0, so waiting time equals start time and turnaround equals finish time, whereas here they differ because the arrival times differ.

These two examples show that turnaround time is strongly influenced by the lengths of the processes at the front of the queue. If a process at the front of the queue is very long, then the short processes at the back of the queue still have to wait a long time before they can be serviced by the processor. That is why FCFS scheduling is less favorable overall. A small simulation of Case II appears below.
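This sketch reproduces the Case II numbers above; the process names, arrivals, and bursts are taken from the table, and everything else is plain FCFS bookkeeping:

```c
#include <stdio.h>

/* Case II: the five processes sorted by arrival time (A, B, D, E, C). */
struct proc { char name; int arrival, burst; };

int main(void) {
    struct proc p[] = {
        {'A', 0, 7}, {'B', 1, 10}, {'D', 2, 4}, {'E', 5, 8}, {'C', 8, 2},
    };
    int n = 5, clock = 0, sum_wait = 0, sum_turn = 0;

    for (int i = 0; i < n; i++) {
        if (clock < p[i].arrival) clock = p[i].arrival; /* CPU idle gap */
        int start  = clock;
        int finish = start + p[i].burst;
        int wait   = start  - p[i].arrival;
        int turn   = finish - p[i].arrival;
        printf("%c: start=%2d finish=%2d wait=%2d turnaround=%2d\n",
               p[i].name, start, finish, wait, turn);
        sum_wait += wait;
        sum_turn += turn;
        clock = finish;
    }
    printf("mean wait = %.1f, mean turnaround = %.1f\n",
           (double)sum_wait / n, (double)sum_turn / n);
    return 0;
}
```

Running it prints mean wait = 11.6 and mean turnaround = 17.8, matching the table.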

One way to overcome this is through priorities: short processes are given priority over long ones for the processor. The shorter the process, the sooner it is serviced by the processor. This scheduling is known as shortest process first.
 



DEFINITION OF INTERRUPT
The process performed by the microcontroller: the Interrupt Service Routine

An interrupt is an event that causes the microcontroller to pause in order to service it. The program that runs to service an interrupt is called the Interrupt Service Routine. The analogy is as follows: someone is typing a report when the telephone suddenly rings and interrupts them, so they stop the typing job and answer the telephone. After the telephone conversation, which in this analogy is the Interrupt Service Routine, is finished, the person returns to the typing job. Similarly, when an interrupt occurs in a microcontroller system that is running a program, the program stops for a moment, services the interrupt by running the program at the address designated by the vector of the interrupt that occurred, and then returns to continue the program that was halted by the interrupt. As seen in the figure below, a program that should run straight through is suddenly interrupted and must service the interrupt to completion before returning to continue its work.


The process performed by the microcontroller when servicing an interrupt is as follows:

The instruction currently being executed is completed first.

The Program Counter (the address of the instruction in progress) is saved to the stack.

The interrupt status is stored internally.

The interrupt is served according to its priority ranking (see Interrupt Priority).

The Program Counter is loaded with the address of the interrupt vector (see Interrupt Vector), so the microcontroller immediately runs the program located at that vector.

The interrupt-vector program usually ends with a RETI instruction, at which point the microcontroller does the following:

The Program Counter is reloaded with the address stored on the stack when the interrupt occurred, so the microcontroller resumes the program at the point where the interrupt occurred.

The interrupt status is returned to its state before the interrupt.
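As an illustration, here is a minimal timer ISR sketch, assuming the SDCC compiler targeting a classic 8051-family part. The register names come from SDCC's 8051.h; the timer setup values are illustrative. The compiler saves state on ISR entry and emits RETI on return, which restores the saved Program Counter and interrupt status exactly as described above.

```c
#include <8051.h>   /* SDCC register definitions for a classic 8051 */

volatile unsigned int ticks;

/* Timer 0 Interrupt Service Routine: the "serve the interrupt" work. */
void timer0_isr(void) __interrupt(1)
{
    ticks++;
}

void main(void)
{
    TMOD = 0x01;    /* timer 0, 16-bit mode (illustrative setup)      */
    ET0  = 1;       /* enable the timer 0 interrupt                   */
    EA   = 1;       /* global interrupt enable                        */
    TR0  = 1;       /* start timer 0                                  */
    for (;;)        /* main program continues; the ISR preempts it    */
        ;
}
```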



Process and Thread Management
Definition of Process
A process is a program in execution. According to Silberschatz, a process is not just program code (the text section) but includes the current activity, represented by the program counter and registers. A process also involves a stack containing temporary data (function/method parameters, return addresses, and local variables) and a data section storing global variables. Tanenbaum likewise holds that a process is an executing program, including its program counter, registers, and variables [MDGR2006]. The difference between a process and a program is that a program is a passive entity, a file containing a set of instructions stored on disk (an executable file), whereas a process is an active entity, with a program counter storing the address of the next instruction to execute and a set of resources required for the process to run.
Process States
A process passes through five states during execution:
a. new: the process is being created
b. running: instructions are being executed
c. waiting: the process is waiting for some event to occur
d. ready: the process is waiting to be assigned to a processor
e. terminated: the process has finished execution.
Process Control Block (PCB)
Each process is represented in the operating system by a process control block (PCB), also called a control block. The PCB contains much information related to a specific process, including:
• Process state: the state may be new, ready, running, waiting, halted, and so on.
• Program counter: a counter indicating the address of the next instruction to be executed for this process.
• CPU registers: the registers vary in number and type depending on the computer architecture; they include accumulators, index registers, stack pointers, and general-purpose registers, plus condition-code information. Along with the program counter, this state information must be saved when an interruption occurs, to allow the process to continue correctly afterwards.
• Memory-management information: may include the values of the base and limit registers and the page tables or segment tables, depending on the memory system used by the operating system.
• Accounting information: includes the amount of CPU and real time used, time limits, account numbers, job or process numbers, and more.
• I/O status information: includes the list of I/O devices in use by this process, a list of files being accessed, and more.
The PCB simply serves as a repository for any information that may vary from process to process.
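A simplified PCB might be declared as below; the field names and sizes are illustrative, not taken from any particular kernel:

```c
#include <stdint.h>

/* The five process states listed above. */
enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

/* A simplified Process Control Block. */
struct pcb {
    int              pid;             /* process identifier            */
    enum proc_state  state;           /* current scheduling state      */
    uint64_t         program_counter; /* next instruction to execute   */
    uint64_t         registers[16];   /* saved CPU registers           */
    uint64_t         base, limit;     /* memory-management information */
    uint64_t         cpu_time_used;   /* accounting information        */
    int              open_files[16];  /* I/O status: open descriptors  */
    struct pcb      *next;            /* link in a scheduler queue     */
};
```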



Thread
Traditionally, a process has a single thread that executes its program; this single thread of control allows the process to perform only one task at a time. Many modern operating systems have extended the process concept to allow a process to execute multiple threads, for example so that a user can type and run a spell checker within the same process. A thread is a basic unit of CPU utilization, consisting of a thread ID, program counter, register set, and stack. A thread shares its code section, data section, and operating-system resources with the other threads belonging to the same process. A thread is also often called a lightweight process. A traditional, or heavyweight, process has a single thread of control. The difference is that a multithreaded process can perform more than one task at any one unit of time.
In general, software running on modern computers is designed with multithreading. An application is usually implemented as a separate process with several threads of control. For example, a web browser may have one thread to display images or text while another thread receives data from the network.
Sometimes an application needs to perform several similar tasks. For example, a web server may have hundreds of clients accessing it concurrently. If the web server ran as a process with only a single thread, it could serve only one client at any one time; any other client making a request would have to wait until the previous client had been served. The solution is to make the web server multithreaded: the server creates a thread that listens for client requests, and when a request arrives the server creates another thread to serve it [MDGR2006]. A minimal sketch of this thread-per-request idea follows.
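This is a minimal sketch using POSIX threads; the serve function and the client numbering are placeholders for real request handling:

```c
#include <pthread.h>
#include <stdio.h>

/* Each "request" is served by its own thread, so a slow client does
 * not block the others. */
static void *serve(void *arg) {
    int client = *(int *)arg;
    printf("serving client %d\n", client);
    return NULL;
}

int main(void) {
    pthread_t tid[3];
    int client[3] = {1, 2, 3};

    for (int i = 0; i < 3; i++)              /* one thread per request */
        pthread_create(&tid[i], NULL, serve, &client[i]);
    for (int i = 0; i < 3; i++)
        pthread_join(tid[i], NULL);          /* wait for all to finish */
    return 0;
}
```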
Advantages of Threads
Some advantages of using threads are as follows:
a. Responsiveness. An interactive application remains responsive even while part of the program is blocked or performing a lengthy operation. For example, one thread of a web browser can serve user requests while another thread is trying to display an image.
b. Resource sharing. Threads share the memory and resources of the process that owns them. The advantage of sharing code is that an application can have several different threads of activity within the same address space.
c. Economy. Creating a process requires allocating the necessary memory and resources; the alternative of using threads is more economical, because threads share the memory and resources of their process, making thread creation and context switching cheaper. It is difficult to measure the exact time difference between processes and threads in creation and management, but in general creating and managing a process takes longer than a thread. On Solaris, creating a process takes about 30 times longer than creating a thread, and a process context switch takes about five times longer than a thread context switch.
d. Utilization of multiprocessor architectures. The benefit of multithreading increases greatly on multiprocessor architectures, where each thread can run in parallel on a different processor. On a single-processor architecture, the CPU runs each thread in turn, switching so quickly that it creates the illusion of parallelism, although in fact only one thread runs on the CPU at any unit of time (a unit of CPU time is called a time slice or quantum).
Multithreading Model
Thread support is provided either at user level, for user threads, or at kernel level, for kernel threads. User threads are provided above the kernel and managed without kernel support, whereas kernel threads are supported and managed directly by the operating system. The relationship between user threads and kernel threads follows one of three models:
Many-to-One model: maps many user-level threads to one kernel thread. Thread management is done in user space, so it is efficient. Only one user thread can access the kernel thread at a time, so multiple threads cannot run in parallel on a multiprocessor. User-level thread libraries implemented on operating systems that do not support kernel threads use the Many-to-One model.
One-to-One model: maps each user-level thread to a kernel thread. It provides more concurrency than the Many-to-One model and gains the benefits of kernel threads. The disadvantage of this model is that creating each user thread requires creating a kernel thread; because thread creation can degrade the performance of an application, implementations of this model limit the number of threads in the system. Examples of operating systems that support the One-to-One model are Windows NT and OS/2.
Many-to-Many model: multiplexes many user-level threads onto a number of kernel threads that is smaller than or equal to the number of user threads. The number of kernel threads may be specific to a particular application or machine. The Many-to-One model lets developers create as many user threads as they wish, but concurrency is not obtained because the kernel can schedule only one thread at a time. One-to-One gives more concurrency, but developers must be careful not to create too many threads within an application (and in some cases may be limited in the number of threads they can create). The Many-to-Many model suffers from neither weakness: developers can create as many user threads as necessary, the corresponding kernel threads can run in parallel on a multiprocessor, and when a running thread performs a blocking system call, the kernel can schedule another thread for execution. Examples of operating systems that support this model are Solaris, IRIX, and Digital UNIX.


Problems with Threads
The fork() and exec() System Calls
There are two possibilities in UNIX systems when fork is called by one thread in a process:
a. All of the threads are duplicated.
b. Only the thread that called fork is duplicated.
If a thread calls the exec system call, the program specified in exec's parameters replaces the whole process, including all of its threads. Which of the two versions of fork to use depends on the application. If exec is called immediately after fork, duplicating all of the threads is unnecessary, because the program specified in exec's parameters will replace the entire process anyway; in this case it is enough to duplicate only the thread that called fork. But if the new process does not call exec after fork, it should duplicate all of the threads.
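A minimal fork-then-exec sketch using POSIX calls; the ls command is just an example program for exec to load:

```c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();               /* duplicate the calling process */
    if (pid < 0) {
        perror("fork");
        exit(1);
    }
    if (pid == 0) {
        /* Child: exec replaces the whole process image (all threads),
         * so any threads duplicated by fork would be discarded here. */
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp");             /* reached only if exec fails */
        _exit(1);
    }
    waitpid(pid, NULL, 0);            /* parent waits for the child */
    return 0;
}
```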
Thread Cancellation
Thread cancellation is the termination of a thread before it has completed its task. For example, when loading a web page, the images may be fetched by several threads; if the page is not yet fully rendered when the user presses the stop button, the rendering by each of the threads is cancelled as a whole. Cancellation of a thread may occur in two different scenarios:
a. Asynchronous cancellation: one thread immediately terminates the target thread.
b. Deferred cancellation: the target thread periodically checks whether it should stop; this method allows the target thread to terminate itself in an orderly fashion.
The difficulty with cancellation arises in situations where resources have been allocated to the thread being cancelled, or where the thread being cancelled is in the middle of updating data it shares with other threads. This becomes a hard problem with asynchronous cancellation: the operating system reclaims resources from a cancelled thread, but often it cannot reclaim all of them. The alternative is to use deferred cancellation, which works by having one thread indicate that the target thread is to be cancelled; the cancellation takes effect only once the target thread has checked whether it should be cancelled. This allows a thread to check whether it should stop at a point where it can do so safely.
Signal Handling
Signals are used in UNIX systems to notify a process that an event has occurred. A signal may be received synchronously or asynchronously, depending on the source of and the reason for the event being signaled. All signals (synchronous and asynchronous) follow the same pattern:
a. A signal is generated by the occurrence of an event.
b. The generated signal is delivered to a process.
c. Once delivered, the signal must be handled.
An example of a synchronous signal is when a process performs an illegal memory access or a division by zero: the signal is raised and delivered to the process that performed the operation. An example of an asynchronous signal is sending a signal to kill a process from the keyboard (Ctrl+C); the asynchronous signal is delivered to that process. Each signal may be handled by one of two signal handlers:
1. The default signal handler.
2. A user-defined signal handler.
Signal handling in a single-threaded program is straightforward: the signal is simply delivered to the process. Delivering a signal in a multithreaded program is more complicated, because a process may have several threads. In general there are four options for where the signal should be delivered:
1. Deliver the signal to the thread to which the signal applies.
2. Deliver the signal to every thread in the process.
3. Deliver the signal to certain threads in the process.
4. Assign a specific thread to receive all signals directed to the process.
The method for delivering a signal depends on the type of signal raised. For example, a synchronous signal needs to be delivered to the thread that caused the signal, not to the other threads in the process. The situation with asynchronous signals is less clear, however. Some asynchronous signals, such as one that kills the process (e.g., Alt-F4), must be sent to all threads. Some multithreaded versions of UNIX allow a thread to specify which signals it will accept and which it will block; an asynchronous signal is then delivered only to the threads that do not block it. Solaris 2 implements the fourth option for handling signals. Windows 2000 does not provide signal support; instead it uses asynchronous procedure calls (APCs).
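A minimal sketch of the fourth option using POSIX threads: every thread blocks SIGINT, and one dedicated thread receives it synchronously with sigwait:

```c
#include <pthread.h>
#include <signal.h>
#include <stdio.h>

/* Option 4: one dedicated thread receives all signals sent to the
 * process; every other thread blocks them. */
static void *signal_thread(void *arg) {
    sigset_t *set = arg;
    int sig;
    sigwait(set, &sig);              /* wait for SIGINT synchronously */
    printf("got signal %d\n", sig);
    return NULL;
}

int main(void) {
    sigset_t set;
    pthread_t tid;

    sigemptyset(&set);
    sigaddset(&set, SIGINT);
    pthread_sigmask(SIG_BLOCK, &set, NULL); /* inherited by new threads */

    pthread_create(&tid, NULL, signal_thread, &set);
    pthread_join(tid, NULL);         /* press Ctrl+C to deliver SIGINT */
    return 0;
}
```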

Thread Pools
In the multithreaded web server scenario, two problems arise:
a. The time required to create a new thread for each submitted request would be excessive; in fact, the thread is discarded once it has completed its task.
b. Creating an unlimited number of threads can degrade the performance of the system. The solution is to use a thread pool: create several threads at process startup and put them into a pool, where the threads wait for work. When the server receives a request, it wakes a thread from the pool, if one is available, and the request is served. When the thread finishes its job, it returns to the pool and waits for more work. If no thread is available when needed, the server waits until a thread becomes free.
The advantages of using a thread pool are:
• It is generally faster to serve a request with an existing thread than to wait for a new thread to be created.
• A thread pool limits the number of threads in existence at any one time. This is important on systems that cannot support many threads running simultaneously.
The number of threads in the pool can depend on the number of CPUs in the system, the amount of physical memory, and the expected number of concurrent client requests. Threads belonging to a process share its data, but in certain circumstances each thread may need its own copy of specific data; such data is called thread-specific data.
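A compact thread-pool sketch with POSIX threads. The queue size, thread count, and printf stand in for real request handling, and a production pool would also handle queue overflow and clean shutdown:

```c
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define NTHREADS 4
#define QSIZE    16  /* larger than the demo's job count: no overflow check */

/* A fixed-size queue of pending jobs, protected by a mutex. */
static int jobs[QSIZE], head, tail, count;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  nonempty = PTHREAD_COND_INITIALIZER;

static void submit(int job) {
    pthread_mutex_lock(&lock);
    jobs[tail] = job;
    tail = (tail + 1) % QSIZE;
    count++;
    pthread_cond_signal(&nonempty);         /* wake one idle worker */
    pthread_mutex_unlock(&lock);
}

static void *worker(void *arg) {
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&lock);
        while (count == 0)                  /* idle workers wait in the pool */
            pthread_cond_wait(&nonempty, &lock);
        int job = jobs[head];
        head = (head + 1) % QSIZE;
        count--;
        pthread_mutex_unlock(&lock);
        printf("worker served request %d\n", job);  /* do the work */
    }
    return NULL;
}

int main(void) {
    pthread_t tid[NTHREADS];
    for (int i = 0; i < NTHREADS; i++)      /* create the pool at startup */
        pthread_create(&tid[i], NULL, worker, NULL);
    for (int j = 1; j <= 8; j++)
        submit(j);                          /* incoming client requests */
    sleep(1);                               /* let workers drain the queue */
    return 0;                               /* demo: exit ends the workers */
}
```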



UNDERSTANDING THE PROCESS
A process is a program that is being executed, that is, a program currently running, or software currently being carried out; the operating system itself is organized as a number of sequential processes. A program, by contrast, is a set of instructions written in a language understood by the operating system. A process contains instructions and data, a program counter and processor registers, and a stack containing data such as routine parameters, return addresses, and local variables.
The operating system manages all the processes in the system and allocates resources to them according to policy in order to meet the system's goals. A process is the smallest unit of work that individually owns resources and is scheduled by the operating system.
Matters related to processes include:
1. MULTIPROGRAMMING (multitasking)
Multiprogramming is the management of several processes on a single processor. The term used here is multiprogramming (multitasking), not multiprocessing; multiprocessing refers to a different concept, namely a computer with many processors in one computer system, each processor working independently. Today, most personal computers and workstations are single-processor systems running multiprogramming (multitasking) operating systems such as MS-Windows 3.0, MS-Windows NT, OS/2, and Macintosh System 7.
Many processes run concurrently, each process getting its own portion of memory and control. The programs that are run are:
• Independent: the processes are separate from one another and do not affect each other.
• One program at any instant: at any one moment only one process is served by the processor; the programs are interleaved, not overlapped.
Because switching from one process to another is done quickly, it appears to the user as if the processes run in parallel. This is known as pseudoparallelism.
2. MULTIPROCESSING
Multiprocessing is the management of many processes on a multiprocessor computer (a computer with many processors in it). In other words, it is a computer with many processors in one computer system, each processor working independently. Previously such systems were found only in large systems, mainframes, and minicomputers; today even workstations can be equipped with multiple processors. With multiple CPUs, the computer can run multiple programs simultaneously, each using its own processor. Operating systems that support multiprocessing include Microsoft Windows NT, UNIX, and Linux.
4. DISTRIBUTED PROCESSING / COMPUTING
Distributed processing is the management of many processes executed on many dispersed (distributed) computer systems. The future trend is toward distributed computing; there is much research and development on distributed operating systems, such as Amoeba, Mach, and others.



 
5. PROCESSING
The primary requirements of process control by the operating system, expressed with reference to processes, are:
• Interleaving (mutual resumption). Interleaving means executing processes one at a time, switching quickly between them in turn. The operating system must interleave the execution of processes in order to maximize processor utilization while still providing adequate response time.
• Following a given policy. The operating system must allocate resources to processes following the prescribed policy (for example, certain applications have higher priority) while avoiding deadlock.
• Supporting inter-process communication and process creation. The operating system should support inter-process communication and user process creation, thereby helping to structure applications. In a system with many active processes, the processes at any one time are at various stages of execution; a process passes through a variety of states during its life cycle before it ends and leaves the system.
The operating system needs to know the state of each process and record state changes dynamically. This information is used for scheduling and for deciding resource allocation.