Process and Thread Management
Definition of Process
A process is a program in execution. According to Silberschatz, a process is more than the program code (the text section): it also includes the current activity, represented by the program counter, and the process stack. The stack contains temporary data (function/method parameters, return addresses, and local variables), and the data section stores global variables. Tanenbaum similarly states that a process is an executing program that includes the program counter, registers, and variables [MDGR2006]. The difference between a program and a process is that a program is a passive entity, a file containing a set of instructions stored on disk (an executable file), whereas a process is an active entity, with a program counter that holds the address of the next instruction to be executed and a set of resources required for the process to run.
Process States
As a process executes, it changes state. A process may be in one of the following five states:
a. new: the process is being created.
b. running: instructions are being executed.
c. waiting: the process is waiting for some event to occur.
d. ready: the process is waiting to be assigned to a processor.
e. terminated: the process has finished execution.
Process Control Block (PCB)
Each process is represented in the operating system by a process control block (PCB), also called a task control block. A PCB contains much of the information associated with a specific process, including the following:
• Process state: the state may be new, ready, running, waiting, halted, and so on.
• Program counter: indicates the address of the next instruction to be executed for this process.
• CPU registers: the registers vary in number and type depending on the computer architecture; they include accumulators, index registers, stack pointers, and general-purpose registers, plus condition-code information. Together with the program counter, this state information must be saved when an interrupt occurs, to allow the process to be continued correctly afterwards.
• Memory-management information: this may include the values of the base and limit registers and the page tables or segment tables, depending on the memory system used by the operating system.
• Accounting information: this includes the amount of CPU and real time used, time limits, account numbers, job or process numbers, and so on.
• I/O status information: this includes the list of I/O devices allocated to the process, a list of open files, and so on.
In short, the PCB simply serves as the repository for any information that may vary from process to process.
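As an illustration only, the information above could be collected into a structure along the following lines. The field names, types, and sizes here are hypothetical and do not correspond to any particular operating system's PCB.

    /* Illustrative sketch of a process control block (PCB).
     * All field names and sizes are hypothetical; a real kernel's PCB
     * is far larger and more detailed. */

    enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

    struct pcb {
        int             pid;              /* process identifier                 */
        enum proc_state state;            /* new, ready, running, waiting, ...  */
        unsigned long   program_counter;  /* address of next instruction        */
        unsigned long   registers[16];    /* saved CPU registers                */
        unsigned long   base_register;    /* memory-management information      */
        unsigned long   limit_register;
        unsigned long   cpu_time_used;    /* accounting information             */
        int             open_files[16];   /* I/O status: open file descriptors  */
        struct pcb     *next;             /* link used by scheduler queues      */
    };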
Thread
Traditionally, a process has a single thread of control that executes the program, which allows the process to perform only one task at a time. Many modern operating systems extend the process concept to allow a process to execute multiple threads, so that, for example, a user can type and run a spell checker at the same time within a single process. A thread is the basic unit of CPU utilization; it comprises a thread ID, a program counter, a register set, and a stack. A thread shares its code section, data section, and other operating-system resources with the other threads belonging to the same process. A thread is also often called a lightweight process, whereas a traditional (heavyweight) process has a single thread of control; the difference is that a multithreaded process can perform more than one task at a time.
In general, software running on modern computers is designed to be multithreaded. An application is usually implemented as a separate process with several threads of control. For example, a web browser might have one thread to display images or text while another thread retrieves data from the network. Sometimes an application needs to perform several similar tasks. For example, a web server may have hundreds of clients accessing it concurrently. If the web server ran as a process with only a single thread, it could serve only one client at a time, and any other client wanting to make a request would have to wait until the previous client had been served. The solution is to make the web server multithreaded: the server creates a thread that listens for client requests, and whenever a request arrives, it creates another thread to service that request [MDGR2006].
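The following minimal POSIX threads sketch illustrates the idea of handing each incoming request to its own thread. The handle_request function and the integer request identifiers are placeholders standing in for real network I/O.

    /* Sketch: one thread per "request" using POSIX threads.
     * Compile with: cc -pthread per_request_threads.c */
    #include <pthread.h>
    #include <stdio.h>

    /* Placeholder for the work done to service one client request. */
    static void *handle_request(void *arg) {
        int request_id = *(int *)arg;
        printf("servicing request %d\n", request_id);
        return NULL;
    }

    int main(void) {
        pthread_t workers[4];
        int ids[4];

        /* The "listener" creates a new thread for each incoming request. */
        for (int i = 0; i < 4; i++) {
            ids[i] = i;
            pthread_create(&workers[i], NULL, handle_request, &ids[i]);
        }

        /* Wait until every request has been serviced. */
        for (int i = 0; i < 4; i++)
            pthread_join(workers[i], NULL);
        return 0;
    }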
Advantages of Threads
Some advantages of using threads are as follows:
a. Responsiveness. A multithreaded interactive application can remain responsive to the user even while part of the program is blocked or performing a lengthy operation. For example, one thread of a web browser can serve user requests while another thread is loading an image.
b. Resource sharing. Threads share the memory and resources of the process to which they belong. The benefit of sharing code is that it allows an application to have several different threads of activity within the same address space.
c. Economy. Creating a process requires allocating memory and resources for it. Because threads share the memory and resources of the process to which they belong, it is more economical to create and context-switch threads. It is difficult to measure the difference in overhead between processes and threads empirically, but in general creating and managing a process takes considerably longer than a thread (a rough way to observe this difference is sketched after this list). In Solaris, for example, creating a process takes about thirty times longer than creating a thread, and a process context switch takes about five times longer than a thread context switch.
d. Utilization of multiprocessor architectures. The benefits of multithreading increase greatly on multiprocessor architectures, where each thread can run in parallel on a different processor. On a single-processor architecture, the CPU runs the threads in turn, but switches between them so quickly that it creates the illusion of parallelism; in reality only one thread is running on the CPU at any instant (each unit of CPU time a thread receives is called a time slice or quantum).
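As a rough illustration of the economy point above, the following sketch times a number of fork()/wait() pairs against the same number of pthread_create()/pthread_join() pairs on a POSIX system. The iteration count and the ratio observed are system-dependent; this is an informal comparison, not a rigorous benchmark.

    /* Rough comparison of process vs. thread creation cost on a POSIX system.
     * Compile with: cc -pthread creation_cost.c */
    #define _POSIX_C_SOURCE 200809L
    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <time.h>
    #include <unistd.h>

    #define N 200   /* number of creations to time */

    static void *noop(void *arg) { return arg; }

    static double elapsed_ms(struct timespec a, struct timespec b) {
        return (b.tv_sec - a.tv_sec) * 1e3 + (b.tv_nsec - a.tv_nsec) / 1e6;
    }

    int main(void) {
        struct timespec t0, t1;

        /* Time N fork()/wait() pairs: each child exits immediately. */
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < N; i++) {
            pid_t pid = fork();
            if (pid == 0)
                _exit(0);
            waitpid(pid, NULL, 0);
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);
        printf("processes: %.2f ms\n", elapsed_ms(t0, t1));

        /* Time N pthread_create()/pthread_join() pairs with a no-op thread. */
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < N; i++) {
            pthread_t t;
            pthread_create(&t, NULL, noop, NULL);
            pthread_join(t, NULL);
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);
        printf("threads:   %.2f ms\n", elapsed_ms(t0, t1));
        return 0;
    }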
Multithreading Models
Support for threads may be provided at the user level, for user threads, or at the kernel level, for kernel threads. User threads are supported above the kernel and managed without kernel support, whereas kernel threads are supported and managed directly by the operating system. The relationship between user threads and kernel threads follows one of three models:
Many-to-One Model: the Many-to-One model maps many user-level threads to one kernel thread. Thread management is done in user space, so it is efficient, but only one thread can access the kernel at a time; thus, multiple threads cannot run in parallel on a multiprocessor. User-level thread libraries implemented on operating systems that do not support kernel threads use the Many-to-One model.
One-to-One Model: the One-to-One model maps each user-level thread to a kernel thread. It provides more concurrency than the Many-to-One model, offering the same advantages as kernel threads. The drawback of this model is that creating a user thread requires creating the corresponding kernel thread. Because the overhead of creating kernel threads can burden the performance of an application, most implementations of this model restrict the number of threads supported by the system. Examples of operating systems that support the One-to-One model are Windows NT and OS/2.
Many-to-Many Model: this model multiplexes many user-level threads onto a smaller or equal number of kernel threads. The number of kernel threads may be specific to a particular application or a particular machine. The Many-to-One model lets developers create as many user threads as they wish, but true concurrency is not gained because the kernel can schedule only one thread at a time. The One-to-One model allows greater concurrency, but developers must be careful not to create too many threads within an application (and in some cases the number of threads they may create is limited). The Many-to-Many model suffers from neither of these weaknesses: developers can create as many user threads as necessary, and the corresponding kernel threads can run in parallel on a multiprocessor. Also, when a thread performs a blocking system call, the kernel can schedule another thread for execution. Examples of operating systems that support this model are Solaris, IRIX, and Digital UNIX.
Threading Issues
The fork() and exec() System Calls
On a UNIX system, if fork() is called by one thread in a process, there are two possibilities:
a. The new process duplicates all of the threads, or
b. The new process duplicates only the thread that called fork().
If a thread calls the exec() system call, the program specified in the parameter to exec() will replace the entire process, including all of its threads. Which of the two versions of fork() to use depends on the application. If exec() is called immediately after fork(), then duplicating all the threads is unnecessary, because the program specified in the parameter to exec() will replace the entire process anyway; in this case, duplicating only the thread that called fork() is sufficient. However, if the new process does not call exec() after fork(), it should duplicate all the threads.
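The common pattern of calling exec() immediately after fork() looks like the sketch below; the ls program is only an example of a program image that replaces the child process.

    /* Sketch of the fork()-then-exec() pattern on UNIX. Because exec() is
     * called immediately after fork(), duplicating every thread of the
     * parent in the child would be wasted work. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        pid_t pid = fork();
        if (pid < 0) {
            perror("fork");
            exit(1);
        } else if (pid == 0) {
            /* Child: replace the entire process image with a new program. */
            execlp("ls", "ls", "-l", (char *)NULL);
            perror("execlp");   /* reached only if exec failed */
            _exit(1);
        } else {
            /* Parent: wait for the child to finish. */
            waitpid(pid, NULL, 0);
        }
        return 0;
    }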
Thread Cancellation
Thread cancellation is the task of terminating a thread before it has completed. For example, when a web page loads its images using multiple threads and the page has not finished loading when the user presses the stop button, all the threads fetching images are cancelled. Cancellation of a target thread may occur in two different scenarios:
a. Asynchronous cancellation: one thread immediately terminates the target thread.
b. Deferred cancellation: the target thread periodically checks whether it should terminate, which allows it to terminate itself in an orderly fashion.
The difficulty with cancellation arises in situations where resources have been allocated to the thread being cancelled, or where the thread being cancelled is in the middle of updating data it shares with other threads. This becomes especially troublesome with asynchronous cancellation: the operating system will reclaim system resources from a cancelled thread, but often it cannot reclaim all of them. The alternative is deferred cancellation, which works by having one thread indicate that the target thread is to be cancelled; the cancellation takes effect only after the target thread has checked whether it should be cancelled. This allows a thread to check whether it should terminate at a point at which it can do so safely.
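With POSIX threads, deferred cancellation can be expressed with pthread_cancel() on the cancelling side and pthread_testcancel() as the explicit cancellation point inside the target thread, as in this sketch.

    /* Sketch of deferred cancellation with POSIX threads.
     * Compile with: cc -pthread deferred_cancel.c */
    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static void *worker(void *arg) {
        (void)arg;
        int oldtype;
        /* Deferred cancellation is the default; the call makes it explicit. */
        pthread_setcanceltype(PTHREAD_CANCEL_DEFERRED, &oldtype);
        volatile unsigned long work = 0;
        for (;;) {
            work++;                 /* stands in for a unit of real work        */
            pthread_testcancel();   /* safe point: honor a pending cancellation */
        }
        return NULL;
    }

    int main(void) {
        pthread_t t;
        pthread_create(&t, NULL, worker, NULL);
        sleep(1);              /* let the worker run for a while                 */
        pthread_cancel(t);     /* mark the target thread for cancellation        */
        pthread_join(t, NULL); /* takes effect at the next pthread_testcancel()  */
        printf("worker cancelled\n");
        return 0;
    }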
Signal Handling
Signals are used in UNIX systems to notify a process that a particular event has occurred. A signal may be received either synchronously or asynchronously, depending on the source of and the reason for the event being signalled. All signals, whether synchronous or asynchronous, follow the same pattern:
a. A signal is generated by the occurrence of a particular event.
b. The generated signal is delivered to a process.
c. Once delivered, the signal must be handled.
Examples of synchronous signals include illegal memory access and division by zero; the signal is generated by the operation and delivered to the process that performed it. An example of an asynchronous signal is killing a process from the keyboard (Ctrl+C); the signal is delivered to that process asynchronously. Every signal may be handled by one of two possible handlers:
1. A default signal handler.
2. A user-defined signal handler.
Handling signals in a single-threaded program is straightforward: signals are simply delivered to the process. Delivering signals is more complicated in multithreaded programs, however, because a process may have several threads. In general, there are four options for where a signal should be delivered:
1. Deliver the signal to the thread to which the signal applies.
2. Deliver the signal to every thread in the process.
3. Deliver the signal to certain threads in the process.
4. Assign a specific thread to receive all signals directed to the process.
The method for delivering a signal depends on the type of signal generated. For example, a synchronous signal needs to be delivered to the thread that caused the signal, not to other threads in the process. The situation with asynchronous signals, however, is less clear. Some asynchronous signals, such as a signal to terminate a process (for example, Alt-F4), should be delivered to all threads. Some multithreaded versions of UNIX allow a thread to specify which signals it will accept and which it will block; an asynchronous signal is then delivered only to the threads that are not blocking it. Solaris 2 implements the fourth option for handling signals. Windows 2000 does not provide support for signals; instead, it uses asynchronous procedure calls (APCs).
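The sketch below installs a user-defined handler for SIGINT (the asynchronous signal generated by Ctrl+C) using the POSIX sigaction() interface; in a multithreaded program, pthread_sigmask() would additionally be used to control which threads may receive the signal.

    /* Sketch: replacing the default handler for SIGINT (Ctrl+C)
     * with a user-defined handler via sigaction(). */
    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    static volatile sig_atomic_t got_sigint = 0;

    static void on_sigint(int signo) {
        (void)signo;
        got_sigint = 1;             /* handlers should only set a flag */
    }

    int main(void) {
        struct sigaction sa;
        memset(&sa, 0, sizeof sa);
        sa.sa_handler = on_sigint;  /* user-defined handler instead of the default */
        sigemptyset(&sa.sa_mask);
        sigaction(SIGINT, &sa, NULL);

        printf("press Ctrl+C to generate SIGINT...\n");
        while (!got_sigint)
            pause();                /* block until some signal is delivered */
        printf("SIGINT was handled by the user-defined handler\n");
        return 0;
    }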
Thread Pools
The multithreaded web server described above illustrates two problems that can arise:
a. The time required to create a thread to service each request is wasted, since the thread is discarded as soon as it completes its task.
b. Creating an unlimited number of threads can degrade the performance of the system.
The solution is to use a thread pool: a number of threads are created at process startup and placed into a pool, where they sit and wait for work. When the server receives a request, it wakes a thread from the pool, if one is available, and passes it the request to service. When the thread finishes its job, it returns to the pool and waits for more work. If no thread is available when one is needed, the server waits until a thread becomes free.
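A minimal sketch of this scheme with POSIX threads follows: a fixed set of worker threads created at startup blocks on a shared queue of jobs protected by a mutex and a condition variable. The pool size, queue length, and job representation are deliberately simplified for illustration.

    /* Minimal thread-pool sketch with POSIX threads (illustrative only).
     * Compile with: cc -pthread pool_sketch.c */
    #include <pthread.h>
    #include <stdio.h>

    #define POOL_SIZE 4
    #define QUEUE_LEN 16

    static int queue[QUEUE_LEN];            /* pending "requests" (just ints here) */
    static int head = 0, tail = 0, count = 0;
    static int shutting_down = 0;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  work_ready = PTHREAD_COND_INITIALIZER;

    /* Worker: created once at startup, then waits in the pool for jobs. */
    static void *worker(void *arg) {
        long id = (long)arg;
        for (;;) {
            pthread_mutex_lock(&lock);
            while (count == 0 && !shutting_down)
                pthread_cond_wait(&work_ready, &lock);  /* sleep until work arrives */
            if (count == 0 && shutting_down) {
                pthread_mutex_unlock(&lock);
                return NULL;
            }
            int job = queue[head];
            head = (head + 1) % QUEUE_LEN;
            count--;
            pthread_mutex_unlock(&lock);

            printf("thread %ld servicing request %d\n", id, job);  /* do the work */
        }
    }

    /* Submit a request to the pool (drops the request if the queue is full). */
    static void submit(int job) {
        pthread_mutex_lock(&lock);
        if (count < QUEUE_LEN) {
            queue[tail] = job;
            tail = (tail + 1) % QUEUE_LEN;
            count++;
            pthread_cond_signal(&work_ready);           /* wake one waiting thread */
        }
        pthread_mutex_unlock(&lock);
    }

    int main(void) {
        pthread_t pool[POOL_SIZE];
        for (long i = 0; i < POOL_SIZE; i++)
            pthread_create(&pool[i], NULL, worker, (void *)i);

        for (int job = 0; job < 10; job++)
            submit(job);                                /* incoming "requests"     */

        pthread_mutex_lock(&lock);
        shutting_down = 1;
        pthread_cond_broadcast(&work_ready);            /* wake all workers to exit */
        pthread_mutex_unlock(&lock);

        for (int i = 0; i < POOL_SIZE; i++)
            pthread_join(pool[i], NULL);
        return 0;
    }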
The advantages of using a thread pool are:
• Servicing a request with an existing thread is usually faster than waiting for a new thread to be created.
• A thread pool limits the number of threads that exist at any one time. This is particularly important on systems that cannot support a large number of threads running concurrently.
The number of threads in the pool can be set based on factors such as the number of CPUs in the system, the amount of physical memory, and the expected number of concurrent client requests.
Thread-Specific Data
Threads belonging to a process share the process's data, but in certain circumstances each thread may need its own copy of particular data. Such data is called thread-specific data.
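As a brief illustration, POSIX threads provide thread-specific data through keys: every thread uses the same key, but each stores and retrieves its own private value, as in this sketch.

    /* Sketch: thread-specific data with POSIX threads.
     * Each thread stores its own value under the same shared key. */
    #include <pthread.h>
    #include <stdio.h>

    static pthread_key_t id_key;

    static void *worker(void *arg) {
        pthread_setspecific(id_key, arg);        /* this thread's private copy */
        int *my_id = pthread_getspecific(id_key);
        printf("thread sees its own id: %d\n", *my_id);
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        int a = 1, b = 2;

        pthread_key_create(&id_key, NULL);       /* one key shared by all threads */
        pthread_create(&t1, NULL, worker, &a);
        pthread_create(&t2, NULL, worker, &b);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        pthread_key_delete(id_key);
        return 0;
    }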