Saturday, June 21, 2014

Parallel Computation

Parallel computing is a form of computation in which many calculations are carried out simultaneously, operating on the principle that large problems can often be divided into smaller ones, which are then solved concurrently ("in parallel"). There are several different forms of parallel computing: bit-level, instruction-level, data, and task parallelism. Parallelism has been employed for many years, mainly in high-performance computing, but interest in it has grown lately due to the physical constraints preventing frequency scaling. As power consumption (and consequently heat generation) by computers has become a concern in recent years, parallel computing has become the dominant paradigm in computer architecture, mainly in the form of multi-core processors.
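As a small illustration of the divide-and-solve-concurrently principle (a minimal sketch added here, not code from the sources below), the following program splits the summation of a large array across two CPU threads, each of which computes a partial sum over its own half of the data:

#include <cstddef>
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

int main() {
    // A "large" problem: summing one million numbers.
    std::vector<long> data(1000000, 1);
    const std::size_t mid = data.size() / 2;
    long sum_lo = 0, sum_hi = 0;

    // Divide the problem into two halves and solve them concurrently.
    std::thread lo([&] { sum_lo = std::accumulate(data.begin(), data.begin() + mid, 0L); });
    std::thread hi([&] { sum_hi = std::accumulate(data.begin() + mid, data.end(), 0L); });
    lo.join();
    hi.join();

    // Combine the partial results sequentially.
    std::cout << "total = " << (sum_lo + sum_hi) << std::endl;  // prints 1000000
    return 0;
}

This is the data-parallel pattern in miniature: the same operation (summation) is applied to different pieces of the data at the same time.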
Parallel computers can be roughly classified according to the level at which the hardware supports parallelism, with multi-core and multi-processor computers having multiple processing elements within a single machine, while clusters, MPPs, and grids use multiple computers to work on the same task. Specialized parallel computer architectures are sometimes used alongside traditional processors for accelerating specific tasks.
Parallel computer programs are more difficult to write than sequential ones, because concurrency introduces several new classes of potential software bugs, of which race conditions are the most common. Communication and synchronization between the different subtasks are typically some of the greatest obstacles to getting good parallel program performance. The maximum possible speed-up of a single program as a result of parallelization is given by Amdahl's law.
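In symbols, Amdahl's law is commonly stated as follows (added here for clarity), where p is the fraction of the program that can be parallelized and N is the number of processors:

    S(N) = 1 / ((1 - p) + p/N)

For example, if 90% of a program can be parallelized (p = 0.9), the speed-up can never exceed 1 / (1 - 0.9) = 10x, no matter how many processors are used.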


Parallelism concept:
– Multiple copies of a hardware unit are used
– All copies can operate simultaneously
– Parallelism occurs at many levels of the architecture
– The term "parallel computer" is applied when parallelism dominates the entire architecture
Distributed processing refers to any of a variety of computer systems that use more than one computer or processor to run an application. It is the type of processing in which work is carried out on more than one processor in order for a transaction to be completed. In other words, processing is distributed across two or more machines, and the individual processes are not necessarily running at the same time.


Parallel computer architecture: a design in which the computer has a reasonably large number of processors and is intended for scaling. Example: a computer with thirty-two processors.
Not generally classified as parallel computers:
– Dual-processor computers
– Quad-processor computers


Programming with threads introduces new difficulties even for experienced programmers. Concurrent programming has techniques and pitfalls that do not occur in sequential programming. Many of the techniques are obvious, but some are obvious only with hindsight. Some of the pitfalls are comfortable (for example, deadlock is a pleasant sort of bug—your program stops with all the evidence intact), but some take the form of insidious performance penalties.
A “thread” is a straightforward concept: a single sequential flow of control. In a high-level language you normally program a thread using procedures, where the procedure calls follow the traditional stack discipline. Within a single thread, there is at any instant a single point of execution. The programmer need learn nothing new to use a single thread.
Having “multiple threads” in a program means that at any instant the program has multiple points of execution, one in each of its threads. The programmer can mostly view the threads as executing simultaneously, as if the computer were endowed with as many processors as there are threads. The programmer is required to decide when and where to create multiple threads, or to accept such decisions made for him by implementers of existing library packages or runtime systems. Additionally, the programmer must occasionally be aware that the computer might not in fact execute all his threads simultaneously.
Having the threads execute within a “single address space” means that the computer’s addressing hardware is configured so as to permit the threads to read and write the same memory locations. In a high-level language, this usually corresponds to the fact that the off-stack (global) variables are shared among all the threads of the program. Each thread executes on a separate call stack with its own separate local variables. The programmer is responsible for using the synchronization mechanisms of the thread facility to ensure that the shared memory is accessed in a manner that will give the correct answer. Thread facilities are always advertised as being “lightweight”. This means that thread creation, existence, destruction and synchronization primitives are cheap enough that the programmer will use them for all his concurrency needs.
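As a rough sketch of these ideas (illustrative only, not code from the sources below), the program here creates two threads that share one off-stack variable, counter, and uses a mutex as the synchronization mechanism so that concurrent updates give the correct answer:

#include <iostream>
#include <mutex>
#include <thread>

// Shared (off-stack) state, visible to every thread in the address space.
long counter = 0;
std::mutex counter_lock;

void worker(int iterations) {
    for (int i = 0; i < iterations; ++i) {
        // Without this lock the two threads would race on counter.
        std::lock_guard<std::mutex> guard(counter_lock);
        ++counter;
    }
}

int main() {
    // Each thread is its own sequential flow of control,
    // with its own call stack and local variables.
    std::thread t1(worker, 100000);
    std::thread t2(worker, 100000);
    t1.join();
    t2.join();
    std::cout << "counter = " << counter << std::endl;  // always 200000
    return 0;
}

Removing the lock turns this into a textbook race condition: both threads may read the same stale value of counter, and one of the two updates is lost.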
GPU (Graphics Processing Unit) refers to a specialized processor designed to accelerate graphics: it rapidly manipulates memory to speed up image processing. The GPU itself is usually located on a graphics card or built into a laptop computer.
CUDA (Compute Unified Device Architecture) is a scheme created by NVIDIA that makes the GPU (Graphics Processing Unit) capable of computing not only for graphics processing but also for general purposes. With CUDA, we can take advantage of the many processing cores on an NVIDIA GPU to carry out large amounts of computation.
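As a rough illustration (a minimal sketch assuming a CUDA-capable NVIDIA GPU and toolkit, not code from the sources below), the program here launches a kernel in which each GPU thread adds one pair of array elements, so the whole vector addition runs in parallel on the GPU:

#include <cuda_runtime.h>
#include <stdio.h>
#include <stdlib.h>

// Each GPU thread handles one element of the arrays.
__global__ void vectorAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        c[i] = a[i] + b[i];
    }
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    // Host (CPU) arrays.
    float *h_a = (float *)malloc(bytes);
    float *h_b = (float *)malloc(bytes);
    float *h_c = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { h_a[i] = 1.0f; h_b[i] = 2.0f; }

    // Device (GPU) arrays.
    float *d_a, *d_b, *d_c;
    cudaMalloc(&d_a, bytes);
    cudaMalloc(&d_b, bytes);
    cudaMalloc(&d_c, bytes);
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

    // Launch enough blocks of 256 threads to cover all n elements.
    int threadsPerBlock = 256;
    int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
    vectorAdd<<<blocks, threadsPerBlock>>>(d_a, d_b, d_c, n);
    cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);

    printf("c[0] = %f\n", h_c[0]);  // prints 3.000000

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    free(h_a); free(h_b); free(h_c);
    return 0;
}

Here the work is divided across roughly a million GPU threads, one per array element, which is exactly the kind of general-purpose computation on the GPU that CUDA enables.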




Sources:
http://en.wikipedia.org/wiki/Parallel_computing
http://uchaaii.blogspot.com/2013/07/parallel-computation.html