Saturday, 21 June 2014

Parallel Computation

Parallel computing is a form of computation in which many calculations are carried out simultaneously, operating on the principle that large problems can often be divided into smaller ones, which are then solved concurrently ("in parallel"). There are several different forms of parallel computing: bit-level, instruction-level, data, and task parallelism. Parallelism has been employed for many years, mainly in high-performance computing, but interest in it has grown lately due to the physical constraints preventing frequency scaling. As power consumption (and consequently heat generation) by computers has become a concern in recent years, parallel computing has become the dominant paradigm in computer architecture, mainly in the form of multi-core processors.
Parallel computers can be roughly classified according to the level at which the hardware supports parallelism, with multi-core and multi-processor computers having multiple processing elements within a single machine, while clusters, MPPs, and grids use multiple computers to work on the same task. Specialized parallel computer architectures are sometimes used alongside traditional processors for accelerating specific tasks.
Parallel computer programs are more difficult to write than sequential ones, because concurrency introduces several new classes of potential software bugs, of which race conditions are the most common. Communication and synchronization between the different subtasks are typically some of the greatest obstacles to getting good parallel program performance. The maximum possible speed-up of a single program as a result of parallelization is given by Amdahl's law.
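To make that limit concrete, here is a small Python sketch of Amdahl's law; the names p (parallelizable fraction) and n (number of processors) are introduced only for this example:

```python
def amdahl_speedup(p, n):
    """Theoretical speedup of a program whose fraction p is
    perfectly parallelizable, when run on n processors."""
    return 1.0 / ((1.0 - p) + p / n)

# Even with 1024 processors, a program that is 95% parallelizable
# cannot exceed a 20x speedup (the limit is 1 / (1 - p)).
for n in (2, 8, 64, 1024):
    print(n, round(amdahl_speedup(0.95, n), 2))
```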


The parallelism concept: multiple copies of a hardware unit are used; all copies can operate simultaneously; parallelism occurs at many levels of the architecture; and the term "parallel computer" is applied when parallelism dominates the entire architecture.
Distributed processing refers to any of a variety of computer systems that use more than one computer or processor to run an application. In distributed processing, the work for a transaction is carried out on more than one processor; in other words, processing is spread across two or more machines, and the individual processes are not necessarily running at the same time.


Parallel computer architecture: a design in which the computer has a reasonably large number of processors and is intended for scaling; for example, a computer with thirty-two processors.
Not generally classified as parallel computers:
– Dual-processor computers
– Quad-processor computers


Programming with threads introduces new difficulties even for experienced programmers. Concurrent programming has techniques and pitfalls that do not occur in sequential programming. Many of the techniques are obvious, but some are obvious only with hindsight. Some of the pitfalls are comfortable (for example, deadlock is a pleasant sort of bug—your program stops with all the evidence intact), but some take the form of insidious performance penalties.
A “thread” is a straightforward concept: a single sequential flow of control. In a high-level language you normally program a thread using procedures, where the procedure calls follow the traditional stack discipline. Within a single thread, there is at any instant a single point of execution. The programmer need learn nothing new to use a single thread.
Having “multiple threads” in a program means that at any instant the program has multiple points of execution, one in each of its threads. The programmer can mostly view the threads as executing simultaneously, as if the computer were endowed with as many processors as there are threads. The programmer is required to decide when and where to create multiple threads, or to accept such decisions made for him by implementers of existing library packages or runtime systems. Additionally, the programmer must occasionally be aware that the computer might not in fact execute all his threads simultaneously.
Having the threads execute within a “single address space” means that the computer’s addressing hardware is configured so as to permit the threads to read and write the same memory locations. In a high-level language, this usually corresponds to the fact that the off-stack (global) variables are shared among all the threads of the program. Each thread executes on a separate call stack with its own separate local variables. The programmer is responsible for using the synchronization mechanisms of the thread facility to ensure that the shared memory is accessed in a manner that will give the correct answer. Thread facilities are always advertised as being “lightweight”. This means that thread creation, existence, destruction and synchronization primitives are cheap enough that the programmer will use them for all his concurrency needs.
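As a minimal sketch of these ideas, the Python example below (standard library only) runs several threads in one address space that share a global counter, and uses a lock from the thread facility so that concurrent updates give the correct answer:

```python
import threading

counter = 0                      # off-stack (global) state shared by all threads
counter_lock = threading.Lock()  # synchronization primitive protecting it

def worker(iterations):
    global counter
    for _ in range(iterations):
        with counter_lock:       # only one thread updates the counter at a time
            counter += 1

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # always 400000; without the lock the result could be lower
```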
A GPU (Graphics Processing Unit) is a specialized processor designed to accelerate image and graphics processing, with its own dedicated memory to speed up that work. The GPU is usually located on a graphics card or built into a laptop or desktop computer.
CUDA (Compute Unified Device Architecture) is a scheme created by NVIDIA that allows the GPU (Graphics Processing Unit) to be used not only for graphics processing but also for general-purpose computation. With CUDA we can take advantage of the many processing cores in an NVIDIA GPU to carry out large amounts of computation in parallel.
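A rough sketch of general-purpose GPU computing from Python is shown below; it assumes the Numba package and a CUDA-capable NVIDIA GPU are available, which is an assumption beyond the original text. Each GPU thread adds one pair of array elements:

```python
import numpy as np
from numba import cuda

@cuda.jit
def vector_add(x, y, out):
    i = cuda.grid(1)          # global index of this GPU thread
    if i < out.size:
        out[i] = x[i] + y[i]

n = 1_000_000
x = np.random.rand(n).astype(np.float32)
y = np.random.rand(n).astype(np.float32)
out = np.zeros_like(x)

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
vector_add[blocks, threads_per_block](x, y, out)  # launch the kernel on the GPU

print(np.allclose(out, x + y))  # True
```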




Sources:
http://en.wikipedia.org/wiki/Parallel_computing
http://uchaaii.blogspot.com/2013/07/parallel-computation.html



Tuesday, 13 May 2014

Quantum Computation

1. Quantum Computation 
A quantum computer (also known as a quantum supercomputer) is a computation device that makes direct use of quantum-mechanical phenomena, such as superposition and entanglement, to perform operations on data. Quantum computers are different from digital computers based on transistors. Whereas digital computers require data to be encoded into binary digits (bits), each of which is always in one of two definite states (0 or 1), quantum computation uses qubits (quantum bits), which can be in superpositions of states. A theoretical model is the quantum Turing machine, also known as the universal quantum computer. Quantum computers share theoretical similarities with non-deterministic and probabilistic computers; one example is the ability to be in more than one state simultaneously. The field of quantum computing was first introduced by Yuri Manin in 1980 and Richard Feynman in 1982. A quantum computer with spins as quantum bits was also formulated for use as a quantum space–time in 1969.
As of 2014 quantum computing is still in its infancy but experiments have been carried out in which quantum computational operations were executed on a very small number of qubits. Both practical and theoretical research continues, and many national governments and military funding agencies support quantum computing research to develop quantum computers for both civilian and national security purposes, such as cryptanalysis.
Large-scale quantum computers will be able to solve certain problems much more quickly than any classical computer using the best currently known algorithms, like integer factorization using Shor's algorithm or the simulation of quantum many-body systems. There exist quantum algorithms, such as Simon's algorithm, which run faster than any possible probabilistic classical algorithm. Given sufficient computational resources, however, a classical computer could be made to simulate any quantum algorithm; quantum computation does not violate the Church–Turing thesis.



2. Entanglement
Quantum entanglement is a physical phenomenon that occurs when pairs or groups of particles are generated or interact in ways such that the quantum state of each particle cannot be described independently – instead, a quantum state may be given for the system as a whole.
Measurements of physical properties such as position, momentum, spin, polarization, etc. performed on entangled particles are found to be appropriately correlated. For example, if a pair of particles is generated in such a way that their total spin is known to be zero, and one particle is found to have clockwise spin on a certain axis, then the spin of the other particle, measured on the same axis, will be found to be counterclockwise. Because of the nature of quantum measurement, however, this behavior gives rise to effects that can appear paradoxical: any measurement of a property of a particle can be seen as acting on that particle (e.g. by collapsing a number of superimposed states); and in the case of entangled particles, such action must be on the entangled system as a whole. It thus appears that one particle of an entangled pair "knows" what measurement has been performed on the other, and with what outcome, even though there is no known means for such information to be communicated between the particles, which at the time of measurement may be separated by arbitrarily large distances.
Such phenomena were the subject of a 1935 paper by Albert Einstein, Boris Podolsky, and Nathan Rosen, describing what came to be known as the EPR paradox, and several papers by Erwin Schrödinger shortly thereafter. Einstein and others considered such behavior to be impossible, as it violated the local realist view of causality (Einstein referred to it as "spooky action at a distance"), and argued that the accepted formulation of quantum mechanics must therefore be incomplete. Later, however, the counterintuitive predictions of quantum mechanics were verified experimentally. Experiments have been performed involving measuring the polarization or spin of entangled particles in different directions, which – by producing violations of Bell's inequality – demonstrate statistically that the local realist view cannot be correct. This has been shown to occur even when the measurements are performed more quickly than light could travel between the sites of measurement: there is no lightspeed or slower influence that can pass between the entangled particles. Recent experiments have measured entangled particles within less than one part in 10,000 of the light travel time between them. According to the formalism of quantum theory, the effect of measurement happens instantly. It is not possible, however, to use this effect to transmit classical information at faster-than-light speeds (see Faster-than-light → Quantum mechanics).
Quantum entanglement is an area of extremely active research by the physics community, and its effects have been demonstrated experimentally with photons, electrons, molecules the size of buckyballs, and even small diamonds. Research is also focused on the utilization of entanglement effects in communication and computation.


3. Qubit Data
A quantum computer with a given number of qubits is fundamentally different from a classical computer composed of the same number of classical bits. For example, to represent the state of an n-qubit system on a classical computer would require the storage of 2^n complex coefficients. Although this fact may seem to indicate that qubits can hold exponentially more information than their classical counterparts, care must be taken not to overlook the fact that the qubits are only in a probabilistic superposition of all of their states. This means that when the final state of the qubits is measured, they will only be found in one of the possible configurations they were in before measurement. Moreover, it is incorrect to think of the qubits as only being in one particular state before measurement since the fact that they were in a superposition of states before the measurement was made directly affects the possible outcomes of the computation.
Qubits are made up of controlled particles and the means of control (e.g. devices that trap particles and switch them from one state to another).[10]
For example: Consider first a classical computer that operates on a three-bit register. The state of the computer at any time is a probability distribution over the 2^3 = 8 different three-bit strings 000, 001, 010, 011, 100, 101, 110, 111. If it is a deterministic computer, then it is in exactly one of these states with probability 1. However, if it is a probabilistic computer, then there is a possibility of it being in any one of a number of different states. We can describe this probabilistic state by eight nonnegative numbers A, B, C, D, E, F, G, H (where A = probability the computer is in state 000, B = probability the computer is in state 001, etc.). There is a restriction that these probabilities sum to 1.
The state of a three-qubit quantum computer is similarly described by an eight-dimensional vector (a,b,c,d,e,f,g,h), called a ket. Here, however, the coefficients can have complex values, and it is the sum of the squares of the coefficients' magnitudes, |a|^2 + |b|^2 + ... + |h|^2, that must equal 1. These square magnitudes represent the probability amplitudes of given states. However, because a complex number encodes not just a magnitude but also a direction in the complex plane, the phase difference between any two coefficients (states) represents a meaningful parameter. This is a fundamental difference between quantum computing and probabilistic classical computing.[11]
If you measure the three qubits, you will observe a three-bit string. The probability of measuring a given string is the squared magnitude of that string's coefficient (i.e., the probability of measuring 000 = |a|^2, the probability of measuring 001 = |b|^2, etc.). Thus, measuring a quantum state described by complex coefficients (a,b,...,h) gives the classical probability distribution (|a|^2, |b|^2, ..., |h|^2), and we say that the quantum state "collapses" to a classical state as a result of making the measurement.
Note that an eight-dimensional vector can be specified in many different ways depending on what basis is chosen for the space. The basis of bit strings (e.g., 000, 001, ..., 111) is known as the computational basis. Other possible bases are unit-length, orthogonal vectors and the eigenvectors of the Pauli-x operator. Ket notation is often used to make the choice of basis explicit. For example, the state (a,b,c,d,e,f,g,h) in the computational basis can be written as:
a|000⟩ + b|001⟩ + c|010⟩ + d|011⟩ + e|100⟩ + f|101⟩ + g|110⟩ + h|111⟩
where, e.g., |010⟩ = (0, 0, 1, 0, 0, 0, 0, 0).
The computational basis for a single qubit (two dimensions) is |0⟩ = (1, 0) and |1⟩ = (0, 1).
Using the eigenvectors of the Pauli-x operator, a single qubit is |+⟩ = (1/√2)(1, 1) and |−⟩ = (1/√2)(1, −1).
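A small NumPy sketch of these definitions (NumPy is assumed to be available): building computational-basis kets, checking that the squared magnitudes of a general three-qubit state sum to 1, and writing down the Pauli-x eigenvectors:

```python
import numpy as np

def ket(bits):
    """Computational-basis ket for a bit string, e.g. ket('010')."""
    v = np.zeros(2 ** len(bits), dtype=complex)
    v[int(bits, 2)] = 1.0
    return v

print(ket('010'))                      # the vector (0,0,1,0,0,0,0,0)

# A general three-qubit state is a length-8 complex vector (a,...,h)
state = np.array([1, 1j, 0, 0, 1, 0, 0, -1], dtype=complex)
state /= np.linalg.norm(state)         # normalize so |a|^2 + ... + |h|^2 = 1
print(np.sum(np.abs(state) ** 2))      # 1.0

# Pauli-x eigenvectors for a single qubit
plus  = np.array([1, 1]) / np.sqrt(2)
minus = np.array([1, -1]) / np.sqrt(2)
```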

Operation

While a classical three-bit state and a quantum three-qubit state are both eight-dimensional vectors, they are manipulated quite differently for classical or quantum computation. For computing in either case, the system must be initialized, for example into the all-zeros string, |000⟩, corresponding to the vector (1,0,0,0,0,0,0,0). In classical randomized computation, the system evolves according to the application of stochastic matrices, which preserve that the probabilities add up to one (i.e., preserve the L1 norm). In quantum computation, on the other hand, allowed operations are unitary matrices, which are effectively rotations (they preserve that the sum of the squares add up to one, the Euclidean or L2 norm). (Exactly which unitaries can be applied depends on the physics of the quantum device.) Consequently, since rotations can be undone by rotating backward, quantum computations are reversible. (Technically, quantum operations can be probabilistic combinations of unitaries, so quantum computation really does generalize classical computation. See quantum circuit for a more precise formulation.)

Finally, upon termination of the algorithm, the result needs to be read off. In the case of a classical computer, we sample from the probability distribution on the three-bit register to obtain one definite three-bit string, say 000. Quantum mechanically, we measure the three-qubit state, which is equivalent to collapsing the quantum state down to a classical distribution (with the coefficients in the classical state being the squared magnitudes of the coefficients for the quantum state, as described above), followed by sampling from that distribution. Note that this destroys the original quantum state. Many algorithms will only give the correct answer with a certain probability. However, by repeatedly initializing, running and measuring the quantum computer, the probability of getting the correct answer can be increased.
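A minimal NumPy sketch of this process, under the same assumptions as above: initialize |000⟩, apply a unitary (here a Hadamard gate on the first qubit, chosen only for illustration), confirm that the L2 norm is preserved, and measure by sampling from the squared magnitudes:

```python
import numpy as np

# Start in the all-zeros state |000>, i.e. the vector (1,0,0,0,0,0,0,0)
psi = np.zeros(8, dtype=complex)
psi[0] = 1.0

# A unitary: a Hadamard gate applied to the first qubit only
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)
U = np.kron(H, np.kron(I, I))          # 8x8 unitary acting on three qubits
psi = U @ psi
print(np.linalg.norm(psi))             # 1.0 -- unitaries preserve the L2 norm

# Measurement: collapse to the classical distribution of squared magnitudes
probs = np.abs(psi) ** 2               # here 0.5 on |000> and 0.5 on |100>
outcome = np.random.choice(8, p=probs)
print(format(int(outcome), '03b'))     # prints '000' or '100'
```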


4. Quantum Gates
In quantum computing and specifically the quantum circuit model of computation, a quantum gate (or quantum logic gate) is a basic quantum circuit operating on a small number of qubits. They are the building blocks of quantum circuits, like classical logic gates are for conventional digital circuits.
Unlike many classical logic gates, quantum logic gates are reversible. However, classical computing can be performed using only reversible gates. For example, the reversible Toffoli gate can implement all Boolean functions. This gate has a direct quantum equivalent, showing that quantum circuits can perform all operations performed by classical circuits.
Quantum logic gates are represented by unitary matrices. The most common quantum gates operate on spaces of one or two qubits, just like the common classical logic gates operate on one or two bits. This means that as matrices, quantum gates can be described by 2 × 2 or 4 × 4 unitary matrices.
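For illustration, common gates can be written down directly as 2 × 2 and 4 × 4 matrices and checked for unitarity and reversibility (a NumPy sketch, not tied to any particular quantum-computing library):

```python
import numpy as np

H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)        # Hadamard gate, 2x2 (one qubit)

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])             # controlled-NOT gate, 4x4 (two qubits)

def is_unitary(U):
    """A gate U is unitary when U†U equals the identity matrix."""
    return np.allclose(U.conj().T @ U, np.eye(U.shape[0]))

print(is_unitary(H), is_unitary(CNOT))      # True True

# Reversibility: applying a gate and then its inverse restores the input
state = np.array([1, 0, 0, 0], dtype=complex)     # |00>
print(np.allclose(CNOT @ (CNOT @ state), state))  # True
```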


5. Shor's Algorithm 
Shor's algorithm, named after mathematician Peter Shor, is a quantum algorithm (an algorithm that runs on a quantum computer) for integer factorization formulated in 1994. Informally it solves the following problem: Given an integer N, find its prime factors.
On a quantum computer, to factor an integer N, Shor's algorithm runs in polynomial time (the time taken is polynomial in log N, which is the size of the input). Specifically, it takes time O((log N)^3), demonstrating that the integer factorization problem can be efficiently solved on a quantum computer and is thus in the complexity class BQP. This is substantially faster than the most efficient known classical factoring algorithm, the general number field sieve, which works in sub-exponential time, about O(e^(1.9 (log N)^(1/3) (log log N)^(2/3))). The efficiency of Shor's algorithm is due to the efficiency of the quantum Fourier transform, and modular exponentiation by repeated squarings.
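The sketch below shows only the number-theoretic skeleton that Shor's algorithm relies on; the quantum speedup comes from finding the period with the quantum Fourier transform, which is replaced here by a slow classical brute-force search, so this Python example illustrates the structure but gains no speedup:

```python
from math import gcd
import random

def find_period(a, N):
    """Smallest r > 0 with a^r = 1 (mod N); the quantum part of Shor's
    algorithm finds this efficiently, here we simply search (slow)."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_classical_skeleton(N):
    while True:
        a = random.randrange(2, N)
        d = gcd(a, N)
        if d > 1:                 # lucky guess: a already shares a factor with N
            return d, N // d
        r = find_period(a, N)
        if r % 2 == 0 and pow(a, r // 2, N) != N - 1:
            p = gcd(pow(a, r // 2) - 1, N)
            q = gcd(pow(a, r // 2) + 1, N)
            if 1 < p < N:
                return p, N // p
            if 1 < q < N:
                return q, N // q

print(shor_classical_skeleton(15))   # (3, 5) or (5, 3)
```

Running it on N = 15 returns the factors 3 and 5, mirroring the small demonstration mentioned below.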
If a quantum computer with a sufficient number of qubits could operate without succumbing to noise and other quantum decoherence phenomena, Shor's algorithm could be used to break public-key cryptography schemes such as the widely used RSA scheme. RSA is based on the assumption that factoring large numbers is computationally infeasible. So far as is known, this assumption is valid for classical (non-quantum) computers; no classical algorithm is known that can factor in polynomial time. However, Shor's algorithm shows that factoring is efficient on an ideal quantum computer, so it may be feasible to defeat RSA by constructing a large quantum computer. It was also a powerful motivator for the design and construction of quantum computers and for the study of new quantum computer algorithms. It has also facilitated research on new cryptosystems that are secure from quantum computers, collectively called post-quantum cryptography.
In 2001, Shor's algorithm was demonstrated by a group at IBM, who factored 15 into 3 × 5, using an NMR implementation of a quantum computer with 7 qubits. However, some doubts have been raised as to whether IBM's experiment was a true demonstration of quantum computation, since no entanglement was observed. Since IBM's implementation, several other groups have implemented Shor's algorithm using photonic qubits, emphasizing that entanglement was observed. In 2012, the factorization of 15 was repeated. Also in 2012, the factorization of 21 was achieved, setting the record for the largest number factored with a quantum computer. In April 2012, the factorization of 143 was achieved, although this used adiabatic quantum computation rather than Shor's algorithm.


References:
http://en.wikipedia.org/wiki/Quantum_entanglement
http://en.wikipedia.org/wiki/Quantum_gate
http://en.wikipedia.org/wiki/Quantum_computer
http://en.wikipedia.org/wiki/Shor's_algorithm

Tuesday, 22 April 2014

CLOUD COMPUTING

1.    In computer networking, cloud computing is computing that involves a large number of computers connected through a communication network such as the Internet, similar to utility computing. In science, cloud computing is a synonym for distributed computing over a network, and means the ability to run a program or application on many connected computers at the same time. In common usage, the term "the cloud" is essentially a metaphor for the Internet. Marketers have further popularized the phrase "in the cloud" to refer to software, platforms and infrastructure that are sold "as a service", i.e. remotely through the Internet. Typically, the seller has actual energy-consuming servers which host products and services from a remote location, so end-users don't have to; they can simply log on to the network without installing anything. The major models of cloud computing service are known as software as a service, platform as a service, and infrastructure as a service. These cloud services may be offered in a public, private or hybrid network. Google, Amazon, IBM, Oracle Cloud, Rackspace, Salesforce, Zoho and Microsoft Azure are some well-known cloud vendors. Network-based services, which appear to be provided by real server hardware and are in fact served up by virtual hardware simulated by software running on one or more real machines, are often called cloud computing. Such virtual servers do not physically exist and can therefore be moved around and scaled up or down on the fly without affecting the end user, somewhat like a cloud becoming larger or smaller without being a physical object.
2.     From the explanation of cloud computing above, there are many benefits we can take from cloud computing, namely:
-      Scalability: with cloud computing we can increase our storage capacity without having to purchase additional equipment such as hard drives; we simply add the capacity provided by the cloud computing service provider.
-      Accessibility: we can access our data whenever and wherever we are, as long as we are connected to the Internet, making it easier to reach the data when it is needed.
-      Security: the security of our data is assured by the cloud computing service provider, so an IT-based company can store its data safely with the provider; this also reduces the cost of securing corporate data.
-      Creation: users can develop their creations or projects without having to hand them to the company directly; they can deliver them through the cloud computing service provider.
-      Disaster recovery: when a natural disaster strikes, our data remains stored safely in the cloud even if our hard drive or gadget is damaged.
3.     Here is how data storage and replication work when cloud computing technology is used. With cloud computing, the local computer no longer has to run the heavy computational work required by an application, and there is no need to install a software package on every computer; we only need to install the software through which the application is accessed. The network of computers that makes up the cloud (the Internet) handles the work instead. These servers run everything from e-mail and word processing to complex data-analysis programs. When a user accesses the cloud (the Internet) for a popular website, many things can happen. The user's Internet Protocol (IP) address, for example, can be used to determine where the user is located (geolocation). The Domain Name System (DNS) can then direct the user to a server cluster that is close to them, so the site can be accessed quickly and in their local language. The user does not log in to a particular server; they log in to the service using a session id or a cookie that has been obtained and stored in their browser. What the user sees in the browser usually comes from a web server. Web servers run the software that presents the interface used to collect commands or instructions from the user (clicks, typing, uploads, etc.). These commands are then interpreted by the web servers or processed by application servers. Information is then stored in, or retrieved from, database servers or file servers, and the user is presented with an updated page. The data is synchronized across multiple servers around the world for fast global access and to prevent loss of data. Web services provide a general mechanism for the delivery of services, which makes the service-oriented architecture (SOA) ideal to apply. The goal of SOA is to address the requirements of loosely coupled, standards-based, and protocol-independent distributed computing. In SOA, software resources are packaged as "services": well-defined, self-contained modules that provide standard business functionality independently of the state or context of other services. The maturity of web services has enabled the creation of robust services that can be accessed on demand in a uniform way.
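As a tiny illustration of the client side of this flow, the Python sketch below (standard library only) sends a command to a cloud web service and reads back the updated data; the endpoint URL, the JSON fields, and the session id are all hypothetical, so the snippet will not actually connect anywhere as written:

```python
import json
import urllib.request

# Hypothetical REST endpoint exposed by a cloud web service
URL = "https://api.example-cloud.test/v1/documents"

# The browser/client sends an instruction (here: create a document) ...
payload = json.dumps({"title": "report", "body": "draft"}).encode("utf-8")
request = urllib.request.Request(
    URL,
    data=payload,
    headers={
        "Content-Type": "application/json",
        "Cookie": "session_id=abc123",   # session id identifying the logged-in user
    },
    method="POST",
)

# ... the web/application servers process it, store it in a database,
# and return the updated resource, which the client then renders.
with urllib.request.urlopen(request) as response:
    print(json.load(response))
```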

4.     Cloud computing exhibits the following key characteristics:
·       Agility improves with users' ability to re-provision technological infrastructure resources.
·       Application programming interface (API) accessibility to software that enables machines to interact with cloud software in the same way that a traditional user interface (e.g., a computer desktop) facilitates interaction between humans and computers. Cloud computing systems typically use Representational State Transfer (REST)-based APIs.
·       Cost: cloud providers claim that computing costs reduce. A public-cloud delivery model converts capital expenditure to operational expenditure. This purportedly lowers barriers to entry, as infrastructure is typically provided by a third party and does not need to be purchased for one-time or infrequent intensive computing tasks. Pricing on a utility computing basis is fine-grained, with usage-based options, and fewer IT skills are required for implementation (in-house). The e-FISCAL project's state-of-the-art repository contains several articles looking into cost aspects in more detail, most of them concluding that cost savings depend on the type of activities supported and the type of infrastructure available in-house.
·       Device and location independence enable users to access systems using a web browser regardless of their location or what device they use (e.g., PC, mobile phone). As infrastructure is off-site (typically provided by a third-party) and accessed via the Internet, users can connect from anywhere.
·       Virtualization technology allows sharing of servers and storage devices and increased utilization. Applications can be easily migrated from one physical server to another.
·       Multitenancy enables sharing of resources and costs across a large pool of users thus allowing for:
·       centralization of infrastructure in locations with lower costs (such as real estate, electricity, etc.)
·       peak-load capacity increases (users need not engineer for highest possible load-levels)
·       utilisation and efficiency improvements for systems that are often only 10–20% utilised.
·       Reliability improves with the use of multiple redundant sites, which makes well-designed cloud computing suitable for business continuity and disaster recovery.
·       Scalability and elasticity via dynamic ("on-demand") provisioning of resources on a fine-grained, self-service basis in near real-time (note that VM startup time varies by VM type, location, OS, and cloud provider), without users having to engineer for peak loads.
·       Performance is monitored, and consistent and loosely coupled architectures are constructed using web services as the system interface.
·       Security can improve due to centralization of data, increased security-focused resources, etc., but concerns can persist about loss of control over certain sensitive data, and the lack of security for stored kernels. Security is often as good as or better than other traditional systems, in part because providers are able to devote resources to solving security issues that many customers cannot afford to tackle. However, the complexity of security is greatly increased when data is distributed over a wider area or over a greater number of devices, as well as in multi-tenant systems shared by unrelated users. In addition, user access to security audit logs may be difficult or impossible. Private cloud installations are in part motivated by users' desire to retain control over the infrastructure and avoid losing control of information security.
·       Maintenance of cloud computing applications is easier, because they do not need to be installed on each user's computer and can be accessed from different places.
5.     Security in cloud computing
After covering the concepts, techniques, and service architectures of cloud computing, the next concern is information network security. Cloud computing is a relatively new technology, and its level of information security remains to be seen. Based on the cloud service models, we can examine where gaps in information security may lie: in the Software as a Service model, in Platform as a Service, or in Infrastructure as a Service.
Furthermore, the security of cloud computing can also be viewed from its place in the protocols that govern data communication on the network. The protocol used as a reference in this paper is TCP/IP (Transmission Control Protocol / Internet Protocol).
There are many security issues surrounding cloud computing. With technology that allows consumers to access cloud services through a web browser or web services, there are several examples of security issues, namely: XML Signature Element Wrapping, Browser Security, Cloud Malware Injection Attack, and Flooding Attacks.
According to a research document issued by the Cloud Security Alliance titled Top Threats to Cloud Computing, two major security threats in cloud computing are loss or leakage of data and hijacking of accounts or services. These two threats are crucial because they affect reputation, the trust of partners, employees, and customers, and thereby the business itself. Account hijacking can be especially damaging if attackers gain access to a critical part of the cloud service, which then makes it easier for them to do things that affect the confidentiality, integrity, and availability of the existing services. To avoid these types of security threats, identity management and access control are the main requirements for a SaaS cloud computing company.
Identity management in cloud computing is also related to the focus of this paper: the security of the Software as a Service cloud computing model. As explained in detail earlier, the components that make up SaaS cloud computing use a Service Oriented Architecture (SOA) with Web Services standards (the XML language).

Identity management and access control in cloud computing Service Oriented Architecture

As defined earlier, SOA has features that make it loosely coupled, which leaves SOA very open to security risks. Therefore, SOA must meet several key requirements to satisfy data security standards, among other things: service discovery, service authentication, user authentication, access control, confidentiality, integrity, availability, and privacy (17). To ensure security in the SOA development environment, a community of open standards was created to build web services security standards, web services being the most widely used implementation of SOA. At the same time, identity management and access control in cloud computing have also been arranged by these standards.
For controlling access rights, the Security Assertion Markup Language (SAML) and the eXtensible Access Control Markup Language (XACML) have been adopted, meaning that when a user requests a service, the user must follow the established security policies related to access control.

SAML and Single Sign On
SAML is an XML standard for exchanging authentication and authorization data between security domains. SAML is platform independent, and it is mainly applied to Single Sign-On (SSO). Single Sign-On is one of the methods used for the authentication and authorization aspects of data security in an application or cloud service. Single sign-on (SSO) technology allows users to easily access resources in a network using only one user account. This technology is in high demand, especially in very large, heterogeneous networks (where the operating systems and applications in use come from many vendors, and users would otherwise be asked to enter their credentials separately into each different platform they access). By using SSO, a user only needs to authenticate once to obtain access to all the services within the network.
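The snippet below is not real SAML; it is only a highly simplified Python sketch of the single sign-on idea, in which an identity provider issues a signed assertion once and every service verifies that assertion instead of asking the user to authenticate again. The names and the shared secret are invented for the example:

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"shared-secret-between-idp-and-services"   # example value only

def issue_assertion(user):
    """Identity provider: sign a statement that `user` authenticated just now."""
    body = json.dumps({"user": user, "issued_at": int(time.time())}).encode()
    signature = hmac.new(SECRET, body, hashlib.sha256).digest()
    return base64.urlsafe_b64encode(body) + b"." + base64.urlsafe_b64encode(signature)

def verify_assertion(token):
    """Any service: accept the assertion if the signature checks out."""
    body_b64, sig_b64 = token.split(b".")
    body = base64.urlsafe_b64decode(body_b64)
    expected = hmac.new(SECRET, body, hashlib.sha256).digest()
    return hmac.compare_digest(base64.urlsafe_b64decode(sig_b64), expected)

token = issue_assertion("alice")   # the user authenticates once ...
print(verify_assertion(token))     # ... every service trusts the same token: True
```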

SaaS Cloud Computing Security with Single Sign-On in the TCP/IP Internet Protocol Layers

From the description of the overall SaaS cloud computing architecture, the position of SaaS cloud computing security with Single Sign-On can be mapped onto the layers of the TCP/IP (Transmission Control Protocol / Internet Protocol) suite, which gives a clearer understanding of information network security as part of information networking itself.

6.     Cloud Computing is the result of evolution and adoption of existing technologies and paradigms. The goal of cloud computing is to allow users to take benefit from all of these technologies, without the need for deep knowledge about or expertise with each one of them. The cloud aims to cut costs, and help the users focus on their core business instead of being impeded by IT obstacles.
The main enabling technology for cloud computing is virtualization. Virtualization generalizes the physical infrastructure, which is the most rigid component, and makes it available as a soft component that is easy to use and manage. By doing so, virtualization provides the agility required to speed up IT operations, and reduces cost by increasing infrastructure utilization. On the other hand, autonomic computing automates the process through which the user can provision resources on-demand. By minimizing user involvement, automation speeds up the process and reduces the possibility of human errors.
Users face difficult business problems every day. Cloud computing adopts concepts from Service-oriented Architecture (SOA) that can help the user break these problems into services that can be integrated to provide a solution. Cloud computing provides all of its resources as services, and makes use of the well-established standards and best practices gained in the domain of SOA to allow global and easy access to cloud services in a standardized way.
Cloud computing also leverages concepts from utility computing in order to provide metrics for the services used. Such metrics are at the core of the public cloud pay-per-use models. In addition, measured services are an essential part of the feedback loop in autonomic computing, allowing services to scale on-demand and to perform automatic failure recovery.
Cloud computing is a kind of grid computing; it has evolved by addressing the QoS (quality of service) and reliability problems. Cloud computing provides the tools and technologies to build data/compute intensive parallel applications with much more affordable prices compared to traditional parallel computing techniques.[35]
Cloud computing shares characteristics with:
·       Client–server model — Client–server computing refers broadly to any distributed application that distinguishes between service providers (servers) and service requestors (clients).[36]
·       Grid computing — "A form of distributed and parallel computing, whereby a 'super and virtual computer' is composed of a cluster of networked, loosely coupled computers acting in concert to perform very large tasks."
·       Mainframe computer — Powerful computers used mainly by large organizations for critical applications, typically bulk data processing such as: census; industry and consumer statistics; police and secret intelligence services; enterprise resource planning; and financial transaction processing.[37]
·       Utility computing — The "packaging of computing resources, such as computation and storage, as a metered service similar to a traditional public utility, such as electricity."[38][39]
·       Peer-to-peer — A distributed architecture without the need for central coordination. Participants are both suppliers and consumers of resources (in contrast to the traditional client–server model).
·       Cloud gaming — Also known as on-demand gaming, this is a way of delivering games to computers. Gaming data is stored in the provider's server, so that gaming is independent of the client computers used to play the game. One current example is OnLive, a service which gives users a certain amount of space to save game data and to load games within the OnLive server.


References:
http://en.wikipedia.org/wiki/Cloud_computing
http://id.wikipedia.org/wiki/Komputasi_awan
http://royanafwani.wordpress.com/2011/12/22/keamanan-pada-cloud-computing/