Parallel Computing and Distributed Computing
=================================================================================

Parallel computing and distributed computing are two approaches to processing data more efficiently by dividing a task into smaller parts that can be executed simultaneously. Although they share the goal of improving computational speed and efficiency, they employ different architectures, have different resource requirements, and suit different types of problems. Table 3390 compares the two approaches.

Table 3390. Comparison between parallel computing and distributed computing.

Definition
- Parallel computing: Involves performing many calculations or processes simultaneously by dividing a larger problem into discrete parts that can be solved concurrently. Each part runs on a processor within a single computer system, typically sharing memory (as in shared-memory architectures) or using more complex arrangements such as distributed memory (in high-performance computing clusters).
- Distributed computing: Consists of multiple autonomous computers that communicate through a network to achieve a common goal. Each computer in the network is independent and collaborates with the others by passing messages to process different parts of a larger task.

System Architecture
- Parallel computing: Usually takes place within a single machine or across tightly coupled systems. The processors can be multiple cores within a single CPU or multiple CPUs within a single system, sharing resources such as RAM and disk space.
- Distributed computing: Spans multiple systems that are geographically dispersed and connected by a network. The computers share neither physical memory nor a clock, and each system in a distributed environment can differ in hardware and operating system.

Communication
- Parallel computing: Processors communicate through shared memory or through techniques such as message passing, particularly in clusters.
- Distributed computing: Communication occurs over a network using protocols. Communication costs are generally higher because of network delays and the complexity of coordinating data across disparate systems.

Scalability
- Parallel computing: Scaling typically means adding more processors to a single system, which runs into physical and practical limits.
- Distributed computing: Highly scalable, since more machines can be added to the network, potentially across the globe; it can scale out rather than only scale up.

Use Cases
- Parallel computing: Often used in high-performance computing for tasks that require massive computation, such as simulations, complex physics calculations, and video rendering.
- Distributed computing: Commonly used for applications whose work can be easily partitioned into smaller independent tasks, such as web technologies, databases, and large-scale web services.

Fault Tolerance
- Parallel computing: Less fault-tolerant, since the failure of a single processor can affect the entire system.
- Distributed computing: More fault-tolerant because of the independent nature of the nodes; if one node fails, others can potentially take over its tasks or work around it.

Memory
- Parallel computing: In shared-memory architectures in particular, multiple processors (or cores) access a common shared memory space. This allows quick and efficient communication between processors but requires careful management to prevent conflicts and to ensure data coherence. Shared-memory systems are typically seen in multi-core processors, high-performance computing (HPC) clusters with shared memory, and other tightly coupled architectures (see the sketch after this table).
- Distributed computing: Each processor (or node) typically has its own local memory, and the nodes are often geographically distributed and connected via a network. Nodes communicate through message-passing protocols, in which data is explicitly sent from one node to another. This setup allows great scalability and fault tolerance but introduces higher communication costs and complexity in data management.

Examples
- Parallel computing: Multicore processors in a computer; supercomputers.
- Distributed computing: Cloud computing platforms such as AWS and Azure, blockchain technologies, and distributed databases.
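
To make the shared-memory column concrete, the minimal Python sketch below splits a single computation (a sum of squares) across the CPU cores of one machine; each worker process writes its partial result into a shared-memory array, so no network messages are involved. It uses only the standard-library multiprocessing module, and the problem size and chunking scheme are illustrative assumptions.

    # Minimal parallel-computing sketch: one machine, several cores,
    # partial results written into a shared-memory array.
    from multiprocessing import Process, Array, cpu_count

    def worker(idx, start, stop, results):
        # Each process computes its chunk and stores the partial
        # result in shared memory (no network messages involved).
        results[idx] = sum(i * i for i in range(start, stop))

    if __name__ == "__main__":
        n = 1_000_000                      # illustrative problem size
        n_workers = cpu_count()
        step = n // n_workers
        results = Array("q", n_workers)    # shared array of 64-bit ints
        procs = []
        for k in range(n_workers):
            stop = n if k == n_workers - 1 else (k + 1) * step
            p = Process(target=worker, args=(k, k * step, stop, results))
            p.start()
            procs.append(p)
        for p in procs:
            p.join()
        print("total:", sum(results))

Because the workers communicate only through the shared array, coordination is cheap, but all of them must live on the same machine, which is exactly the scalability limit noted in the table.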

Both parallel and distributed computing have transformative impacts across scientific, industrial, and technological fields. The choice between them depends largely on the specific requirements of the task, including speed, data volume, and the nature of the problem being solved.
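
As a distributed-computing counterpart to the shared-memory sketch above, the following sketch has autonomous nodes, each with its own local memory, cooperate by passing explicit messages over TCP sockets. To keep it self-contained and runnable on one machine, the nodes are simulated as local processes, and the coordinator address, message format, and retry logic are illustrative assumptions rather than a fixed protocol.

    # Minimal distributed-computing sketch: independent nodes exchange
    # explicit messages over network sockets (no shared memory).
    import json
    import socket
    import time
    from multiprocessing import Process

    HOST, PORT = "127.0.0.1", 50007   # assumed coordinator address
    N_NODES = 4
    N = 1_000_000                     # illustrative problem size

    def node(node_id, start, stop):
        # A worker node: compute a partial result, then send it
        # to the coordinator as a small JSON message.
        partial = sum(i * i for i in range(start, stop))
        payload = json.dumps({"node": node_id, "partial": partial}).encode()
        for _ in range(50):           # retry until the coordinator listens
            try:
                with socket.create_connection((HOST, PORT), timeout=5) as conn:
                    conn.sendall(payload)
                return
            except ConnectionRefusedError:
                time.sleep(0.1)

    def coordinator():
        # Collect one message per node and combine the partial results.
        total = 0
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.bind((HOST, PORT))
            srv.listen()
            for _ in range(N_NODES):
                conn, _ = srv.accept()
                with conn:
                    # Messages are tiny, so one recv() is assumed sufficient.
                    total += json.loads(conn.recv(4096).decode())["partial"]
        print("total:", total)

    if __name__ == "__main__":
        coord = Process(target=coordinator)
        coord.start()
        step = N // N_NODES
        workers = [Process(target=node, args=(k, k * step, (k + 1) * step))
                   for k in range(N_NODES)]
        for p in workers:
            p.start()
        for p in workers + [coord]:
            p.join()

Because every exchange is an explicit message, a failed node can be restarted or replaced without corrupting any shared state, which is the fault-tolerance advantage noted in the table.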

=================================================================================