In parallel computing, programs can be classified by how much interaction must take place between threads. A thread is the smallest unit of execution that an operating system can schedule. In most modern operating systems, a thread exists within a process; that is, a single process may contain multiple threads (multi-threading).
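A minimal sketch of multi-threading, using Python's standard threading module: one process spawns several threads that run concurrently and each contributes a result.

```python
import threading

results = []

def work(worker_id):
    # Each thread executes this function independently within the same process.
    results.append(worker_id)

# One process, four threads.
threads = [threading.Thread(target=work, args=(i,)) for i in range(4)]
for t in threads:
    t.start()   # begin concurrent execution
for t in threads:
    t.join()    # wait for every thread to finish

print(sorted(results))
```

Note that in CPython the threads share one interpreter, so this illustrates concurrency within a process rather than a speed-up for CPU-bound work.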
While multi-tasking allows separate processes to run concurrently, multi-threading allows the threads within a single process to run concurrently. A process, in computing, is an instance of a computer program that is being executed.
Parallel programming/computing (PC) is a method of performing simultaneously the normally sequential steps of a computer program, using two or more processors. Put another way, parallel computing is a form of computation in which many calculations are carried out simultaneously, on the principle that a large problem can often be divided into smaller ones, which are then solved in parallel (concurrently).
A concept closely related to PC is parallel processing, a model of computer operation in which a process is split into parts that execute simultaneously on different processors attached to the same computer. When we speak of parallel computing, we usually mean the computational elements, the CPU or GPU. Parallel processing is broader than PC: it entails parallelizing every part of the system where there is a bottleneck, such as memory or visualization (the number of pixels to process). In short, the machine can spread the computations over several CPUs or GPUs to speed up the calculations.
There is a narrow but important distinction between parallel programming and concurrent programming. A system is said to be concurrent if it can support two or more actions in progress at the same time. The key difference between parallel and concurrent programming is the phrase "in progress": in a concurrent system, multiple actions can be in progress at the same time without necessarily executing at the same instant. See reference [1], Laws of Concurrent Programming.
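The "in progress" distinction can be sketched with Python's standard asyncio module: several tasks are all in progress at once on a single thread, taking turns at await points rather than executing simultaneously.

```python
import asyncio

async def task(name, order):
    # Yield control to the event loop; other tasks may run in between.
    await asyncio.sleep(0)
    order.append(name)

async def main():
    order = []
    # All three tasks are "in progress" concurrently on one thread.
    await asyncio.gather(task("A", order),
                         task("B", order),
                         task("C", order))
    return order

order = asyncio.run(main())
print(order)
```

This is concurrency without parallelism: only one task executes at any instant, yet all three overlap in time.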
Another approach to higher-performance computing is embarrassingly parallel computing, defined as computation that is loosely coupled: there is little or no dependence or interaction between threads, and hence little or no need for communication between the parallel tasks being computed or their results. Embarrassingly parallel algorithms suit problems that are easy to break into separate, completely independent tasks.
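A minimal sketch of an embarrassingly parallel task, using Python's standard multiprocessing module: squaring each element of a list requires no communication between workers, so the tasks are completely independent.

```python
from multiprocessing import Pool

def square(x):
    # Each task depends only on its own input; no worker talks to another.
    return x * x

if __name__ == "__main__":
    with Pool(processes=4) as pool:
        # map distributes the independent tasks and collects the results in order.
        squares = pool.map(square, range(8))
    print(squares)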
Furthermore, distributed computing is a model in which the components of a software system are shared among multiple computers to improve efficiency and performance. In the broadest sense, a distributed computer system consists of multiple software components that reside on multiple computers (nodes) but run as a single system. The computers in a distributed system can be physically close together and connected by a LAN, or geographically distant and connected by a wide area network. A distributed system can consist of any number of possible configurations, such as mainframes, personal computers, workstations, minicomputers, and so on. The goal of distributed computing is to make such a network work as a single computer. (Reference: IBM.)
Other frequently used terms in computing include:
High Performance Computing
High-performance computing most generally refers to the practice of aggregating computing power in a way that delivers much higher performance than a typical desktop computer or workstation, in order to solve large problems in science, engineering, or business.
Computer Cluster
A computer cluster is a single logical unit consisting of multiple computers that are linked through a LAN. The networked computers essentially act as a single, much more powerful machine. A computer cluster provides much faster processing speed, larger storage capacity, better data integrity, superior reliability and wider availability of resources.
Supercomputing
The term supercomputing refers to the processing of massively complex or data-laden problems using the concentrated compute resources of multiple computer systems working in parallel (a "supercomputer"). Supercomputing involves a system working at the maximum potential performance of any computer, typically measured in petaflops. Sample use cases include genomics, astronomical calculations, and so forth. Supercomputing enables problem solving and data analysis that would be simply impossible, too time-consuming, or too costly with standard computers.
Core
A core (processor) is the part of a CPU that receives instructions and performs calculations, or actions, based on those instructions. Processors can have a single core or multiple cores: a processor with two cores is called dual-core, one with four cores quad-core, and so on. The more cores a processor has, the more sets of instructions it can receive and process at the same time, which makes the computer faster. A multi-core processor allows multi-tasking, which gives the effect of having a very fast processor installed on the computer.
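A minimal sketch, using the standard os module, of asking the operating system how many logical cores the machine exposes; parallel libraries often default their worker count to this number.

```python
import os

# cpu_count() returns the number of logical cores, or None if it
# cannot be determined on this platform.
cores = os.cpu_count()
print(f"This machine exposes {cores} logical cores")
```

Note that "logical" cores may exceed physical cores on processors with simultaneous multithreading (e.g. hyper-threading).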
Node
In data communication, a node is any active, physical, electronic device attached to a network; the term refers back to the concept of a graph. These devices are capable of sending, receiving, or forwarding information, or some combination of the three. For example, if a network connects a file server, five computers, two printers, and one cell phone, there are nine nodes on the network. Each device on the network has a network address, such as a MAC address, which uniquely identifies it and helps keep track of where data is transferred to and from on the network. On the internet, a node is anything that has an IP address.
In parallel computing, multiple computers -- or even multiple processor cores within the same computer -- are called nodes. Each node in the parallel arrangement typically works on a portion of the overall computing problem.
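A minimal sketch, using Python's standard socket module, of the idea that every node is reachable by an address: here we look up this machine's hostname and resolve the well-known name "localhost" to an IP address.

```python
import socket

# The node's name on the network.
hostname = socket.gethostname()

# Resolve a name to an IP address; "localhost" conventionally
# maps to the loopback address 127.0.0.1.
ip_address = socket.gethostbyname("localhost")

print(hostname, ip_address)
```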
Basic Computing Terms and Their Definitions