Basic Computing Terms and Their Definitions

Topics include the usage of supercomputers and clusters, CPU and GPU optimization/acceleration, parallel computing (including massively parallel computing), distributed computing, the OpenMP (Open Multi-Processing) API, the Message Passing Interface (MPI), the Compute Unified Device Architecture (CUDA), the OpenCL framework, a variety of highly concurrent multithreaded applications and single-process multithreaded systems, and many other ways of optimizing code and programming in parallel.

In parallel programming/computing, programs and algorithms can be classified depending on how much interaction has to take place between threads. A thread is the smallest unit of execution that an operating system can schedule. In most modern operating systems, a thread exists within a process -- that is, a single process may contain multiple threads (multi-threading).
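
As a rough illustration of threads living inside a single process, here is a minimal C sketch using POSIX threads (assuming a POSIX system; compile with something like gcc -pthread hello_threads.c, where the file name is just an example). One process is started, and it spawns four threads that all share the same address space:

#include <pthread.h>
#include <stdio.h>

/* Each thread executes this function; all four threads live inside
   the same process and share its address space. */
void *work(void *arg) {
    int id = *(int *)arg;
    printf("Hello from thread %d\n", id);
    return NULL;
}

int main(void) {
    pthread_t threads[4];
    int ids[4];

    /* A single process spawning multiple threads (multi-threading). */
    for (int i = 0; i < 4; i++) {
        ids[i] = i;
        pthread_create(&threads[i], NULL, work, &ids[i]);
    }

    /* Wait for every thread to finish before the process exits. */
    for (int i = 0; i < 4; i++)
        pthread_join(threads[i], NULL);

    return 0;
}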

While multi-tasking can be viewed as a mode of execution that allows several processes to run concurrently, multi-threading allows several threads within a single process to run concurrently.

In computing, a process is an instance of a computer program that is being executed.

Parallel programming/computing (PC) is a method of performing the normally sequential steps of a computer program simultaneously, using two or more processors. Another definition of parallel programming is a form of computation in which many calculations are carried out simultaneously, operating on the principle that a large problem can often be divided into smaller ones, which are then solved in parallel (concurrently).
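
To make this concrete, here is a small OpenMP sketch in C (a hypothetical example, assuming a compiler with OpenMP support, e.g. gcc -fopenmp sum.c). A large summation is divided into smaller chunks, each handled by a different thread, and the partial results are combined at the end:

#include <omp.h>
#include <stdio.h>

#define N 1000000

int main(void) {
    static double a[N];
    double sum = 0.0;

    for (int i = 0; i < N; i++)
        a[i] = 0.5;

    /* The iterations of this loop are split among the available threads;
       the reduction clause combines each thread's partial sum. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++)
        sum += a[i];

    printf("sum = %f using up to %d threads\n", sum, omp_get_max_threads());
    return 0;
}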

A closely related concept to PC is Parallel Processing, a model of computer operations in which a process is split into parts that execute simultaneously on different processors attached to the same computer. When we talk of parallel computing, we mostly speak of the computational element, i.e. the CPU or GPU. Parallel processing is broader than PC and entails parallelizing everything in the system where there is a bottleneck -- memory, visualization (the number of pixels that need to be processed), and so on. This simply means the machine can spread the computations over several CPUs/GPUs to speed up the calculations.

There is a very narrow distinction between Parallel Programming and Concurrent Programming. A system is said to be concurrent if it can support two or more actions in progress at the same time, whereas a parallel system supports two or more actions executing simultaneously. The key concept and difference between parallel and concurrent programming is the phrase in progress: in concurrent systems, multiple actions can be in progress (although not necessarily executing) at the same time. See reference [1], Laws of Concurrent Programming.

Another approach to higher-performance computing is Embarrassingly Parallel Computing, defined as computation that is loosely coupled; that is, there is little or no dependence or interaction between threads -- in other words, little or no need for communication between the parallel tasks being computed or between their results. Embarrassingly parallel algorithms are suited to problems that are easy to break into separate, completely independent tasks.
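
A classic embarrassingly parallel example is Monte Carlo estimation of pi: every trial is completely independent of every other trial, so the threads never need to communicate while they work; only the final tallies are combined. Here is a sketch in C, assuming OpenMP and the POSIX rand_r function are available:

#include <omp.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    const long trials = 10000000;
    long hits = 0;

    /* Each thread runs its own trials with its own random-number state;
       no thread ever needs data from another thread, which is what makes
       the problem embarrassingly parallel. */
    #pragma omp parallel reduction(+:hits)
    {
        unsigned int seed = 1234u + (unsigned int)omp_get_thread_num();

        #pragma omp for
        for (long i = 0; i < trials; i++) {
            double x = (double)rand_r(&seed) / RAND_MAX;
            double y = (double)rand_r(&seed) / RAND_MAX;
            if (x * x + y * y <= 1.0)
                hits++;
        }
    }

    printf("pi is approximately %f\n", 4.0 * (double)hits / (double)trials);
    return 0;
}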

Furthermore, Distributed Computing is a model in which components of a software system are shared among multiple computers to improve efficiency and performance. In the broadest sense, a distributed computer system consists of multiple software components that reside on multiple computers/nodes but run as a single system. The computers in a distributed system can be physically close together and connected by a LAN, or they can be geographically distant and connected by a wide area network. A distributed system can consist of any number of possible configurations, such as mainframes, personal computers, workstations, minicomputers, and so on. The goal of distributed computing is to make such a network work as a single computer. (Reference: IBM.)
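
In practice, distributed computations are often written with MPI, mentioned above. Here is a minimal sketch (assuming an MPI implementation such as Open MPI or MPICH is installed; compile with mpicc and launch with something like mpirun -np 4 ./a.out). The same program is started as several cooperating processes, which may sit on one machine or be spread across the nodes of a network, yet behave as one system:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, size, len;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's identity */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */
    MPI_Get_processor_name(name, &len);     /* which node we are running on */

    printf("Process %d of %d running on %s\n", rank, size, name);

    MPI_Finalize();
    return 0;
}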

Other frequently used terms in computing include:

High Performance Computing

High Performance Computing most generally refers to the practice of aggregating computing power in a way that delivers much higher performance than one could get out of a typical desktop computer or workstation in order to solve large problems in science, engineering, or business.

Computer Cluster

A computer cluster is a single logical unit consisting of multiple computers that are linked through a LAN. The networked computers essentially act as a single, much more powerful machine. A computer cluster provides much faster processing speed, larger storage capacity, better data integrity, superior reliability and wider availability of resources.

Supercomputing

The term supercomputing refers to the processing of massively complex or data-laden problems using the concentrated compute resources of multiple computer systems working in parallel (i.e. a "supercomputer"). Supercomputing involves a system working at or near the highest operational rate for computers, typically measured in petaflops (quadrillions of floating-point operations per second). Sample use cases include genomics, astronomical calculations, and so forth. Supercomputing enables problem solving and data analysis that would be simply impossible, too time-consuming, or too costly with standard computers.


Core

A core (processor) is the part of a CPU that receives instructions and performs calculations, or actions, based on those instructions. Processors can have a single core or multiple cores. A processor with two cores is called a dual-core processor, one with four cores is quad-core, and so on. The more cores a processor has, the more sets of instructions it can receive and process at the same time, which makes the computer faster. A multi-core processor allows multi-tasking, which gives the effect of having a very fast processor installed on the computer.
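
For instance, the number of cores the operating system currently has online can be queried as in the following sketch (assuming a Linux/glibc or similar Unix-like system where _SC_NPROCESSORS_ONLN is supported; the roughly equivalent OpenMP call would be omp_get_num_procs()):

#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* Number of processor cores currently online and available to the OS. */
    long cores = sysconf(_SC_NPROCESSORS_ONLN);
    printf("This machine has %ld cores online\n", cores);
    return 0;
}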

Node

In data communication, a node is any active physical electronic device or system attached to a network; the term comes from the concept of a node in a graph. These devices are capable of sending, receiving, or forwarding information -- sometimes a combination of the three. For example, if a network connects a file server, five computers, two printers, and one cell phone, there are nine nodes on this network. Each device on the network has a network address, such as a MAC address, which uniquely identifies it. The address helps to keep track of where data is being transferred to and from on the network. On the Internet, a node is anything that has an IP address.

In parallel computing, multiple computers -- or even multiple processor cores within the same computer -- are called nodes. Each node in the parallel arrangement typically works on a portion of the overall computing problem.