Lab 1 - Introduction to Parallel Programming with Pthreads

Parallel Programming

Traditionally, the programs you have implemented so far have been written for serial computation. In this approach, a problem is broken down into a discrete series of instructions that are executed sequentially on a single processor, one after the other. At any given moment, only one instruction is being executed.

Parallel programming, on the other hand, involves the simultaneous use of multiple computing resources to solve a problem. The problem must first be divided into discrete components that can be solved concurrently, and each component must in turn be broken down into a series of instructions. Parallelism is achieved by executing instructions from different components simultaneously on different processors. This requires a mechanism for coordinating the execution of the different components of the problem.

For a problem to be parallelized efficiently, it must be logically divisible into separate components that can execute simultaneously, and the parallel execution time of these components must be shorter with multiple computing resources than with a single one. Running a parallel program requires either a multi-processor/multi-core machine, or several such machines connected through a network (which extends the idea of parallel programming to distributed programming).

When implementing a parallel program, you need to take into account various design considerations, such as:

  • How to partition the problem?
  • How to balance the workload?
  • How to handle communication between parallel-running components?
  • What data dependencies exist?
  • How to synchronize the parallel components of the program?
  • How much effort is required to parallelize a problem?

Throughout this semester, we will try to address as many of these issues as possible.