MPI laboratory project demonstrating collective communication primitives to perform distributed numerical computations on a vector. Implements broadcast, scatter, gather, reduce, and scan operations while managing vector segments across multiple processes (Introduction to Parallel Computing, UNIWA).





UNIVERSITY OF WEST ATTICA
SCHOOL OF ENGINEERING
DEPARTMENT OF COMPUTER ENGINEERING AND INFORMATICS



Introduction to Parallel Computing

Collective Communication

Vasileios Evangelos Athanasiou
Student ID: 19390005

GitHub · LinkedIn


Supervision

Supervisor: Vasileios Mamalis, Professor

UNIWA Profile

Supervisor: Grammati Pantziou, Professor

UNIWA Profile · LinkedIn

Co-supervisor: Michalis Iordanakis, Academic Scholar

UNIWA Profile · Scholar


Athens, January 2023




Collective Communication

The primary objective of this exercise is to distribute and process a vector X of size N across p processes using MPI collective communication operations.


Table of Contents

| Section | Folder | Description |
|---------|--------|-------------|
| 1 | `assign/` | Assignment material for the Collective Communication laboratory |
| 1.1 | `assign/PAR-LAB-EXER-II-2022-23.pdf` | Laboratory exercise description (English) |
| 1.2 | `assign/ΠΑΡ-ΕΡΓ-ΑΣΚ-ΙΙ-2022-23.pdf` | Laboratory exercise description (Greek) |
| 2 | `docs/` | Documentation and theoretical background on collective communication |
| 2.1 | `docs/Collective-Communication.pdf` | Theory and mechanisms of collective communication (English) |
| 2.2 | `docs/Συλλογική-Επικοινωνία.pdf` | Theory and mechanisms of collective communication (Greek) |
| 3 | `src/` | Source code implementing collective communication operations |
| 3.1 | `src/collective_communication.c` | C implementation of MPI collective communication primitives |
| 4 | `README.md` | Project documentation |
| 5 | `INSTALL.md` | Usage instructions |

1. Architecture

The system follows a manager–worker model:

  • Process P₀ (Manager):

    • Initializes and owns the full vector
    • Distributes vector segments to all processes (including itself)
    • Coordinates global calculations and gathers results
  • Worker Processes (P₁ … Pₚ₋₁):

    • Perform computations on their assigned sub-vectors
    • Participate in collective communication operations

All calculations are executed locally first and then combined using MPI collective routines.


2. Features & Calculations

The program performs the following operations on the distributed vector X:

Question A - Comparison with Average

  • Computes the mean value of the vector
  • Counts:
    • Elements greater than the average
    • Elements less than the average

Question B - Dispersion (Variance)

The dispersion (variance) is calculated using:

$$ \text{var} = \frac{\sum_{i=0}^{n-1} (X_i - m)^2}{n} $$

where $m$ is the mean value of the vector.


Question C - Percentage Relationship Vector

Computes a normalized percentage vector $D_i$:

$$ D_i = \frac{X_i - X_{min}}{X_{max} - X_{min}} \times 100 $$

This expresses each element’s relative position between the minimum and maximum values.


Question D - Maximum Value and Index

  • Identifies the maximum value in the vector
  • Determines its global index

Question E - Prefix Sum (Scan)

  • Computes the prefix sum vector of X
  • Each element i holds the sum of the elements of X up to and including position i (an inclusive scan)

3. Conclusion

This project demonstrates effective use of MPI collective communication for distributed numerical processing. It highlights practical applications of MPI_Bcast, MPI_Scatter, MPI_Gather, MPI_Reduce, and MPI_Scan, offering a strong foundation for understanding data-parallel computation and process coordination in high-performance computing environments.
