This project presents a performance analysis of matrix multiplication implemented in C++ and Java, comparing both sequential execution and OpenMP-parallelized versions. The study explores execution time, scalability, and cache performance when handling large-scale computations. It was developed as part of the Parallel and Distributed Computing course at FEUP.
A detailed report with the complete performance study, methodology, and results is available in `doc/CPD_Proj1_Report.pdf`.
Before running the programs, ensure the following tools are installed:
- A C++ compiler with OpenMP support (e.g., `g++` or `clang++`)
- Java Development Kit (JDK 21 recommended)
- Bash (to execute the provided helper scripts)
⚠️ Note: The programs perform very large matrix multiplications, which may take a significant amount of time to finish. For the C++ implementations, always use the provided scripts to guarantee the correct compilation flags are applied.
Compile and run using the standard Java toolchain:
```bash
cd src
javac MatrixProduct.java
java MatrixProduct
```
Run the provided script to compile and execute:
```bash
cd src
./matrix.sh
```
Run the parallel version using the dedicated script:
```bash
cd src
./parallel.sh
```
- Afonso Neves (up202108884@up.pt)
- Francisco Mendonça (up202006728@up.pt)
This project is licensed under the terms of the MIT License.