I cannot answer this exactly, but I can write down a reply I received when I asked about MPI parallelization capabilities:
Before going to MPI, you have to be sure that you really need it. First: OpenMP allows you to use the 8-16 cores of a good current CPU. Second: you can use the GPU version of our code, which will always be more efficient and cheaper than machines with MPI; you can buy a good GPU card for 500-600 euros.
We have only worked with MPI in the past in order to communicate between different GPU cards hosted by different CPUs, which requires MPI.
Based on this:
1. MPI is not implemented. MPI is often used for implementing parallelization on computational clusters.
2. OpenMP is only intended for communication between different cores on a single CPU.
3. The GPU performs considerably better than the CPU (for me, a factor of 6 faster).
I hope this helps somewhat; otherwise you will have to wait for someone more experienced to answer.
@Alex My university has a cluster that I can use. I heard from a colleague that a CPU core is more powerful than a GPU core. Is it possible that the MPI implementation can be shared, so I can do comparisons between a CPU cluster and a GPU?
@kevinxmu Could you get this? I'm also looking for a version that would let me run DualSPHysics on the cluster at my university. I found the same presentation you saw on Google and was wondering whether version 4.4 runs it.
Comments
http://www.archer.ac.uk/training/virtual/2018-01-24/Presentation_ecse_pdf.pdf
In the presentation, it mentions the reasons why MPI is necessary.
In your case, what are you going to use as the execution device: a PC or a cluster?