Hello dtan, I am new to DualSPHysics, but from what I have seen the CPU parallelization is done with OpenMP. So unless you have a shared-memory machine I don't think it is possible, but maybe I can be corrected.
As Sergey has rightly said, the current version of DualSPHysics is OpenMP- and CUDA-based, and this is not likely to change with future releases, as DualSPHysics is aimed at GPU and non-distributed parallelism. If you want an MPI version of the SPHysics code to run on CPUs, then you will need to stick with parallelSPHysics.
In future versions of DualSPHysics, MPI will likely be used to perform distributed computing over multiple GPUs in different clusters; however, the release date for this is not yet fixed and will likely be after the forthcoming 3.0 release.
Regards,
Alex
I want to work on getting DualSPHysics to run with MPI and CUDA for a local implementation. I have access to an HPC system with 4 Tesla GPUs, so I have the equipment to play with. What are some of the challenges of doing this? What are some key files that I will have to focus on? I'll be digging through the Doxygen docs; I'm just looking for a little direction as I dig.
DualSPHysics v3.0 will be released in the coming days. We have developed a multi-GPU version using MPI communications among the CPUs that host the GPU cards. However, the source files of this version will not be released yet.
Have you considered a technology like vSMP Foundation, which allows you to run OpenMP codes unmodified across multiple physical nodes? (Disclosure: I work for ScaleMP, the company that makes it.)
Comments
It needs special algorithm programming.
It will be in v3.0; see the News.