Linux configuration
Hello, I have been using DualSPHysics on Windows, but now I am trying to configure it on Linux (Debian).
I can execute GenCase without problems. When I try to run DualSPHysics5.0 I get this error:
"error while loading shared libraries: libdsphchrono.so: cannot open shared object file: No such file or directory"
The file is in the folder src/lib/linux_gcc, but Linux does not find it. Why? How can I solve it?
Thank you all for your help
Comments
Did you make sure to set "export dirbin=..." to the right path in your bash script?
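For reference, the example run scripts define dirbin near the top and build the executable paths from it. A minimal sketch (the install path here is a placeholder; the binary names match the ones shipped in the Linux package):

```shell
#!/bin/bash
# Path to the DualSPHysics binaries (adjust to your installation).
export dirbin=/path/to/DualSPHysics/bin/linux

# The run scripts then reference the tools through this variable:
gencase="${dirbin}/GenCase_linux64"
dualsphysicsgpu="${dirbin}/DualSPHysics5.0_linux64"
```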
As a workaround you can also use the binaries contained in DesignSPHysics on GitHub. They are precompiled and, at least on Ubuntu 20.04 / 22.04, they work without any compilation.
So far I have not been able to compile the newest versions of DualSPHysics from GitHub on Linux. I am not sure which environment they compile it on, but the installation steps in the README.txt do not work for me, so I am sticking to the workaround.
When you launch the DualSPHysics executable, make sure the paths of the dynamic libraries (libdsphchrono.so & libChronoEngine.so) are in the environment variable LD_LIBRARY_PATH. To do that, execute export LD_LIBRARY_PATH=DualSPHysics/bin/linux. Alternatively, you can execute your case through a shell script following the example provided in the folder DualSPHysics/examples/main/01_DamBreak.
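A slightly more robust variant uses an absolute path and prepends to the variable instead of overwriting it, so other library paths are preserved. A sketch (the directory is a placeholder; point it at the folder that actually contains the two .so files):

```shell
# Make the DualSPHysics shared libraries visible to the dynamic linker.
# libdir is an assumed placeholder: use the absolute folder containing
# libdsphchrono.so and libChronoEngine.so.
libdir=/path/to/DualSPHysics/bin/linux
export LD_LIBRARY_PATH="${libdir}:${LD_LIBRARY_PATH}"

# To confirm the linker now resolves the libraries, you can run:
#   ldd "${libdir}/DualSPHysics5.0_linux64" | grep dsphchrono
```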
What is the error you get when you try to compile it?
Firstly, I had to disable the compute_30 flag for nvcc; otherwise it would not compile at all.
Then, with the latest GitHub commit, I get:
I have:
Kind regards,
Faro
Hello @zweihuehner, the problem is that CUDA 11.5 is not yet supported for compiling the code. You have to use CUDA 9.2.
Regards
Hello, I just compiled v5.0 with CUDA 11.7; it's not difficult. Just modify the Makefile and match your GPU with the CUDA arch. I recommend referring to this website: https://arnon.dk/matching-sm-architectures-arch-and-gencode-for-various-nvidia-cards/
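As a sketch of what that Makefile change looks like: the obsolete compute_30 entries (dropped in CUDA 11.x) are removed, and a -gencode pair matching your card is kept. The value below assumes a compute-capability 7.5 GPU (e.g. an RTX 20xx series card) and is illustrative only:

```shell
# Illustrative only: flags for a compute-capability 7.5 GPU.
# In the Makefile, remove any "-gencode=arch=compute_30,..." entries
# (unsupported by CUDA 11.x) and keep a pair matching your hardware:
GENCODE="-gencode=arch=compute_75,code=sm_75"

# The Makefile passes these flags to nvcc when compiling the .cu sources,
# roughly:  nvcc $GENCODE -c <kernel sources> ...
echo "nvcc flags: ${GENCODE}"
```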
Hello, I am trying to run the latest release on an HPC cluster. GenCase runs without a problem, but DualSPHysics throws an error saying caseName_out is unrecognized or invalid.
Following this post, I modified the Makefile to make sure that the CUDA version and compute capability match those on the cluster, then rebuilt the executables; however, I keep getting the same error.
I saw a couple of similar posts on this forum, but most of them seem to remain unanswered. I was hoping someone knows how to fix it.
Thanks!
In my .sh file I modified the dualsphysicsgpu line as follows:
${dualsphysicsgpu} -gpu :0 -dirout ${dirout}/${name} -name ${dirout} -dirdataout data -svres
After that, the CUDA device was successfully identified, but then it throws another error which says
I am not sure why it is doing this when I already can see the .xml file in the case_out folder.
I had to change the line:
${dualsphysicsgpu} -gpu :0 -dirout ${dirout}/${name} -name ${dirout} -dirdataout data -svres
to
${dualsphysicsgpu} -gpu :0 -name ${dirout}/${name} -dirout ${dirout} -dirdataout data -svres
It seems to be working right now.
Forgive me for not replying in time. Congratulations! 🙂