Linux configuration

Hello, I have been using DualSPHysics on Windows, but now I am trying to configure it on Linux (Debian).

I can execute GenCase without a problem. When I try to run DualSPHysics5.0, I get this error:

"error while loading shared libraries: libdsphchrono.so: cannot open shared object file: No such file or directory"

The file is in the folder src/lib/linux_gcc, but Linux does not find it. Why? How can I solve it?


Thank you all for your help

Comments

  • Did you make sure to set "export dirbin=..." to the right path in your bash file?
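    For reference, a minimal sketch of what that line can look like (the install path here is an assumption; point it at wherever your binaries actually live):

    # folder that contains GenCase, DualSPHysics5.0_linux64, etc.
    export dirbin=$HOME/DualSPHysics/bin/linux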

  • As a workaround you can also use the binaries contained in DesignSPHysics on GitHub. They are precompiled, and at least on Ubuntu 20.04 / 22.04 they work without any compilation.

    So far I have not been able to compile the newest versions of DualSPHysics from GitHub on Linux. I am not sure which environment they compile it in, but the installation steps in the README.txt do not work for me, so I am sticking to the workaround.

  • When you launch the DualSPHysics executable, make sure the path of the dynamic libraries (libdsphchrono.so and libChronoEngine.so) is in the environment variable LD_LIBRARY_PATH. To do that, execute export LD_LIBRARY_PATH=DualSPHysics/bin/linux (see the sketch below). Otherwise you can execute your case through a shell script, following the example provided in the folder DualSPHysics/examples/main/01_DamBreak.
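    A minimal sketch, assuming DualSPHysics was unpacked in $HOME/DualSPHysics (adjust the path to your install; the binary name is taken from the build output below, so adjust it too if yours differs):

    # make the Chrono libraries visible to the dynamic loader
    export LD_LIBRARY_PATH=$HOME/DualSPHysics/bin/linux:$LD_LIBRARY_PATH
    # check that the loader now resolves them instead of reporting "not found"
    ldd $HOME/DualSPHysics/bin/linux/DualSPHysics5.0_linux64 | grep -i -E 'dsphchrono|chronoengine'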

  • First I had to disable the compute_30 flag for nvcc; otherwise it would not let me compile at all.
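    A sketch of one way to do that (the exact flag lines differ between the Makefile and the CMake setup, so grep for them first):

    # find where the deprecated architecture is requested
    grep -rn "compute_30" src/source/
    # then delete or comment out the matching gencode entry, e.g.
    #   -gencode=arch=compute_30,code=sm_30
    # before rebuilding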

    Then, with the latest GitHub commit, I get:


    [ 44%] Building NVCC (Device) object CMakeFiles/DualSPHysics5.0_linux64.dir/DualSPHysics5.0_linux64_generated_JCellDivGpuSingle_ker.cu.o
    nvcc warning : The 'compute_35', 'compute_37', 'compute_50', 'sm_35', 'sm_37' and 'sm_50' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning).
    nvcc warning : The 'compute_35', 'compute_37', 'compute_50', 'sm_35', 'sm_37' and 'sm_50' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning).
    /usr/include/c++/11/bits/std_function.h:435:145: error: parameter packs not expanded with ‘...’:
     435 |        function(_Functor&& __f)
         |                                                                                                                                                ^ 
    /usr/include/c++/11/bits/std_function.h:435:145: note:        ‘_ArgTypes’
    /usr/include/c++/11/bits/std_function.h:530:146: error: parameter packs not expanded with ‘...’:
     530 |        operator=(_Functor&& __f)
         |                                                                                                                                                 ^ 
    /usr/include/c++/11/bits/std_function.h:530:146: note:        ‘_ArgTypes’
    CMake Error at DualSPHysics5.0_linux64_generated_JCellDivGpuSingle_ker.cu.o.cmake:280 (message):
     Error generating file
     /home/fafs/Dev/DualSPHysics/src/source/build/CMakeFiles/DualSPHysics5.0_linux64.dir//./DualSPHysics5.0_linux64_generated_JCellDivGpuSingle_ker.cu.o
    


    I have:

    • Ubuntu 22.04
    • CUDA 11.5
    • gcc 11.2


    Kind regards,

    Faro

  • Hello @zweihuehner, the problem is that CUDA 11.5 cannot be used to compile the code yet. You have to use CUDA 9.2.

    Regards


  • Hello, I just compiled v5.0 with CUDA 11.7; it's not difficult. Just modify the Makefile and match your GPU with the CUDA arch flags (see the sketch below). I recommend this reference: https://arnon.dk/matching-sm-architectures-arch-and-gencode-for-various-nvidia-cards/
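    A hedged sketch of that kind of edit, assuming a compute-capability 8.6 card (e.g. an RTX 30-series GPU); query your own card first, and note that the exact gencode lines vary between releases:

    # recent drivers can print the compute capability directly
    nvidia-smi --query-gpu=compute_cap --format=csv,noheader
    # then, in src/source/Makefile, make the gencode flags match it, e.g.
    #   -gencode=arch=compute_86,code=sm_86
    # and drop any architectures your CUDA toolkit has deprecated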

  • Hello, I am trying to run the latest release on HPC clusters. GenCase runs without a problem, but DualSPHysics throws an error saying caseName_out is unrecognised or invalid.

    *** Exception (JCfgRun::ErrorParm)
    Text: Parameter "caseName_out" unrecognised or invalid. (Level cfg:0, Parameter:3)
    

    Following this post, I modified the Makefile to make sure that the CUDA version and compute capability match the ones on the cluster, and then rebuilt the executables; however, I keep getting the same error.

    I saw a couple of similar posts on this forum, but most of them seem to remain unanswered. I was hoping maybe someone knows how to fix it.


    Thanks!

  • In my .sh file I modified the dualsphysicsgpu line as follows:

    ${dualsphysicsgpu} -gpu :0 -dirout ${dirout}/${name} -name ${dirout} -dirdataout data -svres


    After that, the CUDA device was successfully identified, but then it throws another error:

    *** Exception (JSphGpuSingle::LoadCaseConfig) at JSph.cpp:920
    Text: Case configuration was not found.
    File: case_out.xml
    

    I am not sure why it is doing this when I can already see the .xml file in the case_out folder.

  • I had to change the line

    ${dualsphysicsgpu} -gpu :0 -dirout ${dirout}/${name} -name ${dirout} -dirdataout data -svres

    to

    ${dualsphysicsgpu} -gpu :0 -name ${dirout}/${name} -dirout ${dirout} -dirdataout data -svres

    It seems to be working right now.
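    For anyone hitting the same errors, here is a minimal sketch of a complete run script in the style of the bundled examples (the paths and the case name "CaseDambreak" are assumptions; adapt them to your setup, and check the example scripts in DualSPHysics/examples/main/01_DamBreak for the exact argument order of your release):

    #!/bin/bash
    # folder with the DualSPHysics binaries
    export dirbin=$HOME/DualSPHysics/bin/linux
    # let the loader find libdsphchrono.so and libChronoEngine.so
    export LD_LIBRARY_PATH=${dirbin}:$LD_LIBRARY_PATH

    name=CaseDambreak
    dirout=${name}_out
    mkdir -p ${dirout}

    # GenCase reads ${name}_Def.xml and writes ${dirout}/${name}.xml + .bi4
    ${dirbin}/GenCase ${name}_Def ${dirout}/${name} -save:all
    # the solver takes the case prefix and the output folder, then options
    ${dirbin}/DualSPHysics5.0_linux64 ${dirout}/${name} ${dirout} -dirdataout data -svres -gpu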

  • Forgive me for not replying in time. Congratulations! 🙂
