CUDA core and tensor core usage for GPU

Hello,

I am trying the main dam-break example case on an NVIDIA A100. I have a multi-GPU DGX box: each node has 256 hyperthreaded CPU cores and 8 GPU cards, and each GPU card has 6912 CUDA cores and 432 tensor cores.

I am new to GPU computing. How does DSPH use the tensor cores? Is there any reference reading on that?

How do I use multiple GPUs, that is, if I want to use more than one GPU card on that node? Can I allocate all 8?


Thanks

Comments

  • Hello

    The current version of DualSPHysics does not support multi-GPU execution, so a single simulation can only run on one GPU. However, you could in theory have 8 different simulations running on the same machine at once by selecting a card with -gpu:<id_number> (0 to 7) in the batch execution file.
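    As a rough sketch, a launch script for that could look like the following (the binary name and case paths here are placeholders, not a real install layout; this version only prints the command for each card rather than running it):

    ```shell
    #!/usr/bin/env bash
    # Sketch: build one DualSPHysics command per GPU card (ids 0 to 7).
    # "DualSPHysics_linux64" and the Case* paths are placeholders; adjust
    # them to your own install and case names.
    bin=DualSPHysics_linux64
    cmds=()
    for id in 0 1 2 3 4 5 6 7; do
      # -gpu:<id> pins each independent simulation to one card.
      cmd="$bin Case${id}_out/Case${id} Case${id}_out -gpu:${id} -svres"
      cmds+=("$cmd")
      echo "$cmd"
    done
    # To actually launch the runs concurrently, execute each command with a
    # trailing '&' and finish the script with 'wait'.
    ```

    Each simulation then uses only the card you assigned it, so all 8 GPUs can be kept busy with independent cases.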

    But congratulations on the machine, really nice specs! Hopefully new releases of DualSPHysics will be able to utilize it fully.

    Kind regards

  • edited March 2022

    Hello. Do you mean to say:

    1st scenario: I can run 8 different DSPH simulations (different physics and setups) simultaneously on a single GPU card by tagging id numbers 0 to 7 respectively?

    2nd scenario: And if I run a single big job on a single card, it might use all 8 ids if it needs them and I don't limit the use of ids?

    Does DSPH use the tensor cores at all? Or would that be a future improvement to the code?
