Request: If anyone has the opportunity to test the new Nvidia graphics cards when they are released

Hello!

I found this initial data sheet (not sure about its validity):


This seems very promising. If anyone is able to showcase some benchmarks running DualSPHysics, I think much of the community would be interested, no matter which 3000-series card it is.

I think it could be a fun community effort to standardize some tests and be able to benchmark performance over time.

Kind regards

Comments

  • Hi,

    I think this is a very interesting topic!

    We could use this to test both:

    • the speed of each GPU (when running the same DSPH version)
    • the speed of DSPH between versions

    When comparing GPUs, we cannot entirely trust the product specifications. Depending on the brand of your card (Nvidia, Zotac, MSI...), the clock speed is different. So we would need both the GPU model (3080, 1080 Ti...) and the effective clock speed (this can be obtained via tools like GPU-Z, https://www.techpowerup.com/gpuz/ ).
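
    As a rough sketch of how the model and clock info could be collected automatically next to each benchmark result (this assumes nvidia-smi is installed with the driver and uses its standard query fields; GPU-Z works just as well for a manual read-out):

    ```python
    # Sketch only: record GPU model, driver and max clock to report with benchmark results.
    # Assumes nvidia-smi is on PATH; field names come from `nvidia-smi --help-query-gpu`.
    import subprocess

    fields = "name,driver_version,memory.total,clocks.max.graphics"
    out = subprocess.run(
        ["nvidia-smi", f"--query-gpu={fields}", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    )
    for line in out.stdout.strip().splitlines():
        name, driver, mem, clock = [part.strip() for part in line.split(",")]
        print(f"{name} | driver {driver} | {mem} | max graphics clock {clock}")
    ```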

    During the benchmark, we should verify that the GPU is not bottlenecked by the rest of the system (e.g. by the CPU).

    A simple first step could be to run one of the examples in DSPH (the dam break?) and exchange performance figures.
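
    To make the exchanged numbers comparable, a tiny wrapper like the sketch below could time whatever command the example bat file already runs (nothing in it is DualSPHysics-specific; the solver invocation in the usage comment is only an assumed example, copy the exact line from your own bat file):

    ```python
    # Sketch only: wall-clock timer around an existing solver command line.
    # Usage (assumed example, adapt to your own bat file):
    #   python time_run.py DualSPHysics5.0_win64.exe CaseDambreak_out/CaseDambreak CaseDambreak_out -gpu
    import subprocess
    import sys
    import time

    cmd = sys.argv[1:]                      # the exact command copied from the bat file
    start = time.perf_counter()
    subprocess.run(cmd, check=True)         # run the solver and wait for it to finish
    elapsed = time.perf_counter() - start
    print(f"Wall-clock time: {elapsed:.1f} s", file=sys.stderr)
    ```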

    Kind regards

  • Yes, I am very interested in doing this. I plan on upgrading my GPU for the simulations and I thought about buying an RTX 3XXX when they are out. But beforehand, it would be nice to hear from the developers whether it is possible to use an RTX 3XXX from the get-go without worrying about compatibility issues with DualSPHysics.

    If it comes to that point, I am willing to share my results with the community!

  • @TPouzol

    I think that is a good way to look at it! Personally, I think we should start from DualSPHysics v5.0 onwards, since knowing about older versions is not important right now. Then in the future, when v6 hopefully comes, we can compare v5 and v6!

    One could argue v4 is important for tracking performance optimization, but I think the developers have already tested that extensively.

    The dam break is a great idea, since it is simple, both in physics and setup! We would then have to fix some specific parameters, i.e. resolution, viscosity etc., to keep them consistent across versions. I propose ratios of dp's of 2, together with symplectic time stepping and artificial viscosity, since these seem to stay consistent. I believe we have to do both DBC and mDBC for the boundary conditions. Do you happen to have a fair test case setup in mind?


    @Hannes

    Maybe @Alex can shed more light on this; I hope you are able to acquire one! Otherwise, I believe one can always try to compile from source, but that might be a hassle.

    I hope anyone with further input will chime in.

    Kind regards

  • @Asalih3d Thanks. Yeah starting at v5 is a good idea.

    For the dam example, what do you mean by a "ratio of dp's of 2"? I think the best solution would be to settle on an XML + bat file that we could share, so there are no difficulties or errors in testing.

    Then, from the first results, we could elaborate depending on pertinence and needs.

    Unfortunately, I was not able to procure an RTX 3080. Maybe an RTX 3090 in the next few weeks...

    Currently, I can test on three GTX 1080 Ti cards (desktop) and two GTX 1060 cards (desktop and laptop). They are from different brands and are paired with different CPUs.

    Kind regards

  • I meant that one should run each simulation at multiple dp values related by a ratio of two, such that dp = 8 -> 4 -> 2 -> 1, etc.
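
    As a very rough sketch of how that matrix could be written down, combining the dp halvings with the two boundary treatments mentioned above (the base dp is just a placeholder; the real values would be fixed in the shared XML):

    ```python
    # Sketch only: enumerate the proposed benchmark matrix of dp halvings and
    # boundary treatments. The base dp is a placeholder, not a recommendation.
    from itertools import product

    base_dp = 0.02                                 # placeholder coarsest spacing [m]
    dps = [base_dp / 2**i for i in range(4)]       # 0.02, 0.01, 0.005, 0.0025
    boundaries = ["DBC", "mDBC"]

    for dp, bc in product(dps, boundaries):
        print(f"run: dp={dp:g} m, boundary={bc}, stepping=Symplectic, viscosity=Artificial")
    ```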


    Yeah, the 3080 got caught by scalpers unfortunately, and probably the same with the 3090. Performance-wise in gaming the increase is between 25-75% for the 3080, but since the number of CUDA cores has increased, I think it might perform about two times as fast for DualSPHysics (if it is able to run) - this is of course only SPECULATION by me :-)

    I think maybe the 3D dam-break case could be a good candidate? The goal should be to choose a simulation which is not too short, since if a simulation takes 1 minute it can be difficult to measure the actual performance. We already have a very good timer set up by default in the RUN.out file, which should make it possible to compare rather well.
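
    If we go that way, a small parser could pull the runtime out of RUN.out on each machine. I am assuming here that the file contains a runtime line ending in "sec."; check your own RUN.out and adjust the pattern to the exact wording:

    ```python
    # Sketch only: extract a runtime figure from RUN.out for cross-GPU comparison.
    # Assumption: the file has a line like "Simulation Runtime: 1234.5 sec." -
    # adjust the regex to whatever your RUN.out actually prints.
    import re
    import sys

    pattern = re.compile(r"Runtime.*?([\d.]+)\s*sec", re.IGNORECASE)

    with open(sys.argv[1], encoding="utf-8", errors="replace") as f:
        for line in f:
            match = pattern.search(line)
            if match:
                print(f"{sys.argv[1]}: {float(match.group(1)):.1f} s")
                break
    ```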

    I would be quite interested in a rough comparison between the three 1080 Ti's: does each vendor perform within the same margin? There might be some different stock settings, but I would expect them to perform within 10% of each other.

    Kind regards

  • I'll try to test this next week! I'll share the XML and results. It would be nice if others could share theirs too.

    Kind regards

  • Awesome! I can try running your XML on my laptop GPU, please tag me when you get it done :-)

    Kind regards
