Q: Is the method applicable to other physical phenomena?

A: Yes, we’ve been working on other applications for quite some time. We plan to release videos at an appropriate event once sufficient quality is reached.

Q: Have the scenes in the demo been used for training?

A: No.

Q: Why is the file so big? Is it because of the training model?

A: The trained model is approximately 50MB, and the same model is used for all fluid and granular materials. The vast majority of the data consists of uncompressed baked character animations.

Q: Why isn’t there any editor in the demo?

A: There is no scene editor implemented. We decided to go with a static/dynamic library plus plugins for existing systems: artists are already used to their workflows, so it will be easier for them to adopt our new technology.

Q: When will the plugins be released?

A: Our target is next summer. As there are lots of external factors, we can’t promise anything.

Q: Can PhysicsForests run on AMD GPUs?

A: The code is designed to be easily portable (see HIPify); however, we have not yet had such hardware to test it on.
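For readers unfamiliar with HIPify: it is essentially a source-to-source translation that renames CUDA API calls to their HIP equivalents, which is why a cleanly written CUDA codebase ports easily. A toy sketch of the idea (the real hipify-perl tool handles far more patterns; the mapping below is a small, well-known subset):

```python
# Toy illustration of HIPify-style translation: rename CUDA API
# identifiers to their HIP equivalents in source text. This is a
# sketch of the concept, not the actual ROCm tool.
CUDA_TO_HIP = {
    "cudaMalloc": "hipMalloc",
    "cudaMemcpy": "hipMemcpy",
    "cudaFree": "hipFree",
    "cudaDeviceSynchronize": "hipDeviceSynchronize",
}

def hipify(source: str) -> str:
    """Replace CUDA API names with their HIP counterparts."""
    for cuda_name, hip_name in CUDA_TO_HIP.items():
        source = source.replace(cuda_name, hip_name)
    return source

cuda_snippet = "cudaMalloc(&buf, n); cudaMemcpy(buf, host, n, cudaMemcpyHostToDevice);"
print(hipify(cuda_snippet))
# -> hipMalloc(&buf, n); hipMemcpy(buf, host, n, hipMemcpyHostToDevice);
```

Because the translation is mostly mechanical renaming like this, "portable to AMD" is mainly a question of testing, not rewriting.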

Q: How big a simulation can be run on a single GPU?

A: With the current setup, approximately 50M particles can be run on an 8GB GPU. With some adjustments, we could run at most 120M particles (at 1-2 fps). Larger simulations would require CPU-GPU transfers, which would slow down the calculations.
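The figures above imply a rough per-particle memory budget. The actual per-particle layout is not published; this back-of-the-envelope calculation only assumes, hypothetically, that nearly all of the 8GB card is available for particle state:

```python
# Back-of-the-envelope budget implied by the numbers above.
# Assumption (not from the source): essentially the whole 8 GB
# is available for per-particle simulation state.
GPU_MEMORY_BYTES = 8 * 1024**3  # 8 GB card

def bytes_per_particle(particle_count, memory_bytes=GPU_MEMORY_BYTES):
    """Implied memory budget per particle if the whole card is used."""
    return memory_bytes / particle_count

print(round(bytes_per_particle(50_000_000)))   # ~172 bytes at 50M particles
print(round(bytes_per_particle(120_000_000)))  # ~72 bytes at 120M particles
```

So the "adjustments" for 120M particles would amount to squeezing the per-particle state from roughly 170 bytes down to roughly 70.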

Q: Can PhysicsForests be run on multiple GPUs?

A: Not at the moment, but the algorithm is parallelizable, so it is possible in principle.

Q: Are there any reasons the demo does not run on older GPU architectures?

A: The only reason is that it would be too slow. For this demo, we tuned the quality/speed tradeoff for a GTX 1080. It is possible to make simulations much faster (at lower quality) or higher quality (but much slower).

Q: So does ML/AI have a future when it comes to physics simulations?

A: We think so. It is very unlikely that any of the standard solvers will be sped up 100x. Based on our experience over the last 3 years, you can expect a long series of incremental improvements that will slowly converge in quality to traditional solvers. Next year we plan to focus on speed optimization; the current vanilla implementation is rather naive, and there is a lot of room for runtime improvement.