Some previous work by some of the same people [0], which I took part in. This seems like a significant step up from that work, including a novel idea that resolves quite a few issues we had. Love to see that.
This is nice, but I believe a simpler design could work better.
Simply make a model which transforms a 3D section of an image to an embedding vector. Make another model which can reverse the process (i.e. encoder-decoder). Do that for every tile of a starting state.
Make 'downscale' and 'upscale' models: the downscale model takes a grid of embedding vectors and returns a single vector representing the whole, and the upscale model reverses that.
Then make an 'advance time' model, which takes an embedding vector and advances time by a given number of seconds/microseconds/days.
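A rough sketch of what those pieces could look like (PyTorch; every tile size, width, and layer choice below is a placeholder assumption, not anything from the paper):

    import torch
    import torch.nn as nn

    EMB = 256  # assumed embedding size

    class TileEncoder(nn.Module):
        """Maps one 16^3 tile of the state to an embedding vector."""
        def __init__(self, channels=1, emb=EMB):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv3d(channels, 32, 4, stride=2, padding=1), nn.GELU(),  # 16 -> 8
                nn.Conv3d(32, 64, 4, stride=2, padding=1), nn.GELU(),        # 8 -> 4
                nn.Flatten(),
                nn.Linear(64 * 4 * 4 * 4, emb),
            )
        def forward(self, x):
            return self.net(x)

    class TileDecoder(nn.Module):
        """Reverses the encoder: embedding -> 16^3 tile."""
        def __init__(self, channels=1, emb=EMB):
            super().__init__()
            self.fc = nn.Linear(emb, 64 * 4 * 4 * 4)
            self.net = nn.Sequential(
                nn.ConvTranspose3d(64, 32, 4, stride=2, padding=1), nn.GELU(),  # 4 -> 8
                nn.ConvTranspose3d(32, channels, 4, stride=2, padding=1),       # 8 -> 16
            )
        def forward(self, z):
            return self.net(self.fc(z).view(-1, 64, 4, 4, 4))

    class Downscale(nn.Module):
        """Collapses a 2x2x2 grid of embeddings into one coarser embedding."""
        def __init__(self, emb=EMB):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(8 * emb, emb), nn.GELU(), nn.Linear(emb, emb))
        def forward(self, z_grid):           # z_grid: (batch, 8, emb)
            return self.net(z_grid.flatten(1))

    class Upscale(nn.Module):
        """Expands one coarse embedding back into a 2x2x2 grid of finer ones."""
        def __init__(self, emb=EMB):
            super().__init__()
            self.emb = emb
            self.net = nn.Sequential(nn.Linear(emb, emb), nn.GELU(), nn.Linear(emb, 8 * emb))
        def forward(self, z):                # z: (batch, emb)
            return self.net(z).view(-1, 8, self.emb)

    class AdvanceTime(nn.Module):
        """Advances an embedding by dt, passed as an extra conditioning input."""
        def __init__(self, emb=EMB):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(emb + 1, emb), nn.GELU(), nn.Linear(emb, emb))
        def forward(self, z, dt):            # dt: (batch, 1)
            return z + self.net(torch.cat([z, dt], dim=-1))  # residual latent update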
Now train all the models end to end to ensure that all combinations of upscaling/downscaling/advancing/encoding/decoding produce similar outputs to traditional physics models.
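Continuing the sketch above, one possible end-to-end consistency step, assuming the data loader hands out 2x2x2 neighbourhoods of tiles from the reference simulator at t and t + dt:

    import torch.nn.functional as F

    def training_step(enc, dec, down, up, adv, tiles_t, tiles_t_dt, dt):
        # tiles_t, tiles_t_dt: (batch, 8, C, 16, 16, 16) -- 2x2x2 neighbourhoods
        # of tiles taken from the reference physics simulation at t and t + dt.
        b = tiles_t.shape[0]
        z_t = enc(tiles_t.flatten(0, 1)).view(b, 8, -1)        # fine embeddings at t
        z_t_dt = enc(tiles_t_dt.flatten(0, 1)).view(b, 8, -1)  # fine embeddings at t + dt

        # 1) encode -> decode reproduces the input tiles
        recon = F.mse_loss(dec(z_t.flatten(0, 1)), tiles_t.flatten(0, 1))
        # 2) advancing each fine embedding matches the encoded future tiles
        dt_rep = dt.repeat_interleave(8, dim=0)
        fine_adv = F.mse_loss(adv(z_t.flatten(0, 1), dt_rep), z_t_dt.flatten(0, 1))
        # 3) downscale -> advance -> upscale matches the fine future embeddings
        coarse_adv = F.mse_loss(up(adv(down(z_t), dt)), z_t_dt)

        return recon + fine_adv + coarse_adv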
Use an ensemble of models or a sampling scheme to find places where outputs do not closely match, and insert more training data from the physical simulation at those points.
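That insertion step could be as simple as ranking candidate states by ensemble disagreement (again just a hypothetical sketch, reusing the components above):

    import torch

    def disagreement(ensemble, tiles, dt):
        # ensemble: list of independently trained (enc, adv, dec) stacks
        # tiles: (batch, C, 16, 16, 16) candidate states
        preds = torch.stack([dec(adv(enc(tiles), dt)) for enc, adv, dec in ensemble])
        return preds.std(dim=0).mean(dim=(1, 2, 3, 4))   # per-sample ensemble spread

    def select_for_resimulation(ensemble, pool, dt, budget=64, threshold=0.1):
        scores = disagreement(ensemble, pool, dt)
        worst = scores.argsort(descending=True)[:budget]
        # states to hand back to the classical DEM/CFD solver for ground truth
        return pool[worst[scores[worst] > threshold]]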
Interesting. I wonder what parts of this approach could be adapted to DEM models of solids. For those unaware: even though DEM is naturally chosen for fluids, depending on how you configure the force laws between particles you can easily model solids as well, where each particle is essentially a chunk of material. There are then some interesting choices to make about (a) what kind of lattice you set the initial particles up in, (b) how you tune the force laws to get the macroscopic properties you want around stiffness, etc., and (c) whether you use multiple “types” of particles forming a composite.
I'm sorry, but DEM is not for fluid simulation. It's used to simulate granular materials by default. Also, the hopper discharge that is shown does not contain any fluid. The fluid is usually modeled using a different tool (e.g. the finite volume method), which is then coupled to the particles.
Okay, fair, I was using "fluid" loosely (and inaccurately) to mean both granular and fluid behavior. But there’s nothing inherently incompatible between fluid dynamics and the discrete element method as far as I am aware, just like there is nothing inherently incompatible with solids. Sure, SPH, LBM, or FVM are the more traditional choices for fluids and computationally more tractable in most cases, but they aren’t necessarily “more right.”
Awesome paper on how powerful particle-based methods can be:
No worries. I would still consider these methods to be very different from each other. SPH, FVM, and so on are methods to discretize continuum equations. If you have a continuum equation that describes your granular material, you can use them and DEM kind of interchangeably. But oftentimes such continuum equations do not exist for granular media, or they break down in certain flow regimes.
DEM, on the other hand, is not based on a continuum representation. Instead, it is based on interaction forces that arise between particles that are close to each other.
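For anyone who hasn't seen it, the simplest such interaction is a linear spring-dashpot normal contact, roughly like this (a toy sketch with placeholder parameters; real DEM codes add tangential friction, rolling resistance, particle rotation, and proper time integration):

    import numpy as np

    def contact_force(x_i, x_j, v_i, v_j, radius=0.5e-3, k_n=1e4, gamma_n=1e-2):
        """Linear spring-dashpot normal force of particle j on particle i.

        Non-zero only while the particles overlap; parameter values are placeholders.
        """
        r_ij = x_i - x_j
        dist = np.linalg.norm(r_ij)
        overlap = 2 * radius - dist
        if overlap <= 0.0:
            return np.zeros(3)                 # no contact, no force
        n = r_ij / dist                        # unit normal pointing from j to i
        v_rel_n = np.dot(v_i - v_j, n)         # relative velocity along the normal
        return (k_n * overlap - gamma_n * v_rel_n) * n  # elastic repulsion + damping

Bonded-particle variants add attractive/bond terms on top of this, which is how you get the solid-like behavior mentioned upthread.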
It might be possible to link the two, but afaik nobody has done it; then again, I'm no longer active in the field.
Take a look at the paper I linked, specifically section 4, which illustrates finding force laws to match the desired dynamics of a real physical material (in this case, Delrin), including elastic and plastic deformation.
Given the recent noise around this paper (https://arxiv.org/pdf/2407.07218) about "weak baselines" in ML x CFD work, I wonder how it resonates with this specific work.
I am not super familiar with DEM, but I know that other particle-based methods such as SPH benefit immensely from GPU acceleration. Does it make sense to compare with a CPU implementation?
Besides, the output of NeuralDEM seems to be rather coarse fields, correct? In that sense (and again, I'm not an expert on granular models, so I might be entirely wrong), does it make sense to compare with a method that is under a very different set of constraints? Could we think of a numerical model that would allow computing the same quantities in a much more efficient way, for example?
Regarding your questions: yes, DEM also benefits a lot from GPU acceleration. So you can compare it to a CPU-based code, but obviously there's an order of magnitude you can gain via GPU.
Usually you are not interested in the fine fields anyway. Think of some fine powder in a big process, where there are trillions of real particles inside. You can't and don't want to simulate that. Mostly you are interested in these coarse quantities and in getting statistical data, so there's no need for the fine resolution.
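Concretely, "coarse quantities" usually means binning the particle data onto a grid, e.g. per-cell solid fraction and mean velocity, roughly like this (a toy NumPy sketch with made-up sizes, not how NeuralDEM computes its fields):

    import numpy as np

    def coarse_fields(pos, vel, box=1.0, n_cells=16,
                      particle_volume=4.0 / 3.0 * np.pi * (1e-3) ** 3):
        """Bin particles into an n_cells^3 grid: solid fraction and mean velocity.

        pos, vel: (N, 3) arrays in a cubic box of side `box`; all numbers are
        illustrative defaults.
        """
        idx = np.clip((pos / box * n_cells).astype(int), 0, n_cells - 1)
        flat = np.ravel_multi_index(idx.T, (n_cells,) * 3)

        counts = np.bincount(flat, minlength=n_cells**3)
        cell_volume = (box / n_cells) ** 3
        solid_fraction = (counts * particle_volume / cell_volume).reshape((n_cells,) * 3)

        mean_vel = np.zeros((n_cells**3, 3))
        for d in range(3):
            mean_vel[:, d] = np.bincount(flat, weights=vel[:, d], minlength=n_cells**3)
        mean_vel /= np.maximum(counts, 1)[:, None]   # avoid division by zero in empty cells
        return solid_fraction, mean_vel.reshape((n_cells,) * 3 + (3,))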
Regarding a numerical model that can compute these things in a more efficient way: such models don't always exist. When you move to large numbers of particles you can sometimes go to continuum models, but they might not always behave like the real thing, as it's really difficult to find governing equations for such materials.
I haven't heard of this paper, very interesting read! Thank you for bringing it up here. Resonates very well with the (little) experience I have from playing around with CNN-based surrogate models years ago.
[0]: https://ml-jku.github.io/bgnn/