LLMs are not involved anywhere. You start with some data, either simulation data or experimental data. Then you train a model to learn either a time-evolution operator or a force field. Then you apply it to new input data and visualize the results.
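To make that concrete, here is a minimal sketch of the "learn a time-evolution operator" step. Everything in it is a toy assumption on my part: a damped oscillator stands in for the expensive simulator, and a least-squares linear operator (the DMD-style baseline) stands in for the trained model. Real work uses neural networks, but the workflow is the same.

```python
import numpy as np

# Toy "simulation data": a damped harmonic oscillator integrated with
# a small explicit Euler step. Stands in for the expensive simulator.
dt = 0.01
steps = 2000
X = np.zeros((steps, 2))
X[0] = [1.0, 0.0]                      # initial position, velocity
for t in range(steps - 1):
    pos, vel = X[t]
    acc = -4.0 * pos - 0.1 * vel       # spring + damping (toy physics)
    X[t + 1] = [pos + dt * vel, vel + dt * acc]

# Fit a linear one-step evolution operator A with least squares,
# i.e. x_{t+1} ≈ x_t @ A (row-vector convention).
A, *_ = np.linalg.lstsq(X[:-1], X[1:], rcond=None)

# "Inference": roll the learned operator forward from a new state
# instead of running the simulator again.
x = np.array([0.5, 0.0])
trajectory = [x]
for _ in range(500):
    x = x @ A
    trajectory.append(x)
```

The rollout at the end is the whole point: once the operator is fit, stepping the state forward is a matrix multiply rather than a simulator run.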
A typical motivation is that the simulation data takes months to generate, which makes the classical model far too slow for experimental use. The idea is to train a model that learns the underlying physics and is small enough that inference isn't prohibitively expensive, so you can use the ML model in lieu of the classical physics-based model.
Where this usually fails: ML models can be trained to replicate the training data well enough, but they typically fail to generalize outside the domain and regime of that data. So unless your experimental problems fall entirely within the same domains and regimes as the training data, the model isn't of much use.
That is why claims of generalizability and applicability are always dubious.
Lots of publications on this topic follow the same pattern: conceive of a new architecture or formalism, train an ML model on widely available data, show that it can reproduce the training data to some extent, mention generalizability in the discussion, and never test it.
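For what it's worth, the missing generalization test is cheap to run. Continuing the toy setup from above (again, the simulator and the stiffness values are my own stand-ins, not anything from a specific paper): fit the operator in one regime, then evaluate it in another.

```python
import numpy as np

def simulate(k, steps=2000, dt=0.01, x0=(1.0, 0.0)):
    """Toy simulator: damped oscillator with stiffness k."""
    X = np.zeros((steps, 2))
    X[0] = x0
    for t in range(steps - 1):
        pos, vel = X[t]
        X[t + 1] = [pos + dt * vel, vel + dt * (-k * pos - 0.1 * vel)]
    return X

def one_step_mse(A, X):
    """Mean squared error of the operator's one-step predictions."""
    return float(np.mean((X[:-1] @ A - X[1:]) ** 2))

# Train the surrogate in one regime (stiffness k = 4).
X_train = simulate(k=4.0)
A, *_ = np.linalg.lstsq(X_train[:-1], X_train[1:], rcond=None)

# In-distribution check: a fresh trajectory from the SAME regime.
print("in-regime MSE:    ", one_step_mse(A, simulate(k=4.0, x0=(0.5, 0.0))))

# Out-of-distribution check: a DIFFERENT regime (k = 9). The operator
# encodes k = 4 dynamics, so its predictions are systematically wrong here.
print("out-of-regime MSE:", one_step_mse(A, simulate(k=9.0, x0=(0.5, 0.0))))
```

For this toy the in-regime error is essentially zero while the out-of-regime error is orders of magnitude larger; that gap is exactly what the discussion sections gloss over.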