Physical AI
Assembly Line
NVIDIA Advances Physical AI With Accelerated Robotics Simulation on AWS
NVIDIA announced at AWS re:Invent that Isaac Sim now runs on Amazon Elastic Compute Cloud (EC2) G6e instances accelerated by NVIDIA L40S GPUs. With NVIDIA OSMO, a cloud-native orchestration platform, developers can also easily manage complex robotics workflows across their AWS compute infrastructure.
Physical AI describes AI models that can understand and interact with the physical world. It embodies the next wave of autonomous machines and robots, such as self-driving cars, industrial manipulators, mobile robots, humanoids and even robot-run infrastructure like factories and warehouses. With physical AI, developers are embracing a three-computer solution for training, simulation and inference to make breakthroughs.
A Framework For Efficiently Scaling Neural Operators
Universal Physics Transformers (UPTs) are a novel learning paradigm for efficiently training large-scale neural operators on a wide range of spatio-temporal problems, covering both Lagrangian and Eulerian discretisation schemes.
The UPT architecture consists of an encoder, an approximator and a decoder. The encoder maps the physics domain into a latent representation, the approximator propagates that latent representation forward in time, and the decoder transforms it back to the physics domain.
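The encoder/approximator/decoder split can be sketched schematically. The real UPT uses transformer blocks; in this illustrative sketch each stage is a plain linear map (all weights and dimensions here are made up) just to show how a rollout is organised: encode once, step forward entirely in latent space, and decode each step back to the physics domain.

```python
import numpy as np

# Schematic sketch of the encoder / approximator / decoder pattern.
# NOT the actual UPT architecture: each stage is a simple linear map,
# with arbitrary dimensions chosen purely for illustration.

rng = np.random.default_rng(1)
d_phys, d_latent = 32, 8

W_enc = rng.normal(size=(d_latent, d_phys)) * 0.1   # physics -> latent
W_apx = np.eye(d_latent) * 0.99                     # one latent time step
W_dec = rng.normal(size=(d_phys, d_latent)) * 0.1   # latent -> physics

def rollout(u0, n_steps):
    """Encode once, propagate the latent state, decode every step."""
    z = W_enc @ u0
    states = []
    for _ in range(n_steps):
        z = W_apx @ z             # time propagation happens in latent space
        states.append(W_dec @ z)  # decode back to the physics domain
    return np.stack(states)

u0 = rng.normal(size=d_phys)
traj = rollout(u0, 10)
print(traj.shape)  # (10, 32)
```

The key design point the sketch preserves is that the expensive physics representation is touched only at the ends; all time stepping happens on the compact latent state.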
On machine learning methods for physics
Simulation methods are employed to resolve the behaviour of matter (solids, fluids, gases, etc.), fields (electromagnetic, pressure, velocity, density), and any number of other physical phenomena that are driven by known local rules, particularly partial differential equations (PDEs). Therefore, traditional simulation methods typically involve some kind of discretisation of the physical domain of interest, such that the rules of the governing PDE can be locally well-approximated by a tractable computation. Local computations are stacked together and iterated upon until we converge to a solution. Beyond a narrow class of problems where a closed-form solution can be provided, this is generally how complicated problems are addressed. Many PDEs can exhibit chaotic behaviour in their full form, which often causes us to resort to simpler approximations at the PDE level, even before discretisation, to make them computationally feasible and ensure convergence.
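The discretise-and-iterate pattern described above can be shown in a few lines. The following is a minimal, generic example (not tied to any method in this article): explicit finite differences for the 1D heat equation, where each interior grid point is updated from a local rule involving only its two neighbours, and the update is iterated in time.

```python
import numpy as np

# Minimal discretise-and-iterate example: explicit finite differences
# for the 1D heat equation u_t = alpha * u_xx on [0, 1], with u = 0
# held at both boundaries.

alpha = 0.01
nx = 51
dx = 1.0 / (nx - 1)
dt = 0.4 * dx**2 / alpha   # respects the explicit stability limit dt <= dx^2 / (2*alpha)

x = np.linspace(0.0, 1.0, nx)
u = np.sin(np.pi * x)       # initial condition

n_steps = 500
for _ in range(n_steps):
    # Local rule: a centred second difference approximates u_xx at each
    # interior point, using only that point and its two neighbours.
    u[1:-1] += alpha * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])

# The exact solution for this initial condition decays as exp(-alpha * pi^2 * t),
# so the midpoint amplitude should track that rate closely.
t = n_steps * dt
print(u[nx // 2], np.exp(-alpha * np.pi**2 * t))
```

Note the stability constraint on `dt`: pick the time step too large for this explicit scheme and the iteration diverges instead of converging, a small instance of the convergence concerns the paragraph above raises.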
ML methods in general provide a new approach to accomplishing engineering tasks. These methods greatly accelerate iterations in the design-optimisation workflow, as they allow us to search the space faster and guide the search towards promising regions. The result is better exploration, lower overall computational cost for simulating physics, and ultimately higher-quality designs in a shorter time frame with less manual effort.
The models discussed so far do not leverage the fact that we often know the PDE that generates data and governs solutions; the focus has been to approximate the physical laws from observations, rather than impose them explicitly in the model structure. This is primarily because of the difficulty of incorporating such prior knowledge into the models, but also because simulation data may disobey the exact PDE due to the approximations required to facilitate numerical simulations. However, a new approach to simulation has recently been proposed that takes advantage of this prior knowledge in an effort to reduce data requirements and promote physically consistent solutions. Physics-Informed Neural Networks (PINNs), as presented by Raissi, Perdikaris, and Karniadakis (2017a, 2017b; Zhu et al. 2019; Karniadakis et al. 2021), introduce an artificial neural network (ANN) that takes as input the coordinates of any point in the domain of the PDE and outputs the value of the solution field at that point. The ANN is tasked with representing the solution field, and is trained by sampling points randomly in the domain and penalising deviations from the PDE at those points. As long as the activation function of the ANN is sufficiently differentiable, residuals in the terms of the PDE can be easily evaluated and combined into the loss function to be minimised with respect to the ANN parameters. The ANN is thus an ansatz for a parametrised form of the solution (albeit a particularly flexible one), and we fit the parameters so that it best satisfies the PDE. The idea harks back to older variational numerical simulation methods, such as generalised Galerkin approximations.
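The residual-minimisation recipe behind PINNs can be illustrated without a neural network. The sketch below (an illustration of the idea, not an actual PINN) swaps the ANN ansatz for a small sine basis, but keeps the same steps the paragraph describes: sample collocation points in the domain, evaluate the PDE residual there, and fit the ansatz parameters by minimising that residual. With a linear ansatz the fit reduces to least squares; with an ANN it becomes gradient descent on the same residual loss.

```python
import numpy as np

# Residual-minimisation sketch in the spirit of PINNs, using a sine-basis
# ansatz instead of a neural network (the PDE and basis size are chosen
# for illustration only).
#
# PDE: u''(x) = -pi^2 * sin(pi * x) on [0, 1], with u(0) = u(1) = 0.
# Exact solution: u(x) = sin(pi * x).

rng = np.random.default_rng(0)
xs = rng.uniform(0.0, 1.0, size=200)   # random collocation points
K = 5                                  # number of basis functions
k = np.arange(1, K + 1)

# Ansatz u(x) = sum_k c_k sin(k*pi*x) satisfies the boundary conditions
# by construction, so only the PDE residual needs to be minimised.
# Its second derivative, u''(x) = -sum_k c_k (k*pi)^2 sin(k*pi*x),
# is linear in the coefficients c, so the residual fit is a least-squares
# problem rather than the gradient-descent loop a true PINN would need.
A = -((k * np.pi) ** 2) * np.sin(np.outer(xs, k * np.pi))  # d2/dx2 of each basis fn
f = -np.pi**2 * np.sin(np.pi * xs)                         # PDE right-hand side
c, *_ = np.linalg.lstsq(A, f, rcond=None)

# Evaluate the fitted ansatz against the exact solution.
x_test = np.linspace(0.0, 1.0, 50)
u_hat = np.sin(np.outer(x_test, k * np.pi)) @ c
print(np.max(np.abs(u_hat - np.sin(np.pi * x_test))))
```

Because the exact solution lies in the span of the basis, the fit recovers it essentially to machine precision; the point of a neural-network ansatz in a real PINN is exactly to handle problems where no such convenient basis is known in advance.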