Researchers programmed an AI that thinks like Newton or Einstein

Physics faces the challenge of understanding how collective properties emerge from interactions at the microscopic level. These interactions are fundamental to almost all physical theories and are typically described by mathematical terms in the action.

The traditional method involves deriving these terms from basic processes and using the resulting model to predict the behavior of the whole system. But what if we don’t know the underlying processes? Is it possible to flip the approach and infer the microscopic action by observing the system as a whole?

The creation of new theories in physics has historically been linked with renowned figures such as Isaac Newton or Albert Einstein. Numerous Nobel Prizes have been awarded for groundbreaking theories in the field. Now, scientists at Forschungszentrum Jülich have developed an artificial intelligence (AI) capable of achieving a similar feat. This AI can identify patterns within intricate datasets and translate them into coherent physical theories.

In the interview below, Prof. Moritz Helias from Forschungszentrum Jülich’s Institute for Advanced Simulation (IAS-6) sheds light on the concept of the “Physics of AI” and how it differs from traditional approaches.

Physicists typically start by observing a system and then propose how its different components interact to explain its behavior. They derive predictions from these observations, which are then tested. For instance, Isaac Newton’s law of gravitation accurately predicts celestial body movements.

Helias and his colleagues employ a method called “physics for machine learning,” using physics to understand the complex inner workings of AI. Their novel idea is to train a neural network to map a system’s complex behavior onto a simpler system, and then to construct the inverse mapping, from which a new theory of the simplified interactions can be formulated. The approach mirrors traditional physics, except that the interactions are extracted from the network’s parameters.
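The researchers’ actual construction uses trained neural networks; as a minimal stand-in, the same logic can be illustrated with a Gaussian toy theory, where the action is quadratic, S(x) = ½ xᵀAx, the “simple system” is a set of independent unit Gaussians, and the coupling matrix A can be read back off the inverse of the learned (here: whitening) map. All names and the linear setup below are illustrative assumptions, not the paper’s method.

```python
import numpy as np

# Toy illustration (not the paper's network-based method): for a Gaussian
# "theory" with action S(x) = 1/2 x^T A x, the couplings A are the inverse
# covariance.  Whitening maps the data to independent unit Gaussians (the
# "simple system"); inverting that map recovers the interactions.
rng = np.random.default_rng(0)

# True pairwise couplings of a hypothetical 3-component system.
A_true = np.array([[2.0, 0.5, 0.0],
                   [0.5, 1.5, 0.3],
                   [0.0, 0.3, 1.0]])

# Sample from the theory: the covariance is the inverse of the couplings.
cov_true = np.linalg.inv(A_true)
x = rng.multivariate_normal(np.zeros(3), cov_true, size=200_000)

# "Forward map": whiten the data so the components become independent.
cov_est = np.cov(x, rowvar=False)
L = np.linalg.cholesky(cov_est)
L_inv = np.linalg.inv(L)
z = x @ L_inv.T                      # z is approximately N(0, I)

# "Inverse map": read the couplings back off the learned transformation,
# since cov = L L^T implies A = L^{-T} L^{-1}.
A_recovered = L_inv.T @ L_inv
print(np.round(A_recovered, 2))
```

In the Gaussian case the mapping is linear and exact; the point of the paper’s deep, invertible networks is that the same extraction idea carries over to systems whose interactions are not quadratic.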

The physicists analyzed a dataset of handwritten digits, a benchmark commonly used in neural-network research. They investigated how small image elements, such as the edges of a digit, arise from interactions between pixels, which helps them understand how the AI processes visual information.
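A toy version of that pixel-interaction idea can be sketched with synthetic stroke images rather than real digits: pixels lying along the same stroke are strongly correlated, and such pairwise statistics are the kind of structure the learned interactions capture. The 4×4 images and vertical-stroke setup below are purely illustrative assumptions.

```python
import numpy as np

# Hypothetical toy dataset: 4x4 binary images, each containing one
# vertical stroke (a randomly chosen column switched on).
rng = np.random.default_rng(1)
n, size = 10_000, 4
cols = rng.integers(0, size, n)
imgs = np.zeros((n, size, size))
imgs[np.arange(n), :, cols] = 1.0

# Pairwise pixel correlations over the flattened images.
flat = imgs.reshape(n, -1)
corr = np.corrcoef(flat, rowvar=False)

# Vertically adjacent pixels (same stroke) are strongly positively
# correlated; horizontally adjacent ones are anti-correlated, because
# strokes in different columns exclude each other.
print(round(corr[0, 4], 2), round(corr[0, 1], 2))
```

On real handwritten digits the correlation structure is of course far richer, which is why the interactions are learned by a network rather than tabulated by hand.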

The AI allowed them to handle large calculations efficiently. Although the computational effort remains high because of the sheer number of possible interactions, the team parameterized them effectively. Currently, the physicists can study systems of around 1,000 interacting components, and further optimization could extend this reach.
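The combinatorics behind that computational cost are easy to make concrete: among N components there are N-choose-2 candidate pairwise terms, and higher-order terms grow combinatorially, which is why an efficient parameterization matters. A quick count:

```python
from math import comb

# Number of candidate pairwise and three-body interaction terms among
# N components: comb(N, 2) and comb(N, 3) respectively.
for N in (10, 100, 1000):
    print(N, comb(N, 2), comb(N, 3))
```

At N = 1000 there are already 499,500 pairwise terms alone, so representing the interactions through shared network parameters rather than individual coefficients is what keeps the problem tractable.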

Unlike many AIs, which learn theories only implicitly from data, their approach extracts the learned theories and formulates them in the language of physics. This makes the theories interpretable, placing the work under the headings of “explainable AI” and “physics of AI” and bridging the gap between complex AI processes and human-understandable theories.

Journal Reference:

  1. Claudia Merger, Alexandre René, Kirsten Fischer, Peter Bouss, Sandra Nestler, David Dahmen, Carsten Honerkamp, and Moritz Helias. Learning Interacting Theories from Data. Phys. Rev. X 13, 041033 (2023). DOI: 10.1103/PhysRevX.13.041033
