Characteristics of large systems revealed by tiny simulations using super-resolution techniques

Tokyo, Japan – Researchers at Tokyo Metropolitan University have improved “super-resolution” machine learning techniques to study phase transitions. By simulating tiny arrays of interacting “particles” and then using a convolutional neural network on “correlation configurations” to estimate what a larger array would look like, they identified key features of how large systems behave at different temperatures. The massive savings in computational cost offer unique opportunities for understanding the behavior of materials.

We are surrounded by different states or “phases” of matter, i.e. gases, liquids and solids. The study of phase transitions, the transformation of one phase into another, is at the heart of our understanding of matter in the universe and remains a hot topic for physicists. In particular, the idea of universality, in which very different materials behave similarly thanks to a few shared features, is a powerful one. That is why physicists study model systems, often simple grids of “particles” on a lattice which interact via simple rules. These models distill the essence of the common physics of materials and, amazingly, still show many properties of real materials, such as phase transitions. Because of their elegant simplicity, these rules can be coded into simulations that tell us what materials look like under different conditions.
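To make this concrete, here is a minimal sketch of one such lattice model, the 2D Ising model mentioned later in this article, updated with the standard Metropolis algorithm. The lattice size, temperature and number of sweeps are illustrative choices, not the settings used in the study.

```python
import numpy as np

def metropolis_sweep(spins, beta, rng):
    """One Metropolis sweep of a 2D Ising model on a periodic L x L lattice."""
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(0, L, size=2)
        # Sum of the four nearest-neighbour spins (periodic boundaries)
        nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2.0 * spins[i, j] * nn          # energy cost of flipping spin (i, j)
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i, j] *= -1
    return spins

rng = np.random.default_rng(0)
L, T = 16, 2.4                               # small lattice, temperature near the transition (illustrative)
spins = rng.choice([-1, 1], size=(L, L))     # random initial configuration
for _ in range(1000):                        # equilibration sweeps
    metropolis_sweep(spins, 1.0 / T, rng)
print("magnetisation per spin:", spins.mean())
```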

As with all simulations, however, the problem begins when we want to look at many particles at the same time. The required computing time becomes unaffordable, especially near phase transitions, where the dynamics slow down and the “correlation length”, a measure of how the state of one atom relates to the state of another at a certain distance, grows larger and larger. This is a real dilemma if we want to apply this knowledge to the real world: real materials generally contain orders of magnitude more atoms and molecules than simulated ones.

For this reason, a team led by Professors Yutaka Okabe and Hiroyuki Mori of Tokyo Metropolitan University, in collaboration with researchers from the Shibaura Institute of Technology and the Bioinformatics Institute of Singapore, investigated how smaller simulations can be reliably extrapolated to larger ones using a concept known as the inverse renormalization group (RG). The renormalization group is a fundamental concept in understanding phase transitions and earned Wilson the 1982 Nobel Prize in Physics. Recently, the field found a strong ally in convolutional neural networks (CNNs), the same machine learning tool that helps machine vision identify objects and decipher handwriting. The idea is to give an algorithm the state of a small array of particles and have it “guess” what a larger array would look like. There is a strong analogy to super-resolution imaging, in which blocky, pixelated images are used to create smoother, higher-resolution ones.

The team investigated how this applies to “spin” models of matter, where particles interact with nearby particles via the direction of their “spins”. Previous attempts have found it particularly difficult to apply this to systems at temperatures above a phase transition, where configurations look more random. Instead of using spin configurations, i.e. simple snapshots of the direction in which the particle spins are pointing, they turned to correlation configurations, in which each particle is characterized by how similar its own spin is to that of other particles, especially those that are far away. They found that correlation configurations contain more subtle cues about how particles are arranged, especially at higher temperatures.
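As an illustration only, the sketch below builds a per-site correlation map by comparing each spin with a partner spin a fixed, large distance away. The study defines correlation configurations via an improved estimator, so this is a simplified stand-in rather than the team’s exact construction.

```python
import numpy as np

def correlation_configuration(spins, shift=None):
    """Illustrative per-site correlation map: compare each spin with the spin
    a fixed (large) displacement away, using periodic boundaries.
    A simplified stand-in for the improved-estimator definition used in the study."""
    L = spins.shape[0]
    if shift is None:
        shift = L // 2                       # compare with spins half a lattice away
    partner = np.roll(spins, shift=(shift, shift), axis=(0, 1))
    return spins * partner                   # +1 where spins agree, -1 where they differ
```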

As with all machine learning techniques, the key is being able to generate a reliable “training set”. The team developed a new algorithm, a block-cluster transformation for correlation configurations, to reduce them to smaller patterns. Applying an improved estimator technique to both the original and the reduced patterns, they obtained pairs of configurations of different sizes based on the same information. All that remained was to train the CNN to convert the small patterns into larger ones.
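The following sketch shows what such a training setup could look like in PyTorch: a small CNN that doubles the linear size of a correlation configuration, trained on paired small/large patterns. The architecture, hyperparameters and the helper train_step are illustrative assumptions, not the network described in the paper, and the paired batches are assumed to have been prepared separately (for example by a block-cluster-style reduction).

```python
import torch
import torch.nn as nn

# A simple "super-resolution" CNN that doubles the linear size of a
# correlation configuration. Architecture and hyperparameters are
# illustrative, not the network used in the study.
upscaler = nn.Sequential(
    nn.Upsample(scale_factor=2, mode="nearest"),
    nn.Conv2d(1, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 1, kernel_size=3, padding=1),
    nn.Tanh(),                               # outputs bounded like correlation values in [-1, 1]
)

def train_step(small_batch, large_batch, optimiser, loss_fn=nn.MSELoss()):
    """One gradient step mapping small patterns to their larger counterparts.
    small_batch and large_batch are assumed to be paired tensors of shape
    (N, 1, L/2, L/2) and (N, 1, L, L), e.g. produced by reducing larger
    simulated configurations."""
    optimiser.zero_grad()
    prediction = upscaler(small_batch)
    loss = loss_fn(prediction, large_batch)
    loss.backward()
    optimiser.step()
    return loss.item()

optimiser = torch.optim.Adam(upscaler.parameters(), lr=1e-3)
```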

The group looked at two systems, the 2D Ising model and the three-state Potts model, both of which are important benchmarks for studies of condensed matter. For both, they found that their CNN could use a simulation of a very small array of points to reproduce how a correlation measure g(T) changes across the phase transition point in much larger systems. Compared with direct simulations of larger systems, the same trends were reproduced for both systems when combined with a simple temperature rescaling based on data at any system size.
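One way to picture how such a measure can be tracked is sketched below: averaging the per-site correlation map over a configuration gives a single number per temperature, which can then be compared between CNN-enlarged systems and direct simulations. The definition of g(T) here is an assumption for illustration; the estimator used in the study may be defined differently.

```python
import numpy as np

def g_estimate(configs):
    """Average a batch of per-site correlation maps (shape (N, L, L)) into a
    single correlation measure. Illustrative definition only; the exact
    g(T) used in the study may differ."""
    return float(np.mean(configs))

# Hypothetical usage: configs_by_T maps each temperature to a batch of
# correlation configurations (simulated or CNN-enlarged).
# g_curve = {T: g_estimate(c) for T, c in configs_by_T.items()}
```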

Successful implementation of inverse RG transformations promises scientists insight into previously inaccessible system sizes and helps physicists understand the properties of materials at larger scales. The team now hopes to apply its method to other models that can capture more complex features, such as a continuous range of spins, and to the study of quantum systems.

###

This work was supported by a Grant-in-Aid for Scientific Research from the Japan Society for the Promotion of Science, a Research Fellowship for Young Scientists from the Japan Society for the Promotion of Science and the A*STAR (Agency for Science, Technology and Research) Research Attachment Program of Singapore.
