Scientists from the University of Sydney and Japan’s National Institute for Materials Science (NIMS) have found that an artificial network of nanowires can be tuned to respond to electrical stimulation in a brain-like way.

The international team, led by Joel Hochstetter with Professor Zdenka Kuncic and Professor Tomonobu Nakayama, found that keeping the network of nanowires in a brain-like state “on the edge of chaos” allows it to perform tasks at an optimal level.

This suggests that the underlying nature of neural intelligence is physical, and the discovery opens an exciting path for the development of artificial intelligence.

The study is published today in Nature Communications.

“We used wires 10 micrometers long and no thicker than 500 nanometers, randomly arranged on a two-dimensional plane,” says lead author Joel Hochstetter, a PhD student at the Nano Institute and School of Physics at the University of Sydney.

“Where the wires overlap, they form an electrochemical connection, like the synapses between neurons,” he said. “We found that electrical signals passed through this network automatically find the best way to convey information. And this architecture enables the network to ‘remember’ previous paths through the system.”

ON THE EDGE OF CHAOS

Using simulations, the research team tested the random nanowire network to see how to make it perform best at solving simple tasks.

If the signal stimulating the network was too weak, the pathways were too predictable and orderly and did not produce output complex enough to be useful. If the electrical signal overwhelmed the network, the output was utterly chaotic and useless for problem-solving.

The optimal signal to produce a usable output was on the verge of this chaotic state.

“Some neuroscience theories suggest that the human mind could operate on this edge of chaos, or what is known as the critical state,” said Professor Kuncic of the University of Sydney. “Some neuroscientists believe that in this state we can achieve maximum brain performance.”

Professor Kuncic is Mr. Hochstetter’s PhD supervisor and is currently a Fulbright Fellow at the University of California, Los Angeles, where she is working at the interface between nanoscience and artificial intelligence.

She said, “The exciting thing about this result is that it suggests that these types of nanowire networks can be tuned to regimes with different, brain-like collective dynamics that can be used to optimize information processing.”

OVERCOMING COMPUTER DUALITY

In the nanowire network, the connections between the wires allow the system to integrate memory and operations into a single system. This is different from standard computers, which separate memory (RAM) and operations (CPUs).

“These junctions behave like computer transistors, but they have the additional property of remembering that signals have already traveled along that path. As such, they are called ‘memristors’,” said Hochstetter.

This memory takes on a physical form: the junctions at the crossing points between the nanowires act like switches, whose behavior depends on how they have responded to electrical signals in the past. When signals are applied to these junctions, tiny silver filaments grow and activate the junctions by allowing current to flow through them.

“This creates a memory network within the random system of nanowires,” he said.
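
Purely as an illustration of the memristor idea (and not the model used in the study), a junction can be sketched as a conductance that grows while a signal passes and slowly relaxes otherwise; every parameter value below is an arbitrary placeholder.

```python
class MemristiveJunction:
    """Toy model of a nanowire crossing whose conductance depends on signal history.

    All parameter values are illustrative placeholders, not taken from the study.
    """

    def __init__(self, g_min=1e-4, g_max=1.0, growth=0.5, decay=0.1):
        self.g_min, self.g_max = g_min, g_max
        self.growth, self.decay = growth, decay
        self.g = g_min  # conductance, a stand-in for the silver filament's strength

    def step(self, voltage, dt=1e-3):
        # An applied voltage grows the filament (raising conductance); without
        # stimulation the filament dissolves and conductance relaxes back down.
        self.g += dt * (self.growth * abs(voltage) - self.decay * (self.g - self.g_min))
        self.g = max(self.g_min, min(self.g_max, self.g))
        return self.g * voltage  # current through the junction (Ohm's law)


# A burst of stimulation raises the conductance, which then decays only slowly:
# the junction "remembers" having carried a signal.
junction = MemristiveJunction()
for v in [1.0] * 500 + [0.0] * 500:
    junction.step(v)
print("conductance after stimulation and rest:", junction.g)
```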

Mr. Hochstetter and his team created a simulation of the physical network to show how it can be trained to solve very simple tasks.

“For this study, we trained the network to convert a simple waveform into more complex waveforms,” said Hochstetter.

In the simulation, they adjusted the amplitude and frequency of the electrical signal to see where the best performance occurred.

“We found that if you drive the signal too slowly, the network just does the same thing over and over without learning and developing. If we pushed it too hard and fast, the network became erratic and unpredictable,” he said.
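
The sweep itself can be sketched in a few lines, assuming a generic echo-state-style reservoir as a stand-in for the nanowire simulator (this is not the study’s code): a sine wave is fed in, a square wave of the same frequency is the target, and the drive amplitude and frequency are scanned to see where the transformation error is lowest.

```python
import numpy as np

rng = np.random.default_rng(0)

def waveform_task_error(amplitude, freq, n_nodes=100, T=2000, washout=200):
    """Drive a stand-in nonlinear network with a sine wave and train a linear
    readout to reproduce a square wave; return the RMS error (lower is better)."""
    t = np.arange(T)
    u = amplitude * np.sin(2 * np.pi * freq * t)      # simple input waveform
    target = np.sign(np.sin(2 * np.pi * freq * t))    # more complex target waveform
    W_in = rng.uniform(-1, 1, n_nodes)
    W = rng.normal(0, 1 / np.sqrt(n_nodes), (n_nodes, n_nodes))
    x = np.zeros(n_nodes)
    states = np.zeros((T, n_nodes))
    for i in range(T):
        x = np.tanh(W @ x + W_in * u[i])              # placeholder network dynamics
        states[i] = x
    # Train only the linear readout; the internal dynamics are left untouched.
    w_out, *_ = np.linalg.lstsq(states[washout:], target[washout:], rcond=None)
    prediction = states[washout:] @ w_out
    return np.sqrt(np.mean((prediction - target[washout:]) ** 2))

# Scan drive amplitude and frequency, looking for the sweet spot between
# too-orderly and too-chaotic dynamics.
for amp in (0.1, 1.0, 10.0):
    for f in (0.002, 0.01, 0.05):
        print(f"amplitude={amp:5.1f}  freq={f:.3f}  RMSE={waveform_task_error(amp, f):.3f}")
```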

The University of Sydney researchers worked closely with collaborators at the International Center for Materials Nanoarchitectonics at NIMS in Japan and at UCLA, where Professor Kuncic is a visiting Fulbright fellow. The nanowire systems were developed at NIMS and UCLA, and Mr. Hochstetter developed the analysis in collaboration with co-authors and fellow PhD students Ruomin Zhu and Alon Loeffler.

REDUCING ENERGY CONSUMPTION

Professor Kuncic said that the union of memory and operations has tremendous practical benefits for the future development of artificial intelligence.

“The algorithms needed to train the network to know which junction should be given the appropriate ‘load’ or weight of information eat up a lot of power,” she said.

“The systems we are developing make such algorithms superfluous. We simply allow the network to develop its own weighting, which means we only have to worry about signal input and output, a framework known as ‘reservoir computing’. The network weights are self-adjusting, potentially freeing up large amounts of energy.”
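
A minimal sketch of that reservoir computing setup, with random numbers standing in for states recorded from the network (the shapes and regularization value are placeholders): the only training step is one linear fit of readout weights, and no internal weight is ever adjusted.

```python
import numpy as np

# Hypothetical recorded data: `states` plays the role of node readings taken
# from the (simulated or physical) network over time, `target` the desired output.
rng = np.random.default_rng(1)
states = rng.normal(size=(1000, 128))                   # T time steps x N channels
target = np.sin(np.linspace(0, 20 * np.pi, 1000)) ** 3  # desired output waveform

# Reservoir computing: the network's internal "weights" are left to the physics;
# training reduces to a single ridge-regularized linear fit of the readout.
ridge = 1e-2
w_out = np.linalg.solve(states.T @ states + ridge * np.eye(states.shape[1]),
                        states.T @ target)
prediction = states @ w_out
print("training RMSE:", np.sqrt(np.mean((prediction - target) ** 2)))
```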

This means that all future artificial intelligence systems using such networks would have a much smaller energy footprint.

###

DOWNLOAD the study, photos of researchers and nanowire networks at this link.

INTERVIEW

Joel Hochstetter | [email protected] (located in Sydney)

Professor Zdenka Kuncic | [email protected] (currently in Los Angeles)

MEDIA INQUIRIES

Marcus Strom | [email protected] | +61 423 982 485

DECLARATION

The authors acknowledge the use of the Artemis High Performance Computing resource at the Sydney Informatics Hub, a core research facility at the University of Sydney.
