Light-based processors boost machine-learning processing


IMAGE: Schematic representation of a processor for matrix multiplication that runs on light.

Credit: Oxford University

The exponential growth of data traffic in our digital age poses real challenges in terms of processing power. And with the advent of machine learning and AI in, for example, self-driving vehicles and speech recognition, the upward trend is set to continue. All of this places a heavy burden on the ability of conventional computer processors to keep up with demand.

Now, an international team of scientists has turned to light to tackle the problem. The researchers developed a new approach and architecture that combines the processing and storage of data on a single chip using light-based, or “photonic”, processors, which are shown to outperform conventional electronic chips by processing information much more rapidly and in parallel.

The scientists developed a hardware accelerator for so-called matrix-vector multiplications, which are the backbone of neural networks (algorithms loosely modelled on the human brain), which themselves are used for machine-learning tasks. Since different wavelengths of light (colours) do not interfere with one another, the researchers were able to use multiple wavelengths for parallel calculations. To do this, they used another innovative technology developed at EPFL, a chip-based “frequency comb”, as a light source.
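To make the core operation concrete, here is a minimal sketch of a matrix-vector multiplication, the workhorse computation the photonic accelerator performs (the function name and values are illustrative only, not the paper's implementation):

```python
# Illustrative sketch: the matrix-vector multiply at the heart of
# neural-network inference. Each output element is a dot product of
# one weight row with the input vector; on the photonic chip, many
# such dot products run in parallel on different wavelengths.
def matvec(matrix, vector):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(w * x for w, x in zip(row, vector)) for row in matrix]

weights = [[1, 2], [3, 4]]   # e.g. a small neural-network weight matrix
inputs = [5, 6]
print(matvec(weights, inputs))  # prints [17, 39]
```

Because the rows are independent, the computation parallelises naturally, which is exactly the property the wavelength-multiplexed hardware exploits.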

“Our study is the first to apply frequency combs in the field of artificial neural networks,” said Professor Tobias Kippenberg at EPFL, one of the study’s leaders. Professor Kippenberg’s research group has pioneered the development of frequency combs. “The frequency comb provides a variety of optical wavelengths that are processed independently of one another in the same photonic chip.”

“Light-based processors for accelerating tasks in the field of machine learning enable complex mathematical tasks to be processed at high speeds and throughputs,” says senior co-author Wolfram Pernice at the University of Münster, one of the professors who led the research. “This is much faster than conventional chips that rely on electronic data transfer, such as graphics cards or specialised hardware like TPUs (Tensor Processing Units).”

After designing and fabricating the photonic chips, the researchers tested them on a neural network that recognises handwritten numbers. Inspired by biology, these networks are a central concept in the field of machine learning and are used mainly for processing image or audio data. “The convolution operation between input data and one or more filters – which can identify edges in an image, for example – is well suited to our matrix architecture,” says Johannes Feldmann, now based at Oxford University’s Department of Materials. Nathan Youngblood (University of Oxford) adds: “Exploiting wavelength multiplexing permits higher data rates and computing densities, i.e. operations per area of processor, not previously attained.”
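The convolution operation mentioned above can be sketched in a few lines. This is a hedged, illustrative example (the function name, image, and filter values are assumptions, not taken from the paper): a small 2D convolution with a vertical-edge filter, the kind of operation that maps onto the photonic matrix architecture.

```python
# Illustrative sketch: a "valid" 2D convolution of an image with a filter.
# Each output value is a dot product between the filter and one image
# patch - i.e. many small matrix-vector multiplications, which is why
# convolutions suit the photonic matrix hardware.
def convolve2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

# A tiny image with a dark-to-bright boundary, and a vertical-edge filter.
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
kernel = [[-1, 1],
          [-1, 1]]
print(convolve2d(image, kernel))  # prints [[0, 2, 0], [0, 2, 0]]
```

The non-zero column in the output marks the location of the edge: the filter responds only where the pixel values change.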

“This work is a real showcase of European collaborative research,” said David Wright of the University of Exeter, who leads the EU FunComp project, which funded the work. “Whilst every research group involved is world-leading in its own way, it was bringing all these parts together that made this work truly possible.”

The study is published in Nature this week, and it has wide-ranging applications: simultaneously faster (and more energy-efficient) data processing in artificial intelligence, larger neural networks for more accurate forecasting and more detailed data analysis, faster evaluation of larger amounts of clinical data for diagnoses, rapid assessment of sensor data in self-driving vehicles, and the expansion of cloud computing infrastructures with more storage space, computing power, and application software.

###

Reference

J. Feldmann, N. Youngblood, M. Karpov, H. Gehring, X. Li, M. Stappers, M. Le Gallo, X. Fu, A. Lukashchuk, A. S. Raja, J. Liu, C. D. Wright, A. Sebastian, T. J. Kippenberg, W. H. P. Pernice, H. Bhaskaran. Parallel convolutional processing using an integrated photonic tensor core. Nature, 7 January 2021. DOI: 10.1038/s41586-020-03070-1

Disclaimer: AAAS and EurekAlert! are not responsible for the accuracy of news releases posted to EurekAlert! by contributing institutions or for the use of any information through the EurekAlert system.
