
We have a new data transfer record, and it is brutal: no less than 1.84 petabits per second


Japan’s record has been short-lived. At the beginning of last June, Japan’s National Institute of Information and Communications Technology (NICT) announced that its engineers had managed to transmit information through a fiber optic link at a transfer speed of no less than 1.02 petabits per second. It is a staggering figure.

The previous milestones reached in this area can help put this figure in context. In 2020, NICT technicians achieved a then impressive transfer rate of 178 Tbps, and by early 2022 they had reached 319 Tbps. A few months later they broke their own record with the 1.02 petabits per second mentioned in the previous paragraph, but they have just been dethroned. What’s more, their record has been beaten emphatically.

This is the feat: 1.84 petabits/s sent over 7.9 km

This time those responsible for this achievement are not NICT engineers; they are researchers at the Technical University of Denmark in Copenhagen. And the real protagonist of this feat is the photonic processor they turned to in order to handle that overwhelming amount of information. No conventional computer equipped with one or more traditional microprocessors has the power to process and transmit this volume of data. Not even several computers coordinated and working in unison do.

In fact, these Danish scientists have had to use their ingenuity to carry out their experiment. In this article we are not going to delve into the technical complexity of this milestone (if you want to learn about the experiment in detail, you can take a look at the paper they published in Nature Photonics), but it’s worth taking a moment to look at the strategy they devised to bring this test to fruition.


In order to manage this enormous volume of information, they split the data into 37 separate streams, each transmitted over a different optical core of a single fiber optic cable. This is very important, because one of the most relevant characteristics of this experiment is that the researchers used a conventional fiber optic cable identical to those currently used by telecommunications service providers.

This diagram describes the architecture of the massively parallelized infrastructure that had to be perfected to make the transfer of this enormous volume of information possible.

However, the “divide and conquer” strategy does not end here. Each of those 37 streams was in turn divided into 223 fragments (chunks), each assigned to a specific portion of the optical spectrum. It sounds complicated, and it is, but we can see it as a strategy that divides the information into many fragments so that it can be processed, encoded, sent, received and verified correctly. After all, it is easier to deal with many relatively manageable packages of information than with one gigantic, unmanageable one.
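The arithmetic behind this channel plan is easy to check. The sketch below multiplies the 37 spatial streams by the 223 spectral chunks mentioned in the article; the per-channel rate it prints is a back-of-envelope estimate of ours, not a figure reported by the researchers.

```python
# Rough view of the "divide and conquer" channel plan described above:
# 37 spatial streams (cores of a single fiber), each split into 223
# spectral chunks. The per-channel rate is our own estimate.

TOTAL_RATE_BPS = 1.84e15   # 1.84 petabits per second (reported total)
SPATIAL_STREAMS = 37       # separate optical cores in one fiber cable
SPECTRAL_CHUNKS = 223      # portions of the optical spectrum per stream

channels = SPATIAL_STREAMS * SPECTRAL_CHUNKS
per_channel_bps = TOTAL_RATE_BPS / channels

print(f"Parallel channels: {channels}")                       # 8251
print(f"Approx. rate per channel: {per_channel_bps/1e9:.0f} Gbit/s")
```

Seen this way, each individual channel carries a data rate that today’s optical transceivers can realistically handle, which is precisely the point of splitting the signal.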

In any case, they pulled it off. As they explain in their article, the experiment worked correctly and they managed to transmit 1.84 petabits of information per second over a distance of 7.9 km. To put this figure in context and grasp its magnitude, consider that the average traffic moving across the entire Internet at any given moment amounts to roughly 1 petabit per second. This volume of data continues to grow, but thanks to innovations such as the one developed by these researchers, we can face the future of telecommunications with optimism.

Images: Guillaume Meurice | Nature Photonics

More information: Nature Photonics
