Thursday, July 25, 2013

Remy promises to speed up the Internet

By Mark Ollig

What oversees how we send and receive information through the Internet in an orderly manner?

The exchange of data packets from one computer to another over this huge global network is currently managed via the Transmission Control Protocol (TCP).

In 1974, Vinton Cerf, along with Robert Kahn, created the basic TCP/IP (Internet Protocol) communication design that makes up the logical architecture of how we use the Internet today.

Our computers use TCP to negotiate the packets of information and shared resources sent and received over the Internet.

Cerf and Kahn probably could not have foreseen the billions of devices now connected to an endlessly growing Internet.

These billions of devices are regularly requesting and exchanging information packets containing video, voice, pictures, texts, and other data.

Today, the Internet is taken for granted; we have become accustomed to (some might say dependent upon) it for our communication, business, education, and entertainment.

With faster broadband speeds becoming commonplace, the negotiation between the Internet and the devices requesting packets must be handled as efficiently as possible to avoid bottlenecks and delays.

Our computing devices use sets of rules, or algorithms, to negotiate this packet exchange; they are known as “TCP congestion avoidance algorithms.”

There are many TCP congestion avoidance algorithms designed to handle these packets of data for computer operating systems.

In Windows operating systems, there is one called Compound.

Some Linux operating systems use an algorithm called TCP Vegas.

Cubic is used as a TCP congestion avoidance algorithm in high-speed networks.

Apple’s Mac operating system has a TCP algorithm called Reno, and I read some systems are using SACK (Selective Acknowledgement).
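
For readers curious about what these algorithms actually do, here is my own simplified sketch, written in Python, of the additive-increase, multiplicative-decrease idea that classic schemes such as Reno are built around: grow the sending window a little each round trip while packets arrive safely, and cut it roughly in half when a lost packet signals congestion. This is only a teaching illustration, not the code any of these operating systems actually ship.

# Simplified illustration of additive-increase, multiplicative-decrease (AIMD),
# the core idea behind classic TCP congestion avoidance schemes such as Reno.
# This is a teaching sketch, not the code any operating system actually ships.

def update_congestion_window(cwnd, packet_lost):
    """Return the new congestion window (in packets) after one round trip."""
    if packet_lost:
        # Multiplicative decrease: back off sharply when congestion is detected.
        return max(1.0, cwnd / 2.0)
    # Additive increase: probe for more bandwidth, one packet per round trip.
    return cwnd + 1.0

if __name__ == "__main__":
    cwnd = 1.0
    # Pretend the eighth round trip loses a packet; watch the window grow, then halve.
    for rtt, lost in enumerate([False] * 7 + [True] + [False] * 4, start=1):
        cwnd = update_congestion_window(cwnd, lost)
        print(f"RTT {rtt:2d}: congestion window = {cwnd:4.1f} packets")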

The TCP algorithms presently in use are not as efficient at handling the Internet’s increasing data packet exchanges as some would like them to be.

Recently, a new type of TCP congestion control algorithm, named Remy, was the topic of an MIT (Massachusetts Institute of Technology) paper called “TCP ex Machina: Computer-Generated Congestion Control.”

Of course, the first question yours truly needed to have answered was “Why did they decide to call this TCP algorithm Remy?”

The MIT paper describes how, in his 1985 book “Metamagical Themas: Questing for the Essence of Mind and Pattern,” Douglas Hofstadter laid claim to inventing the word “superrational,” which means “a substitute method for rational decision making.”

The TCP code developers shrewdly discovered there was a “rat” in the word superrational. 

Those crafty developers associated their new superrational-like TCP code with the main character in the 2007 computer-animated movie “Ratatouille,” who was a rat called Remy. 

I find it clever (and sometimes amusing) how these super-smart computer science folks come up with these words.

Remy is a newly devised TCP algorithm, supporting a decision-making set of TCP rules based upon superrationality.

In the MIT paper, they describe in detail how the Internet is “a best-effort packet-switched network.”

If your Internet-connected computer could always be first in line for the delivery and reception of data packets, it would seem like a perfect world, for you would have no TCP congestion problems.

Of course, we know the Internet is used by billions of other computers and devices. Without some sort of shared control protocol for how data packets are sent and distributed to each computing device requesting them, there would be massive packet congestion and gridlock on the Internet.

TCP congestion control algorithms support the fair allocation of the data packets to each network endpoint.

A network endpoint is defined here as a device used to access a network service, such as the Internet.

The computers we use at home and work, our smart mobile devices, iPads, and the other tablets we connect to the Internet with are examples of network endpoints.

Remy, as I understand it, is an effort to synchronize, or coordinate, the TCP algorithms in each network endpoint so they work collectively, thus maximizing the distribution efficiency of the data packet requests among them.

According to the MIT paper, “Remy’s job is to find what that algorithm should be. We refer to a particular Remy-designed congestion-control algorithm as a RemyCC.”
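
As best I can tell from the paper, the idea works something like the loose sketch below (my own illustration, with invented names and a toy “network” standing in for the far more detailed simulations the MIT researchers use): try out many candidate congestion-control rules offline, score each one by how well the endpoints sharing a simulated link fare, and keep the best rule found. That winning rule is what the authors would call a RemyCC.

# A loose, made-up sketch of "computer-generated congestion control":
# search offline over candidate rules, score each in a toy simulation,
# and keep the best. The scoring model here is invented for illustration
# and is far simpler than the simulations described in the MIT paper.

import random

def score_rule(increase, decrease, senders=4, rtts=200):
    """Toy simulation: several endpoints share a link of fixed capacity."""
    capacity = 100.0                       # packets per round trip, invented number
    windows = [1.0] * senders
    throughput = 0.0
    for _ in range(rtts):
        total = sum(windows)
        congested = total > capacity
        for i in range(senders):
            if congested:
                windows[i] = max(1.0, windows[i] * decrease)
            else:
                windows[i] += increase
        throughput += min(total, capacity)
    return throughput / rtts               # higher average throughput is better

def design_rule(candidates=500):
    """Offline search: try many random rules, keep the best-scoring one."""
    best_rule, best_score = None, float("-inf")
    for _ in range(candidates):
        rule = (random.uniform(0.1, 5.0),   # additive increase per round trip
                random.uniform(0.3, 0.9))   # multiplicative decrease factor
        s = score_rule(*rule)
        if s > best_score:
            best_rule, best_score = rule, s
    return best_rule, best_score

if __name__ == "__main__":
    rule, score = design_rule()
    print("best rule found:", rule, "average throughput:", round(score, 1))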

Remy-generated TCP algorithms have proved themselves when tested using ns-2 (a network event simulator), where they bettered Compound, Vegas, and Cubic.

Will we someday see a Remy algorithm working inside our computer’s operating system, or on a network, negotiating data packet exchanges to and from the Internet?

“Much future work remains before this question can be answered for the real-world Internet, but our findings suggest that this approach has considerable potential,” concluded Keith Winstein and Hari Balakrishnan, who wrote the MIT paper.

You can read the Remy MIT paper, and see all the fascinating algorithmic details at http://tinyurl.com/RemyMIT.