
Thursday, July 25, 2013

Remy promises to speed up the Internet

By Mark Ollig

What oversees how we send and receive information through the Internet in an orderly manner?

The exchange of data packets from one computer to another over this huge global network is currently managed via the Transmission Control Protocol (TCP).

In 1974, Vinton Cerf, along with Robert Kahn, created the basic TCP/IP (Transmission Control Protocol/Internet Protocol) communication code which makes up the logical architectural design of how we use the Internet today.

Our computers use the Internet’s TCP to negotiate the packets of information and shared resources sent and received over it.

Cerf and Kahn probably could not have foreseen the billions of devices now connected to an endlessly growing Internet.

These billions of devices are regularly requesting and exchanging information packets containing video, voice, pictures, texts, and other data.

Today, the Internet is taken for granted; we have become accustomed (some might say dependent) upon it for our communication, business, education, and entertainment.

With faster broadband speeds becoming commonplace, the negotiating between the Internet and the devices requesting packet information must be handled as efficiently as possible in order to avoid bottlenecks and delays. 

Our computing devices use a set of rules, or algorithms, to negotiate this packet exchange; these are known as “TCP congestion avoidance algorithms.”

There are many TCP congestion avoidance algorithm programs designed to handle these packets of data for computing operating systems.

In Windows operating systems, there is one called Compound.

Some Linux operating systems use an algorithm called TCP Vegas.

Cubic is used as a TCP congestion avoidance algorithm in high-speed networks.

Apple’s Mac operating system has a TCP algorithm called Reno, and I read some systems are using SACK (Selective Acknowledgement).
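Most of these classic schemes build on the same core idea: additive increase, multiplicative decrease (AIMD) of a “congestion window” that limits how much unacknowledged data a sender may have in flight. Here is a toy sketch of that idea in Python; it is a simplified illustration, not any operating system’s actual implementation, and the loss pattern is made up.

```python
# Toy illustration of the additive-increase/multiplicative-decrease (AIMD)
# rule underlying classic TCP congestion avoidance algorithms such as Reno.
# This is a simplified sketch, not any operating system's actual code.

def aimd_step(cwnd, packet_lost, increase=1.0, decrease=0.5):
    """Update the congestion window (in segments) after one round trip."""
    if packet_lost:
        # Multiplicative decrease: back off sharply when loss signals congestion.
        return max(1.0, cwnd * decrease)
    # Additive increase: probe gently for spare bandwidth.
    return cwnd + increase

cwnd = 1.0
history = []
for rtt in range(8):
    # Pretend congestion (a packet loss) occurs every fourth round trip.
    cwnd = aimd_step(cwnd, packet_lost=(rtt % 4 == 3))
    history.append(cwnd)

print(history)  # the window climbs by 1 each round trip, then is halved on loss
```

The sawtooth pattern this produces — steady climb, sharp cut — is the signature behavior the newer algorithms mentioned above each try to refine in their own way.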

Presently used TCP algorithms are not as efficient in handling the Internet’s increasing data packet exchanges as some would like them to be.

Recently, a new type of TCP congestion prevention algorithm, named Remy, was the topic of an MIT (Massachusetts Institute of Technology) paper called “TCP ex Machina: Computer-Generated Congestion Control.”

Of course, the first question yours truly needed to have answered was “Why did they decide to call this TCP algorithm Remy?”

The MIT paper describes how, in his 1985 book “Metamagical Themas: Questing for the Essence of Mind and Pattern,” Douglas Hofstadter laid claim to inventing the word “superrational,” which means “a substitute method for rational decision making.”

The TCP code developers shrewdly discovered there was a “rat” in the word superrational. 

Those crafty developers associated their new superrational-like TCP code with the main character of the 2007 computer-animated movie “Ratatouille,” a rat named Remy.

I find it clever (and sometimes amusing) how these super-smart computer science folks come up with these words.

Remy is a newly devised TCP algorithm, supporting a decision-making set of TCP rules based upon superrationality.

In the MIT paper, they describe in detail how the Internet is “a best-effort packet-switched network.”

If your Internet-connected computer could always be first in line for the delivery and reception of data packets, it would seem like a perfect world, for you would have no TCP congestion problems.

Of course, we know the Internet is used by billions of other computers and devices, and without some sort of shared control protocol of how data packets are sent and distributed to each computing device requesting them, there would be massive packet congestion and gridlock on the Internet.

TCP congestion control algorithms support the fair allocation of the data packets to each network endpoint.

A network endpoint is defined here as a device used to access a network service, such as the Internet.

The computers we use at home and work, our smart mobile devices, iPads, and other computing tablet devices we connect to the Internet with, are examples of network endpoints.

Remy, as I understand it, is an effort to synchronize, or coordinate, the TCP algorithms in each network endpoint so they work collectively, thus maximizing the distribution efficiency of the data packet requests among them.

According to the MIT paper, “Remy’s job is to find what that algorithm should be. We refer to a particular Remy-designed congestion-control algorithm as a RemyCC.”

Remy-generated TCP algorithms have proved themselves when tested using the ns-2 network simulator, where they bettered Compound, Vegas, and Cubic.

Will we someday see a Remy algorithm working inside our computer’s operating system, or on a network, negotiating data packet exchanges to and from the Internet?

“Much future work remains before this question can be answered for the real-world Internet, but our findings suggest that this approach has considerable potential,” concluded Keith Winstein and Hari Balakrishnan, who wrote the MIT paper.

You can read the Remy MIT paper, and see all the fascinating algorithmic details at

Thursday, July 18, 2013

NASA closes Windows on ISS

by Mark Ollig


NASA has discontinued using the Microsoft Windows OS (operating system) in computing devices on board the ISS (International Space Station).

As we know, an operating system is a set of software programs and utilities which allows a computer to function correctly.

An operating system known as Linux is now being used in the onboard computing devices of the ISS.

The Linux kernel (which is the core of the Linux OS) was created by Linus Torvalds a little over 20 years ago.

Linux is very similar to Unix, which was developed in 1969 at Bell Laboratories, by the AT&T folks.

However, unlike Unix, Linux is an open-source operating system and is freely distributable to anyone. It can be used in individual computers, and computer servers connecting multiple users.

Linux is compatible with most personal computers and hardware platforms.

The ISS’s onboard computer laptops and mobile devices need a reliable and stable operating system. The astronauts depend upon them for many things, such as knowing their location above the Earth, inventory control, interfacing with the onboard cameras, performing maintenance, and day-to-day operations.

I learned 52 onboard computers are controlling various ISS systems.

Debian 6, a version of Linux with a graphical user interface, has been installed in the ISS’s onboard computers.

“We migrated key functions from Windows to Linux because we needed an operating system that was stable and reliable,” said Keith Chuvala of the United Space Alliance, who is contracted by NASA to maintain the Operations Local Area Network (Ops LAN) on board the ISS.

Stable and reliable. I am sure many of us can relate to the frustrations we’ve endured whenever our computer freezes up, or a program stops working.

The training needed to migrate from Microsoft Windows to Linux was provided by the Linux Foundation’s training staff.

Courses which were “geared specifically for the USA/NASA team’s needs” included two tailor-made sessions: Introduction to Linux for Developers, and Developing Applications for Linux.

In 2008, a computer worm called W32.Gammima.AG spread itself across some of the space station’s computers. It originated from an infected USB flash drive which had been brought from Earth by an astronaut.

Of course, not even Linux operating systems are immune from intentionally malicious software or malware files designed to damage (or even disable) computers and computer systems.

However, since Linux is an open-source OS, software patches and programming code fixes can be quickly uploaded to the ISS for resolving any computing software problems, or programming issues.

Debian for Linux is a free, open-source OS anyone can install and use. For more information, check out their Website at

Linux software is also being used to control the actions of a robot serving on board the ISS.

NASA began serious experiments for using robots in space during the early 1990s.

The first “Robonaut” was built in 1997 by NASA and other partners.

In 2007, General Motors worked with NASA on the next generation of Robonaut called Robonaut 2, or R2.

R2 includes a vision system allowing it to see objects, and very dexterous human-like arms, hands, fingers, and thumbs.

Robonaut 2 is able to manipulate very small items, like the screws holding a panel cover.

The miniature sensors throughout R2’s hands can detect even the smallest changes in pressure.

Robonaut 2, the first robot (torso) in space, has been on board the ISS since its delivery by the crew of the final flight of the space shuttle Discovery, on Feb. 24, 2011.

R2 was “turned-on” and displayed movement for the first time inside the ISS Oct. 13, 2011.

Measuring air-flow from vents inside the Destiny laboratory module connected to the ISS is just one of the duties Robonaut 2 has performed.

The routine tasks R2 can accomplish independently will provide more time for the ISS crew to work on space exploration and scientific experiments.

Later this year, Robonaut 2 will be fitted with an internal battery pack and robotic legs for climbing.

NASA plans include having R2 assist astronauts working outside the ISS as they add or replace components, conduct experiments, and make needed repairs.

More information about Robonaut 2 can be found at its NASA homepage

“The space station, including its large solar arrays, spans the area of a US football field, including the end zones, and weighs 924,739 pounds. The complex now has more livable room than a conventional five-bedroom house, and has two bathrooms, a gymnasium and a 360-degree bay window,” according to NASA’s ISS facts and figures web page.

I’ve imagined taking a trip to the International Space Station and being inside its dome-shaped, Cupola Observation Module while looking down at the Earth through its 360-degree viewing windows.

I hope you can check out this photo of astronaut Tracy Caldwell Dyson, who appears deep in thought while looking out the cupola’s windows and gazing at the Earth. It can be seen at

For more information about the International Space Station, visit

The Linux Foundation is a nonprofit association dedicated to promoting the development of Linux software. It is located at

Friday, July 12, 2013

Douglas Engelbart: A visionary ahead of his time

by Mark Ollig

It saddens me whenever I hear of the passing of an early computing pioneer.

Originally, I had planned to write about Douglas Engelbart for this year’s Dec. 9 column, as this date marks the 45th anniversary of his famous “Mother of All Demos” computing presentation.

Engelbart’s dream of creating a computerized, interactive workstation originated while he was serving as a Navy radar operator in the Philippines, shortly after World War II.

He wondered why a computer could not be connected to a CRT (Cathode Ray Tube) monitor screen, and be used for information interaction by a person in a workplace situation. He believed computer work stations could be networked with each other, allowing information to be shared among people.

Engelbart then began working at the Stanford Research Institute (SRI) as a researcher in 1957.

From 1959 to 1960, he was able to work on his dream computing project, with financial assistance from the US Air Force Office of Scientific Research.

In one quarterly progress report, dated Oct. 30, 1959, Engelbart wrote, “The objective of this project is to provide organization and stimulation in the search for new and better ways to obtain digital manipulation of information.”

Starting in 1962, with funding from DARPA (Defense Advanced Research Projects Agency), Engelbart was able to complete his work on a visionary computer system called the NLS (oN-Line System).

Engelbart presented the results of his work in 1968, during the Fall Joint Computer Conference, held Dec. 9 – 11 in San Francisco, CA.

At that time, Engelbart gave a mesmerizing look into the future of human-computer networking.

Engelbart, using his NLS computer-based, interactive multiconsole display system, gave a demonstration titled, “A Research Center for Augmenting Human Intellect.”

This demonstration was given before the Fall Joint Conference attendees at Brooks Hall in San Francisco.

These attendees also witnessed a historic surprise.

“If, in your office, you as an intellectual worker were supplied with a computer display backed up by a computer that was alive for you all day and was instantly responsive, how much value could you derive from that?” Engelbart asked at the start of his presentation.

Engelbart’s onstage terminal console was connected to a video projector, which cast his screen onto a huge 22-foot display.

The approximately 1,000 computing specialists in attendance could watch him on a large screen above the stage as he typed on his keyboard, while seeing what was being displayed on his CRT monitor screen.

His terminal console was also remotely linked via telephone lines to a computer located about 30 miles away, inside the Stanford Research Institute in Menlo Park, CA.

The big surprise was the pointing device he used to move the cursor dot on the CRT monitor screen during the presentation. Engelbart called it a “mouse.”

Engelbart invented the point-and-click device we use today and still call a mouse.

Engelbart had first worked on a mouse prototype five years earlier.

Why was it called a “mouse”? He said someone suggested this name in 1963, because the cord connected to it looked like a tail, and the wooden, hand-held device was small, so they affectionately called it a mouse.

Unfortunately for Engelbart, the patent for his creation was owned by the company where he developed it: Stanford Research Institute.

He never received any monetary royalties for this invention, but the world acknowledges and credits him with its creation.

Engelbart has accepted many prestigious awards over his lifetime for his work in computer technology, and of course, for inventing the mouse.

In one interview, he thought they would eventually come up with a more official sounding description for the mouse, but said the name stuck.

Engelbart’s first cursor-pointing device, or mouse, is here:

While watching a video of the 90-minute 1968 presentation, one of the many things which impressed me was the professionalism Engelbart exuded during the entire demonstration.

Engelbart fascinated those in attendance as he expertly revealed the way hypertext links between files worked, and how to use statement coding to manage and organize files, and sub-files.

While typing on the built-in keyboard inside the computer console, and using his mouse to move the cursor dot around on the monitor screen, he would code programs on-the-fly.

He also showed how one could manipulate and organize the information contained inside the text files.

Using “computer screen windowing,” Engelbart presented how one could simultaneously view separate information categories by displaying them inside overlaid “windows” using a single display monitor.

He also demonstrated 2-way “video-inset” conferencing over his computer’s monitor screen with fellow researchers, using their computer monitor screens back in Menlo Park. It was eerily similar to today’s video conferencing programs, such as Skype.

Remember, folks – this was being demonstrated in 1968.

His presentation was greeted, at times, with wonderment, and much applause at its conclusion.

However, his NLS computer system never became popular.

It was said the statement coding needed for the creation and management of program files was just too complicated for the average person to grasp.

Engelbart replied using this analogy, “The tricycle may be easier to learn and use, but it is hard work to travel even a short distance. Riding a bicycle calls for considerably more skill . . . but the effort-to-performance ratio is dramatically higher.”

He understood it can be difficult to learn new skills in order to become more productive.

Many of the researchers from SRI went on to work at the Xerox PARC (Palo Alto Research Center).

In 1973, they built the Xerox Alto computer. The Alto used Xerox’s user-friendly GUI (Graphical User Interface), which took advantage of Engelbart’s mouse device for navigating through software programs.

Engelbart believed the use of computers would make the world a better place – and he was right.

In his later years, Engelbart spoke before students at universities, gave keynote speeches and seminars, and was interviewed countless times.

The National Medal of Technology, the nation’s highest award for technology innovation, was presented to him in 2000 by President Bill Clinton.

Douglas C. Engelbart passed away July 2, at the age of 88.

The December 9, 1968 presentation video is located in the Internet archive at

The Douglas Engelbart Institute is found at

Wednesday, July 3, 2013

Discovering your personal analytical profile

by Mark Ollig

Stephen Wolfram calls it “personal analytics.”

Wolfram, as you may recall from a previous column, is the physicist who created the WolframAlpha Computational Knowledge Engine.

In the May 18, 2009 Bits & Bytes, I wrote how the WolframAlpha website “is a bit different from a normal search engine, because it computes answers and provides pictorial visualizations ‘on-the-fly’ from a knowledge base of collected and structured data.”

It is located at

While writing today’s column, I re-visited the WolframAlpha website.

Of course, I needed to ask the current distance from Earth to the planet Mars. As of last Monday morning, Earth was 2.453 au (astronomical units), or 228 million miles, away from Mars.

WolframAlpha also informed me the Voyager 1 spacecraft, launched in 1977, was 11.5 billion miles from Earth.

As I continued to enter my other challenging questions into WolframAlpha, I rediscovered how occupied one could become with this analytical knowledge engine.

Speaking of being occupied with information, in a recent blog, Wolfram talked about his own fascination with his personal data usage. This interest caused him to archive all of the personal email he has accumulated since 1989.

He said this amounted to a million emails.

Wolfram immerses himself in accumulating data, and then logically deciphers it in an orderly manner, making the data understandable and useful, while explaining certain analytical rules about it.

So, why did he save all those emails?

In his blog, Wolfram said he always wanted to analyze this sizeable accumulation of email data, but discloses, “I’ve never actually gotten around to doing it.”

Finally, he did complete a personal analytical profile using his archived email.

On one graphic, Wolfram’s analytical program diagrammed the hourly times at which a third of his million emails since 1989 were sent.

During 1990, I noted he was very consistent in sending out emails from 12 a.m. to 6 a.m., with just a trace of activity from 7 a.m. to 1 p.m.

The decade from 2002 until 2012 showed regularity in sending emails between 12 a.m. and 3 a.m., and then inactivity from 3 a.m. to 10 a.m. I assumed he slept during these hours, and, according to his blog, this was the case.

Wolfram said he was “something of a night owl during this time.”

During this same 10-year period, he sometimes sent emails between 11 a.m. and 6 p.m.; a break in his email activity was noted from 7 p.m. until 9 p.m.
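An hourly breakdown like Wolfram’s boils down to bucketing each message’s timestamp by hour of day. Here is a minimal sketch in Python; the timestamps are made up for illustration, and this is not Wolfram’s actual data or tooling.

```python
from collections import Counter
from datetime import datetime

# Hypothetical "sent" timestamps standing in for an email archive;
# Wolfram's real analysis covered roughly a million messages.
sent_times = [
    datetime(1990, 3, 1, 2, 15),
    datetime(1990, 3, 1, 3, 40),
    datetime(1990, 3, 2, 2, 5),
    datetime(1990, 3, 2, 14, 30),
]

# Bucket messages by hour of day (0-23) to reveal when the sender is active.
by_hour = Counter(ts.hour for ts in sent_times)

# Print a simple text histogram, one row per active hour.
for hour in sorted(by_hour):
    print(f"{hour:02d}:00  {'#' * by_hour[hour]}")
```

Run over decades of email, a tally like this is what exposes night-owl stretches and sleeping hours at a glance.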

Wolfram understood how this data lined up with identifiable events and trends in his personal life; such as during the 1990s, when he was writing his book, “A New Kind of Science;” and in 2009, while working on the WolframAlpha Computational Knowledge Engine.

Wolfram has now brought his personal analytical data collecting techniques to Facebook.

“Gain insight on yourself and your social network,” WolframAlpha’s personal analytics for Facebook page declares.

Wolfram’s software algorithms allow us to create our own personalized Facebook-user analytical profiles.

And yes, this humble Facebook user took the plunge and used his program.

I will admit being impressed with the results of my personalized, analytical report.

The information was easy to understand, and its statistical and graphical data was well organized.

There are several ways of viewing one’s WolframAlpha Facebook analytics. One method is using the viewing options as presented alongside each section of information.

Graphs showed an hourly breakdown of the times when I am the most active using Facebook.

My personal WolframAlpha Facebook analytical report included my uploaded photos with the highest number of Facebook “likes,” along with the photos with the most user comments.

My most-liked photo on Facebook received 42 “likes.”

A map of the US displayed red pin dots from the locations I had logged into Facebook from.

In addition to my Minnesota locations, WolframAlpha accurately disclosed my checking in from San Francisco a few weeks ago.

A graphical map of the world shows where all my Facebook friends are from.

In addition, the data also revealed the reported ages of my Facebook friends.

While my youngest Facebook friend is a 19-year-old nephew, the oldest is said to be a 107-year-old.

The report disclosed my 107-year-old Facebook friend is the Winsted Summerfest, which listed its birthday to Facebook as occurring Aug. 11, 1905.

The Web-harvested data gleaned from Facebook that I found particularly interesting was my personalized Facebook “word cloud.”

The word cloud shows an image of a cloud containing the unique words I have repeated the most in my 931 Facebook Wall text postings.

The larger the word appears inside the cloud, the more I have used it.

My word cloud shows about 150 of my most-used words: Internet, bits, bytes, online, Mars, space, NASA, and computer, are most weighted, or the ones I use a lot when on Facebook.
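Under the hood, a word cloud starts from a plain word-frequency count over the text. Here is a rough sketch in Python, using a few stand-in posts rather than my actual Wall postings; it is not WolframAlpha’s code, just the counting idea.

```python
import re
from collections import Counter

# A few stand-in posts; the real input would be all 931 Wall postings.
posts = [
    "The Internet keeps growing, and NASA keeps exploring space.",
    "Bits and bytes: another online column about the Internet.",
    "Mars missions send data back over the Internet via NASA's network.",
]

# Lowercase everything, split into words, and tally occurrences.
words = re.findall(r"[a-z']+", " ".join(posts).lower())
counts = Counter(words)

# The most frequent words would be drawn largest in the cloud.
print(counts.most_common(3))
```

A real word-cloud generator would also drop common filler words (“the,” “and”) and scale each remaining word’s font size by its count.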

You can see my Facebook word cloud at

In summary, I found it very informative reviewing my personalized Facebook analytics using WolframAlpha.

To learn more, and to run your own personalized, analytical WolframAlpha Facebook usage report, go to

Above is my Facebook word cloud.