April 5, 2010
by Mark Ollig
In the tech world, new jargon and buzzwords explode onto the scene with regularity.
Lately, Twitter has been asking me if I want to “add my location” whenever I send out messages, or ‘tweets,’ to all 183 of my faithful followers.
Twitter has made a new Application Programming Interface (API) feature, called “geotagging,” available.
I usually ignore this request, but this morning I finally gave up my Orwellian fear and clicked “yes.”
Twitter then continued to test my patience by asking me, “Before Twitter can get your location . . . check 1 to remember this site . . . click 2 to share this location.”
Now, I am asking myself if I really want everyone to know where I am. I mean, they already know my general location, because I have it listed in my public Twitter profile.
I want to find out more about this location thing. Does enabling this show the citizens of Twitterville the exact location of my desk, in my office, on the exact street where I live?
Going further, if I take this into Twilight Zone mode, will Twitter acquire access to some earth orbiting satellite and then triangulate my location and focus one of those NASA space cameras onto my position, through my apartment’s front window, and finally, live-stream (broadcast) my actions in real-time as I type (with proficiency) on my keyboard, drinking my usual dark roast coffee with cream?
You think I am paranoid?
Hey, this could happen, because I saw the technology being used in the 1998 movie “Enemy of the State,” which starred one of my favorite actors, Gene Hackman, along with Will Smith as the person who ended up being monitored and thus considered an enemy of the state.
If you have seen this movie, then you will understand my apprehension over having Twitter (or any other online service) know my exact location when I send out messages.
Or, you might be right and I am just paranoid.
Wanting to investigate this further, I clicked on the Twitter “help” link and then typed in the search term, “getting your location.”
“How many other paranoid Twitter users have done this?” I muse, as Twitter shows me the nine search results.
The first search result has the topic of “How to tweet with Your Location,” which tells me “Once you’ve opted-in, you will be able to add your location information to individual Tweets as you compose them on Twitter.com.”
Twitter told me, “This feature is off by default and you will need to opt-in to use it.”
I need to opt-in.
Now, do I really want to opt-in?
What happens then?
Will I be able to opt-out?
Why do I make myself go through these agonizing decisions?
Another Twitter user had added the topic “About the Tweet with Your Location Feature,” which was helpful in explaining and showing Google snapshots of Twitter folks using this location feature.
Under this topic, Twitter explains, “All geolocation information begins as an exact location (latitude and longitude), which is sent from your browser or device. Twitter won’t show any location information unless you’ve opted-in to the feature.”
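To picture what that means in practice, here is my own rough sketch of how an opted-in message might carry those coordinates. The field names are hypothetical, not Twitter’s actual API schema; the point is simply that the location is just a latitude/longitude pair, and it only travels with the message if you say so:

```python
# Illustrative sketch only; field names are hypothetical,
# not Twitter's actual API schema.
def make_geotagged_tweet(text, latitude, longitude, share_location=False):
    """Attach coordinates to a message only if the user has opted in."""
    tweet = {"text": text}
    if share_location:
        # Geolocation begins as an exact latitude/longitude pair.
        tweet["geo"] = {"lat": latitude, "lon": longitude}
    return tweet

# Location stays private unless the sender explicitly opts in.
private = make_geotagged_tweet("Drinking my dark roast.", 44.98, -93.27)
shared = make_geotagged_tweet("Drinking my dark roast.", 44.98, -93.27,
                              share_location=True)
```

Notice the default is “off,” matching Twitter’s opt-in policy quoted above.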
There it was. Did you catch the new buzzword of the year?
Geolocation.
In online social-networking circles, “where are you?” is becoming one of the most popular trending questions.
When using Twitter, I only know where the originating tweeter (messenger) is located by what they typed into their profile – unless they have opted-in to geolocation.
I see from the Twitter location examples, they are using Google Maps to show the user’s, or “tweet-sender’s,” geolocation.
My fears were somewhat relieved when I learned I control whether my location is sent with each tweet-message. So if I were tweeting a message at the Minneapolis airport, I could opt-in and allow my location to be seen by my followers.
“Even once you turn ‘Tweet with Your Location’ on, you have additional control over which Tweets (and what type of location information) is shared,” the topic said.
My real-time location on Google Maps is referenced with a red-headed pin marker next to my latest tweet message.
Global Positioning System (GPS) technology is used in many geolocation applications.
GPS chips are built into mobile devices such as the iPhone, BlackBerry, and Google’s Nexus One. The new Apple iPad also has true GPS – if you buy the 3G model.
Thus, your friends can know where you are.
These mobile devices with the GPS chip built in use satellite data to determine your exact position, which online services such as Google Maps can plot.
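Once a device has reported its latitude and longitude, simple things can be done with those numbers. For instance, the straight-line (“great-circle”) distance between two reported positions can be computed with the standard haversine formula. This is just my illustration of the math; real mapping services do far more:

```python
import math

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two latitude/longitude points, in miles."""
    r = 3959.0  # Earth's mean radius in miles
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# Roughly downtown Minneapolis to downtown St. Paul:
d = haversine_miles(44.98, -93.27, 44.95, -93.09)
```

The result comes out to about nine miles, which sounds right for the two downtowns.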
GPS works best when one is outside; however, there are other ways to determine your geolocation.
My iPod touch’s Maps application will show my current location using a Wi-Fi connection.
I am able to find driving routes from my ‘current location’ (which I see right away) when using the maps app.
Your humble columnist has learned another new jargon word for the day.
So, what’s your geolocation, and do you want everyone else knowing it?
You, too, can follow my rants on Twitter. My username there is “bitsandbytes.”
Linking our vehicles to an 'informational' superhighway
March 29, 2010
by Mark Ollig
An intelligent network may be connecting to a highway and vehicle near you soon.
Cooperative Vehicle-Infrastructure Systems (CVIS) is a major European research and development project.
CVIS distributes real-time intelligence regarding road conditions, unexpected road hazards, and accident-avoidance instructions directly to vehicles as they travel down the highway.
The goal of CVIS is to design, develop, and test the technologies required to allow vehicles to collect and communicate information not only with each other, but also with nearby networked roadside structures, and then back to an intelligent central traffic management system.
This scenario was discussed some years ago, but at that time, we just didn’t have the right kind of technology in place to realistically implement it.
CVIS gathers information from all nearby inter-networked vehicles and stationary sensor devices networked along the driver’s route.
CVIS is an intelligent management system which offers drivers enhanced informational awareness relating to other vehicles, advanced notices of road hazards, pedestrian locations, current road conditions, and more.
Specific information is analyzed and then delivered to the driver of the appropriate vehicle.
The CVIS allows all properly equipped vehicles and infrastructure elements to communicate with each other in a continuous and transparent way.
Delivery methods of these services to a vehicle can consist of current wireless broadband and other wireless technologies, including satellite, cellular 2G, 3G, and 4G, along with WiMAX, LTE, and newly allocated wireless spectrum as it becomes available.
CVIS uses open source software, meaning the source code of the software program can be improved upon, or “tweaked,” by its users.
The networked connection to our vehicle’s “intelligent electronics interface” must be seamlessly maintained to ensure the information chain is not broken as we drive down the road.
Imagine the benefits of having all vehicles and city traffic management infrastructures cooperating by continuously sending and receiving information with each other in real-time. This type of system is referred to as Vehicle-to-Infrastructure (V2I).
The intelligent electronics in our vehicle would also collect and share information with other vehicles on the road utilizing Vehicle-to-Vehicle (V2V) systems.
I highly recommend my readers watch the CVIS and Safe-Spot demonstration video which presents the benefits of V2I. This shortened URL takes you there: tinyurl.com/ygzdoyz.
The theatrical-style delivery of this informative and entertaining video was presented live and on stage during the 2009 Intelligent Transport Systems and Services (ITS) World Congress in Stockholm, Sweden.
Three real-life actors are in this video. One represents our driver who finds himself in various situations, and the other two actors represent the intelligent response system called the “cooperative system,” which oversees the road conditions ahead. The intelligent response system provides assistance and audible suggestions to our driver.
In one scenario, the driver of a car is being informed of changing road conditions. For example, after two warning tones are heard, the system audibly states to the driver, “Caution! Slippery road ahead.”
Another demonstration shows a jogging pedestrian crossing the road ahead and to the left of our driver. However, a large metro bus is parked and is blocking our driver’s vision to the left. Our driver cannot see the pedestrian as he jogs across the road. Suddenly, we hear two quick beeps and the cooperative system says “Brake! Crossing pedestrian from the left!” The driver responds and makes a quick stop – just in time – as the pedestrian runs in front of his vehicle and to the other side of the road.
How did the cooperative system know to alert the driver?
The pedestrian activated a sensor operating inside the bus, which was triggered as he approached it. This information, along with our driver’s position and the road conditions, was sent to the cooperative system, which instantly calculated everyone’s positions, assessed the road situation, and decided to send an alert warning to the driver of the vehicle alongside the bus, since the risk of collision was extremely high.
The cooperative system would also send information about our driver’s reaction to the vehicles behind him, and if it concludes there is any risk of collision, it will warn those drivers as well.
All the calculations and alerts happened in real-time.
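At its heart, a calculation like that can be imagined as a time-to-conflict check: if the vehicle and the pedestrian will reach the same crossing point within a narrow window of each other, send an alert. The following toy sketch is entirely my own illustration of the idea, not CVIS code; the numbers and the two-second window are made up:

```python
def collision_alert(vehicle_dist_m, vehicle_speed_ms,
                    pedestrian_dist_m, pedestrian_speed_ms,
                    window_s=2.0):
    """Warn if vehicle and pedestrian would reach the crossing point
    within window_s seconds of each other (a toy illustration)."""
    if vehicle_speed_ms <= 0 or pedestrian_speed_ms <= 0:
        return None  # one of them is not moving toward the crossing
    t_vehicle = vehicle_dist_m / vehicle_speed_ms
    t_pedestrian = pedestrian_dist_m / pedestrian_speed_ms
    if abs(t_vehicle - t_pedestrian) < window_s:
        return "Brake! Crossing pedestrian from the left!"
    return None

# Vehicle 30 m away at 10 m/s (3 s out); jogger 9 m away at 3 m/s (3 s out):
alert = collision_alert(30, 10, 9, 3)
```

With both parties three seconds from the crossing point, the sketch fires its warning; if the jogger were nearly across before the car arrived, it would stay quiet.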
Another situation finds our over-the-road transport vehicle operator heading out and traveling in the right lane to his first pick-up. We notice our vehicle operator traveling behind a large truck. Suddenly, two caution beeps are heard, and an “Obstacle Ahead” visual sign is displayed on our vehicle operator’s front window. A second later, an audible alert warning message tells our operator “Be aware. High risk of collision.” The truck in front of our vehicle operator suddenly swerves to the left lane exposing a slow-moving vehicle in the right lane to our driver, who, having been alerted in time, is able to avoid a collision.
One “Orwellian-like” scene even shows our over-the-road transport vehicle operator being alerted when going over the speed limit.
The cooperative system even suggested time-saving routes.
Real-world trials are also being conducted by a project called Safe-Spot, which is working to develop cooperative systems for road safety based on vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communication. There are six test sites, one of which is located in Torino, Italy.
Safe-Spot’s web site is www.safespot-eu.org.
The CVIS web site is www.cvisproject.org.
FCC seeks 'ultra-high-speed' link to all communities
March 22, 2010
by Mark Ollig
A blistering one gigabit per second is the speed at which the FCC would like every community in America to be able to access the Internet.
One gigabit per second, as we know, is equal to 1000 Mb/s (megabits per second).
This is definitely faster than those old Hayes 28.8 kbps “smart modems” we used back in the day.
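Just how much faster? A quick back-of-the-envelope calculation (my own arithmetic, ignoring protocol overhead and latency) shows what a hypothetical 5-megabyte file would take at each rate:

```python
def download_seconds(file_bits, bits_per_second):
    """Idealized transfer time, ignoring overhead and latency."""
    return file_bits / bits_per_second

FILE_BITS = 5 * 8 * 1_000_000  # a 5-megabyte file, expressed in bits

modem = download_seconds(FILE_BITS, 28_800)           # 28.8 kbps modem
gigabit = download_seconds(FILE_BITS, 1_000_000_000)  # 1 Gbps
```

That works out to roughly 23 minutes on the old modem, versus a few hundredths of a second at one gigabit per second.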
Last Tuesday, the FCC held an open meeting and made public the long-awaited details of “The National Broadband Plan.”
This plan is the federal government’s strategy for maximizing everyone’s access onto the “information superhighway” (ok, no one uses that term much anymore), or the Internet.
The FCC and C-Span web sites were live-streaming this open meeting and of course, your humble columnist was watching and taking notes.
There was even a Twitter hashtag “#BBplan” for this announcement, and many of us were tweeting out messages to each other.
The 376-page National Broadband Plan PDF document (which I downloaded and read) is divided into 17 chapters. You can also download or view it online at: www.broadband.gov/download-plan.
This plan is more or less a blueprint for how US regulators intend to provide broadband Internet access to the approximately 100 million Americans who currently do not have this type of access.
According to the latest report from the Information Technology Industry Council, America’s average download speeds of 4 megabits per second rank 15th in the world.
What is broadband?
In trying to respond in a technical manner, this question would lead to an assortment of answers.
I recognize factors like throughput, latency, and bandwidth need to be considered.
Actually, an entire column could be devoted to attempting to define what broadband is, and opposing arguments as to its definition would still linger.
In a report released June 12, 2008, the FCC describes “basic” broadband speed as 768kbps to 1.5Mbps.
Today, on the FCC’s web site, they answer the question of what is broadband with “The FCC defines broadband service as data transmission speeds exceeding 200 kilobits per second (kbps), or 200,000 bits per second, in at least one direction: downstream (from the Internet to the user’s computer) or upstream (from the user’s computer to the Internet),” per www.fcc.gov/cgb/broadband.html.
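That definition is simple enough to write down directly. Here is the stated rule expressed as a small function, nothing more; the example speeds are my own:

```python
FCC_BROADBAND_KBPS = 200  # FCC threshold: exceeding 200 kilobits per second

def is_broadband(downstream_kbps, upstream_kbps):
    """True if the connection exceeds 200 kbps in at least one
    direction, per the FCC definition quoted above."""
    return (downstream_kbps > FCC_BROADBAND_KBPS
            or upstream_kbps > FCC_BROADBAND_KBPS)

# A 768 kbps down / 128 kbps up DSL line qualifies; a 56k dial-up modem does not.
dsl_qualifies = is_broadband(768, 128)
dialup_qualifies = is_broadband(56, 33.6)
```

Note that by this rule a connection fast in only one direction still counts, which is part of why the definition draws arguments.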
The National Broadband Plan itself does not settle on a definitive data transfer speed for the term “broadband.”
If you want to test your Internet data connection’s current downloading speed, go to www.speedtest.net, or take the FCC’s consumer broadband test at: www.broadband.gov/qualitytest.
Other countries have different descriptions when it comes to the definition of broadband speeds. This Wikipedia link is a good place to start from: tinyurl.com/yk9fu7m.
There are several “pipeline” technologies which currently provide broadband speeds. Some of these include: cable modems, fiber optical interfaces, wireless, satellite – and, since I am in the business – Digital Subscriber Lines (DSL), and even combined telephone company T-1 lines. There is also a technology which has been talked about over the years called Broadband over Powerlines (BPL).
The National Broadband Plan was mandated by the American Recovery and Reinvestment Act in February 2009. The summary report titled OBI (Omnibus Broadband Initiative) was produced by an FCC task force, which I wrote a column about back on March 8 of this year.
During the FCC open meeting, Blair Levin, executive director of the Omnibus Broadband Initiative, compared the need for broadband Internet to the nation’s earlier need for an electrical grid to transform the country. Levin went on to say broadband is “. . . a profound and enabling technology whose impact will ripple throughout every aspect of our economy and society.”
“The National Broadband Plan is a 21st century roadmap to spur economic growth and investment, create jobs, educate our children, protect our citizens, and engage in our democracy,” said FCC Chairman Julius Genachowski.
The following is a list of goals taken from the National Broadband Plan, which are hoped to be implemented during the next 10 years:
Goal 1: At least 100 million US homes should have affordable access to actual download speeds of at least 100 megabits per second, and actual upload speeds of at least 50 megabits per second.
Goal 2: The United States should lead the world in mobile innovation, with the fastest and most extensive wireless networks of any nation.
Goal 3: Every American should have affordable access to robust broadband service, and the means and skills to subscribe if they so choose.
Goal 4: Every community should have affordable access to at least 1 Gbps broadband service to anchor institutions such as schools, hospitals, and government buildings.
Goal 5: To ensure the safety of Americans, every first responder should have access to a nationwide public safety wireless network.
Goal 6: To ensure that America leads in the clean energy economy, every American should be able to use broadband to track and manage their real-time energy consumption.
You can watch the complete two-hour-and-twenty-two minute FCC open meeting on the National Broadband Plan broadcast on the FCC’s YouTube channel. Here is a shortened link, which goes directly to the video: tinyurl.com/ybza6r2.
Other transcripts and videos covering the National Broadband Plan are available at www.broadband.gov.
This is indeed a bold plan. The next move will be for the US Congress to approve the roughly 200 proposals in it – so stay tuned.
Next-generation Internet backbone is about to throttle up
March 15, 2010
by Mark Ollig
As the Internet grows, heavy bandwidth demands are being placed on its network by the billions of mobile devices and computers that businesses, and folks like us, have connected to it.
There are core routers within the backbone of the Internet which act like “data traffic controllers,” and they have been stressed as of late.
The ability of the Internet backbone to provide enough bandwidth to handle large amounts of data traffic is being tested, as the increased use of high-bandwidth applications and services – video-streaming, mobile computing, gaming, the migration to cloud computing, and other data-intensive requirements – appears to be slowly sucking the life out of it.
No, the Internet is not on life-support – just yet.
One thing I am sure of: the future Internet will definitely work more with high-bandwidth applications like cloud computing and broadcast video.
Yours truly wrote a column about cloud computing back on March 30, 2009, and said “. . . cloud computing essentially enables computer users to easily access the applications they normally use directly over the Internet, instead of having them stored on their local hard drives or business computer servers . . . an alternative to having your software data and applications reside in your computer’s hard drive, they would be accessible from a remote central server, which would distribute them like any other application resource to you via the Internet.”
Cloud computing will become – and slowly is becoming – one of the major players requiring more bandwidth and processing within the core routers of the Internet.
Core routers send information from inside the backbone to their destinations as quickly as possible.
Today, the amount of “throughput” needed by the ISPs (Internet service providers) and large-company “edge routers” (which connect to the Internet backbone) for their subscribers and employees just keeps increasing.
Efficient transfer of information from the Internet to those edge routers, and then to our computing devices, also relies upon the throughput efficiency and processing power of those hard-working core routers within the Internet itself.
As more end-user applications, high-definition video, cloud connections, voice, two-way video calls, and eventually, broadcast television become totally merged onto the Internet network, the increased demands for improved processors, network broadband access, larger bandwidth, faster speeds, and reliable throughput will become critically essential, lest the Internet become overwhelmed and overloaded.
The backbone of the Internet uses core routers, which support many types of access interfaces to them. They also distribute the massive amounts of IP (Internet Protocol) packet information.
Most of these core routers are made by Cisco, and since 2004, its Carrier Routing System (CRS-1) core routers have provided much of the bandwidth throughput on the backbone of the Internet.
The other large maker of core routers is a company called Juniper.
A complete CRS-1 can provide up to 92Tb/s (terabits per second) of bandwidth capacity.
Last week Cisco announced their new core router, called the CRS-3.
At an incredible 322 Tb/s, the fully configured CRS-3 core router has three times the bandwidth-handling capability of the CRS-1.
How fast is 322Tb/s? Utilizing this bandwidth speed would allow the entire printed collection of the Library of Congress to be downloaded in just over one second.
Three-hundred and twenty-two Tb/s would allow every man, woman and child in China to make a video call simultaneously.
And in honor of this year’s Oscars, I will mention the CRS-3 could download every motion picture ever made in less than five minutes.
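Let’s check the arithmetic on that first claim. Transfer time is simply payload bits divided by link speed. Assuming the Library of Congress’s printed collection digitizes to roughly 40 terabytes (my own ballpark assumption for illustration, not an official figure):

```python
def seconds_at(terabytes, link_tbps):
    """Idealized transfer time for a payload of `terabytes` over a link
    running at `link_tbps` terabits per second."""
    bits = terabytes * 8e12          # 1 terabyte = 8 trillion bits
    return bits / (link_tbps * 1e12)

# Assumed ~40 TB ballpark for the Library of Congress's printed collection:
loc_seconds = seconds_at(40, 322)

# Compare with a decade-old 2.5 Gb/s core router (0.0025 Tb/s):
old_router_seconds = seconds_at(40, 0.0025)
```

Under that assumption, the CRS-3 moves the whole collection in about one second, while the decade-old router would need about a day and a half.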
Ten years ago, an Internet core router operated at only 2.5 Gb/s (gigabits per second).
“The Cisco CRS-3 is well positioned to carry on the tradition of the Cisco CRS-1, become the flagship router of the future, and serves as the foundation for the world’s most intelligent and advanced broadband networks,” said Pankaj Patel, senior vice president and general manager of the Service Provider Business at Cisco.
Your humble “core columnist” here at Bits & Bytes watched Cisco’s CRS-3 YouTube video. This nicely edited video is about four minutes long and shows the new core router and how it is used on the Internet network. You can see it at this shortened URL: tinyurl.com/yjvnftd.
The CRS-3’s will improve the transferring packet data capacities within the Internet backbone and thus will provide the Internet backbone with a more efficient throughput.
As far as we, the end user seeing immediate increases in downloading speed once CRS-3 core routers are installed, one must be aware that our end user speed and throughput results are also affected by any slow-downs or bottle-necks at the network backbone edge level, all of the hardware processor speed maximums and the bandwidth available to the content source.
The old saying about a chain being only as strong as its weakest link still holds true.
You can read more about the CRS-3 at Cisco’s news web page using this shortened URL: tinyurl.com/yh2mwh8.
by Mark Ollig
As the Internet continues to grow, the billions of mobile devices and computers that businesses and folks like us have connected to it are placing heavy bandwidth demands on its network.
There are core routers within the backbone of the Internet which act like “data traffic controllers,” and they have been stressed as of late.
The ability of the Internet backbone to provide enough bandwidth to handle large amounts of data traffic is being tested, as the increased use of high-bandwidth applications and services like video streaming, mobile computing, gaming, the migration to cloud computing, and other data-intensive demands appears to be slowly sucking the life out of it.
No, the Internet is not on life-support – just yet.
One thing I am sure of, the future use of the Internet will definitely see it working more with high bandwidth applications like cloud computing and broadcast video.
Yours truly wrote a column about cloud computing back on March 30, 2009, and said “. . . cloud computing essentially enables computer users to easily access the applications they normally use directly over the Internet, instead of having them stored on their local hard drives or business computer servers . . . an alternative to having your software data and applications reside in your computer’s hard drive, they would be accessible from a remote central server, which would distribute them like any other application resource to you via the Internet.”
Cloud computing is slowly becoming one of the major players requiring more bandwidth and processing power within the core routers of the Internet.
Core routers send information from inside the backbone to their destinations as quickly as possible.
Today, the amount of “throughput” needed by ISPs (Internet service providers) and large companies’ “edge routers” (which connect to the Internet backbone) for their subscribers and employees just keeps increasing.
Efficiently transferring information from the Internet to those edge routers, and then to our computing devices, also relies upon the throughput efficiency and processing power of those hard-working core routers within the Internet itself.
As more end-user applications, high-definition video, cloud connections, voice, two-way video calls, and eventually broadcast television merge onto the Internet, the increased demands for improved processors, broadband network access, larger bandwidth, faster speeds, and reliable throughput will become critically essential, lest the Internet become overwhelmed and overloaded.
The backbone of the Internet uses core routers, which support many types of access interfaces and distribute the massive amounts of IP (Internet Protocol) packet information.
Most of these core routers are made by Cisco, and since 2004, its Carrier Routing System (CRS-1) core routers have provided much of the bandwidth throughput on the backbone of the Internet.
The other large maker of core routers is a company called Juniper.
A fully configured CRS-1 can provide up to 92 Tb/s (terabits per second) of bandwidth capacity.
Last week, Cisco announced its new core router, called the CRS-3.
At an incredible 322 Tb/s, the fully configured CRS-3 core router has more than three times the bandwidth-handling capability of the CRS-1.
How fast is 322 Tb/s? Utilizing this bandwidth would allow the entire printed collection of the Library of Congress to be downloaded in just over one second.
Three hundred twenty-two Tb/s would allow every man, woman, and child in China to make a video call simultaneously.
And in honor of this year’s Oscars, I will mention the CRS-3 could download every motion picture ever made in less than five minutes.
Ten years ago, an Internet core router operated at only 2.5 Gbps (gigabits per second).
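For readers who enjoy the arithmetic, here is a rough back-of-the-envelope sketch of those figures. The capacity numbers come from this column; the 10-terabyte size for the Library of Congress archive is my own commonly cited assumption, not a Cisco figure:

```python
# Back-of-the-envelope bandwidth arithmetic for the figures in this column.

CRS1_TBPS = 92      # fully configured CRS-1 capacity, in terabits per second
CRS3_TBPS = 322     # fully configured CRS-3 capacity, in terabits per second
OLD_GBPS = 2.5      # typical core router speed a decade earlier, in gigabits/s

# How many times faster is the CRS-3 than the CRS-1?
print(round(CRS3_TBPS / CRS1_TBPS, 1))              # 3.5

# Growth over the decade: 322 Tb/s versus 2.5 Gb/s
print(round(CRS3_TBPS * 1000 / OLD_GBPS))           # 128800

# Time to move an assumed 10-terabyte archive at 322 Tb/s
archive_bits = 10e12 * 8                            # 10 TB expressed in bits
print(round(archive_bits / (CRS3_TBPS * 1e12), 4))  # about a quarter second
```

The "just over one second" figure in the column likely assumes a larger archive estimate than my 10 TB; either way, the transfer finishes before you can reach for your coffee.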
“The Cisco CRS-3 is well positioned to carry on the tradition of the Cisco CRS-1, become the flagship router of the future, and serves as the foundation for the world’s most intelligent and advanced broadband networks,” said Pankaj Patel, senior vice president and general manager of the Service Provider Business at Cisco.
Your humble “core columnist” here at Bits & Bytes watched Cisco’s CRS-3 YouTube video. This nicely edited video is about four minutes long and shows the new core router and how it is used on the Internet. You can see it at this shortened URL: tinyurl.com/yjvnftd.
The CRS-3 will improve packet-data transfer capacity within the Internet backbone, and thus provide it with more efficient throughput.
As for us end users seeing immediate increases in download speed once CRS-3 core routers are installed, be aware that our speed and throughput results are also affected by any slowdowns or bottlenecks at the edge of the network backbone, by hardware processor speed limits, and by the bandwidth available to the content source.
The old saying about a chain being only as strong as its weakest link still holds true.
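That weakest-link idea can be stated precisely: your end-to-end throughput is capped by the slowest hop between you and the content. Here is a toy sketch; the hop names and capacities below are made-up illustrative numbers, not measurements:

```python
# End-to-end throughput is limited by the slowest link in the chain.
# These hop capacities are hypothetical, for illustration only (in Mbps).
path_capacities_mbps = {
    "core router": 1_000_000,    # backbone core (CRS-class hardware)
    "edge router": 10_000,       # ISP edge connecting to the backbone
    "content server": 100,       # bandwidth available at the source
    "last mile": 8,              # subscriber's DSL line
}

# The hop with the smallest capacity sets the ceiling for the whole path.
bottleneck = min(path_capacities_mbps, key=path_capacities_mbps.get)
print(bottleneck, path_capacities_mbps[bottleneck])  # last mile 8
```

No matter how fast the core becomes, in this example the subscriber still tops out at 8 Mbps.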
You can read more about the CRS-3 at Cisco’s news web page using this shortened URL: tinyurl.com/yh2mwh8.
Wednesday, March 3, 2010
From the 60's to the FCC asking us about broadband use
March 8, 2010
by Mark Ollig
As a kid in the 1960s I recall watching George Jetson going to work in his flying aero car with the transparent bubble top.
I thought, someday everyone would travel this way.
It’s 2010 . . . so where is my flying car?
Looks like I have another 52 years to wait, as the futuristic Jetsons lived in the year 2062.
Today we find ourselves traveling along the digital information superhighway known as the Internet.
The Internet is very common today – but not back in the late 1960s.
If you had asked this adventurous 10-year-old in 1968 what the Internet was, I probably would have answered, “some kind of a net to hold fish or frogs in?”
Unbeknownst to that 10-year-old, something historic would occur the next year: the first message was sent between two host computers, or “nodes,” over what was called the Advanced Research Projects Agency Network, or ARPANET, which, as we all know, evolved into today’s Internet.
This historic transmission took place Oct. 29, 1969, between a host computer located at UCLA in Los Angeles and another located 400 miles north at the Stanford Research Institute.
Charley Kline, working at the UCLA host computer, was attempting to log in to the host computer at Stanford, where Bill Duvall was located.
The first message was the start of the text word “LOGIN.” The “L” and the “O” letters were successfully transmitted – but then the system crashed.
The first “official” transmission of text sent over what is today the Internet was “LO.”
An hour later, the host computers recovered from the crash and the complete text of “login” was successfully transmitted.
During this historical moment, your humble columnist had just turned 11 years old and was out collecting frogs from around the neighborhood.
This was one of the outdoor activities kids did back in the ‘60s.
As I reminisce, it was a warm and sunny late autumn day; I had brought home probably around a hundred frogs during the afternoon.
I was keeping those frogs in large pails of water, which I had innocently placed under my parents’ bedroom window.
Why did I keep the pails of frogs under their bedroom window? Because the outdoor water faucet was located right underneath their bedroom window, and when you’re 11 years old you think, “Why move the pails?”
Being it was warm out on that particular day, many of the windows (including my folks’ bedroom window) were left open.
So, at the end of the day, before I went into the house, I looked over the collection of frogs in the pails of water (I remember adding some grass and flies into the mix) and hoped no one would steal them. The frogs were fairly quiet and so I thought they would probably just sleep during the night.
The next morning, I eagerly went out to check on my frogs and discovered they must not have slept much – I was about to learn that most frogs like to practice loud croaking sounds during the night.
This audible activity by the frogs was apparently not appreciated very much by my father, who had “released the frogs back into the wild” as evidenced by all the tipped over water pails lying scattered in the front yard.
I don’t recall my mom ever telling that 11-year-old boy the exact words my father had used to express his displeasure with the whole matter.
So ended my frog hunting adventures for 1969 and the rest of my childhood.
Last month, our friends at the Federal Communications Commission released an interesting 52-page summary report titled OBI (Omnibus Broadband Initiative) Working Paper Series No. 1.
The purpose of this report is to determine what is keeping Americans from obtaining higher-speed broadband access to the Internet.
The source of this report was a random national telephone survey, conducted in October and November of 2009, in which the FCC interviewed 5,005 adult Americans, 2,671 of whom use “basic broadband” to access the Internet at home.
The remaining 2,334 in this survey either did not have broadband access in their homes, said they did not use the Internet, or said they are Internet users – but without access to it from their homes.
The FCC collected this information to better understand how the Internet is currently being used, and how access to faster broadband networks can be expanded to the people.
This FCC summary report classified “broadband users” as those who said they used a cable modem, a DSL-enabled phone line, fixed wireless or satellite, a mobile broadband wireless connection, a fiber-optic connection, or even a T-1 network to access the Internet.
In 2009, the FCC updated its definition of “basic broadband” to an always-on Internet connection with a minimum of 768 Kbps in at least one direction, downstream or upstream.
Not everyone has a basic broadband connection.
Six percent of the 5,005 people surveyed said they use a telephone dial-up connection to access the Internet.
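To put those two speeds in perspective, here is a quick, hypothetical comparison of download times at classic 56 Kbps dial-up versus the FCC’s 768 Kbps “basic broadband” floor (the 5-megabyte file size is my own illustrative choice):

```python
# Rough download times for a 5-megabyte file at dial-up versus the FCC's
# 2009 "basic broadband" minimum. Kbps figures are decimal kilobits/second.
FILE_BITS = 5 * 8 * 1_000_000   # 5 MB expressed in bits

for name, kbps in [("dial-up (56 Kbps)", 56), ("basic broadband (768 Kbps)", 768)]:
    seconds = FILE_BITS / (kbps * 1000)
    print(f"{name}: about {seconds:.0f} seconds")
```

Roughly 12 minutes on dial-up versus under a minute at the broadband minimum, ignoring real-world overhead, which helps explain why the FCC is asking the questions.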
Page 28 of the report says about half of the dial-up users are satisfied with their service or are not heavy Internet users.
The OBI survey said that although a measurable majority of Internet users have the means to go online from home, 6 percent do not – they access the Internet away from home. These “not-at-home” users access the Internet possibly from work, or at the library – but not from where they live.
On page 27, under Exhibit 18, percentages are given for the reasons why the surveyed “non-users” don’t use the Internet.
Forty-seven percent say it’s the monthly cost and 46 percent say it’s because they are not comfortable around computers.
Surprisingly, 35 percent of the non-users said it was because there is nothing on the Internet they want to see or use.
Thirteen percent responded that it is because broadband is not available where they live.
FCC Chairman Julius Genachowski said in a recent statement, “To bolster American competitiveness abroad and create the jobs of the future here at home, we need to make sure that all Americans have the skills and means to fully participate in the digital economy.”
The full 52-page FCC OBI summary can be read at tinyurl.com/yc3pptm.
The Stanford Research Institute story of the first ARPANET message can be read at: tinyurl.com/y9mydj3.
The FCC’s National Broadband Plan web site is at www.broadband.gov.
The Queensland Frog Society’s web site is tinyurl.com/ylbgmh5.