Friday, July 29, 2016

Positioning part of 'the cloud' underwater

by Mark Ollig

Copyright © 2016 Mark Ollig

We know much of the data managed using our computing devices is accessed and stored in the “cloud.”

“The cloud,” of course, is a common metaphor for “the internet.”

If you’ve shopped online at Etsy or Amazon, you’ve used cloud computing.

For me, cloud computing, in addition to online shopping, means remotely accessing computer servers, software programs, and services, and storing and retrieving information from data servers located inside a building in the same city, or across the country.

This building is called a data center.

Major online players such as Google, Microsoft, Amazon, along with internet service providers, telecommunications, cable, and wireless service providers make use of their own data centers.

Companies sometimes share space inside a data center, leasing floor space and equipment from another provider; others control the entire operation of their own data center.

Corporations and businesses accessing their data center usually do it over a private, dedicated network connection.

The data centers are usually networked to each other via fiber-optic cables.

Early this morning, I uploaded some 200 photos I had stored to my Google Photos account, which is, of course, in the cloud.

Come to think of it, I believe my Amazon cloud app automatically uploads and stores my photos for me to their data center service located in the cloud.

I therefore have double-cloud redundancy – amazing.

Of course, I am placing much faith in the cloud; hoping my precious photos will not be accidentally deleted, lost, or hacked into by some foreign superpower.

Maybe I should just print them out on high-quality photography paper, and place them in the photo album on the coffee table in the living room like I did in the 1990s.

IBM describes cloud computing as “the delivery of on-demand computing resources – everything from applications to data centers – over the internet on a pay-for-use basis.”

Many years ago, companies maintained their own computer systems located within a dedicated, climate-controlled computer room.

It cost a company a lot of money, time, and resources to support their own internal computer department.

With access to today’s high-speed broadband and dedicated networks, many companies have decided to do away with maintaining their own internal computer departments, and are outsourcing their computing needs.

Enter the data center.

Data centers or “data farms” are physical buildings, located throughout the country, with many right here in Minnesota.

A data center provides a company with controlled computing costs, dedicated network access, security protection of their information, outsourced maintenance responsibilities, and multiple power and data storage system redundancies.

An individual data center can take up thousands – or even more than a million – square feet of floor space.

It’s critically important to keep the inside of the data center cool, especially where the data servers and other computing components are located.

The electronics used within a data center are kept cool in order for them to operate more efficiently and to last longer.

With the great amount of heat released by power systems, computer systems, electronic components, and other hardware, many redundant “chillers,” or cooling systems, are used to remove, or “chill,” this heat.

As mentioned above, cooling is needed in order to remove the tremendous amount of heat generated by the many rows of shelving bays containing thousands of data servers.

The card slots in the individual bay shelves contain printed circuit boards embedded with numerous electronic and power components.

The cost of cooling, and the amount of available land for data centers in coastal regions of the world, were two reasons one large, well-known computing company decided to install a data center cloud – underwater.

The folks at Microsoft are researching how to manufacture and operate an underwater data center.

They call it Project Natick.

An underwater cloud-computing data center is intended to service major populations living near large bodies of water.

Microsoft states 50 percent of society resides near large bodies of water, which is one of the reasons they are researching underwater data centers.

I mentioned chillers, and how cooling plays a critical role in keeping data centers operating. Microsoft states: “Deepwater deployment offers ready access to cooling, renewable power sources, and a controlled environment.”

Microsoft is considering harnessing the energy generated by the motion of tides and currents to produce the electricity powering its underwater data center.

Connecting an underwater data center to the cloud network involves undersea fiber-optic cables, which route to a land-based station hub with connections to the internet.

Microsoft said an underwater data center would have a lifespan of five years, after which it would be retrieved, and “reloaded with new computers, and redeployed.”

Although Project Natick is still in the early research stage, it does look promising.

So, what does Natick mean?

Microsoft says this is simply a code name; it has no special meaning.

However, your always-investigative columnist discovered there is a town in Massachusetts called Natick.

The only familiar tech-industry name I found associated with Natick, MA, is Phil Schiller, a senior vice president of worldwide marketing at Apple Inc., who was born in Natick, and is regularly seen onstage during Apple-related events.

I assume it is merely a coincidence that Microsoft chose a project name matching the hometown of a highly recognized Apple employee.

The website for Project Natick is: http://natick.research.microsoft.com.


Find my Twitter messages (kept above water) at the @bitsandbytes user handle.


Photo Source: http://natick.research.microsoft.com



Thursday, July 21, 2016

Preparing for the next-generation wireless network

by Mark Ollig

Copyright © 2016 Mark Ollig

Today in the US, we are using nearly 350 million smartphones, computing tablets, and electronic wearables.

Amazingly, these devices carry more than 100,000 times the voice and data traffic generated in 2008.

It’s been estimated that by 2020, the entire planet will have 200 billion devices connected to wireless networks.

These devices will include smart sensors attached to electronic gadgets, appliances, machines, cars, and just about every other mechanism you can think of.

These new smart devices, known collectively as the Internet of Things (IoT), will, of course, be connected to the internet.

Many of these IoT devices will require “ultra-high-speed” transmission systems, and a larger chunk of the radio-frequency spectrum, or bandwidth, from the wireless networks supporting them.

Improved wireless network capacity will also be needed for the continuously increasing number of smartphones and other wireless computing devices we use.

Most of us are using 4G, or fourth-generation cellular data technology, on our smartphones.

The LTE we see next to 4G stands for “Long Term Evolution,” an agreed-upon industry standard for the specific methods used in high-speed wireless data transmission.

Some examples of digital “data” used with our smartphones and smart devices include voice calls, uploading and downloading photos and video, email, texting, medical and other telemetry, and streaming live video, such as a Facebook Live session.

If you’re playing Pokémon GO on your smartphone (which uses augmented reality and the Global Positioning System), you are sending and receiving data over a wireless network.

Please be careful playing Pokémon GO when you are outside.

It’s estimated 98 percent of the citizens in this country have access to 4G/LTE wireless broadband data services.

Less than two weeks ago, the National Science Foundation (NSF) announced it received $400 million via the White House Advanced Wireless Research Initiative.

The $400 million was pledged by private and public sources.

The funds will assist “in experimenting with and testing novel technologies, applications, and services capable of making wireless communications faster, smarter, more responsive and more robust,” says the NSF website.

The NSF acknowledges the need for improving wireless technologies for future vehicle-to-vehicle communications, self-driving cars, and smart cities.

I’m ok with wireless smart cities; however, I am still not comfortable with the reliability of current self-driving car technology.

Immersive video and virtual-reality content for enhancing personal experiences and productivity, telemedicine, and remote surgery also benefit from increased wireless data speeds, according to the NSF.

The “next big thing” in wireless data will be 5G, or fifth-generation cellular technology, which reportedly provides 100 times the speed of 4G, along with greater bandwidth.

The Federal Communications Commission has already freed up radio spectrum over which 5G technology can be deployed.

Currently, testing of 5G is being performed by several wireless carriers. The final 5G technological standards still need to be agreed upon. I’ve read the telecom industry foresees an official 5G standard by 2019. By 2020, Samsung says it will be marketing 5G.

South Korea has scheduled 5G trial services for next year, and plans to begin offering the service by 2020.

Verizon, AT&T, T-Mobile, Sprint, and other wireless carriers are now deep into test trials, as they prepare their networks for 5G.

You and I should be able to start using 5G in our smart devices by late 2019 or 2020.

Using 5G will be quite the experience for us.

Imagine downloading a full-length, high-definition movie to our wireless smartphone or computing tablet in 5 seconds or less.
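
For readers who like to check the math, here is a quick back-of-the-envelope sketch in Python; the 5 GB movie size and the 10 Gbps connection speed are my own illustrative assumptions, not official 5G figures.

    # Rough download-time estimate (illustrative assumptions only):
    # a 5 GB high-definition movie over a hypothetical 10 Gbps 5G link.
    movie_bits = 5 * 8 * 10**9      # 5 gigabytes expressed in bits
    link_bps = 10 * 10**9           # assumed 10 Gbps connection speed
    print(f"{movie_bits / link_bps:.0f} seconds")   # prints: 4 seconds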

With an ultra-fast wireless network, smart machines in factories, for example, connected to the internet (as IoT devices), will communicate their operating status, run self-diagnostics, and repair or perform their own software updates – very quickly.

I would think (hope) a human would be involved in monitoring this.

Real-time video chatting apps like Skype, FaceTime, Google Hangouts, Tango, and even one called ooVoo should show noticeably improved resolution over a 5G network.

At the Mobile World Congress in Barcelona in February, demonstrations showed how 5G technologies will improve the operation of robotics, building security systems, internet-linked autos, and energy-management systems.

The NSF announced $50 million of the $400 million will be used for advanced wireless testing platforms in four cities, beginning in 2017.

The $350 million balance will be applied to the wireless technology and research to be used for supporting these testing platforms.

The National Institute of Standards and Technology (NIST), a US federal technology agency, will participate in a 5G mmWave (millimeter wave) Channel Alliance discussion during the Institute of Electrical and Electronics Engineers (IEEE) conference Dec. 4.

These discussions will center on the coordination of 5G communications and its standards.

The 5G mmWave Channel Model Alliance Wiki platform is located at: http://tinyurl.com/bits-5Gmm.

“It’s still early in the 5G game; however, the excitement is beginning,” yours truly wrote in a March 17, 2014 column.

More than two years later, many of the 5G players have made their first strategic openings, like chess pieces being moved into play.

Let’s watch as the 5G game now unfolds before us.

Read my daily Twitter messages using your current 4G wireless devices, via my @bitsandbytes user name.


Monday, July 18, 2016

Behind-the-scenes fun with Facebook Live: Part One

        
by Mark Ollig

Copyright © 2016 Mark Ollig


Facebook now allows its users to broadcast live video via its Facebook Live feature.

According to Facebook: “Live lets people, public figures and Pages share live video with their followers and friends on Facebook.”

A single Facebook Live broadcast video stream can last up to 90 minutes.

Many social media and news organizations have taken advantage of this new way of networking with their online viewing audiences.

The Facebook Live video streams I’ve seen are usually being broadcast from a smartphone.

The media realizes this new method of online social interaction, featuring on-air personalities and other employees performing their jobs from behind the camera, is a new way of attracting and connecting with people on a more personal level.

Local TV stations are having Facebook Live sessions to allow viewers a behind-the-scenes look.

I feel this new interaction also forms a “comfort bonding” with new followers, who then become online Facebook friends with the media personalities.

As a result, these new followers will likely watch more of their programming.

The TV and radio stations I follow on social media sometimes “go live” on Facebook to share a breaking-news or weather update, or report on a human-interest story.

Users who click a Facebook Live video link from their News Feed, or the broadcaster’s Page, are taken to the content provider’s live-streaming video, where they watch and listen to what is happening in real time.

There, Facebook viewers can communicate by typing comments, which are seen by the social media source doing the broadcasting.

Other Facebook users participating in the Facebook Live broadcast can also see and respond to comments.

Last week, Facebook user Alice@97.3 was broadcasting live video directly from inside an on-air radio studio, located at KPIX CBS in San Francisco.

A round-table discussion was taking place with three folks wearing headphones, and speaking into those cool-looking radio studio microphones.

The right-hand side of my screen showed Facebook users’ comments and questions as they scrolled by.

Someone at the radio station was monitoring these messages, and would pass them along to the on-air radio personalities.

Floating below the live video feed, from right to left, were the “Like,” “Love,” and “Ha-ha” reaction emoticons Facebook users would click as they joined this Facebook Live session.

The Facebook page of the World Health Organization (WHO), based in Geneva, Switzerland, is one of many social media outlets I follow.

Last Tuesday, WHO alerted its followers about participating in a Facebook Live conversation with “global health experts live from the World Health Assembly.”

Facebook users could directly participate during this livestream by posting questions or comments in the live video’s comment section.

Users of “the other social media network,” Twitter, could also post comments seen by WHO by using the hashtag #AntimicrobialResistance, which happened to be the topic of the WHO Facebook Live broadcast.

One particular Facebook media page I have been watching and participating in is NBC Bay Area.

NBC Bay Area is a television media outlet broadcasting from San Jose, CA; however, it mostly features the local news, weather, traffic, and sports from around the San Francisco Bay Area.

I began watching their daily Facebook Live segments a few weeks ago.

NBC Bay Area has promoted its station by going “live in studio,” using the Facebook Live app on one of its producers’ smartphones.

Early each morning, an alert message and link appears under my Facebook notifications saying: “NBC Bay Area is live. Producer Nitin is making his rounds behind-the-scenes of Today in the Bay. Also check us out on TV.”

I click the link and see live video from Nitin’s smartphone appear.

I see the inside of the NBC Bay Area newsroom as he walks down a hallway.

Nitin gives viewers a daily tour, encountering the people who produce, research, interview, write, direct, and broadcast the stories for their “Today in the Bay” morning TV broadcast.

He sometimes surprises the folks in their office cubes by abruptly announcing they are being watched by “Our Facebook friends.”

“Good morning, everyone, we are live on Facebook . . . behind the scenes Today in the Bay!” says Nitin as he walks by people seated at their desks, working on various stories.

I noted, in the upper right-hand corner, a flashing red icon indicating the broadcast was live.

A display showed 102 viewers currently participating in this Facebook Live session.

The on-air TV news anchors were finishing their reports on a couple stories as the station went into a break (commercial).

Nitin approached the news desk with his cellphone’s camera pointed at the NBC Bay Area morning news anchor team behind the desk: Sam Brock and Laura Garcia-Cannon.

“This is our team!” Nitin said to us as he approached the news desk.

The two morning news anchors noticed the approaching cellphone directed at them, and smiled.

Be sure to read next week’s exciting conclusion as we bring you: Behind-the-scenes fun with Facebook Live: Part Two.

Yours truly gets his moment to shine in the spotlight with the NBC Bay Area morning news anchors.

I usually learn something new in these Facebook Live broadcasts, which can originate from any place around the country – or the planet.

It’s also a lot of fun. 


Be sure to follow my tweets (which I post live) on Twitter using my @bitsandbytes user handle.


Fiber optics: Information 'packed into light'

by Mark Ollig

Copyright © 2016 Mark Ollig

The ability to guide light in other than a straight line was demonstrated in 1841.
Jean-Daniel Colladon, a Swiss physicist, showed how a ray of light traveling inside a curved arc of flowing water would bend, thus becoming “refracted.”
He called this event “light guiding.”
It was “one of the most beautiful and most curious experiments that one can perform in a course on optics,” Colladon later wrote.
Jumping ahead 135 years, we find engineers have successfully sent and received information over a beam of light.
This was accomplished by encoding the information onto the light itself – using direct modulation of multiplexed laser frequencies – and guiding the modulated light through a transparent, fiber-optic glass strand.
The acronym “laser” stands for Light Amplification by Stimulated Emission of Radiation.
“It is a device which produces light. Tunable lasers can produce light of a single frequency, or visible color, in human terms. By turning the laser light signal on and off quickly, you can transmit the ones and zeros of a digital communications channel,” is the description of a laser from my trusty Newton’s Telecom Dictionary.
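To illustrate the dictionary’s point, here is a toy sketch in Python of that on-and-off signaling; it is only an analogy, since real optical transmitters flip the light billions of times per second in hardware.

    # Toy on-off keying: a 1 turns the laser on, a 0 turns it off,
    # and the receiver translates the light states back into bits.
    def transmit(bits):
        return ["ON" if bit == "1" else "OFF" for bit in bits]

    def receive(states):
        return "".join("1" if state == "ON" else "0" for state in states)

    message = "1011001"
    light_states = transmit(message)
    print(light_states)                       # ['ON', 'OFF', 'ON', 'ON', 'OFF', 'OFF', 'ON']
    print(receive(light_states) == message)   # True – the bits come back intact
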
The two most common light sources for fiber-optic transmission are LEDs (light-emitting diodes) and laser diodes.
In 1976, information encoded within laser light was successfully sent over a fiber-optic telephone cable installed at AT&T’s research facility in Atlanta, GA.
During the same year, the copper wiring harness of an A-7 Corsair US military fighter jet was replaced with a fiber-optic link network.
This optical link consisted of 13 fiber-optic cables, totaling 250 feet and weighing 3.75 pounds.
The A-7 Corsair’s previous copper wiring harness comprised 300 individual copper wire circuits, totaling 4,133 feet and weighing 88 pounds.
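Using the figures above, a few lines of Python show what the swap saved; this is simple arithmetic on the article’s numbers, nothing more.

    # Copper harness vs. fiber-optic link on the A-7 Corsair,
    # using the weights quoted above.
    copper_pounds, fiber_pounds = 88, 3.75
    print(f"Weight saved: {copper_pounds - fiber_pounds} pounds "
          f"({1 - fiber_pounds / copper_pounds:.0%} lighter)")
    # Weight saved: 84.25 pounds (96% lighter)
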
Today’s fiber optics installed in modern aircraft allow military pilots to make instantaneous decisions, using high-resolution imaging systems, displays, and flight controls linked with fiber-optic networks providing real-time data.
In 1977, AT&T’s first commercial application of transmitting communications over a fiber-optic cable took place in Chicago.
Unbeknownst to the public, certain fiber-optic technologies were stealthily being used in the 1960s.
Your investigative columnist did some digging.
I discovered NASA’s black-and-white lunar television camera, used by the Apollo 11 astronauts on the surface of the moon in 1969, incorporated fiber-optic technology.
The use of fiber-optic technology in the lunar television camera was considered “classified CONFIDENTIAL” in NASA’s “Lunar Television Camera Pre-installation Acceptance (PIA) Test Plan” document 28-105, dated March 12, 1968.
This Apollo 11 lunar surface camera was kept inside a storage compartment on the descent stage of the lunar module “Eagle.”
Westinghouse Electric Corporation’s Aerospace Division developed and manufactured the Apollo Lunar Television Camera (part number 607R962).
The original lunar television camera remains on the surface of the moon, in the Sea of Tranquility, near the Apollo 11 landing site.
But I digress from today’s topic.
We had previously believed fiber-optic technology would not reach its data-capability bandwidth ceiling for many years to come; however, this is no longer the case.
According to the University of the Witwatersrand in Johannesburg, South Africa, the increasing amounts of data being used, coupled with advances in information technology, will have troublesome repercussions for the current technology providing the bandwidth needed to transport data – even over fiber optics.
“Traditional optical communication systems modulate the amplitude, phase, polarization, color, and frequency of the light that is transmitted,” the news release from the University stated.
Researchers from this university began working with South Africa’s Council for Scientific and Industrial Research.
They demonstrated how over 100 separate light patterns (representing data) could be individually sourced through a single optical communications link.
The researchers created a computer hologram encoded with over 100 light pattern configurations in multiple colors.
These light pattern configurations were sent through an engineered liquid crystal, which acted as a single spatial light modulator.
As this single light was transmitted, each individual light pattern was “demultiplexed,” meaning the 100 individual light patterns were extracted and sent to their designated locations.
All 100 light patterns were detected simultaneously.
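A loose software analogy may help picture this: if each light pattern behaves like an orthogonal vector, 100 channels can share a single beam and still be separated cleanly at the far end. The NumPy sketch below is my own illustration of that principle, not the researchers’ actual method.

    # Analogy only: 100 data channels ride one combined signal by
    # assigning each an orthogonal "pattern," then are recovered by
    # projecting the combined signal back onto each pattern.
    import numpy as np

    rng = np.random.default_rng(0)
    # 100 orthonormal "patterns" in a 128-dimensional space, via QR.
    patterns, _ = np.linalg.qr(rng.standard_normal((128, 100)))
    data = rng.standard_normal(100)      # one data value per channel

    beam = patterns @ data               # multiplex: one combined signal
    recovered = patterns.T @ beam        # demultiplex all 100 channels at once
    print(np.allclose(recovered, data))  # True – every channel recovered
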
This test was performed in a laboratory; the next stage involves demonstrating this new technology in a “real-world” situation.
This increase in light patterns will allow for a huge expansion in available bandwidth used for the transmission of voice, video, and other data.
The additional bandwidth availability will result in information being more efficiently “packed into light.”
Back in the day, this old telephone cable splicer used to worry about having enough copper pairs to handle voice traffic.
Today, with fiber optics, we’re worried about having enough light to handle data bandwidth requirements.
Go figure.
To see the March 12, 1968, NASA booklet describing the “Lunar Television Camera,” go to: http://tinyurl.com/lunarcam.
In this booklet, Section 1.1 “Security Classification” includes the “classified CONFIDENTIAL” portion of the fiber optics used with the lunar television camera.

Allow me to light your way through Twitter, via my @bitsandbytes user handle.





London Technology Week: predictions for 2036

by Mark Ollig

Copyright © 2016 Mark Ollig

Our friends from across the big pond are making some bold predictions for the next 20 years.

One includes humans being outnumbered by artificially-intelligent robots.

In 2036, it will be common to hear the overhead buzzing of an aerial drone delivering a package or a pizza to someone’s front door.

How will we feel about sharing the highway with autonomous, self-driving cars, which some say will outnumber human-driven automobiles by 2036?

More than 2,000 British adults were polled for their insights on what life might be like 20 years from now.

A not-for-profit private partnership called London & Partners did the polling for London Technology Week.

Last week, London was the venue for bringing together high-tech companies and celebrating the city’s innovators and entrepreneurs during its technology event.

London Technology Week focused on the city’s effort to be at the “forefront of developing the cutting-edge technologies, which are creating growth for the city’s economy.”

Commercial space flights using major airports are considered a real possibility by 2036, according to 37 percent of the Brits interviewed.

NASA says humans aboard the Orion spacecraft will arrive at Mars in the early 2030s, so it may be possible we will be able to book a commercial space flight to the moon.

It’s 2036, and I find myself sipping coffee, while comfortably seated aboard a luxury passenger commercial space flight, orbiting the moon.

Looking out a window, I notice our spacecraft slowly beginning to descend; the moon is getting closer.

I am able to see details of the lunar surface.

My first impression is how very grayish in color it is, at times resembling charcoal.

There are large craters and many small ones, along with boulders and rocks of various sizes.

The surface we are hovering above appears to be very finely grained.

The lunar topsoil looks almost like a powder.

A pleasant voice is heard from the spacecraft’s overhead speaker, “We are now passing over the Sea of Tranquility . . . we are coming up to Tranquility Base, which is the site of the historic Apollo 11 moon landing.”

Ah yes, I can see the Apollo 11 Lunar Module Eagle descent stage, with its footpads resting on the lunar surface.

There’s the lunar television camera, seismometer, retroreflector, and the footprint trails Neil and Buzz made while walking on the moon’s surface.

Sadly, the American flag looks to have been blown over, as it’s resting on the surface, not far from the descent stage.

I remember Buzz Aldrin saying he saw the flag blow down when the Eagle’s ascent stage blasted off from the moon, and the exhaust struck the flag.

Everything else appears not to have been disturbed by time.

Maybe, someday, we really will be able to take this trip to the moon.

Another prediction calls for replacing a human with an artificially-intelligent mechanism on the board of directors within large corporations, according to 23 percent of those questioned.

Having computerized “avatar girlfriends and boyfriends” could become commonplace in 2036, say 19 percent of the Brits polled.

For those of you who are not up on your computing avatars, I checked one definition which says an avatar is “an icon or figure representing a particular person in computer games, Internet forums, chatrooms, etc.”

An avatar should not be confused with an anime.

I learned anime computer animation was created in Japan; however, based on the opinions expressed by many online, an avatar isn’t considered an anime.

Another prediction has 37 percent of those polled believing we will be placing some sort of communication device inside our bodies by 2036.

A number of the Britons surveyed believe that in 20 years, they will no longer need to travel to see their doctor when they are ill.

If you’re feeling ill in 2036, you’ll have an in-home, face-to-face doctor appointment, using virtual-reality technology.

Those polled also believe that if it is determined you need a replacement human organ, 3D printing machines will be able to manufacture one for you in 2036.

My personal prediction: The use of medical chip implants for human health tracking and treatment will become routine 20 years from now.

London Technology Week used the Twitter hashtag “#LDNTechWeek” for folks attending the event to tag messages sent out over the social media network.

I wonder what using Twitter will be like in 2036.

Maybe by then, they will have finally decided to increase their 140-character limit.

The webpage for London Technology Week is http://londontechnologyweek.co.uk.

For the foreseeable future, you’ll be able to follow me on Twitter via my @bitsandbytes user handle.


A fast-moving 'tide' is crossing the Atlantic

by Mark Ollig

Copyright © 2016 Mark Ollig

With its information traveling at an incredible 160 trillion bits per second (160Tbps), a new high-speed underwater data “pipe” will soon be crossing the ocean.

The pipe is a transatlantic fiber-optic submarine cable.

Today, the buzz regarding blazing-fast home internet speeds is the much-talked-about 1 billion bits per second (1 Gbps), or “Gig.”

Converting 160 Tbps to Gbps (using binary prefixes, where 1 Tbps equals 1,024 Gbps), we find it equals 163,840 Gbps.

Imagine if we could download a movie or a TV show using this speed; hypothetically, it would be faster than a blink of an eye.

Looking at it another way, we find 160 Tbps equals 167,772,160 Mbps (million bits per second).

Here’s a handy shortened link for the data speed conversions I used from unitconversion.org: http://tinyurl.com/bits-conv.
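
If you would rather reproduce those numbers than visit the converter, a couple of lines of Python using the same binary prefixes confirm the figures.

    # 160 Tbps with binary prefixes: 1 Tbps = 1,024 Gbps = 1,048,576 Mbps.
    tbps = 160
    print(f"{tbps * 1024:,} Gbps")      # 163,840 Gbps
    print(f"{tbps * 1024**2:,} Mbps")   # 167,772,160 Mbps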

In today’s internet, the name of the game is speed, along with reliable, secure access for our smart devices to the cloud and online services.

Facebook is partnering with Microsoft, and will soon be providing the highest-capacity data speeds ever seen over the Atlantic Ocean using a single fiber-optic cable.

MAREA is the name of their joint-venture, fiber-optic submarine cable system.

The submarine cable contains eight fiber-optic pairs, and will run from a new landing station in Virginia Beach, VA, to an existing data center hub in Bilbao, Spain.

The cable’s length will be some 4,100 miles.

“MAREA was chosen as the Spanish word meaning ‘tide,’ a reference to the subsea effort and the route to Bilbao, Spain,” said a spokesperson from the Microsoft Server and Cloud Platform Team.

“The new MAREA cable will help meet the growing customer demand for high- speed, reliable connections for cloud and online services for Microsoft, Facebook, and their customers,” said Frank Rey, director, Global Network Acquisition, Microsoft Cloud Infrastructure and Operations.

“We want to do more of these projects in this manner – allowing us to move fast with more collaboration,” Najam Ahmad, vice president of network engineering at Facebook, said in a statement.

Ahmad was the general manager of Global Networking Services at Microsoft, prior to his working with Facebook.

A Microsoft representative commented on their blog, saying they had completed a marine survey of the entire submarine fiber-optic route, in order to find the “optimal route to lay the cable in optimal locations.”

MAREA is part of Microsoft and Facebook’s overall plan for providing its infrastructure cloud services to the other half of the planet.

The Azure Platform is the name of Microsoft’s cloud services, delivered from data centers in 24 regions today, with 32 Azure regions planned globally.

Yours truly obtained the June 30, 2016 FCC Report Number SCL-00185S: “Streamlined Submarine Cable Landing License Applications Accepted For Filing.”

This report included submarine cable license SCL-LIC-20160525-00012 for the MAREA cable system.

“MAREA will provide Edge USA and its affiliates with capacity to support Facebook’s global platform to connect its users and data centers. MAREA will also provide Microsoft Infrastructure and its affiliates with capacity to support Microsoft’s cloud services offerings and connect its data centers,” as stated in the FCC’s Submarine Cable Landing License.

This FCC document also shows Edge Cable Holdings USA, LLC (Edge USA) will own 25 percent of MAREA’s “wet segment” in US territorial waters, and a 25 percent interest in the Virginia Beach cable landing station.

Telxius, a subsidiary of the Spanish telecom company Telefonica, will operate the submarine cable.

A majority of MAREA’s bandwidth will be controlled by Facebook and Microsoft.

The complete FCC public notice regarding MAREA can be seen here: http://tinyurl.com/FCC-MAREA.

According to a recently released FCC International Circuit Capacity Report, the installation of submarine cable grew about 36 percent per year from 2007 to 2014, and will grow 29 percent per year from 2014 to 2016.
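
Compounded year over year, those growth rates add up quickly; here is a small Python sketch of what they imply, treating 2007 capacity as the baseline.

    # Compounding the FCC's reported growth rates:
    # 36 percent per year 2007-2014, then 29 percent per year 2014-2016.
    capacity = 1.0                  # normalized 2007 level
    capacity *= 1.36 ** 7           # seven years at 36 percent
    capacity *= 1.29 ** 2           # two years at 29 percent
    print(f"about {capacity:.0f}x the 2007 level by 2016")   # about 14x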

Here is a screen-capture photo I took of the MAREA transatlantic cable route: http://tinyurl.com/bits-MAREA.

The announced in-service date for MAREA is tentatively scheduled for October 2017.

Be sure to follow and read my carefully typed, high-speed Twitter messages, via my @bitsandbytes user handle.




Independence Day, US patents, and a yellow submarine

by Mark Ollig
Copyright © 2016 Mark Ollig
I admit coming up with a technology column keeping in theme with the celebration of Independence Day was a bit challenging.
“How many tech-related US patents were approved on the Fourth of July?” I wondered.
After checking the US Patent Office, I learned there were a few.
“Aerial for Wireless Signaling” is the title of US Patent 793,651, issued July 4, 1905.
The inventor, Reginald A. Fessenden, described his patent’s benefits as relating to “certain improvements in aerials for transmission and receipt of electromagnetic waves over long distances.”
Improvements are obtained by the triangular arrangement of supporting antenna guy wires, and the placement of certain elements (such as porcelain) using Fessenden’s unique design methods.
“Vending-machine” is the title of US Patent 1,189,954, granted July 4, 1916.
Its inventor, Alonzo Jacobs states his machine “ . . . comprises a suitable base whose upper portion is formed into a tray compartment, this compartment containing a circular chambered delivery tray similar to the well-known dropping-plate employed in seed-planting machinery.”
So, the next time you use a vending machine and you watch your item drop down into the delivering tray, you can thank Alonzo Jacobs . . . and the seed-planting machine he got the idea from.
Although not issued July 4, I found this next patent (with a Minnesota connection) appropriate for our summer here in the land of 10,000 lakes.
US Patent 2,854,787 was for a self-propelled toy fish, invented by Paul E. Oberg of Falcon Heights, and issued Oct. 7, 1958.
“This invention relates to a novel device for propelling and steering a buoyant object in a liquid medium. More particularly, the invention is concerned with means for propelling toy aquatic creatures,” he described.
Since we’ve had a patent for a self-propelled toy fish, it should come as no surprise that, within a few years, we would see one for a self-propelled toy whale.
US Patent 2,990,645, titled “Toy Whale,” was issued July 4, 1961, for an electrically powered, self-propelled toy – you guessed it – whale.
Dean A. Polzin invented this animated toy whale, which has the power “to move through the water and to alternately dive and surface while moving through the water,” he described.
In addition to a battery motor inside the hollow body of the toy whale, a hand-crank is used to wind tension cords; the energy released propels the toy whale under and over the water.
Internal spring-actuated counterbalancing methods are used to correct any overbalancing positions of the whale as it traverses the water.
Polzin thoughtfully incorporated a small hole in the toy whale, which shoots water as it rises to the surface, simulating the spouting action of a real whale.
US Patent 1,271,272 titled “Toy Submarine” was granted two days before Independence Day, July 2, 1918.
Speaking of toy submarines, in the mid-1960s, one of my favorite toys was from the TV series “Voyage to the Bottom of the Sea.”
It was the Seaview submarine made by REMCO.
The Seaview was a 17-inch-long, yellow plastic submarine model designed to be used on land or in the water.
“Yellow Submarine,” coincidently, was the name of a popular 1966 song by the Beatles.
I recall one summer day, when I was probably 7 or 8 years old, being at my grandmother’s vacation home on Lake Minnie-Belle, near Litchfield, MN.
I had brought along the Seaview, and looked forward to seeing it in action in a real body of water, instead of the family bathtub.
Standing in the water a few feet from the shoreline, I looked out and envisioned Lake Minnie-Belle as if it were the Pacific Ocean.
The Seaview toy used “elastic motor propulsion” which meant I needed to wind-up the rubber-band located inside the submarine, using the built-in blue plastic crank handle.
I remember gently placing the now “fully-charged” yellow submarine on the water and watching it travel on the “ocean” – just like it did on TV.
Yippee! I remember how excited I was at the time.
By the way, my Seaview came with a gray plastic whale, two scuba divers, and a scary-looking undersea creature (with working claws).
The last time I checked eBay, the exact REMCO Seaview toy model I had was selling for around $500. “New-in-the-box” models were going for an unbelievable $2,300.
My advice to youngsters: Take all your toys with you when you’re old enough to move out of the house.
I wish I still had that 50-year-old yellow Seaview submarine.
US Patent 3,329,109, titled “Automatic program-controlled sewing machine” was issued July 4, 1967, to Benjamin Bloom, James E. Hibbs, and Lawrence A. Portnoff.
“The primary object of the present invention [is] to provide an automated program-controlled sewing machine which is capable of performing either linear or curved stitching operations . . . without requiring guidance on the part of the operator,” stated the description.
Schematic wiring diagrams showing individual electrical connections and component circuitry, as well as the drawings of the physical sewing machine itself, were included in their patent.
This automatic program-controlled sewing machine would be used in the garment-making industry.
According to the US Patent and Trademark Office, as of January, there have been 9,226,437 US patents issued since 1836.
You can follow my unpatented tweets from July 4, and every other day of the year, via my @bitsandbytes Twitter handle.


Friday, July 15, 2016

Any limits to supercomputer processing speed?

by Mark Ollig

Copyright © 2016 Mark Ollig

Three years ago, I wrote about a supercomputer which could process data at “an amazing 33.86 quadrillion calculations per second.”

Yes, this is a challenging number to wrap our heads around.

The supercomputer was made in China, and is called the TH-2 (Tianhe-2) High Performance Computer System.

The TH-2 can perform 33.86 PFlop/s, or quadrillion floating-point operations (calculations) per second, a speed measured in “petaflops.”

TOP500 benchmarks and ranks the top 500 most powerful, commercially available computer systems by the speed in which they solve a set of linear equations, using floating-point arithmetic.

Supercomputers use their incredible processing power to explore subjects such as quantum mechanics, astrophysics, our planet’s weather, energy shortages, pollution, and global climate change.

Science, research, and engineering problems requiring complex calculations are modeled using a supercomputer’s processing abilities – far faster than those of today’s best personal computing devices – or yours truly’s 4-year-old Apple MacBook.

Supercomputing technology is used to assist in new medical drug discoveries, precision medicine research, and simulations and optimization for high-speed trains, aircraft, automobiles, and I imagine, military simulation scenarios.

The US Department of Energy (DOE) national labs have supercomputers available for universities, other academic institutions, and private-sector companies.

This year, the fastest supercomputer in the world performed a mind-boggling 93 quadrillion calculations per second.

Let’s try wrapping our heads around 93,000,000,000,000,000 (93 quadrillion) calculations per second, or 93 petaflops (93 million billion) floating-point operations per second.

One petaflops equals 1,000 teraflops: 1,000,000,000,000,000 (1 quadrillion, or a thousand trillion) floating-point operations per second.

Once again, China is the maker of the number one ranking supercomputer, per the TOP500.

Sunway TaihuLight is the name given by China for this newest supercomputer.

This supercomputer, equipped with 10,649,600 computing processing cores, was developed by China’s National Research Center of Parallel Computer Engineering and Technology.

The Sunway TaihuLight is twice as fast as its predecessor, the Tianhe-2, and uses Chinese-made ShenWei computer processing chips.

The processor chips used with the Tianhe-2 were made by Intel.

The Sunway TaihuLight has a peak processing performance level stated as being 125 petaflops.

So, where does the United States stand in the supercomputer rankings?

The Titan – Cray XK7, with 560,640 computing cores, 710 terabytes (710,000 gigabytes) of memory, and a peak processing speed exceeding 27,000 trillion calculations per second (27 petaflop/s), is made by Cray Inc. in the US, and ranks third in the TOP500.

The number-two spot still belongs to the Tianhe-2 from China.

In fact, China has 167 supercomputers on the TOP500 listings, with the US having 165.

The latest supercomputers operate in the quadrillion, or PFlop/s, processing range.

The next “big thing” in supercomputing will be an exascale supercomputer, processing calculations at the level of an “exaflop,” or a million trillion calculations per second.

A future supercomputer processing 1 Eflop/s (Exaflops) would possess processing power equal to 1,000 Pflop/s (one thousand petaflops).
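
For those keeping score at home, here is the whole flops ladder checked in a few lines of Python.

    # Teraflops, petaflops, and exaflops, written out.
    tera, peta, exa = 10**12, 10**15, 10**18
    print(peta == 1000 * tera)   # True: 1 Pflop/s = 1,000 Tflop/s
    print(exa == 1000 * peta)    # True: 1 Eflop/s = 1,000 Pflop/s
    print(f"TaihuLight at 93 Pflop/s: {93 * peta:,} calculations per second")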

The DOE has an Exascale Computing Initiative group currently working toward achieving 1 EFlop/s processing speeds.

Exascale processing may be available by 2023 – the DOE originally planned for 2020.

During the recent 12th HPC Connections Workshop in Wuhan, China, Beihang University Professor Depei Qian, who is director of China’s high-performance computing project, talked about China’s own aggressive exascale program.

China plans on having a working exascale supercomputer in 2020.

And so, my dear readers, in the not-too-distant future, the world will witness the first supercomputer to break the processing speed barrier of 1 quintillion (1,000,000,000,000,000,000) calculations per second.

Now, that’s a number to wrap one’s head around.

Which country will break the exascale supercomputing processing barrier first?

Your continuously-processing columnist’s brain is monitoring supercomputing advancements, and will report future updates as warranted.

Is there a limit to supercomputer processing speed?

So far, we haven’t reached one.

My online social media messages are processed at a reasonable 140 cp/t (characters per tweet) via the @bitsandbytes user handle found on Twitter.