Thursday, July 24, 2014

Supporting our Internet freedom



by Mark Ollig


“The Internet is for everyone.”

This is the guiding vision of the Internet Society (ISOC), a non-profit organization with headquarters in Geneva, Switzerland, and Reston, VA.

ISOC, formed in 1992, recently published its Global Internet Report 2014.

In this informative 146-page report, they highlight the Internet’s milestones, such as 2003, when the number of Internet users surpassed 1 billion.

Today, the global Internet user base is closing in on 3 billion.

The China Internet Network Information Center recently announced China’s Internet user population was 632 million – making China the number one Internet user base in the world.

Of these 632 million, 572 million are accessing the Internet using mobile devices.

Meanwhile, the current US Internet user population is approximately 280 million, according to Internetlivestats.com.

India comes in third with 243 million, followed by Japan with 109 million users.

To see the current Internet country rankings, visit: http://tinyurl.com/bytes-stats1.

The ISOC 2014 report discusses the benefits and challenges of maintaining an open and sustainable Internet, and the relationships among government, business, and the end user (you and me).

ISOC states that we, as engaged Internet users, maintain an open and supported Internet through our participation, collaboration, sharing, commerce, and entertainment activities within it.

They describe how an actual Internet user experience may differ, depending upon the country where the user is located.

In an open Internet, all public websites, and other related Internet resources, are freely accessible to the end user.

However, some Internet network operators within certain countries are ordered to filter, or even block, content and user access to otherwise legally viewable websites and social networks, because of a government’s regulatory policies or specific Internet restriction enforcement.

ISOC acknowledges the Internet is a positive force for social advancement, but concedes it is not immune from governmental controls.

The Internet needs to be accessible from any location; however, a majority of those living on the planet still have no access to it.

As we approach 2015, our world still lives with a “digital divide” separating those who have access to quality Internet services, those who have no access, and those who are unable to afford Internet services.

Contributing to this divide are areas of the globe where the deployment costs for fixed broadband Internet access are very high, populations are sparse, and potential end user incomes are low.

Bridging this divide, according to the ISOC, includes government investment to support low-income users, developing more mobile broadband network infrastructure (which is less costly and faster to build), and removing certain taxes on equipment and services in order to reduce costs for end users.

Countries and their Internet providers need to uphold network confidence by wisely using technology to promote end user trust, maintain equal access to Internet sites, and protect end user privacy.

They also need to ensure that public Internet content will be easily accessible to all.

ISOC wisely advises that our Internet access should not be taken for granted.

This columnist suggests we need to remain vigilant, adding our own voices to the discussions of the day, in order to protect our Internet freedoms.

We should not become complacent, or apathetic, when it comes to the Internet.

Being connected to the Internet does not guarantee we will always be able to freely and easily share or access information, ideas, and views on certain topics in the future, especially if we allow large networking entities or governmental agencies to filter, limit, or block our individual content, social commentaries, political opinions, or access to certain websites.

We’re using the Internet to become more socially, politically, and economically engaged with each other.

This type of Internet interaction is healthy, and must not be suppressed.

We are not just staying current with the latest news, political, and social matters of the day via the Internet; we’re becoming actively involved in them.

Many of us engage in real-time conversations with others locally, nationally, and around the world using social media sites on the Internet.

The “mainstream media” regularly monitors and shares information it gleans from Internet social media streams.

The pioneers and visionaries of today’s Internet, such as Vinton Cerf, continue to speak out on the importance of the Internet remaining a freely open network.

The Internet must remain an open venue, where everyone is afforded an equal opportunity to use and contribute to its resources for the benefit of all.

The Internet provides the platform for the social media networks we use to voice our individual concerns and opinions; it’s where we converse, contribute, share knowledge, experiences, and cultures.

Using all forms of social media, we participate in the topics of the day with others who also desire to communicate within an equally available and freely open Internet.

The Global Internet Report 2014 can be found at: http://tinyurl.com/bytes-isoc.

The ISOC website is: http://www.internetsociety.org.

The Internet is for everyone.




Wednesday, July 16, 2014

New object recognition technology is here



by Mark Ollig

Do all the bells and whistles about ingenious gadgets and technologies promising to make our lives a bit more enjoyable and productive catch your attention?

Of course, yours truly is usually quick to embrace these new, promising technological revolutions, even when they are still in the development stage.

Six years ago, I wrote a column titled “Cell phones may soon be your tourist guide.” 

In it, I described a proposed new cellphone software application which would provide instant information about whatever you saw, simply by taking its picture.

The picture would be transmitted from your cellphone to a central computer system, which would interface with databases located on the Internet.

Information collected about the pictured object would be retrieved from these databases.

This information would be transmitted back to your cellphone and displayed in real-time.

The technology would be used to identify buildings, scenery, plants, and even animals. 
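
To picture how that proposed service would hang together, here is a minimal sketch in Python of the cellphone side: upload the photo, then display whatever description comes back. The web address and the reply format are my own illustrative assumptions; SuperWise never published such an interface.

import requests

RECOGNIZE_URL = "https://example.com/api/recognize"  # hypothetical endpoint

def identify_photo(path):
    """Upload a photo and return the service's best guess at what it shows."""
    with open(path, "rb") as photo:
        reply = requests.post(RECOGNIZE_URL, files={"image": photo}, timeout=10)
    reply.raise_for_status()
    # Assumes the server answers with JSON such as {"label": "Eiffel Tower"}.
    return reply.json()["label"]

print(identify_photo("vacation.jpg"))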

It’s six years later, and I have not read anything new from SuperWise Technologies AG, located in Wolfratshausen, Germany, about their concept. 

It was five years ago when yours truly wrote a column about a similar technology actually being used, called Google Goggles.

In this column, I talked about how the Google Goggles application “takes the picture from our cell phone and sends it to the web, where a search for information about it is performed. The information is then returned to us in real-time.”

Google Goggles can be used to identify pictures, such as landmarks, artwork, bar codes, quick response (QR) codes, and a few others. It is currently being used in today’s smartphones.

For more information about installing Google Goggles, visit http://tinyurl.com/bytes-goggles1.

I did note on Google’s website that Goggles is not very good at identifying pictures taken of animals.

This is where I jump to 2014, and reveal in this column the latest object recognition technology, which appears to be very good at identifying animals.

This new machine learning, artificial intelligence computing technology is named Project Adam.

Yes, indeed, our old friends at Microsoft Research are the ones developing this new artificial object identification intelligence. 

According to Microsoft, the objective of Project Adam “is to enable software to visually recognize any object.”

Microsoft’s current artificially intelligent virtual personal assistant, Cortana, was integrated with Project Adam’s technology.

Cortana is to Microsoft’s Windows Phone, what Siri is to Apple’s iPhone. 

Cortana, by the way, is also the name of the artificial intelligence character from Microsoft Studios’ Halo video games.

Project Adam was shown during last week’s annual Microsoft Research Faculty Summit in Redmond, WA. 

The demonstration of Project Adam’s capabilities was given on stage before a live audience, using three different breeds of dogs.

Johnson Apacible, a Project Adam researcher, aimed his smartphone and took a picture of Cowboy, a Dalmatian sitting on the stage.

“Cortana, what dog breed is this?” asked Apacible into the smartphone. 

On the smartphone’s display screen, the word “Dalmatian” appeared. 

Apacible then pointed his smartphone at another dog (without taking a picture) and asked, “Cortana, what kind of dog is this?” 

“Could you take a picture for me?” Cortana’s voice over the smartphone’s speaker asked.

Laughter could be heard from the people in the audience.

Apacible pointed the smartphone’s camera to the dog named Millie, and snapped a picture.

Project Adam’s technology came through again by correctly identifying the particular dog breed, with Cortana saying, “I believe this is a Rhodesian Ridgeback.”

The audience showed its appreciation with applause.

The last breed of dog on stage, an Australian Cobberdog named Ned, was also correctly identified by Cortana. 

Apacible wanted the audience to know Project Adam’s technology could tell the difference between a dog and a person, and so he directed the smartphone’s camera at Harry Shum, Microsoft’s executive vice president of Technology and Research. 

“I believe this is not a dog,” Cortana correctly stated. 

The human brain uses trillions of its neural pathway connections in order to identify objects; Project Adam uses 2 billion in its artificial neural network. 

Eventually, it is hoped, this research will also allow you to take a smartphone picture of what you are eating, and instantly obtain its nutritional value.

This technology may someday lead to being able to take a picture of a rash, or other unusual skin condition, and receive an accurate medical diagnosis. 

Imagine you’re camping out in the woods and come across some unfamiliar plants; by taking their picture and having it analyzed, you would be able to determine which ones are edible and which are poisonous. 

Indeed, Project Adam has the possibility of developing into a very promising new technology.

A short video of Cortana identifying breeds of dogs can be seen at http://tinyurl.com/bytes-adam.

The 2014 Microsoft Research Faculty Summit webpage is at http://tinyurl.com/bytes-Summit.


Thursday, July 10, 2014

Audible text via the FingerReader

by Mark Ollig


 
According to the World Health Organization, about 285 million people worldwide are living with some type of visual impairment.

In the US, approximately 11.2 million people have a visual impairment, per the US Census Bureau.

Researchers at the Massachusetts Institute of Technology Media Lab have developed a device that, when worn on the index finger, assists the visually impaired by reading words aloud.

Made using a 3D printer, this small device (worn like a ring) includes a tiny built-in camera used for scanning words and lines of text.

It’s called the FingerReader.

Wearing this ring-like device and pointing your index finger at some text will cause the FingerReader to audibly read the words it sees.

This text could be from a newspaper, book, menu, or even from a computer display screen.

The FingerReader makes use of computing hardware and programming software.

Software programs include a text-extraction algorithm using the Tesseract optical character recognition (OCR) engine for processing the video input.

The Flite text-to-speech (TTS) program is also used, as well as different tactile, haptic (vibration) output signals, which guide the user while reading through the text.

The FingerReader is attached to a thin cable, which, in turn, is connected to a computer.

The software used works on both Windows and Mac operating systems.
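
Conceptually, the heart of that pipeline fits in a few lines of Python. The sketch below is my own simplified illustration, not the MIT code: it assumes the pytesseract wrapper for Tesseract and the flite command-line program are installed, and it leaves out the finger tracking and haptic feedback entirely.

import subprocess
import cv2          # grabs frames from the ring's camera
import pytesseract  # Python wrapper around the Tesseract OCR engine

def read_frame_aloud(camera_index=0):
    """Grab one video frame, extract its text, and speak it with Flite."""
    camera = cv2.VideoCapture(camera_index)
    ok, frame = camera.read()
    camera.release()
    if not ok:
        raise RuntimeError("could not read a frame from the camera")
    words = pytesseract.image_to_string(frame).strip()
    if words:
        subprocess.run(["flite", "-t", words], check=True)

read_frame_aloud()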

While watching a demonstration video, yours truly observed a person opening a book. Wearing the FingerReader, this individual pointed to the first line of text.

These first words, as seen by the FingerReader’s mini-camera, were sent to the OCR program, where they were deciphered.

The TTS program then audibly read the words over the computer’s speakers.

Each word the user’s index finger points to along the text line is seen by the camera, analyzed, and converted into audible speech.

As the user’s finger scans the lines of text, the video input is continuously processed by the software.

If the user’s finger veers away from the text line, an audible and tactile feedback response is given to guide them back to the line of text.

A haptic signal is also sent to the user’s finger when the end or the beginning of a text line is reached.

The FingerReader, in addition to reading aloud the current word as the finger passes over it, is also looking ahead and analyzing the next word.
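
The guidance logic the researchers describe can be pictured as a small decision made on every video frame. The Python sketch below is purely illustrative; the tolerance value and the feedback names are my own stand-ins, not the researchers' code.

def choose_feedback(finger_y, line_y, at_line_edge, tolerance=0.05):
    """Decide which cue to send the user for the current video frame."""
    if at_line_edge:
        return "haptic pulse: start or end of the text line reached"
    if abs(finger_y - line_y) > tolerance:
        # The finger has drifted off the line: vibrate and sound a tone
        # to steer it back, as described above.
        return "haptic pulse + audio tone: drift back toward the line"
    return "no cue: keep scanning"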

The tiny embedded camera is reading the text from a fixed distance.

Because the camera is stationary on the index finger, it can more easily focus on the line of text it needs to identify.

In addition to assisting those with visual impairments, the FingerReader may someday be used for language translation support.

It is envisioned that a user will be able to point a finger at text written in an unknown language and have it automatically translated and read back in the desired language.

Granted, there are applications currently available on smart devices for translating text.

However, wearing a flexible, ring-like device you operate by finger-swiping along a text line would be more convenient and faster.

The FingerReader looks to have the potential of becoming a beneficial, user-wearable technology one can easily operate and carry along while on the go.

Currently, most TTS readers are large stand-alone or handheld devices.

I feel confident we will be hearing more about this wearable FingerReader in the near future.

The researchers emphasized how the FingerReader employs the natural technique of using one’s index finger for following text on a written page.

They also said advantages of the FingerReader include providing real-time feedback on the selected text “within the progression of the [FingerReader’s camera] scan,” versus other devices which instead capture one whole page of text at a time.

According to the researchers, the FingerReader is still undergoing development, and is being improved upon.

One future improvement could enable the FingerReader to be interfaced with a smartphone or other mobile device via wireless Bluetooth technology.

In addition to assisting people with visual impairments, the researchers believe the FingerReader has the potential to be of value to the elderly, children, language students, and tourists.

Whom to work with, and how best to market the final FingerReader product to the public, is still being considered.

Devices and programs currently available for capturing and analyzing text include SayText for Apple devices, which is available on iTunes at http://tinyurl.com/bytes-SayText.

The Google Play store provides an ABBYY TextGrabber + Translator at http://tinyurl.com/bytes-abbyy.

Text Detective is an application which can be used on an iPhone or Android smartphone. This app will detect text and read it aloud. It’s available at http://blindsight.com/textdetective.

You can see a screen capture I took of the FingerReader during its testing, uploaded to my Photobucket page at http://tinyurl.com/bytes-finger2.

More information about the FingerReader can be found on the Massachusetts Institute of Technology’s website at http://tinyurl.com/bytes-finger1.

Wednesday, July 2, 2014

Wearing a glove to learn skill sets



by Mark Ollig


Last week, I watched a video about a Georgia Institute of Technology glove which is able to teach skill sets to people.

Individuals acquired a needed dexterity skill, via sensory vibrations, while wearing a special computer-controlled glove.

The method used was based on passive haptic learning, a technique of teaching one to learn via muscle-vibration memory, combined with sounds and visual stimuli.

Think of how you are able to type without looking down at the keyboard. This is because of all the time you spent having the muscles in your fingers learn where the keys you want to type are located on the keyboard.

Yours truly learned to type in Mr. Harold Knoll’s typing class back in the day when we used manual and electric typewriters.

I admit, I am no longer as agile and fast as I once was, but my typing fingers can still dance fairly quickly over the QWERTY keyboard.

What the researchers at the Georgia Institute of Technology created was a user “tactile interface”: namely, a glove with special tactors utilizing a tiny vibration motor on each finger.

The glove in the video is shown on a user’s left hand, with the vibrating motors sewn inside the glove above each knuckle. 

The glove is connected to a small circuit board via a ribbon cable. A connector on the circuit board is plugged into a laptop USB port.

When the user is learning to type Braille, a computing program controls which of the fingers is to receive a vibration, or stimulus.

The tiny motors vibrate, stimulating the fingers corresponding with the specific pattern of a predetermined phrase in Braille.

There were also audio prompts for notifying the individual of the Braille letters being used.
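
As a toy illustration of that control loop, the short Python sketch below maps a few letters to their standard six-dot Braille patterns and “buzzes” the matching fingers. The vibrate() driver and the finger labels are hypothetical stand-ins, not the Georgia Tech code.

import time

# Braille dot patterns for a few letters (dot numbers 1 through 6).
BRAILLE_DOTS = {"a": {1}, "b": {1, 2}, "c": {1, 4}, "d": {1, 4, 5}}
DOT_TO_FINGER = {1: "left index", 2: "left middle", 3: "left ring",
                 4: "right index", 5: "right middle", 6: "right ring"}

def vibrate(finger):
    """Hypothetical motor driver; the real glove pulses a tactor here."""
    print("buzz", finger)

def teach_phrase(phrase, pause=1.0):
    """Buzz the finger pattern for each letter, as in passive training."""
    for letter in phrase:
        for dot in sorted(BRAILLE_DOTS[letter]):
            vibrate(DOT_TO_FINGER[dot])
        time.sleep(pause)  # dwell before the next letter's pattern

teach_phrase("abc")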

“Remarkably, we found that people could transfer knowledge learned from typing Braille, to reading Braille,” said Ph.D. student Caitlyn E. Seim, of the Georgia Institute of Technology.

Back in 2012, students and professors at the Georgia Institute of Technology developed a vibrating, reinforcing learning technology that uses a computer-connected glove to teach a person how to play a piano.

It is a fingerless glove, called the Mobile Music Touch (MMT), which, when worn, teaches the fingers of the wearer how to play a piano melody.

The MMT employs vibration motors inside the glove, allowing it to tap each finger in the proper sequence needed to play the notes of a particular song.

In a video I watched, a person who had no prior experience playing a piano had his hand become conditioned with the proper muscle memory for playing a simple song.

After 30 minutes, the person took the glove off, and was able to play the song on the piano.

The process involves having the participant hear the actual song they will be learning played all the way through on a piano. The correct piano key lights up as each note is heard. 

While the learner is wearing the glove, they feel the individual notes vibrate on their fingers, while seeing the keys on the piano light up as the song plays.

What’s occurring is a vibration inside the glove is tapping the correct finger to be used to play the exact note needed as the song progresses. 

The song is learned in parts; not all at one time.

As one part of the song is audibly played, the user feels the vibration in their fingers; they also see the notes played on the piano by observing the lights of each key.

The person then attempts to recreate what they learned by pressing the piano keys they felt were being played as the muscles in their fingers learned the song.
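
The same idea can be sketched for the piano glove: replay one chunk of the melody while tapping the finger assigned to each note. Again, this is a hypothetical Python illustration; the tap_finger() helper and the sample phrase are mine, not Georgia Tech’s.

import time

# One chunk of a simple melody: (note, finger number, seconds to hold).
PHRASE = [("C", 1, 0.5), ("D", 2, 0.5), ("E", 3, 0.5),
          ("D", 2, 0.5), ("C", 1, 1.0)]

def tap_finger(finger):
    """Hypothetical stand-in for pulsing the vibration motor on one finger."""
    print("tap finger", finger)

def rehearse(phrase, repetitions=3):
    """Repeat one chunk of the song, tapping fingers in time with the notes."""
    for _ in range(repetitions):
        for note, finger, duration in phrase:
            print("play", note)  # in the study, the piano key also lights up
            tap_finger(finger)
            time.sleep(duration)

rehearse(PHRASE)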

One researcher said what surprised them most, based on their study, was the difference in sensation people got back after using the glove, compared to before using it.

Some folks reported being able to pick up very small objects they were unable to pick up before using the glove.

The muscle conditioning benefits from the MMT glove can also be used in other skill sets, such as improving typing skills, as one story explained.

A quadriplegic took part in an eight-week study using the glove for about two hours a day. This person was able to improve dexterity in their typing skills, going from typing with one finger to using two fingers on one hand.

In this video, the study participant smiled and quipped, “This allowed me to regain some dexterity; but also learn how to play the piano.”

“Passive Haptic Learning of Typing Skills Facilitated by Wearable Computers” is a detailed, six-page document recently written by Seim and two others at the Georgia Institute of Technology.

The document can be read at http://tinyurl.com/bytes-glove.

Their research is improving people’s ability to acquire skill sets using passive haptic learning, by wearing a tactile-responsive, computer-interfaced glove.

Screen capture from the Georgia Tech YouTube channel.