Wednesday, December 19, 2012

IBM's futuristic predictions revealed


by Mark Ollig
 
We process information using touch, smell, hearing, taste, and sight. 

Technology giant IBM predicts we will soon be using advanced computing technology to augment these five senses.

IBM recently released its “5-in-5” technology prognostications for the next five years. 

These predictions focus on developing advanced, cognitive computing sensory systems. 

During the 1980s, AT&T ran a commercial about using your telephone to “reach out and touch someone.” 

Within five years, the touch screens on our mobile devices and smartphones may allow us to actually feel what those Egyptian cotton sheets we are looking at online are like, simply by “touching” the picture on the display screen. 

Digital image correlation processing, along with infrared and haptic technologies that produce calibrated vibrations on the display screen, will capture the texture qualities our sense of touch is able to distinguish. 

“By matching variable-frequency patterns of vibration to physical objects, so that when a shopper touches what the webpage says is a silk shirt, the screen will emit vibrations that match what our skin mentally translates to the feel of silk,” the IBM 5-in-5 report said. 

IBM Research says it is bringing the virtual (online) and real-world shopping experiences together on our smartphones. 

Soon, we may be able to use our sense of touch to get physical feedback via the display screen on our smartphones. 
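
For readers curious about the nuts and bolts, here is a rough sketch in Python of that vibration-matching idea; the fabric names and frequency values are this columnist's own illustrations, not figures from IBM's research.

# A minimal sketch of mapping a labeled fabric texture to a
# variable-frequency vibration pattern a touch screen could play back.
# Texture names and frequency values are hypothetical illustrations.

# Each pattern is a list of (frequency_hz, duration_ms) pairs.
TEXTURE_VIBRATION_PATTERNS = {
    "silk":   [(250, 40), (180, 40), (250, 40)],   # smooth: gentle, high-frequency buzz
    "cotton": [(120, 60), (90, 60), (120, 60)],    # soft weave: lower, slower pulses
    "denim":  [(60, 80), (40, 80), (60, 80)],      # coarse weave: strong, slow pulses
}

def vibration_pattern_for(texture_label):
    """Return the vibration pattern matched to a fabric label from a web page."""
    return TEXTURE_VIBRATION_PATTERNS.get(texture_label, [(100, 50)])  # default buzz

if __name__ == "__main__":
    # A shopper touches a picture tagged "silk" on the screen.
    for frequency_hz, duration_ms in vibration_pattern_for("silk"):
        print(f"vibrate at {frequency_hz} Hz for {duration_ms} ms")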

To watch an IBM video on the future of touch, go to http://tinyurl.com/cdy22ra.

Smartphones in the future will also possess the sense of smell. IBM says tiny sensors able to “sniff” the air will be built into our smart mobile devices. 

These future smartphones will detect certain chemical biomarkers that can be found in the air. 

Biomarkers from our breath will be analyzed, via cognitive computing, through our smartphones. This type of application could be used to identify existing health conditions.
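
As a rough illustration of how such a screening application might work, here is a short Python sketch; the biomarker names, threshold values, and condition notes are made up for the example and are not medical or IBM figures.

# A rough sketch of flagging a possible health condition from breath
# biomarkers read by a "sniffing" smartphone sensor. All values below
# are illustrative assumptions.

# Hypothetical readings from the phone's chemical sensor, in parts per million.
breath_sample = {"acetone": 2.4, "ammonia": 0.3, "nitric_oxide": 0.02}

# Hypothetical screening thresholds (ppm) and the note each might trigger.
SCREENING_RULES = [
    ("acetone", 1.8, "elevated acetone - possible blood-sugar issue"),
    ("ammonia", 1.0, "elevated ammonia - possible kidney or liver issue"),
]

def screen_breath(sample):
    """Return a list of flags for any biomarker that exceeds its threshold."""
    flags = []
    for biomarker, threshold_ppm, note in SCREENING_RULES:
        if sample.get(biomarker, 0.0) > threshold_ppm:
            flags.append(note)
    return flags

if __name__ == "__main__":
    for flag in screen_breath(breath_sample):
        print(flag)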

Also, within the next five years, when in the grocery store, we might be able to check the freshness and quality of the meat, fruit, and produce by having our smartphones “sniff” the item. 

To view the IBM video on smell technology, go to http://tinyurl.com/cwxuydb.

Within five years, we may be downloading a hearing application onto our smartphones. 

IBM predicts the sounds emanating from various sources (humans, nature, animals, and even the weather) will be received by our smartphones, coded, and processed into meaningful information for us. 

Hearing sensor technology, using auditory signal processing to more efficiently extract and transform sound into information the human brain can comprehend, would be applicable to hearing aids or cochlear implants. 
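
One simple way to picture this is a program that pulls a single feature out of a sound clip (its dominant frequency) and attaches a plain-language label to it; the frequency bands and labels in this Python sketch are invented for illustration, not IBM's classifications.

# Turn raw sound into something meaningful by extracting a simple feature
# (the dominant frequency) and mapping it to a labeled source.

import numpy as np

SAMPLE_RATE = 8000  # samples per second

def dominant_frequency(signal, sample_rate=SAMPLE_RATE):
    """Estimate the strongest frequency component in a sound clip."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return freqs[np.argmax(spectrum)]

def label_sound(freq_hz):
    """Map a dominant frequency to a (hypothetical) sound source."""
    if freq_hz < 300:
        return "low rumble - possibly distant thunder"
    if freq_hz < 1500:
        return "mid-range tone - possibly a human voice"
    return "high-pitched tone - possibly a bird call"

if __name__ == "__main__":
    t = np.linspace(0, 1, SAMPLE_RATE, endpoint=False)
    clip = np.sin(2 * np.pi * 440 * t)  # a 440 Hz test tone
    print(label_sound(dominant_frequency(clip)))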

An IBM video on futuristic hearing is at http://tinyurl.com/cn9sb88.

How will IBM Research develop cognitive computing systems having the ability to taste? 

By having a professionally trained chef, now a computer engineer, on its team. 

IBM is developing a cognitive computing system that analyzes the chemical compounds in food and how they react with each other. 

By using this information, along with psychophysical data modeling of the specific chemicals in food that create perceptions of “pleasantness, familiarity, and enjoyment,” IBM is hoping the result will be new, unique food recipes containing precise ingredient combinations that are “scientifically flavorful.”
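
One very simplified way to imagine the idea is a program that rates ingredient pairings by the flavor compounds they share and picks the most promising combination; the ingredients, compounds, and scoring in this Python sketch are invented, not IBM's psychophysical data.

# A toy sketch: score ingredient pairings by shared flavor compounds,
# then pick the best combination. All data below is invented.

from itertools import combinations

# Hypothetical flavor compounds found in each ingredient.
INGREDIENT_COMPOUNDS = {
    "strawberry": {"furaneol", "linalool"},
    "basil":      {"linalool", "eugenol"},
    "chocolate":  {"furaneol", "pyrazine"},
    "mushroom":   {"pyrazine", "octenol"},
}

def pairing_score(a, b):
    """Score a pair of ingredients by how many flavor compounds they share."""
    return len(INGREDIENT_COMPOUNDS[a] & INGREDIENT_COMPOUNDS[b])

if __name__ == "__main__":
    pairs = combinations(INGREDIENT_COMPOUNDS, 2)
    best = max(pairs, key=lambda pair: pairing_score(*pair))
    print("most promising pairing:", best)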

IBM’s cognitive taste technology video is at http://tinyurl.com/dyg8ajx.

Seeing is believing, and IBM Research is working on cognitive computer vision systems that, according to IBM, will “help us understand the 500 billion photos we’re taking every year.” 

Although we know how the human eye processes images, today’s computing technology is unable to exactly replicate human sight.

IBM is trying to get the computer to “reason out” what a particular image is by having it detect patterns in a photo, or determine what it is “seeing” in video taken on a smartphone. 

For example, by providing photos of various sandy beach scenes to a cognitive computing program, it is thought the program would eventually be able to distinguish the individual features, color distributions, texture patterns, surroundings, and other information. Motion-related data would be used if the source were a video. 

The cognitive programming would “learn” the distinct features of various beach scenes and their surroundings. 

It is hoped that with enough learning, the cognitive computer would be able to distinguish one sandy beach from another, and thus determine if a shoreline it “sees” is located in France or in California. 
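
A bare-bones illustration of that learning step: represent each labeled beach photo by a crude color-distribution feature, then label a new photo by its closest match. The pixel values and beach labels in this Python sketch are invented, and real cognitive vision systems use far richer features than averaged colors.

# Classify a new photo by comparing its average color to labeled examples.
# Pixel values and labels are hypothetical.

def color_feature(pixels):
    """Average (R, G, B) over a list of pixel tuples."""
    n = len(pixels)
    return tuple(sum(p[i] for p in pixels) / n for i in range(3))

def distance(f1, f2):
    """Squared distance between two color features."""
    return sum((a - b) ** 2 for a, b in zip(f1, f2))

# Hypothetical training photos: golden Californian sand vs. paler French shoreline.
TRAINING_PHOTOS = {
    "California beach": [(230, 200, 150), (225, 195, 145), (60, 130, 200)],
    "French beach":     [(200, 200, 195), (190, 195, 190), (80, 110, 160)],
}
TRAINING_FEATURES = {label: color_feature(px) for label, px in TRAINING_PHOTOS.items()}

def classify(pixels):
    """Label a new photo by the closest learned color distribution."""
    feature = color_feature(pixels)
    return min(TRAINING_FEATURES, key=lambda label: distance(feature, TRAINING_FEATURES[label]))

if __name__ == "__main__":
    new_photo = [(228, 198, 148), (222, 192, 142), (65, 128, 198)]
    print("this shoreline looks like a", classify(new_photo))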

The idea is to have a cognitive computer learn various outdoor environments, including outdoor structures, plants, trees, animals, birds – you get the idea. Thus, the next time we are in France and hold our smartphone’s camera in front of, say, the Eiffel Tower (or an outdoor location we are unfamiliar with), the phone will “recognize” it and provide us with detailed information.

IBM’s sight technology video is at http://tinyurl.com/ccmbp2f.

This columnist would like to say Merry Christmas to his mom, and to all his faithful readers.