Jan. 2, 2012
by Mark Ollig
How did a French factory operating in 1912 improve sales of its chocolate products?
By creating imaginative promotional advertisement cards showing how technology would look 100 years into the future.
The folks at the Lombart chocolate company came up with the idea of using a “technology in the year 2012” theme to increase sales of their various chocolates.
These 1912 advertisement cards (about the size of a postcard) were quality-made, fully illustrated, and in color.
The cards were produced and printed by the highly regarded French Norgeu family.
These future-themed cards were titled “En l’an 2012,” meaning “In the year 2012.”
Six cards were made for the En l’an 2012 series.
Each skillfully hand-drawn card depicts how a specific future technology from the year 2012 would assist in the delivery of yummy-tasting Lombart chocolate.
Included with the chocolates Lombart sold to customers in 1912 was one of these 2012 cards.
The good folks at Lombart, no doubt, hoped these thought-provoking, futuristic 2012 cards would entice customers to want more cards, which would mean purchasing more of their chocolate.
One of the six 1912 cards that immediately caught this telecommunications veteran’s attention was titled “Picturephone of the year 2012.”
The card shows a father standing next to the mother, who is sitting down at a table and speaking into the transmitter of a circa 1912 telephone handset.
Both are also looking straight ahead at the living room wall.
The parents are watching and participating in a real-time video phone call with their son, who they can see is talking to them from his telephone in a distant location.
This real-time video phone call is displayed, or projected, onto the parents’ living room wall, roughly 5 feet in front of them.
The live broadcast transmission of their son seen on the wall is coming from a movie projector-like device sitting on the table, which is wired into a small, enclosed electrical device, along with the telephone handset the mother is using.
Since this card was intended to help sell more Lombart chocolate, the mother is reportedly saying “Hello, my child. We sent your chocolate Lombart by the aircraft.”
The Picturephone of the year 2012 as hand-drawn in 1912 can be seen at http://tinyurl.com/77qdqz7.
The oldest picture I found showing a phone call where both parties could see and talk with each other in real time was hand-drawn in 1879 by George Du Maurier.
In this picture, Du Maurier shows parents conversing live with their daughter over a large screen on a wall in their home using the Edison Telephonoscope.
Thomas Edison had envisioned a communication device that would “transmit light as well as sound” and be capable of showing real-time events, such as allowing two groups of people, who were separated by a great distance, to see and talk with each other in real-time.
George Du Maurier’s futuristic picture from 1879 can be seen at http://tinyurl.com/7mlpemp.
Little did they realize in 1879 (or 1912) that in 2012, we would be using software applications such as Skype, and Apple’s iChat and FaceTime. We also have Facebook’s and Google’s voice and video chat (and others) to use for video conferencing.
The 1912 depiction of the 2012 Picturephone projection apparatus is not so far-fetched.
Today’s portable, integrated pico-projector devices already beam images and video onto a wall – so, why not our video phone calls?
Logic Wireless Technologies has a built-in video projector on some of their cellphones; one is the Logic Axis Projector Phone. However, I do not believe it can project real-time video of a live phone conversation.
The aircraft of 2012 are shown in another colorfully hand-drawn Lombart card.
This card shows futuristic dirigibles or “lighter-than-air” aircraft, commonly known as blimps, floating in the night sky over London.
One dirigible is seen sitting atop a building delivering Lombart chocolates.
Moored balloons with attached, brightly shining globes float about 2,000 feet over London, spaced roughly 100 yards apart from each other.
The globes light up the night sky.
Bathed with light, the dirigibles appear to be in an airport-like holding pattern, waiting to deliver Lombart chocolate onto rooftop landing pads below them.
All the dirigibles have “Chocolat Lombart” written in bold, red lettering across their large, skeletal-framed, gas-filled balloon envelopes.
The interior-lit gondola crew cabin suspended under each dirigible looks like the caboose of a train and has two propellers attached to the front.
The automobiles of 2012 fly. They are shown with side-wings and a propeller fastened to the engine.
One flying car is seen landing to pick up some Lombart chocolate.
Hey, it’s 2012 . . . I am still waiting for my Jetsons flying car.
Another 1912 card shows a trip to the moon from Paris via a futuristic-looking 2012 “spaceplane.”
This spaceplane has an attached passenger cabin and a roof spotlight with its light beam focused directly at a large, full moon.
A card depicting a 2012 submarine shows people peering out a large cabin window while fish slowly swim by.
A male passenger talks on the intercom. He is probably asking the submarine’s pilot to stop at the nearest underwater Lombart chocolate store.
These futuristic images, created in 1912, are from the book “History of the Future: Images of the 21st Century” by Christophe Canto and Odile Faliu.
Telecommunications and all things tech has been a well-traveled road for me. I enjoy learning what is new in technology and sharing it with others who enjoy reading my particular slant on it via this blog. I am also a freelance columnist for my hometown's print and digital newspaper.- Mark Ollig
Wednesday, December 21, 2011
Looking back at this year's highlights
Dec. 26, 2011
by Mark Ollig
As this year ends, I want to express my appreciation to my readers for having spent a few moments of your time each week reading this column.
In looking back over this past year, January started with 140,000 people attending the annual Consumer Electronics Show (CES) in Las Vegas.
Highlights of the CES included “passive polarized” 3D television, and General Motors’ futuristic concept vehicle called the EN-V (Electric Networked Vehicle).
January ended with Apple’s Macworld Conference and Expo event in San Francisco.
In addition to more than 250 vendor exhibits, Apple presented dozens of new products to the nearly 20,000 visitors who attended.
In February, Watson, the smart supercomputer by IBM, made history when it played against (and defeated) two champion players on the television show “Jeopardy!”
We also learned about The Computer History Museum, located in Mountain View, CA, which has an in-house collection containing thousands of computer-related artifacts.
March 21 was the fifth anniversary of the first Twitter message: “twttr” sent by Twitter co-founder Jack Dorsey.
Coffee-Cam was the subject of the April 4 column.
In 1991, many of the University of Cambridge academic researchers (working in various parts of a multi-story building) needed to walk several flights of stairs in order to pour themselves a cup of coffee from the coffee maker.
Understandably, these folks became somewhat agitated whenever discovering an empty coffee pot.
Two of these resourceful researchers rigged up an electronic video “frame-grabber” device and captured time-sequenced, still-frame images from the video camera they had pointed at the coffee pot inside the coffee maker.
Updated images of the coffee pot were sent over the university’s local computer network, appearing in a corner of each researcher’s computer screen.
This delighted the researchers; they could now simply glance at their computer screen to know how much coffee remained in the coffee pot.
They no longer worried about holding their empty coffee cup in front of an empty coffee pot.
April also brought some computer nostalgia for the baby boomers.
The popular Commodore 64 computer from the early ’80s was remanufactured.
The C64 was fully-modernized on the inside, while retaining its vintage look on the outside.
On May 16, this columnist wrote about Roger Fidler’s futuristic 1994 video demonstration entitled “Tablet Newspaper.”
This video showed people using what looked like an Apple iPad – 17 years before it was made.
In June, Steve Jobs, CEO of Apple Inc., took the stage at the opening of Apple Computer’s Worldwide Developers Conference.
During this conference, an enthusiastic Steve Jobs talked about cloud computing and Apple’s iCloud data center.
Jobs said the center of our digital lives would be migrated into the cloud.
Jobs clearly illustrated Apple’s “next big insight” of demoting the PC and Mac to being devices like the iPhone, iPad, or iPod touch.
In July, Google’s field-test version of its new social media site, Google+, went online.
After testing Google+, I thought it would make a legitimate challenge to Facebook’s dominance.
I am still waiting for Google+ to make a legitimate challenge.
In August, we discovered how to save our pictures, music files, and other digital data for 1,000 years, by using the new M-DISC, made by Millenniata, Inc.
It started in 1996, when Barry M. Lunt, Ph.D., experienced a revelation while examining petroglyph (rock engraving) images northeast of Price, UT.
He realized these ancient images were created by etching or chipping away at the outer layer of the dark rock, which exposed the lighter layer of rock beneath its surface.
Lunt helped develop a method of permanently “etching” digital data onto a new type of DVD surface material.
During September, our friends at Microsoft released a developer preview of their new Windows 8 operating system.
On October 4, Apple Inc. did not present us with the much anticipated iPhone 5, but instead offered the iPhone 4S.
The “4S” was suggested to be an abbreviation meaning, “For Steve.”
On Oct. 5, the computing world mourned the loss of Apple Inc. co-founder, Steve Jobs.
During mid-October, the merging of film and fragrances inspired this columnist to write “Smell-O-Vision II.”
We journeyed back to 1906, inside a small-town movie theater, where Samuel Lionel Rothafel took a wad of cotton wool soaked with rose oil and placed it in front of an electric fan during a silent-film showing of the 1906 Rose Bowl parade.
The pleasurable aroma of fresh-cut roses drifted upon the people watching the film.
Today, a rectangular device called the Odoravison System, equipped with 128 fragrance capsules, is available for use with home theater systems.
November’s columns reviewed the benefits of medical robots, and Microsoft’s motion-sensing “Kinect Effect” add-on device for the Xbox 360 console.
This month, we learned about teens’ use of online social media sites, and the dominance of Google Search and tablet computing.
December also saw the return of Jessica’s favorite elf informant, Finarfin Elendil.
Bring on 2012.
This columnist is ready to write more about the Internet, ground-breaking technologies, social media, innovative high-tech companies, and new computing devices.
And, you know I like to look back at technology’s history every once in a while, too.
I also want to wish my brother Tom a very happy birthday.
Thursday, December 15, 2011
Yes Jessica, Santa Claus uses computers
Dec. 19, 2011
by Mark Ollig
One of my readers, Jessica, asked me a question I promised to investigate and write about before Christmas Day 2008.
Jessica wanted to know if I could find out whether Santa Claus used computers to help him deliver Christmas presents.
I emailed my entire list of elf contacts at the North Pole, hoping one would get back to me before the newspaper’s holiday deadline.
To my surprise, one game elf did reply.
And who was this accommodating gnome?
As some of you may remember, his name was Finarfin Elendil.
The following is what I wrote for Jessica (with revisions).
Dear Jessica, Santa does indeed use computers when delivering those Christmas presents.
That smiling, well-nourished, red-cheeked, jolly old man with the white beard is, in fact, extremely computer-savvy.
You see, during the Christmas off-season, the geekier elves, along with Santa, attend advanced computer technical training classes at an undisclosed location in Redmond, WA.
One cooperative elf, Finarfin Elendil, gave me the inside scoop about the “Santa Claus Super Computer Center” (SCSCC), which is located near the North Pole’s largest toy-making factory.
The SCSCC is highly computerized and totally cloaked, rendering it undetectable by earth-orbiting satellites, high-altitude surveillance aircraft, and those Google street-view mapping cars.
Santa doesn’t mention the SCSCC when he’s out in public – he mainly concentrates on asking children if they were good this year, along with what they want for Christmas.
Finarfin Elendil reported Santa uses the SCSCC as the North Pole’s Christmas command and control center – and to store and maintain Santa’s new high-tech Christmas sleigh.
The SCSCC’s hangar bay is home to Santa’s newest mode of travel for delivering presents during Christmas: Sleigh-One.
Let it be known that Sleigh-One is not your average wooden toboggan.
Sleigh-One is a state-of-the-art, fully computerized, jumbo-sized, high-flying bobsled.
Its on-board computer receives in-flight location coordinates via an enhanced global positioning system.
Reindeer pulling efficiency, or mpr (miles per reindeer), is conveniently displayed on the cluster instrument panel.
Sleigh-One also receives “toy-inventory-remaining” Facebook telemetry and updated “who’s-naughty-or-nice” Twitter reports from the elves broadcasting back at the SCSCC.
Sleigh-One communicates using 3G technology, but Elf rumor has it Santa will be upgrading to 4G LTE wireless broadband transceivers soon.
Santa was said to have chuckled when he learned the helper elves traveling on Sleigh-One had installed eggnog cup holders next to their seat armrests, just like the one on Santa’s driver-side armrest.
The SCSCC is home to the world’s most advanced supercomputer.
This supercomputer is exceptionally sophisticated; your humble columnist thinks Santa and the elves magically performed reverse-engineering on some highly-advanced extraterrestrial technology obtained from inside Area 51.
Finarfin Elendil described how the supercomputer’s data-stream is algorithmically encrypted, using session initiation protocol signaling transferred through nanotubed optical-fiber bus architectures within the North Pole’s local area network.
And to think this is all maintained by those geeky elves who take off-season computer classes . . . amazing.
These same elves also designed and manufactured the supercomputer’s E1 (Elf-1) Multi-Quad-Core-Super-Duper processor chip.
The E1 can process up to four hundred quindecillion FLOPS (floating-point operations per second).
Finarfin Elendil brags how the engineers from computer chip-maker, Intel Corp., are always asking the elves for advice.
Santa uses the E1’s processing speed to instantly map the exact coordinates of every rooftop and fireplace chimney throughout the world where he needs to deliver the good girls’ and boys’ Christmas presents.
If the home has no chimney, the supercomputer will automatically execute a “backdoor” software program Santa wrote, which provides an alternate access solution.
Finarfin Elendil confirmed this year’s Christmas Eve reindeer sleigh team will once again be composed of Dasher, Dancer, Prancer, Vixen, Comet, Cupid, Donner, and Blitzen.
However, in the event of an emergency (sick reindeer), Sleigh-One has a built-in navigational program which activates the experienced Automatic Assistance Reindeer Pilot (AARP).
And, because of his extremely shiny red nose, Rudolph the Red-Nosed Reindeer has been designated by Santa Claus himself to be Reindeer One and to guide Santa’s mighty reindeer sleigh team around the world on Christmas Eve.
In order for Sleigh-One to deliver every single Christmas present over a 24-hour period, the sleigh needs to “push the pedal to the metal,” Finarfin Elendil quoted Santa as saying.
Rudolph wanted the sleigh to go faster than the speed of light so he could show off in front of the does, but Santa nixed the idea, saying he did not want to travel that fast.
Santa explained going faster than the speed of light would cause the bright, fog-piercing, red beam of light from Rudolph’s nose to bend around and shine behind the sleigh instead of in front of it.
Santa worried this might create a reverse time-line anomaly, triggering a space-time continuum vortex, causing the children’s Christmas presents to be delivered years before they were born.
Rest assured, Jessica, the Christmas presents are safely delivered by Santa, the helper elves, Rudolph, and the rest of the reindeer, in a sleigh never traveling faster than the speed of light.
But I digress.
I hope Jessica (and all of you) enjoyed reading this story as much as I did writing it.
And remember, the word “Christmas” comes from the very old phrase “Cristes maesse,” which means “Christ’s mass.”
Dec. 25, Christmas Day, is when Christians all over the world celebrate the birth of Jesus Christ.
Merry Christmas.
Thursday, December 8, 2011
Teens' interactions using online social networking sites
Dec. 12, 2011
by Mark Ollig
As teenagers, most of us baby boomers did not go on the Internet in order to participate in our social networks.
Of course, back then, there were no Internet social networking sites for us to use.
Instead of the Internet, we would do our socializing at school and sporting events, dance halls, roller skating rinks, bowling alleys, restaurants, local street corners, theaters, or at friends’ houses.
Boomers, feel free to add your favorite locations for socializing to the list.
Some of us, who grew up as teens during the ‘70s, also thought of computers as complicated devices used by the military, NASA, scientists, large corporations, and “nerdy computer hobbyists.”
A new report, “Teens, Kindness and Cruelty on Social Network Sites,” was released last month by the Pew Internet and American Life Project, in collaboration with the Family Online Safety Institute and Cable in the Classroom.
The report states that today, 95 percent of all teens ages 12 to 17 are doing their socializing online, via the Internet.
The Internet is being used by many teens as their main venue for social networking with friends, family, and others.
Which online social networking sites are teens using?
The report says 80 percent of teens actively participating within online social networking sites are mainly using Twitter, Facebook, and MySpace.
Of all online social networking sites, Facebook dominates with teens, as 93 percent reported having an account there.
MySpace was being used by 24 percent of teens surveyed.
Twitter came in at 12 percent, while 7 percent said they had an account with Yahoo.
Six percent of teens reported having an account on YouTube, whereas 2 percent had an account on Skype, myYearbook, or Tumblr.
Having only one online social networking account was reported by 59 percent of teens, while 41 percent said they have accounts on multiple social networking sites.
Of that 41 percent, 99 percent of the teens surveyed named Facebook as one of those accounts.
Of the teens who said they have only one social networking account, 89 percent disclosed that account was Facebook.
No numbers were given for the new Google Plus social networking site because the survey was already being conducted when Google Plus was released.
While participating within online social networking sites and chat rooms, teens can sometimes be exposed to difficult and unpleasant experiences.
One of the report’s surveys asked teens where they get advice on how to use the Internet “responsibly and safely.”
Most teens (86 percent) report getting this advice from their parents.
Seventy percent of teens surveyed reported receiving advice and information about online safety from a teacher or another adult at school.
The media, including television, radio, newspapers, and magazines, accounted for the information obtained by 54 percent of teens.
Attending a specific school event about Internet online safety was also reported by teens as being a source for information.
The report noted most teens received advice from various sources regarding Internet online safety.
These sources include:
• Parents: 86 percent.
• Teachers: 70 percent.
• Media: 54 percent.
• Sibling or cousin: 46 percent.
• Friend: 45 percent.
• Older relative: 45 percent.
• Youth/church group leader/coach: 40 percent.
• Websites: 34 percent.
• Librarian: 18 percent.
• Someone/something else: 10 percent.
The study reports that teens having positive experiences within an online social networking site (strengthened friendships, positive feelings about themselves) are more likely to seek out advice about any negative issues encountered.
Teens who witnessed meanness or negativity directed at someone online (while it was occurring) were asked if they had sought somebody out for advice on what to do. Of the teens responding, 36 percent said they did seek out advice, while 64 percent said they did not.
However, after witnessing or having been personally involved in a bad online experience (harassment, cyber-bullying, or other cruelty), 56 percent of the teens said they did seek out advice, or talked about the negative experience with a friend.
Parents were asked for advice by one-third of the teens responding to the survey, while a smaller number of teens said they would ask a teacher, sibling, or cousin for advice after going through a bad online experience.
Some teens said they would seek counsel from a youth pastor, religious leader, uncle or aunt – or even a website – on how to cope with a negative online experience.
When asked who has been the biggest influence on what a teen thinks is appropriate or inappropriate behavior when online, parents were said to be the number-one influence by 58 percent of teens surveyed, followed by friends at 18 percent, and a brother or sister at 11 percent.
When teens were asked if their parents had talked with them about their online activities, 82 percent said they had.
Of the parents surveyed by Pew, 41 percent reported “friending” their child on an online social networking site (like Facebook).
Paying more attention to monitoring their children’s online website activities was reported by 77 percent of parents surveyed.
The full report, “Teens, Kindness and Cruelty on Social Network Sites” can be read at http://tinyurl.com/8xad35p.
A list of 204 online social networking sites can be seen at http://tinyurl.com/k2jhx.
Friday, December 2, 2011
Google's dominance of online search continues
Dec. 5, 2011
by Mark Ollig
Google began as a research project in 1996 by two Ph.D. students at Stanford University in Palo Alto, CA.
“The Anatomy of a Large-Scale Hypertextual Web Search Engine” was the name of the scientific paper written in 1998 by Google founders, Sergey Brin and Lawrence Page.
This paper described the original Google prototype, and the important computational algorithm called PageRank.
Using more than 100 algorithmic factors, Google bases the top search results presented to the user on the pages that earn the higher rankings.
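For readers curious about the math behind PageRank, here is a minimal sketch of the idea in Python, using a tiny, made-up three-page link graph (the page names and numbers are illustrative only; this is not Google's actual code or data). Each page's rank is repeatedly recalculated from the ranks of the pages linking to it.

# A toy PageRank calculation on an invented three-page link graph.
# This illustrates the idea only; it is not Google's implementation.
links = {
    "home": ["about", "news"],
    "about": ["home"],
    "news": ["home", "about"],
}
damping = 0.85  # damping factor described in the original 1998 paper
rank = {page: 1.0 / len(links) for page in links}  # start with equal ranks

for _ in range(20):  # iterate until the ranks settle
    new_rank = {}
    for page in links:
        # Each page linking here passes along a share of its own rank.
        incoming = sum(
            rank[other] / len(outgoing)
            for other, outgoing in links.items()
            if page in outgoing
        )
        new_rank[page] = (1 - damping) / len(links) + damping * incoming
    rank = new_rank

print(rank)  # pages with more (and better-ranked) inbound links score higher

Run it, and the “home” page ends up with the highest rank, simply because both of the other pages link to it.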
Google is certainly unique in how it performs text searches.
Its basic word search engine has three separate parts.
The first is called a Googlebot.
This web crawler (a computer software program) speedily “crawls” and browses through the hyperlinked pages of the Web, quickly making copies of web pages in a logical, automated manner.
Next, the Google indexer sorts out every word on every page and stores this index of words in a very large computer server database.
Currently, I believe Google has six of these monster-sized databases.
The content found inside the Google index servers is comparable to the index in the back of a book; it tells which pages contain the words that match any particular query-search term.
The third part is the query processor, which compares your search query against the index and presents, on the results page, the documents it considers most applicable, based on its special software algorithms.
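To make those three parts a little more concrete, here is a toy sketch in Python, with invented page addresses and text, of how crawled copies might be indexed and then queried. It is only an illustration of the crawl-index-query idea described above, not Google's software.

# Toy crawl/index/query illustration (invented pages; not Google's code).
pages = {  # what a crawler might have copied: address -> page text
    "example.com/a": "lombart chocolate delivered by dirigible",
    "example.com/b": "picturephone calls projected on the wall",
    "example.com/c": "chocolate cards from the year 1912",
}

# Indexer: build an inverted index mapping each word to the pages containing it.
index = {}
for url, text in pages.items():
    for word in set(text.split()):
        index.setdefault(word, set()).add(url)

# Query processor: return only the pages containing every word in the query.
def search(query):
    results = None
    for word in query.lower().split():
        matches = index.get(word, set())
        results = matches if results is None else results & matches
    return sorted(results or [])

print(search("chocolate"))       # finds both pages mentioning chocolate
print(search("chocolate 1912"))  # finds only the page containing both words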
Google, according to the June 2011 comScore Core Search Report, owned a commanding 65.5 percent of the online search engine market.
This percentage alone leaves little doubt about its dominance in the online search engine world.
However, a couple of serious competitors will just not go away.
They are Yahoo and Microsoft’s Bing.
These two competitors, according to the same June 2011 report, control a combined 30.3 percent of the total online search engine pie.
Yahoo’s search engine (online since 1994) maintained a 15.9 percent market share.
Bing (online since 2009) controlled a 14.4 percent share of the online search market.
I also want to mention Ask.com (launched as Ask Jeeves in 1996), which came in with a 2.9 percent share.
During the month of June, 2.4 billion searches were performed on Bing, 2.7 billion using Yahoo, and 10.9 billion with Google.
It is apparent from these numbers that Google will continue to dominate the online search world for the foreseeable future.
Yours truly regularly uses Google because it performs a thorough search of the Web, it is well supported, and Google keeps adding new and interesting search tools to it.
The newest search tool is Google’s “Search by Image.” This uses a photo from the Web or your own uploaded picture as the search input. Google will use your photo to search for related text information or similar images on the Web.
This feature is somewhat comparable to the Google Goggles visual search app, which is available for devices running Android 2.1 or an iPhone running iOS 4.0.
Search by Image allows for a picture search on a variety of subjects such as art, venues, and, according to Google, “mysterious creatures.”
You can try Google’s Search by Image at http://images.google.com and just click the camera icon.
The Small Business Administration (SBA) partnered with Google for assistance in providing small businesses with tools to help them achieve online success. To see their video about a hair salon business that went online, check out http://tinyurl.com/35pyr3q.
Some folks have questioned whether Google, because it is so large, should be regulated like a public utility.
Google responds by saying the online user has a choice whether or not to use its services.
Google’s page ranking seems to be another point of concern, as some people feel their businesses are not fairly ranked.
Google responds to this by saying they “never take actions that would hurt a specific website for competitive reasons.”
The search quality and results, according to Google, are provided only on the basis of what is useful for consumers.
As far as the dollars go, research shows Google generated $64 billion in US economic activity in 2010.
In Minnesota, this amount was $1.07 billion.
Google said these numbers are obtained by examining the number of businesses, website publishers, and non-profits using their search and advertising tools.
As all of us know, we do have the power to click onto any search engine (or other online source) we choose to use.
Lately, I have noticed people searching for information by querying social networks such as Twitter and Facebook.
People are posting messages asking for information on various topics . . . and sometimes they end up participating in productive online conversations within these social networks.
“My dream has always been to build a ‘Star Trek’ computer, and in my ideal world, I would be able to walk up to a computer and say, ‘what is the best time for me to sow seeds in India, given that [the] monsoon was early this year,’” said Amit Singhal, a software engineer with Google.
Singhal received his M.S. degree from the University of Minnesota, Duluth in 1991.
Monday, November 28, 2011
Tablet computing use is flourishing
Nov. 28, 2011
by Mark Ollig
Apple’s iPad is, of course, on top of the computing tablet world, and the latest numbers show no significant letup in its popularity.
The research firm eMarketer publishes analysis and data regarding digital marketing, media, and commerce.
They just released new survey numbers and predictions about the use of tablet computing devices among Americans.
By the end of this year – which is almost upon us – the research firm predicts there will be 33.7 million Americans, or 10.8 percent of the population, using a computing tablet device at least once every month.
By comparison, in 2010, there were only 13 million US tablet users, with the majority, 11.5 million, being Apple iPad users.
It seems like we have had tablet computers with us for a long time, but it was only recently, April 2010, when the first Apple iPad tablet was made available to the public.
By the end of this year, an estimated 33.7 million US citizens will be using tablet devices on a monthly basis. Of this total, 28 million will be using iPads.
Tablet users aged 18-34 represent 31.5 percent of this year’s total users, while those aged 35 and over make up 55.5 percent.
With the many accessories one can add to a mobile computing tablet, it seems more folks are simply abandoning their laptops and opting to use tablets instead.
By the end of 2012, it is estimated there will be 54.8 million US tablet users; of this number, approximately 42 million will be operating iPads.
The 2012 numbers indicate a 62.8 percent increase from this year.
Looking ahead to 2014, it is predicted the 18-34 age group will account for 34.8 percent of all tablet users, while those over age 35 will account for 49.3 percent.
All signs show steady growth in the use of tablet computing devices continuing into the future.
One very interesting number is the research projection showing 89.5 million US tablet users by 2014.
This number alone represents 35 percent of all US Internet users.
Of the 89.5 million, 61 million are anticipated to still be using Apple iPads.
As tablet computing devices continue being introduced into more school teaching curriculums, I look for the total number of tablet users to be much higher than the 2014-projected 89.5 million.
Unless some “killer tablet” appears in the marketplace, Apple’s iPad will continue its dominance as the consumer’s preferred tablet computing device for the foreseeable future.
But hold the phone.
The new Amazon Kindle Fire just might be the tablet to make a legitimate charge at iPad’s dominance.
Another research firm, ChangeWave Research, released survey information showing the Amazon Kindle Fire will be right behind the Apple iPad as the tablet of choice for this year’s North American holiday shoppers.
Currently, 74 percent of iPad owners report they are very satisfied with their mobile device, the researching group said.
eMarketer states that many of the tablets in use in 2014 will be newer-style devices that will have replaced the older tablets.
Not that your humble columnist has any inside knowledge, but I’ll take a guess that Apple will probably be releasing their iPad 5 during 2014.
Another prediction says that instead of sharing a tablet computer among many users, each user will be buying their own tablet, much the same way a person now buys their own smartphone.
There were no hard numbers provided for iPad tablet competitors such as the new Kindle Fire, Nook Tablet, BlackBerry Playbook, or Samsung Galaxy Tab, which currently trail behind Apple iPad tablets in total units sold.
However, there are numbers which do forecast a slight decrease in iPad tablet users by 2014.
The research by eMarketer shows that by 2014, the iPad’s share of tablet users will be down to 68 percent. While this is still a substantial percentage, it does represent a 15-point decrease from today’s share.
By 2014, one out of three online users will be using some kind of tablet computing device.
Although yours truly has used an iPad tablet, I still prefer my laptop; however, I can understand why tablets will most likely overtake laptops/notebooks for accessing the Internet – from a portability perspective.
I still see tablet computing devices as more of a media content consumption device; however, with more innovative accessories being added to them, they are in fact, becoming viable user content creation devices in their own right.
Tablet computing devices are also being used creatively in displaying content to consumers in various venues, such as art shows.
One tablet owner I observed – at a recent college art show in Minneapolis – skillfully exhibited his paintings on the display screen of his iPad to interested buyers who had gathered around him.
I learned on that day tablets also make excellent mobile content presentation devices.
Thursday, November 17, 2011
Medical robotic devices show great promise for society
Nov. 21, 2011
by Mark Ollig
“It may be difficult to predict the future, but the era of an aging society is definitely coming.”
These words were spoken by Professor Eiichi Saito.
Saito is a professor of rehabilitation medicine at Japan’s Fujita Health University.
With nearly one in four Japanese aged 65 and older, new computerized robotic devices are being tested in hopes of providing greater assistance and independence to this segment of the population.
During a recent demonstration, Saito, who normally uses a walker, instead strapped a computerized, metallic brace device called an “Independent Walk Assist” onto his right leg.
Saito’s right leg is paralyzed because of polio.
With a smile, Saito proceeded to effortlessly rise up from his chair, and walk across the stage floor. He then easily walked up and down a flight of three stairs.
He noted his improved ability to walk and bend his knee more naturally using the computerized metallic brace device, rather than the walker. He also said how much easier it was for him to rise up from the chair.
The power and sensors for the movements made by the computerized metallic brace came from a small backpack Saito wore.
Another computerized robotic device, demonstrated by a health care worker, showed how it could lift and move a disabled patient from his bed to another location.
These demonstrations were being presented in front of reporters at a Toyota showroom facility in Tokyo.
Officials from Toyota said the sensors, motors, and computer software technology used in their automobiles are now being utilized in new computerized devices to help people become more mobile.
What Toyota learns from these new devices will most likely be used in future cars as well, the company said.
According to Akifumi Tamaoki, general manager of Toyota, additional tests and user feedback from more people are needed to ensure the safety and reliability of the new devices.
Toyota plans to make its robotic health care devices available in the marketplace during 2013.
A video showing the two demonstrations can be seen at http://tinyurl.com/6v937hn.
The “Stride Management Assist Device” by Honda can assist people who have lost strength in their legs due to aging or other physical conditions.
This device is not intended for those who have lost total mobility in their legs; it is used for those who need an “assist” in their walking.
The assist device lifts each leg at the thigh, using a small motor to help the user as their leg moves forward and backward.
The unit weighs about 6 pounds, and includes a belt worn around the hips and thighs.
The device uses hip and ankle sensors, which send data to a computing processor attached to a portable unit positioned on the small of one’s back.
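As a rough way to picture the control idea just described, here is a tiny Python sketch of a sensor-driven walking assist. Every name, gain, and unit in it is hypothetical; it is only meant to illustrate the sensors-to-motor loop, not Honda's actual software.

```python
# Hypothetical walking-assist sketch: a hip sensor reading is turned into a
# gentle motor torque that helps swing the leg at the thigh. Illustrative only.

def assist_torque(hip_velocity_deg_s: float) -> float:
    """Return a small assist torque (N*m) in the direction the thigh is swinging."""
    GAIN = 0.05        # illustrative proportional gain
    MAX_TORQUE = 4.0   # cap the assist so the motor stays gentle
    torque = GAIN * hip_velocity_deg_s
    return max(-MAX_TORQUE, min(MAX_TORQUE, torque))

# Example: the thigh is swinging forward at 60 degrees per second.
print(f"assist torque: {assist_torque(60.0):.1f} N*m")
```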
The Stride Management Assist Device can help not just the elderly, but also those who have trouble walking from other physical ailments, such as strokes.
“It’s supporting and stabilizing,” commented CBS News Medical Correspondent Dr. Jennifer Ashton, while demonstrating the device during the CBS “The Early Show” television program.
This device also helps lengthen the walking stride of the user.
“This actually engages more muscles than if you take shorter strides, so it’s actually preventing subsequent muscle atrophy,” Dr. Ashton said.
A team of scientists from Sweden and Italy has developed what is thought to be the first artificial robotic hand that conveys feeling back to the human user.
Called “SmartHand,” this prototype is a five-fingered, self-contained robotic hand with four motors and 40 individual sensors.
A team of scientists in Sweden attached this robotic hand to a 22-year-old amputee, Robin Ekenstam, who had lost his right hand to cancer.
They connected nerve pressure sensor endings onto selected areas of skin on Ekenstam’s right arm. These sensors stimulate specific receptor areas of his brain’s cortex.
The sensor endings were also attached to the tiny sensor receptors in the robotic hand fastened to the end of Ekenstam’s arm.
I watched video of Ekenstam controlling the fingers of the robotic hand while grasping and picking up a filled, plastic water bottle. He then proceeded to pour water from the bottle into a cup sitting on a table.
Ekenstam’s brain was picking up information from the sensors inside the artificial hand, and the artificial hand was receiving signals from his brain.
“It’s great!” Ekenstam exclaimed, “I have a feeling that I have not had for a long time. When I grab something tightly, I can feel it in the fingertips, which is strange, because I don’t have them [human fingers] anymore. It’s fantastic.”
Robin Ekenstam could once again touch and feel by using his new robotic right hand.
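To make the two-way signal flow described above a little more concrete, here is a minimal Python sketch of a closed sensory-feedback loop. The function names, signal ranges, and gain are hypothetical; this is an illustration of the concept, not the SmartHand team's control software.

```python
# Hypothetical closed-loop sketch: muscle/nerve signals from the arm drive the
# prosthetic grip, while fingertip pressure in the prosthesis drives stimulation
# of the skin sensors on the arm. All names and numbers are illustrative.

def control_step(muscle_signal: float, fingertip_pressure: float):
    """One loop iteration: returns (grip motor command, skin stimulation level)."""
    grip_command = min(1.0, max(0.0, muscle_signal))   # arm/brain -> robotic hand
    skin_stimulation = 0.8 * fingertip_pressure        # robotic hand -> arm/brain
    return grip_command, skin_stimulation

# Example: a firm grip request while the fingertips sense moderate pressure.
motor, feedback = control_step(muscle_signal=0.7, fingertip_pressure=0.4)
print(f"grip command: {motor:.2f}, skin stimulation: {feedback:.2f}")
```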
Scientists say it has taken 10 years to get to this stage.
“First, the brain will control them [artificial hands] without any muscle contractions; secondly, these hands will be able to give back feedback, so that the patient will be able to feel what’s going on . . . by touching, just like a real hand,” said Christian Cipriani, who authored the May 2011 research paper, “The SmartHand Transradial Prosthesis.”
To watch a remarkable video about this new bio-engineered robotic SmartHand, go to http://tinyurl.com/7a5hfaw.
Thursday, November 10, 2011
How many apps are on your mobile device?
Nov. 14, 2011
by Mark Ollig
In 2007, Apple first introduced apps, or mobile device software applications, designed to be used on their new iPhone.
Our good friends over at the Pew Internet and American Life Project just released a new report regarding adult users of cell phones and mobile computing devices, and the types of apps they use.
Pew defines an app as, “an end-user software application designed for the mobile device operating system, which extends that device’s capabilities.”
The Pew report starts off by disclosing half of all adult cell phone (smartphone) users have apps on their phones.
These include apps which originally came bundled with the phone, and apps downloaded from online platforms such as Apple’s App Store, Mac App Store, and Amazon Appstore.
Just what kinds of apps are users downloading?
Pew’s July 25 - Aug. 26 polling produced the following survey results.
Of all adults polled, 74 percent like apps which provide them with news, weather, sports, and stock information.
Apps for communicating better with friends and family were downloaded by 67 percent of adults surveyed.
Apps that educated and assisted in learning were downloaded by 64 percent of all adults questioned.
Downloading an app containing information about a visited destination was reported by 53 percent.
Support for making online purchases and assistance with shopping, were the reasons 46 percent of surveyed adults downloaded these types of apps.
Apps for watching movies or TV shows on mobile devices were downloaded by 43 percent.
Apps which tracked or managed one’s health information were downloaded by 29 percent of the adults questioned.
It is one thing to have lots of apps available on our mobile devices; however, it is a bit surprising to discover how seldom we use many of them – some apps we rarely use at all.
Pew found 51 percent of adults saying they only use a “handful” of apps per week, while 17 percent report “using no apps on a regular basis,” on their cell phones.
As for yours truly, names of apps I use on a daily basis include: Weather, Mail, Calendar, Stocks, Fluent News, Facebook and Twitter.
Apps I use during the week are: YouTube, NASA News, Merriam-Webster Dictionary, Kindle (e-book reader), iHeart Radio, my bank’s app, and a tech news reader app.
Other apps I use include: Dragon Dictation, Chess, Flick Bowling, C-Span Radio, US History, US Documents, Spanish (translator), Skype, Calculator, Speed Test, Battery Magic, and Maps.
My occasionally used apps are: Crime Reports, Police Scanner, Flight Track, iBartender, Dragon Dictate, and Ali Audio Jabs (this app contains 10 spoken phrases from boxer Muhammad Ali).
Currently, the top downloaded free iPhone app is, “Facebook Messenger,” followed by, “Hardest Game Ever.”
“Zombieville USA 2,” followed by the popular, “Angry Birds,” is the top downloaded paid iPhone app at this time.
Coincidentally, the top downloaded paid iPad app is also, “Zombieville USA 2.”
“Sprinkle: Water Splashing Fire Fighting Fun!” is presently the top free iPad app, followed by “Adobe Reader.”
The report revealed 52 percent of the adults polled paid $5 or less for an app, while 17 percent said they paid over $20 for an app.
There are quite a few apps available for free, and many priced at just 99 cents.
Pew’s August poll reports that among adult cell phone users in the 18 - 29 age group, 60 percent have downloaded apps to their phones.
This represents an increase of 8 percentage points since May of last year.
In the 30 - 49 age group, 46 percent reported downloading apps to their phones during Pew’s August survey, which is an increase of 15 percentage points since the May 2010 polling data.
As for the last age group, 50 and better, Pew states 15 percent responded saying they had downloaded apps to their cell phones when polled during the August 2011 survey.
This represents a 4 percentage point increase from the number polled in May 2010.
Pew’s latest results also showed a steady increase in the use of tablet computing devices by adults.
This data revealed that 10 percent of US adults own a mobile computing tablet device such as the Apple iPad, Samsung Galaxy Tab, or Motorola Xoom.
Among adult tablet computing device users, 39 percent use six or more apps per week.
Of these same tablet owners, 82 percent reported downloading apps to their cell phones as well.
There are approximately 500,000 apps available for the iPhone, and over 100,000 apps for the iPad.
And so, my faithful readers, we continue transitioning away from stationary desktop computers plugged into a wall, and toward a more mobile computing lifestyle.
With seemingly countless apps available to us, we are using more of them on our mobile computing devices for communicating, reading e-books, accomplishing our work, accessing information, and enjoying leisure-related computing activities.
As we come to an end of another Bits and Bytes column, let me play a random phrase from my Ali Audio Jabs app for you.
“This brash young boxer is something to see, and the heavyweight championship is his destiny.”
Have a great week!
Wednesday, November 2, 2011
Microsoft's 'Kinect Effect' for real-world applications
Nov. 7, 2011
by Mark Ollig
It has been one year since Microsoft’s Xbox 360 game console scored a big hit with its motion-sensing “Kinect” add-on device.
Kinect is a combination of the words “kinetic” and “connect.”
Kinect was originally announced to the public as Project Natal during the June 1, 2009 E3 (Electronic Entertainment Expo), by famed filmmaker Steven Spielberg.
The Kinect device (which holds the Guinness World Record as the fastest-selling consumer electronics device) allows users of the Xbox 360 home entertainment and video game system to become more physically immersed in its games by using hand gestures, spoken commands, and physical movements.
Instead of using hand-held controllers, the players themselves physically (and vocally) become the controllers, communicating with the Kinect attached to the Xbox 360 console.
Kinect provides Xbox 360 game users with a more engaging playing experience, via its full-body sensor tracking of the participating players.
Just step in front of the Kinect device’s sensors and it will recognize you, and respond to your gestures, vocal commands, and movements.
Kinect’s human “skeletal” sensor tracking system monitors the movements of up to six people.
Kinect can track 20 separate physical skeletal joint movements of two active players.
The two active players can be shown as live avatars on the Xbox 360 games display screen.
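To give a feel for what that skeletal tracking data looks like, here is a small plain-Python sketch of a per-frame skeleton structure with 20 joints for each of two active players. The joint names and the structure are illustrative only; this is not Microsoft's Kinect SDK.

```python
# Illustrative sketch (not Microsoft's Kinect SDK) of the kind of data a
# skeletal tracker like the one described above produces each frame:
# up to six people detected, with full 20-joint tracking for two active players.

JOINTS = [
    "head", "shoulder_center", "spine", "hip_center",
    "shoulder_left", "elbow_left", "wrist_left", "hand_left",
    "shoulder_right", "elbow_right", "wrist_right", "hand_right",
    "hip_left", "knee_left", "ankle_left", "foot_left",
    "hip_right", "knee_right", "ankle_right", "foot_right",
]  # 20 joints, as noted in the column

def make_skeleton_frame(active_player_ids):
    """Return one hypothetical frame: (x, y, z) positions for each tracked joint."""
    return {
        player: {joint: (0.0, 0.0, 0.0) for joint in JOINTS}  # placeholder positions
        for player in active_player_ids[:2]                   # only two players fully tracked
    }

frame = make_skeleton_frame(["player_1", "player_2"])
print(len(frame["player_1"]))  # 20 joints per active player
```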
Kinect includes four microphones, supports single speaker voice recognition, and can locate the source of sounds around it.
Kinect obtains its processing power by using one of the three cores of the Xbox 360 console’s Xenon CPU.
Microsoft is now taking the Kinect technology used with the Xbox 360 console and moving it into real-world applications.
Microsoft, in their Oct. 31 press release, stated that the new commercial version of Kinect, which they call “Kinect Effect” will give “. . . businesses the tools to develop applications that not only could improve their own operations, but potentially revolutionize entire industries.”
Microsoft’s concept video demonstrating commercial uses for Kinect is viewable at http://tinyurl.com/3gnlxam.
Microsoft’s website showed real-life examples of how Kinect technology has improved the quality of life for people who are overcoming physical challenges.
In one example, persons with learning challenges at the Royal Berkshire Hospital (across the pond in the UK) were shown using the Xbox Kinect system as part of their rehabilitation exercises.
People in the hospital’s neurological rehab unit are being matched to specific interactive Kinect game titles, depending on the severity of their learning challenge.
The hospital also says the games have helped stroke patients physically, improving their balance, mobility, and coordination.
Another beneficial example of applying Kinect technology is at the Lakeside Center for Autism in Issaquah, WA.
The Lakeside staff uses Kinect technology in its therapy and skill-building programs.
Lakeside uses Kinect’s motion-sensor capabilities to observe patients' motor skills, and its voice-recognition technology to help improve patients’ social interaction and language development.
In Cantabria, Spain, some resourceful inventors at a technology start-up, called Tedesys, are developing applications using Kinect technology that will allow doctors to obtain vital patient information – while operating on them.
Because surgeries can last for hours, a doctor may need to look up details on a certain operating procedure, or obtain information from an MRI or CAT scan.
In these instances, the doctor would need to leave the sterile operating room environment to get the information – and then re-scrub, in order to come back into the operating room.
Using the Tedesys-developed Kinect application, the doctor is now able to simply use hand gestures or voice commands to look at information hands-free – without ever having to leave the operating room.
“Using Microsoft Kinect, they [doctors] can check information on the patient without touching anything, and in this way they can avoid [the risk] of bacterial infection,” said Jesus Perez, Tedesys’s chief operating officer.
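Here is a minimal Python sketch of that hands-free idea: recognized gestures or spoken commands are mapped to read-only display actions, so the surgeon never has to touch a keyboard or mouse. The command names and actions are hypothetical, not Tedesys's actual application.

```python
# Hypothetical gesture/voice command dispatcher for hands-free viewing.
# Command names and actions are illustrative only.

def show_mri():        print("Displaying the patient's MRI scan")
def show_procedure():  print("Displaying the operating-procedure notes")
def next_image():      print("Advancing to the next image")

COMMANDS = {
    "swipe_right": next_image,         # hand gesture
    "voice: show scan": show_mri,      # spoken command
    "voice: show notes": show_procedure,
}

def handle(recognized_input: str):
    action = COMMANDS.get(recognized_input)
    if action:
        action()

handle("voice: show scan")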
After enabling the Microsoft Research WorldWide Telescope to work with Kinect’s Windows Software Development Kit (beta version), a Microsoft researcher demonstrated how he could easily maneuver around our galaxy with just a wave of his hand.
Another Microsoft demonstration showed an ordinary lounge chair atop an electrically motorized, wheeled platform, which the person seated in the chair could drive using hand gestures recognized by Kinect.
“I think it opens up this realm of new experiences that are all about kinetics, not only physically-immersive games, but all kinds of new experiences,” said Jeremy Gibson, a game design instructor at the University of Southern California.
Gibson also suggested the main reason Kinect has moved from the living room into real-world uses is how widely available the advanced technology has become.
In his classroom, Gibson teaches game design and game prototyping. He and his colleagues teach students to develop applications using Kinect on the Xbox 360. Gibson says this gives the students hands-on experience using the new technology before they “enter the real world.”
In what began as an Xbox 360 gaming add-on device, Kinect technology is quickly evolving into some very useful, real-world applications that are improving the quality of people’s lives.
To learn more about the Kinect Effect, go to http://tinyurl.com/42o862h.
Thursday, October 27, 2011
They could have owned the computer industry
Oct. 31, 2011
by Mark Ollig
What well-known company created the first desktop office computer, navigable by using a mouse-driven graphical user interface?
Did I hear someone say Apple Computer’s Lisa or Macintosh computer?
The Lisa was available in January 1983, followed by the Macintosh one year later.
The Microsoft Windows 1.0 graphical user interface program came out in November 1985.
It was in April 1973, when Xerox Corporation’s Palo Alto Research Center (PARC) division completed work on a new desktop computer.
This computer contained a graphical user interface, navigated by using a 3-button mouse.
They called it the Xerox Alto.
I think of the Alto computer as the ancestor of today’s personal computer.
The name “Alto” was taken from the Palo Alto Research Center, where Xerox developed it.
Xerox is better known for its copier machines, but back in the early 1970s, developers inside the company’s Software Development Division began work on a unique computer graphical user interface design.
Xerox researchers believed future technology favored digital over analog, and so they designed a way to merge their copier machines with digital computing technology – which they began using within their organization.
The Alto computer’s graphical user interface offered a significant improvement over keying in text at a command line.
Alto users experienced a dramatic visual difference when manipulating the display screen’s graphical images, scrollbars, icons, windows, and file names using the 3-button mouse.
The Xerox Alto computer used a portrait-oriented, monochrome, 875-line, raster-scanned, bitmap display screen.
“Bitmap” refers to how each pixel element on the display screen is mapped with one or more bits stored inside the computer’s video memory.
A bitmap display was essential in using the graphical user interface.
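A quick way to picture a bitmap display is with a tiny monochrome frame buffer, where one bit of memory controls one pixel. The miniature resolution and memory layout below are simplified for illustration and are not the Alto's exact design.

```python
# A minimal illustration of the "bitmap" idea described above: with one bit
# per pixel, each byte of video memory controls eight monochrome pixels.

WIDTH, HEIGHT = 16, 4                          # tiny 16 x 4 monochrome "screen"
video_memory = bytearray(WIDTH * HEIGHT // 8)  # one bit per pixel -> 8 bytes

def set_pixel(x, y, on=True):
    index = y * WIDTH + x                      # which pixel, counting row by row
    byte, bit = divmod(index, 8)
    if on:
        video_memory[byte] |= 1 << bit         # turn the pixel's bit on
    else:
        video_memory[byte] &= ~(1 << bit)      # turn it off

# Draw a short horizontal line, then render the bitmap as text.
for x in range(5):
    set_pixel(x, 1)

for y in range(HEIGHT):
    row = ""
    for x in range(WIDTH):
        byte, bit = divmod(y * WIDTH + x, 8)
        row += "#" if video_memory[byte] & (1 << bit) else "."
    print(row)
```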
Alto’s programs were stored on 2.5 MB single-platter removable disk cartridges.
The “microcoded” processor was based on Texas Instruments’ 74181 arithmetic logic unit (ALU) chip, and was equipped with 128 kB of main memory, expandable to 512 kB.
The computer’s processing components, disk storage units, and related systems were encased inside a small cabinet the size of a compact refrigerator.
Alto computers were connected to Xerox’s LAN (Local Area Network) using Ethernet – which Xerox had developed at PARC.
The LAN allowed for the sharing of program files, documents, printers, office email, and other information.
The Alto computer included a 64-key QWERTY keyboard.
Another device for entering commands was a five-finger “chord keyset” device; however, this never became as popular with Alto users as did using the 3-button mouse.
The Alto was designed to be used with laser printers, which were also developed by Xerox.
Software used with the Alto included word processors Bravo and Gypsy.
Alto’s email software was called Laurel; someone with a sense of humor named the next version Hardy.
Other software included a File Transfer Protocol program, a user chat utility, and computer games such as Chess, Pinball, Othello, and Alto Trek.
Markup and Draw, a painting and graphics program, was also used on the Alto.
Xerox originally started with 80 Alto computers, each costing $10,000.
Alto computers were not sold to the general public; however, Xerox provided them to universities and government institutions, as well as using them within their corporate business offices.
By 1978, Xerox Alto computers had been installed in four test sites – including the White House.
I learned that by the summer of 1979, almost 1,000 Alto computers were being used by engineers, computer science researchers, and office personnel.
In December 1979, Steve Jobs, who co-founded Apple Computer in 1976, visited Xerox at PARC, and was given a demonstration of the Alto computer.
He was shown Ethernet-networked Alto computers using email and an object-oriented programming language called Smalltalk.
However, what impressed Jobs the most was when he observed people operating the Alto computers by means of a working graphical user interface, instead of text commands.
“I thought it was the best thing I had ever seen in my life,” said Jobs.
He went on to say, “Within 10 minutes, it was obvious to me that all computers would work like this someday.”
In 1981, Xerox made available to the public a graphical user interface desktop business computer called the Xerox Star 8010 Information System.
That same year, recognized technology leader IBM came out with its own desktop computer, the IBM Personal Computer (model 5150).
Yours truly used this IBM PC model for several years.
IBM had a long history in computing, and its personal computers became popular with businesses and the general public.
Throughout the 1980s, IBM, Apple, Microsoft, and other computer companies continued to develop and improve upon their own computer hardware and operating systems.
As the 1980s progressed, it became apparent that it was too late for Xerox to become a serious player in this newly emerging personal computer game.
The public identified Xerox more as a copier machine company than as a computer company.
As for the Xerox Alto, its significance was the major role it played in influencing how we interact with computers.
“ . . . Xerox could have owned the entire computer industry today,” Steve Jobs said in 1996.
To see a picture of the Xerox Alto, go to: tinyurl.com/42x52qo.
Thursday, October 20, 2011
Licklider's vision: an 'Intergalactic' computer network
Oct. 24, 2011
by Mark Ollig
Joseph Carl Robnett Licklider was a visionary, and an Internet pioneer.
I realize many folks may not have heard of Licklider; however, he deserves recognition for the innovative concepts which helped bring us the Internet.
Licklider was born March 11, 1915.
He developed an early interest in engineering, building model airplanes, and working on automobiles.
In 1937, he graduated from Washington University in St. Louis, where he received a bachelor of arts degree with majors in psychology, mathematics, and physics.
After receiving a PhD in psychoacoustics in 1942 from the University of Rochester, he went on to work at Harvard University’s Psycho-Acoustic Laboratory.
In 1950, Licklider went to MIT, where he was an associate professor.
During the early 1950s, Licklider was asked to work on creating a new technology for displaying computer information to human operators, with the goal of improving US air defense capabilities.
It was during this time Licklider’s thoughts about human-computer interactions began.
The Advanced Research Projects Agency (ARPA) was established in February 1958 by President Dwight Eisenhower, in response to the Soviet Union’s Sputnik I satellite program, which yours truly recently wrote a column about.
Licklider later wrote a book titled “Libraries of the Future.”
This book explained how people would be able to simultaneously access (from remote locations) what he called an “automated library,” located in a database inside a computer.
In 1960, Licklider wrote, “It seems reasonable to envision, for a time 10 or 15 years hence, a ‘thinking center’ that will incorporate the functions of present-day libraries together with anticipated advances in information storage and retrieval.”
He even began seriously speaking about interactive computers serving as automated assistants – and people were listening to him.
Licklider wrote what eventually became a seminal paper in March 1960, called “Man-Computer Symbiosis.”
In it he wrote, “It seems entirely possible that, in due course, electronic or chemical “machines” will outdo the human brain in most of the functions we now consider exclusively within its province.”
Licklider wrote about the need for computer involvement in formative and real-time thinking.
He described a computer assistant that could perform simulation modeling which would graphically display the results. He also wrote about how a computer could determine solutions for new problems based on past experiences, and how a computer would be able to answer questions.
Even back in 1960, Licklider foresaw a close symbiotic, or interdependent relationship developing between computers and human beings.
His foresight was revealed in his writings regarding computerized interfaces with the human brain – which he believed was possible.
Licklider also wrote a series of memos involving a number of computers connected to each other within a “Galactic Network” concept.
This concept, Licklider wrote, allowed the data and programs stored within each computer to be accessed from anywhere in the world, by any of the computers connected to the network.
He accurately recognized the importance and potential of computer networks, explaining that by distributing numerous computers over a fast-moving electronic data network, each one could share its programs and informational resources with the other.
It seems as if Licklider is describing the basic foundation of the Internet.
Licklider, in collaboration with Welden E. Clark, released a 16-page paper in August 1962, titled “On-Line Man Computer Communication.”
In this paper, he described, in detail, the concepts of the future use of on-line networks, and how computers would play the role of a teacher for “programmed instruction” training purposes.
A quote from this 1962 paper says, “Twenty years from now [1982], some form of keyboard operation will doubtless be taught in kindergarten, and forty years from now [2002], keyboards may be as universal as pencils, but at present good typists are few."
I, your humble columnist, have always considered myself a good typist.
In October 1962, Licklider was chosen as the first director of the Information Processing Techniques Office (IPTO) research program located at the Defense Advanced Research Projects Agency (DARPA), which is part of the US Department of Defense.
This is where he was successful in gaining acceptance and support regarding his computer networking ideas. Licklider also helped to guide the funding of several computer science research projects.
While at DARPA, Licklider was involved in the development of one of the first wide area computer networks used in the United States for a cross-country radar defense system.
This network system was connected to many Department of Defense sites, including Strategic Air Command (SAC) headquarters, and the Pentagon.
In 1963, Licklider obtained IPTO funding for the research needed to explore how time-sharing computers could be operated by communities of users, located in various geographic locations.
IPTO originally began development of the Advanced Research Projects Agency Network (ARPANET) in 1966, which led to today’s Internet.
Licklider had the foresight in the 1960s to accurately estimate millions of people being online by the year 2000, using what he called an “Intergalactic Computer Network.”
In December of 2000, the Internet reached 361 million users, according to the Internet World Stats website.
Joseph Carl Robnett Licklider, also known as J.C.R. or Lick, passed away at age 75, June 26, 1990, in Arlington, MA.
About Mark Ollig:
Telecommunications and all things tech has been a well-traveled road for me. I enjoy learning what is new in technology and sharing it with others who enjoy reading my particular slant on it via this blog. I am also a freelance columnist for my hometown's print and digital newspaper.
Friday, October 14, 2011
Smell-O-Vision II: Coming soon to a nose near you
Oct. 17, 2011
by Mark Ollig
Remember those paper scratch-and-sniff stickers?
The 1981 movie “Polyester” featured numbered scratch-and-sniff cards which allowed the viewer (when prompted by a card number) to smell what was being shown on the movie screen.
It was promoted as Odorama.
Placing a fragrance coating on a piece of paper or cardboard is one thing; however, I had no idea of the long history of inventive devices used in dispersing smells while watching a movie.
Hans Laube and Michael Todd were involved in the creation of a device called The Smell-O-Vision.
Laube went on to build a machine that discharged a variety of odors, scents, and smells which would coincide with the events happening during a theater movie or stage play.
Hans Laube was issued US Patent No. 2,813,452, titled “Motion Pictures with Synchronized Odor Emission,” on Nov. 19, 1957.
Laube’s device would disperse various mixtures and dilutions of liquid scented perfumes, and included one scent neutralizer.
Todd is credited with calling this device Smell-O-Vision.
“Scent of Mystery” was a 1960 movie using an updated version of Laube’s device that circulated up to 30 different smells into theater seats when prompted via specific signal markers on the movie’s film.
Disappointingly, it did not work very well, and no further movies were shown using the Smell-O-Vision device.
The one scent I fondly recall as a youngster while seated inside my hometown’s local theater, was the addicting aroma that drifted in from the popcorn machine in the front lobby.
One of the earliest attempts at combining a motion picture film and smells goes back to 1906, when Samuel Lionel Rothafel, working at The Family Theater in the mining town of Forest City, PA., came up with an idea.
While a motion picture newsreel film of what is believed to have been the 1906 Rose Bowl parade was being shown inside the theater, Rothafel took a wad of cotton wool soaked with rose oil and placed it in front of an electric fan. This caused the smell of roses to waft throughout the theater and among the seated patrons.
A rose by any other name would smell as sweet.
It is interesting to note Samuel Lionel Rothafel was born right here in Minnesota, in the city of Stillwater in 1882.
An in-theater “smell system” was installed in Paramount’s Rialto Theater on Broadway in 1933. Blowers released various smells during the movie, but the system proved unpopular, as it took hours (sometimes days) for the scents to finally clear out of the theater building.
There is quite a variety of aromas in this world – and countless opinions on the number of unique scents the human nose can distinguish.
Trygg Engen, a Brown University psychologist, wrote in 1982 that an untrained person can identify 2,000 odors, and an expert, 10,000.
“The human nose can detect and differentiate 350,000 smells; it’s just that we shouldn’t smell them at the same time because you get anosmia – nose fatigue,” according to Sue Phillips, a fragrance expert.
In the book, “The Future of the Body,” Michael Murphy cites his source as saying a real expert (smelling expert, I would assume) “must distinguish at least thirty thousand nuances of scent.”
Ernest Crocker, a chemical engineer and MIT graduate, used a mathematical rating system and came up with 10,000 as being the number of recognizable odors a human can detect.
Mixing smells with your favorite movies, gaming, and television programs is becoming a reality through a French company called Olf-Action.
No doubt the company name is a play on the word “olfactory,” which relates to the sense of smell.
Olf-Action uses a system it calls Odoravision.
Odoravision is Olf-Action’s term for the delivery of odors, or particular scents, in combination with motion picture films viewed in movie theaters.
This method of odor delivery has also been called smell-synchronization.
Olf-Action’s Odoravision System can administer 128 scents with three simultaneous odors over the course of one motion picture film.
One aroma diffuser I saw connected a video source to Olf-Action’s Olfahome model 4045 scent dispenser device.
The model 4045 is a 44-pound rectangular, box-like device which was attached to the ceiling approximately 10 feet in front of, and above, the movie viewers.
The diagram for the Olfacine/Olfahome model 4045 showed 40 individual, open-air nozzles.
The scents are stored inside cartridges.
Some of the scents listed include cakes, gasoline, flowers, roses, wood, sea water, smoke, candies, fabrics, trees, polluted city smells, and one I like: the smell of freshly cut grass.
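To give a rough sense of how smell-synchronization might be driven in software, here is a minimal sketch in Python. The cue format, timings, and scent labels are purely illustrative assumptions on my part; this is not Olf-Action's actual interface, though it does respect the three-simultaneous-scents limit mentioned above.

# Illustrative sketch of smell-synchronization cues (not Olf-Action's real API).
# Each cue names a scent cartridge and the film time at which it is released;
# the check below enforces the three-simultaneous-scents limit described above.
from dataclasses import dataclass

MAX_SIMULTANEOUS_SCENTS = 3

@dataclass
class ScentCue:
    film_time_s: float  # seconds into the movie when the diffuser opens
    duration_s: float   # how long the scent is released
    scent: str          # cartridge name (hypothetical labels)

cues = [
    ScentCue(120.0, 8.0, "roses"),
    ScentCue(124.0, 6.0, "sea water"),
    ScentCue(126.0, 5.0, "freshly cut grass"),
    ScentCue(127.0, 5.0, "smoke"),  # a fourth overlapping cue would exceed the limit
]

def active_scents(cues, now_s):
    """Return the scents that should be diffusing at a given film time."""
    return [c.scent for c in cues
            if c.film_time_s <= now_s < c.film_time_s + c.duration_s]

for t in (121, 127, 130):
    active = active_scents(cues, t)
    if len(active) > MAX_SIMULTANEOUS_SCENTS:
        print(f"t={t}s: too many scents {active}; a real controller would delay or drop one")
    else:
        print(f"t={t}s: diffusing {active or ['nothing']}")

A real installation would, presumably, key these cues to markers synchronized with the film itself rather than to a simple clock.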
Olf-Action listed several movie film titles available in Odoravision, including one many would like to see and smell: “Charlie and the Chocolate Factory.”
My concern is when we watch an Odoravision movie and tell people it stinks, they won’t know whether we meant the movie’s plot or the smells in it.
I can’t wait until Apple’s App Store starts selling the “iSmell” application.
Then we will be able to watch people sniffing their iPhones while they watch videos on them.
Do I hear laughter from some of my readers?
Folks, you just can’t make this stuff up.
About Mark Ollig:
Telecommunications and all things tech has been a well-traveled road for me. I enjoy learning what is new in technology and sharing it with others who enjoy reading my particular slant on it via this blog. I am also a freelance columnist for my hometown's print and digital newspaper.
Friday, October 7, 2011
Apple: No iPhone 5 - Hello iPhone 4S
Oct. 10, 2011
by Mark Ollig
“Let’s talk iPhone.”
This was the message inside the invitations sent out to members of the media from Apple Inc.
Most of us were anticipating that Apple would announce the new iPhone 5 to the world.
Last Tuesday, Apple’s CEO Tim Cook took the stage at Apple’s Cupertino, CA headquarters to make the announcement of what many had assumed, written, tweeted, and blogged would be the new iPhone 5.
Cook spent considerable time talking about Apple’s past achievements, until we were finally introduced to the new iPhone 4S by Phil Schiller, Apple’s senior vice president of Worldwide Product Marketing.
“So people have been wondering, how do you follow up a hit product like the iPhone 4? Well, I’m really pleased to tell you today all about the brand new iPhone 4S,” said Schiller.
There was respectable applause from the audience in attendance, while yours truly wondered what happened to the iPhone 5.
One of the new features I liked on the iPhone 4S was Siri.
Siri is the iPhone 4S’s built-in, voice-enabled personal assistant.
An iPhone 4S user speaks to Siri in ordinary, conversational dialogue.
Siri responds to verbal queries such as, “What is the weather like today in Minneapolis?”
Siri allows a person to work with the iPhone’s applications using normal, everyday voice conversation, while Siri will reply to the user’s voice in kind.
During Apple’s demonstration, Siri placed phone calls, provided directions to the nearest restaurant, and reported on the weather.
Siri performs a variety of dictation tasks, from creating reminders and setting alarms, to verbally notifying the iPhone 4S user about them.
If Siri needs more information in order to fulfill a request, it will verbally ask.
Incoming text messages can be read to you by Siri, and it will compose a reply from your spoken response, providing hands-free texting.
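To make the idea of a voice-driven assistant a bit more concrete, here is a toy sketch of keyword-based intent routing, the general kind of mapping from a transcribed request to an action. The keywords and replies are my own invented examples; this is in no way Apple's actual Siri code.

# Toy illustration of routing a transcribed request to an action.
# Purely hypothetical; not Apple's Siri implementation.
def handle_request(transcript: str) -> str:
    text = transcript.lower()
    if "weather" in text:
        return "Looking up today's forecast..."
    if "remind me" in text:
        return "OK, I'll set that reminder."
    if "call" in text:
        return "Placing the call."
    # When a request is ambiguous, ask for more information (as described above).
    return "Could you give me a little more detail?"

print(handle_request("What is the weather like today in Minneapolis?"))
print(handle_request("Remind me to send this column to press"))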
I find myself suddenly having flashbacks to the HAL 9000.
The iPhone 4S Retina display significantly improves the sharpness and quality of images and text seen on the display screen.
Pixels on the iPhone 4S display screen are just 78 micrometers wide.
Pixel density is 326 pixels per inch, which means the human eye cannot distinguish individual pixels at a typical viewing distance, so web pages, photos, text in eBooks, and email will look very sharp.
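The 78-micrometer figure follows directly from the 326 ppi density; here is the quick back-of-the-envelope arithmetic:

# Pixel pitch implied by a 326 pixels-per-inch display.
MICROMETERS_PER_INCH = 25_400

ppi = 326
pixel_pitch_um = MICROMETERS_PER_INCH / ppi
print(f"{pixel_pitch_um:.1f} micrometers per pixel")  # about 77.9, i.e. roughly 78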
The Retina display utilizes technology called IPS (in-plane switching), which is the equivalent technology used in the iPad.
Included with the Retina display is LED back-lighting, and an ambient light sensor that intelligently adjusts the brightness of the screen, providing the best viewing possible under most conditions.
The iPhone 4S includes Apple’s dual-core A5 chip (the same chip used in the iPad 2), which is twice as fast as the processor used in the previous iPhone 4. Web pages will load much faster; and to all the gamers out there: your video graphics will render up to seven times faster on the iPhone 4S.
The iPhone 4S screen is a 3.5-inch (diagonal) widescreen Multi-Touch display.
Its camera uses an 8-megapixel sensor, which takes pictures at a resolution of 3264 x 2448 pixels.
The camera includes tap-to-focus, auto-focus, and an LED flash. Its optics include a five-element lens with an f/2.4 aperture, which lets in more light and provides better low-light performance.
The iPhone 4S captures 1080p video using video stabilization and records up to 30 frames-per-second with audio.
The iPhone 4S works on 3G networks, and has a significantly redesigned antenna compared with the previous iPhone 4.
Schiller stated the iPhone 4S can attain data-rate speeds of up to 5.8 Mbps uploading and 14.4 Mbps downloading. However, to get these speeds, the 3G carrier would need to be using an HSPA+ (Evolved High-Speed Packet Access) network.
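For a rough feel of what a 14.4 Mbps downlink means in practice, here is a quick calculation; the 5 MB file size is just an assumed example, and real-world throughput will vary:

# Back-of-the-envelope: time to download a 5 MB file at 14.4 Mbps.
file_size_megabytes = 5
downlink_mbps = 14.4

file_size_megabits = file_size_megabytes * 8
seconds = file_size_megabits / downlink_mbps
print(f"About {seconds:.1f} seconds under ideal conditions")  # roughly 2.8 seconds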
With the iPhone 4S built-in rechargeable lithium-ion battery, talk-time battery life is said to be 8 hours, with 200 hours of standby time.
The battery provides 10 hours of video playback time, with up to 40 hours of audio/music playback.
When using the Internet, it provides six hours of usage with 3G enabled, and nine hours over Wi-Fi.
Supporting both CDMA and GSM cellular standards, the iPhone 4S can be used worldwide. It also supports Bluetooth 4.0 wireless technology.
The iPhone 4S runs the iOS 5 mobile operating system, integrates with iCloud, and will be available in Apple retail stores Oct. 14.
The iPhone 4S will work over the AT&T and Verizon networks, and soon over the Sprint network.
The iPhone 4S models are priced at $199 for 16 GB, $299 for 32 GB, and $399 for 64 GB.
As this column was being sent to press, I learned of the passing of Apple Inc. co-founder and former CEO Steve Jobs.
While the news was breaking over the social networking sites, I came across one poignant Twitter message about Steve Jobs in reference to the new Apple iPhone 4S.
I was kindly given permission to use the following by the person who wrote it; “From now on, the 4S is going to stand for, ‘For Steve.’”
About Mark Ollig:
Telecommunications and all things tech has been a well-traveled road for me. I enjoy learning what is new in technology and sharing it with others who enjoy reading my particular slant on it via this blog. I am also a freelance columnist for my hometown's print and digital newspaper.
Thursday, September 29, 2011
First artificial space satellite launched 54 years ago
Oct. 3, 2011
by Mark Ollig
In 1952, the International Council of Scientific Unions proposed the International Geophysical Year (IGY), to run from July 1957 to December 1958.
During the IGY, a series of scientific global activities would be performed.
A technical panel established for the IGY worked on what would be required in order to launch an artificial satellite that could orbit the Earth.
Both the United States and Soviet Union had announced plans to launch Earth-orbiting artificial satellites.
While the US space satellite program (called Vanguard) was an open and public undertaking, the Soviet program was being conducted in secret.
The historic event happened Friday, Oct. 4, 1957.
It began when a Soviet R-7 two-stage rocket (number 8K71PS) was successfully launched near Baikonur, a small town in the remote Kazakh Republic of the Soviet Union.
The R-7 weighed approximately 267 tons at liftoff.
This rocket was more or less a Soviet ballistic missile without the military warhead attached.
Instead of a warhead, the rocket carried into space a payload called PS-1, better known as Sputnik 1.
Sputnik, meaning “fellow traveler or companion,” orbited the Earth once every 92 minutes at a speed of 18,000 mph from a height of 139 miles.
The Sputnik 1 satellite was a metallic, highly polished 23-inch-diameter orb made of an aluminum-magnesium-titanium combination weighing 184 pounds.
Four spring-loaded, “cat-whisker-looking” whip antennas, two 7.9 feet long and two 9.5 feet long, extended from the satellite.
The satellite’s one-watt radio transmitter was powered from two of three on-board silver-zinc batteries. The third battery was used to power Sputnik’s internal temperature and other instrument systems.
In October of 1957, many people became fixated listening to the steady radio signal pattern of “beep-beep-beep-beep . . .”
Those beeps were being transmitted from Sputnik’s antennas at the 20.005 and 40.002 MHz frequency bands.
Sputnik’s radio transmissions were being closely listened to by people from around the world through their radios and televisions.
Sputnik’s radio signals also included encoded information about the satellite’s internal and external temperature and pressure readings, along with the density of the Earth’s ionosphere the radio signals had traveled through.
You can listen to one minute of the actual recorded radio signal beeps from Sputnik 1 at http://tinyurl.com/2u9b49. This link goes to a Wave Sound (.wav) audio format file.
Tracking of Sputnik 1 while in orbit was accomplished by way of the Soviet P-30 “Big Mesh” radar, and by the use of ground-based telescopes.
There were also people on the ground who looked up and saw the bright spot of sunlight reflected off the highly polished Sputnik 1 as it sped overhead across the night sky.
While Sputnik 1 orbited the Earth, Americans’ emotions ranged from shock and amazement to being downright frightened and distressed.
A number of people worried that instead of just harmless, beeping Soviet satellites orbiting over the United States, Soviet ballistic missiles carrying nuclear warheads might be the next payload.
I mean, after all, it was 1957, and the US and Soviet Union were in the middle of the Cold War.
The Soviet Union had clearly taken the lead in this new “space race” between the two superpowers.
Nikita Khrushchev, First Secretary of the Communist Party of the Soviet Union, viewed Sputnik’s triumph as having unmatched propaganda value for the Soviet space program.
During a news conference Oct. 9, 1957, President Dwight Eisenhower, in an attempt to subdue any public hysteria, tried to diminish the importance of Sputnik 1 by saying, “Now, so far as the satellite itself is concerned, that does not raise my apprehensions, not one iota. I see nothing at this moment, at this stage of development, that is significant in that development as far as security is concerned, except, as I pointed out, it does definitely prove the possession by the Russian scientists of a very powerful thrust in their rocketry, and that is important.”
On the other side of the world, what did the Russian people feel about the launch of Sputnik 1?
Semyon Reznik is a Russian writer and journalist, but on Oct. 4, 1957, he was a Russian college student.
Reznik recalled what was being broadcast over Russian radio at the time of Sputnik’s launch and the Russian people’s response.
“The day our satellite Sputnik was launched, a special voice came over the radio to announce it to us . . . .” Reznik repeated the announcement: “Attention. All radio stations of the Soviet Union are broadcasting . . . Our satellite Sputnik is in space.”
Reznik talked about the people’s reaction: “Everyone felt so proud and wondered who did it? No names were named for years.”
Sputnik 1 continued to broadcast beeps until its radio transmitter batteries became exhausted Oct. 26, 1957.
The flight of the first Earth-orbiting satellite came to an end Jan. 4, 1958, when Sputnik 1 re-entered Earth’s atmosphere and burned up.
The US launched its first Earth-orbiting satellite, Explorer 1, on Jan. 31, 1958.
About Mark Ollig:
Telecommunications and all things tech has been a well-traveled road for me. I enjoy learning what is new in technology and sharing it with others who enjoy reading my particular slant on it via this blog. I am also a freelance columnist for my hometown's print and digital newspaper.
Thursday, September 22, 2011
Microsoft shows developers Internet Explorer 10
Sept. 26, 2011
by Mark Ollig
A version of Microsoft’s new Internet Explorer 10 (IE 10) was shown during the recent Microsoft Build Developers Conference.
“In Windows 8, IE 10 is available as a Metro style app and as a desktop app. The desktop app continues to fully support all plug-ins and extensions. The HTML5 and script engines are identical and you can easily switch between the different frame windows if you’d like,” explained Windows Division President Steven Sinofsky in a recent Microsoft Development Network blog posting.
HTML5 is the latest version of the Hypertext Markup Language, the markup language used to structure and present web pages.
By relying on HTML5, Microsoft is moving away from the need to incorporate Adobe Flash.
Adobe Flash, which uses the Adobe Flash Player, is a cross-platform, browser-based application used for displaying video and multimedia content on computer web browsers. It is used in most computing and mobile devices.
“For the web to move forward and for consumers to get the most out of touch-first browsing, the metro style browser in Windows 8 is as HTML5-only as possible, and plug-in free,” wrote Dean Hachamovitch, corporate vice president of Microsoft’s Internet Explorer group.
Microsoft has thrown down the gauntlet, and is proceeding into the future without embedding Flash in its Metro-style browser – a bold move on its part, and not without its critics, I might add.
Windows 8 includes one HTML5 browsing engine that powers two browsing experiences for the user: the Metro-style browser, and IE 10 used with desktop applications.
Microsoft states this HTML5 browsing engine will support today’s web standards, and will give web developers a reliable, safe, fast, and powerful tool for building both browser experiences and the new Metro-style apps.
The new IE 10 update includes support for on-screen touch-friendly websites and incorporates rich, visual effects technologies and sophisticated web page layouts.
IE 10 includes a built-in spell checker, along with an autocorrect feature, so when I type “teh,” it will be automatically corrected to “the.”
The history of the Microsoft Internet Explorer browser started in August 1995, with the release of Internet Explorer 1.0, which was used with Microsoft Windows 95.
Windows 95 included the technologies needed for connecting to the Internet, along with built-in support for dial-up networking to a Bulletin Board System (BBS) or other device.
During the mid-1990s, we used dial-up networking for accessing hobby BBSs and for our work.
Yours truly used dial-up networking for maintaining telephony devices, such as digital and electronic business phone systems and my hometown’s telephone digital switching office.
Internet Explorer 1.0 was originally shipped separately to retailers as the “Internet Jumpstart Kit” in Microsoft Plus! for Windows 95.
Consumers could also buy it pre-installed with Windows 95 when they purchased a new computer.
Internet Explorer 2.0 was released in November 1995. It was a cross-platform web browser used on Macintosh and 32-bit Microsoft Windows computing systems.
Some trivia: Using Internet Explorer 2.0, one could view the famous “Trojan Room coffee pot,” which was the world’s first webcam. This webcam showed the current amount of coffee remaining inside a coffee pot in the computer laboratory at the University of Cambridge.
You might recall the column I wrote April 4 called “Computing ingenuity led to the creation of ‘XCoffee’.”
In 1991, at the University of Cambridge, a video camera was rigged to capture live, still-frame images of a working coffeepot every three seconds. These images were encoded and sent over the college’s local computer network.
The scholarly researchers (working in other parts of the building), could visually see the current status of the amount of coffee remaining in the coffeepot on a small image snapshot display in the corner of their computer screens.
The real-time XCoffee images made available to the world over the Internet in 1993 became an instant sensation.
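As a rough illustration of the idea behind XCoffee, grabbing a frame on a timer and publishing it where other machines can fetch it, here is a minimal sketch in Python. The capture function is a hypothetical stand-in; the original 1991 system was, of course, built on entirely different hardware and software.

# Illustrative capture-and-publish loop in the spirit of XCoffee.
# capture_frame() is a hypothetical stand-in for reading the camera hardware.
import time

CAPTURE_INTERVAL_S = 3          # the Cambridge setup grabbed a frame about every three seconds
OUTPUT_PATH = "coffeepot.jpg"   # clients poll this file (or a URL serving it) for the latest snapshot

def capture_frame() -> bytes:
    """Hypothetical camera call; a real system would read from video hardware."""
    return b"\xff\xd8 ...fake JPEG bytes... \xff\xd9"

def publish(frame: bytes, path: str) -> None:
    with open(path, "wb") as f:
        f.write(frame)

if __name__ == "__main__":
    while True:
        publish(capture_frame(), OUTPUT_PATH)
        time.sleep(CAPTURE_INTERVAL_S)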
Microsoft released Internet Explorer 3.0 in August 1996. It was also designed for Windows 95, and included Internet Mail and News 1.0, and the Windows Address Book.
Microsoft NetMeeting and Windows Media Player were later added to Internet Explorer 3.0.
Internet Explorer 4.0 was released in October 1997. It was used with Windows 95, Windows 98, and Windows NT.
Internet Explorer 4.0 allowed web pages to be more interactive. File menus could be expanded with a mouse click, and icon images could be dragged around and repositioned.
Internet Explorer 5.0 was released in March 1999. One new feature included the Windows Radio Toolbar which could access more than 300 Internet radio stations broadcasting around the world at that time.
Internet Explorer 6.0 was released with Windows XP in 2001.
Microsoft released Internet Explorer 7.0 Oct. 18, 2006. It included Quick Tabs, which provided an at-a-glance snapshot of all open tabs on a single screen.
Internet Explorer 8.0 was released during March 2009. It was offered in 25 languages.
This year, Internet Explorer 9.0 became available for users of Windows 7. One of its many features includes automatic crash recovery.
IE 10 will be publicly released with Windows 8 on a yet-to-be-determined date, so, once again, stay tuned, everyone.
About Mark Ollig:
Telecommunications and all things tech has been a well-traveled road for me. I enjoy learning what is new in technology and sharing it with others who enjoy reading my particular slant on it via this blog. I am also a freelance columnist for my hometown's print and digital newspaper.
Thursday, September 15, 2011
Microsoft reveals Windows 8
Sept. 19, 2011
by Mark Ollig
Microsoft said this would be the biggest change to Windows computing since they released their revolutionary Windows 95 OS (operating system) 16 years ago.
Get ready folks, here comes the new interactive and touch-centric Windows 8.
Last Tuesday, yours truly, along with others online, watched the much anticipated Microsoft keynote address live via a webcast from the Microsoft Windows BUILD conference in Anaheim, CA.
Microsoft Build Professional Developers Conference is an event where hardware and software developers go to learn and exchange ideas for creating the next generation of hardware systems, software programs, and apps (applications) supporting Microsoft Windows operating systems.
Steven Sinofsky, Windows division president, began the keynote address by talking about the improvements made to Microsoft’s current operating system, Windows 7.
He then moved on to the primary focus of the keynote address: Microsoft Windows 8 OS.
Sinofsky is convinced that once traditional PC (personal computer) users start using Windows 8, they will become hooked on navigating apps and entering text using on-screen touch-computing.
He did, however, reassure everyone that Windows 8 can be used with the traditional keyboard and mouse.
Sinofsky touted the immersive computing environment and touch-centric capabilities of Windows 8 as the “Metro experience.”
“Fast and fluid” is how Microsoft said they want Windows 8 apps to perform for the user.
Windows 8 uses an intuitive, “Metro-style,” full-screen, touch-centric user interface, featuring Start menu applications displayed as interactive, widget-like “tiles,” instead of the familiar-looking classic Windows program icons we see on our existing Windows desktop.
The interactive touch-based interface provides the Metro-experience, which has been compared to Microsoft’s Windows Phone 7 OS user interface.
Sinofsky also pointed out Windows 8 only uses about 281 MB of memory, whereas Windows 7 requires around 404 MB to operate.
Sinofsky then introduced Julie Larson-Green, corporate vice president of Windows Program Management.
Larson-Green demonstrated features of Windows 8 using mobile tablet devices.
The Windows 8 default screen, or “lock-screen,” pops up when the screen times out or before a user logs on.
The lock-screen displays remaining battery power, Internet signal status, current email message count, video message count, and a timeline display showing the user’s current calendar appointment message.
With a touch and swipe of her finger against the computing screen, Larson-Green leaves the lock-screen and shows us how to use the new log-in screen.
Her personalized “picture password” screen shows a photo of her daughter holding a glass of lemonade. The password is entered as she presses her index finger on her daughter’s nose, then presses on the lemonade glass, and then swipes a line with her finger.
Larson-Green had previously programmed this particular picture password code-combination.
This unlocked the computer, and brought up the Windows 8 Start screen.
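Here is a minimal sketch of the general idea behind a picture password: a stored sequence of gestures tied to points on the photo, checked against the user's attempt with some positional tolerance. The gesture types, coordinates, and tolerance are assumptions for illustration only; this is not Microsoft's implementation.

# Illustrative check of a "picture password" gesture sequence.
# Coordinates, tolerance, and gesture types are made up for this sketch.
import math

TOLERANCE_PX = 30  # assumed slack for where the finger lands

# Stored secret: tap the nose, tap the glass, then draw a line between them.
stored = [
    ("tap",  (220, 140)),                # daughter's nose (made-up coordinates)
    ("tap",  (320, 260)),                # the lemonade glass
    ("line", ((220, 140), (320, 260))),  # swipe from nose to glass
]

def close(p, q, tol=TOLERANCE_PX):
    return math.dist(p, q) <= tol

def gestures_match(attempt, secret):
    if len(attempt) != len(secret):
        return False
    for (kind_a, geom_a), (kind_s, geom_s) in zip(attempt, secret):
        if kind_a != kind_s:
            return False
        if kind_a == "tap" and not close(geom_a, geom_s):
            return False
        if kind_a == "line" and not (close(geom_a[0], geom_s[0]) and close(geom_a[1], geom_s[1])):
            return False
    return True

attempt = [("tap", (225, 150)), ("tap", (310, 255)), ("line", ((225, 150), (310, 255)))]
print("Unlocked" if gestures_match(attempt, stored) else "Try again")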
The Start screen held a collection of user apps that came bundled with Windows 8, as well as new prototype apps developed by Microsoft summer interns (who were in attendance).
Users can navigate through and launch Windows 8 apps via finger swipes, taps, and flicks; similar to how we navigate through the pictures, videos, songs, and apps stored on our various mobile devices.
During the demonstration, I noticed how easy it was for Larson-Green to navigate and use the tiled applications on the computing tablet screen.
If a user has a large collection of applications, instead of scrolling through pages of them via finger swipes, a two-finger pinch zooms the view outward, shrinking the tiled apps so you can see them all at once.
The apps can then be individually accessed, customized, re-arranged or moved into separate groups, and can be given individual names.
The Windows 8 Start screen can be personally customized and re-arranged so applications appear where you want them to.
Users will appreciate knowing Windows 8 cold boots (starts up) in less than 10 seconds.
Another feature demonstrated was the system-wide spell-checker built-in to Windows 8 that can be used by any app.
Windows 8 has a convenient one-step process to wipe the computer clean and restore it to the original factory settings; Microsoft fittingly calls this feature Reset.
The Refresh feature removes corrupted system software without losing your system settings or the applications downloaded from the soon-to-be Windows 8 app store.
Microsoft appears to be heading into the future focusing on an operating system using Metro-styled interactive apps functional on both traditional computers and mobile computing device display screens.
Windows 8 is taking us into an immersive, intuitive, touch-centric, and interactive digital computing environment navigable by means of finger touch swipes, flicks, taps, and pinches.
Sounds like fun.
No release date was given during the presentation regarding a beta version of the Windows 8 operating system for the general public.
Sinofsky said, “We’re going to be driven by the quality, and not by a date.”
To watch the two-hour opening keynote address demonstrating Windows 8 features and some very cool developer application code programming, go to http://tinyurl.com/3dycygt.
About Mark Ollig:
Telecommunications and all things tech has been a well-traveled road for me. I enjoy learning what is new in technology and sharing it with others who enjoy reading my particular slant on it via this blog. I am also a freelance columnist for my hometown's print and digital newspaper.
by Mark Ollig
Microsoft said this would be the biggest change to Windows computing since they released their revolutionary Windows 95 OS (operating system) 16 years ago.
Get ready folks, here comes the new interactive and touch-centric Windows 8.
Last Tuesday, yours truly, along with others online, watched the much anticipated Microsoft keynote address live via a webcast from the Microsoft Windows BUILD conference in Anaheim, CA.
Microsoft Build Professional Developers Conference is an event where hardware and software developers go to learn and exchange ideas for creating the next generation of hardware systems, software programs, and apps (applications) supporting Microsoft Windows operating systems.
Steven Sinofsky, Windows division president, began the keynote address by talking about the improvements made to Microsoft’s current operating system, Windows 7.
He then moved on to the primary focus of the keynote address: Microsoft Windows 8 OS.
Sinofsky is convinced that traditional PC (personal computer) users, once they start using Windows 8, will become hooked on navigating apps and entering text with on-screen touch computing.
He did, however, reassure everyone that Windows 8 can be used with the traditional keyboard and mouse.
Sinofsky touted the immersive computing environment and touch-centric capabilities of Windows 8 as the “Metro experience.”
“Fast and fluid” is how Microsoft said they want Windows 8 apps to perform for the user.
Windows 8 uses an intuitive, full-screen, touch-centric “Metro-style” user interface, presenting Start screen applications as interactive, widget-like “tiles,” instead of the familiar-looking classic Windows program icons we see on our existing Windows desktop.
This interactive, touch-based interface delivers the Metro experience, which has been compared to the user interface of Microsoft’s Windows Phone 7 OS.
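To give a feel for what makes these tiles “live,” here is a minimal sketch of an app updating its own Start screen tile through the WinRT tile-notification API, as that API is projected into JavaScript for Metro-style apps (written below as TypeScript). It is an illustration under those assumptions, not code shown at the keynote.

    // Sketch: push a new line of text onto the app's Start screen tile.
    // The Windows namespace is supplied by the WinRT projection at runtime;
    // the declaration below only keeps this sketch self-contained.
    declare const Windows: any;

    function updateLiveTile(message: string): void {
        const notifications = Windows.UI.Notifications;

        // Start from one of the built-in wide-tile XML templates.
        const tileXml = notifications.TileUpdateManager.getTemplateContent(
            notifications.TileTemplateType.tileWideText03);

        // Fill in the template's text field with our message.
        const textNodes = tileXml.getElementsByTagName("text");
        textNodes[0].appendChild(tileXml.createTextNode(message));

        // Send the updated XML to the app's tile on the Start screen.
        const tileNotification = new notifications.TileNotification(tileXml);
        notifications.TileUpdateManager
            .createTileUpdaterForApplication()
            .update(tileNotification);
    }

    updateLiveTile("3 new messages");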
Sinofsky also pointed out Windows 8 only uses about 281 MB of memory, whereas Windows 7 requires around 404 MB to operate.
Sinofsky then introduced Julie Larson-Green, corporate vice president of Windows Program Management.
Larson-Green demonstrated features of Windows 8 using mobile tablet devices.
The Windows 8 default screen, or “lock-screen,” pops up when the screen times out or before a user logs on.
The lock-screen displays remaining battery power, Internet signal status, current email message count, video message count, and a timeline display showing the user’s current calendar appointment message.
With a touch and swipe of her finger against the computing screen, Larson-Green leaves the lock-screen and shows us how to use the new log-in screen.
Her individualized password-protection screen shows a picture of her daughter holding a glass of lemonade. She enters the “picture password” by pressing her index finger on her daughter’s nose, then pressing on the lemonade glass, and then swiping a line with her finger.
Larson-Green had previously programmed this particular picture password code-combination.
This unlocked the computer, and brought up the Windows 8 Start screen.
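Conceptually, a picture password is just an ordered sequence of taps and lines recorded at positions on the chosen photo, then matched later within some tolerance. Here is a minimal illustrative sketch of that idea in TypeScript (my own simplification, not Microsoft’s implementation):

    // Illustrative model of a gesture-sequence "picture password."
    type Gesture =
        | { kind: "tap"; x: number; y: number }
        | { kind: "line"; x1: number; y1: number; x2: number; y2: number };

    const TOLERANCE = 30; // pixels of slop allowed when repeating a gesture

    function closeEnough(ax: number, ay: number, bx: number, by: number): boolean {
        return Math.hypot(ax - bx, ay - by) <= TOLERANCE;
    }

    function gesturesMatch(stored: Gesture, attempt: Gesture): boolean {
        if (stored.kind === "tap" && attempt.kind === "tap") {
            return closeEnough(stored.x, stored.y, attempt.x, attempt.y);
        }
        if (stored.kind === "line" && attempt.kind === "line") {
            return closeEnough(stored.x1, stored.y1, attempt.x1, attempt.y1)
                && closeEnough(stored.x2, stored.y2, attempt.x2, attempt.y2);
        }
        return false; // mismatched gesture types never match
    }

    // The password is correct only if every gesture matches, in the original order.
    function verifyPicturePassword(stored: Gesture[], attempt: Gesture[]): boolean {
        return stored.length === attempt.length
            && stored.every((g, i) => gesturesMatch(g, attempt[i]));
    }

    // Example: tap the nose, tap the glass, then draw a line.
    const enrolled: Gesture[] = [
        { kind: "tap", x: 120, y: 80 },
        { kind: "tap", x: 200, y: 150 },
        { kind: "line", x1: 120, y1: 80, x2: 200, y2: 150 },
    ];
    console.log(verifyPicturePassword(enrolled, enrolled)); // true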
The Start screen held a collection of user apps that came bundled with Windows 8, as well as new prototype apps developed by Microsoft summer interns (who were in attendance).
Users can navigate through and launch Windows 8 apps via finger swipes, taps, and flicks, much as we navigate through the pictures, videos, songs, and apps stored on our various mobile devices.
During the demonstration, I noticed how easy it was for Larson-Green to navigate and use the tiled applications on the computing tablet screen.
Instead of scrolling through pages of applications with finger swipes, a user with a large collection can use a two-finger pinch to zoom the view outward, shrinking the tiles until every app is visible at once.
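As a sketch of how that gesture might be wired up (my own illustration, not Windows 8 code), the logic comes down to watching how the distance between two touch points changes:

    // When the gap between two fingers shrinks past a threshold, switch the
    // Start screen from the normal tile view to a zoomed-out overview of all
    // tile groups; spreading the fingers back out returns to the normal view.
    interface Point { x: number; y: number; }

    function distance(a: Point, b: Point): number {
        return Math.hypot(a.x - b.x, a.y - b.y);
    }

    function pinchToView(startA: Point, startB: Point,
                         nowA: Point, nowB: Point): "overview" | "normal" | null {
        const ratio = distance(nowA, nowB) / distance(startA, startB);
        if (ratio < 0.6) return "overview"; // pinched in: shrink tiles, show every group
        if (ratio > 1.4) return "normal";   // spread out: return to full-size tiles
        return null;                        // no view change yet
    }

    // Example: fingers start 300 px apart and close to 150 px apart.
    console.log(pinchToView({ x: 100, y: 200 }, { x: 400, y: 200 },
                            { x: 175, y: 200 }, { x: 325, y: 200 })); // "overview"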
The apps can then be individually accessed, customized, rearranged, or moved into separate groups, and given individual names.
The Windows 8 Start screen can be personally customized and rearranged so applications appear where you want them to.
Users will appreciate knowing Windows 8 cold boots (starts up) in less than 10 seconds.
Another feature demonstrated was the system-wide spell-checker built into Windows 8, which can be used by any app.
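For HTML and JavaScript Metro-style apps, one presumably simple hookup is the standard HTML spellcheck attribute; the short sketch below is an assumption about that developer side rather than anything shown on stage.

    // Assumed hookup: an editable field opts in with the spellcheck attribute
    // and lets the OS-provided checker supply the squiggles and suggestions.
    const noteField = document.createElement("textarea");
    noteField.spellcheck = true; // ask the system spell checker to proof this field
    noteField.placeholder = "Type a note; misspelled words get flagged.";
    document.body.appendChild(noteField);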
Windows 8 has a convenient one-step process to wipe the computer clean and restore it to its original factory settings; Microsoft fittingly calls this feature Reset.
The related Refresh feature removes corrupted system software without losing your system settings or the applications downloaded from the soon-to-open Windows 8 App Store.
Microsoft appears to be heading into the future with an operating system focused on interactive, Metro-styled apps that work on both traditional computers and mobile computing device screens.
Windows 8 is taking us into an immersive, intuitive, touch-centric, and interactive digital computing environment navigable by finger swipes, flicks, taps, and pinches.
Sounds like fun.
No date was given during the presentation for a public beta release of the Windows 8 operating system.
Sinofsky said, “We’re going to be driven by the quality, and not by a date.”
To watch the two-hour opening keynote address demonstrating Windows 8 features and some very cool developer application code programming, go to http://tinyurl.com/3dycygt.
About Mark Ollig:
Telecommunications and all things tech have been a well-traveled road for me. I enjoy learning what is new in technology and sharing it with others who enjoy reading my particular slant on it via this blog. I am also a freelance columnist for my hometown's print and digital newspaper.