Feb. 28, 2011
by Mark Ollig
Barcelona, Spain, played host to more than 50,000 people from more than 200 countries attending the 2011 Mobile World Congress (MWC).
This year’s gathering of some of the largest players in the mobile industry took place Feb. 14 - 17.
Nearly 1,500 companies showcased their mobile technology throughout eight exhibition halls inside the Fira Barcelona venue.
The MWC provides a setting for mobile industry networking, new business opportunities, and technical presentations.
In conjunction with MWC, more than 12,000 mobile application (app) software developers, content providers, and mobile device manufacturers participated in the App Planet conferences.
App Planet drew more than 200 exhibitors and about 45,000 visitors, and hosted 11 App Developer conferences.
The App Planet exhibit areas and forums focused on four mobile area themes:
• Embedded Mobile - This theme focused on increasing wireless connectivity in more consumer electronic devices.
• Green Tech - In an effort to motivate decision-makers, this forum presented green-technology alternatives for conserving energy and reducing costs.
• Mobile Money - More people are using mobile apps for their bank account activities. This forum discussed technology standardization among mobile financial institution apps.
• Mobile Health - Showcasing creative solutions for remote healthcare using mobile services, this forum discussed how mobile health can reduce costs and extend healthcare’s reach on a global scale. Indeed, mobile healthcare is now meeting up wirelessly with the Internet cloud.
Get ready for CloudCare, coming soon to a mobile device near you.
App Planet says it “will continue to bring together the many critical elements of the broad mobile app ecosystem in one location.”
The mobile devices at MWC included computing tablets and smartphones, shown alongside new apps and other adjunct mobile technologies.
What about Apple iPhones? Well, Apple no longer attends these kinds of events, so the focus was not on Apple; but you can bet Apple’s competitors at MWC were doing their best to shift some of the spotlight onto their own wares.
Not all mobile tablets have 9.7-inch display screens like the Apple iPad. Many of the mobile vendors and manufacturers at MWC presented tablets with screen sizes ranging from 4 to 10 inches.
The MWC was not just about showing off mobile devices either.
Some of the leading names in technology were in Barcelona, and presented keynote “thought leadership” conferences and participated in discussion panels.
Company CEOs and presidents speaking at MWC included those of Twitter, AT&T, Intel, Microsoft, Yahoo, Google, HTC, Nokia, and 12 others.
According to AT&T’s Chief Technology Officer John Donovan, stored information will continue to migrate from our devices to the Internet cloud.
Donovan (and your humble columnist) feels the move to cloud-based computing will continue to grow to the point where we will be able to access all of our personal information and data content from any device connected to the cloud.
Google CEO Eric Schmidt spoke at length about his company’s vision of the mobile future.
He talked about the quickened pace of innovation and provided some interesting statistics.
Today, there are about 700 million public servers connected to the Internet, and half of all new connections to the Internet are from mobile devices.
By 2013, he said, mobile data traffic will increase to 66 times today’s levels, and smartphones will surpass personal computers in total global sales.
Using the abundant memory storage available across cloud computing data servers, a SIM (Subscriber Identity Module) memory card will eventually no longer be needed. Instead of having a SIM retain information in a mobile phone, the subscriber information will be accessible from the cloud.
He also stressed that there is much more memory and processing power available in the cloud, allowing newer voice-recognition apps to work more efficiently.
Schmidt says to think of cloud information storage as being “replicated,” not simply “copied.”
Google’s own voice language translation apps are almost to the point of allowing a two-way conversation on the phone, while each person’s unique language is translated instantly into the other’s language.
This triggered my recollection of the Tower of Babel story.
Google’s Search by Voice app uses powerful audio recognition engines and understands English, Mandarin, Japanese, and German.
Google’s Goggles app, which yours truly wrote a full column about Dec. 14, 2009, is a mobile phone visual search app.
The following is an example of their newly advanced Goggles mobile visual search app.
Say you are on a European vacation and find yourself looking down at a menu written in German.
Unfortunately, you may not be able to read German.
No worries.
Just hold your mobile phone over the German text and take a picture. This new app instantly translates (onto your phone’s screen) the German menu text into your language preference, using the power of cloud computing. A successful demonstration of this was shown at MWC.
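Out of curiosity, here is a rough idea of what such an app has to do behind the scenes, written as a minimal Python sketch. This is not Google’s code; the cloud_ocr and cloud_translate functions are hypothetical stand-ins (simulated here with canned demo values) for the server-side services that would do the real character recognition and translation work.

# A minimal sketch of the photograph-to-translation workflow: the phone only
# captures the image, while the text recognition and translation are assumed
# to happen on cloud servers. Both "cloud" calls are invented placeholders.

def cloud_ocr(image_bytes, source_lang="de"):
    # Stand-in for a cloud OCR service that reads printed text from the photo.
    return "Kartoffelsuppe mit Brot"  # pretend this was read off the menu photo

def cloud_translate(text, source_lang="de", target_lang="en"):
    # Stand-in for a cloud translation service; a tiny canned dictionary here.
    demo = {"Kartoffelsuppe mit Brot": "Potato soup with bread"}
    return demo.get(text, text)

def translate_menu_photo(image_bytes, target_lang="en"):
    german_text = cloud_ocr(image_bytes, source_lang="de")   # step 1: recognize the text
    return cloud_translate(german_text, "de", target_lang)   # step 2: translate it

print(translate_menu_photo(b"<camera image bytes>"))  # -> Potato soup with bread

The point of the design is in the division of labor: the phone does little more than take the picture and display the result, while the heavy computing happens on cloud servers.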
Google also spoke of its Android open platform software for mobile devices as a merger of computing, connectivity, and the cloud.
The complete Google MWC presentation can be seen at http://tinyurl.com/yg7nc3v.
The Mobile World Congress website is: http://www.mobileworldcongress.com.
Computer History Museum chronicles an incredible journey
Feb. 21, 2011
by Mark Ollig
Fresh from a $19 million restoration, the Computer History Museum in Mountain View, CA, reopened with a new exhibit called “Revolution: The First 2000 Years of Computing.”
The Computer History Museum has been at this location since 2002 and houses a computing history collection of about 100,000 historical artifacts spread across 125,000 square feet.
It also includes a cafe, gift shop, theater, and educational facilities.
According to its president and CEO John Hollar, the Computer History Museum “represents a very sweeping history of computing; it covers everything from the abacus to the smartphone.”
In addition to being a “Garden of Eden” for computer geeks, it is also an appealing place for non-techie folks to visit.
Inside the museum, there are 19 major exhibit spaces or “conclaves” containing significant computing industry milestones for visitors to look at.
The exhibits begin with the earliest computing devices: ancient counting machines.
Inside one conclave, visitors will see the story of the historic counting tool called an abacus.
The evolution to electronic and then mainframe computers, followed by the progression to advanced supercomputers, is also on display.
Many of the earliest personal computers are also there.
Visual displays in another exhibit show-and-tell the story of the first integrated electronic circuits, which were put together in the late 1950s.
Information is presented in a non-technical way, making it easier for people to learn how the computing devices they use in their daily lives were created.
Len Shustek, a former teacher, is currently a Computer History Museum board member.
He recently talked about his favorite computer at the museum in a video I watched.
The computer system Shustek says he used as a college student in the 1960s was an IBM 7094 scientific computer.
The IBM 7094 was an advanced, solid-state data processing system first installed in 1962.
This large-scale computing system sold for $3,134,500.
In one second, the IBM 7094 could perform 500,000 logical decisions, 250,000 additions or subtractions, 100,000 multiplications or 62,500 divisions.
IBM stopped selling the 7094 in 1969.
Many of the original programmers and individuals who actually designed some of the computers on display visit the Computer History Museum. According to Shustek, they find the experience like visiting old friends; these people can relate to the vintage hardware and software platforms because of their prior association with them.
One computing artifact at the museum this NASA space program devotee would like to look at is the original Apollo command module Guidance Computer System.
The Apollo Guidance Computer was first used on Apollo 7 as it orbited Earth in 1968.
In 1969, this same style of guidance system aided Apollo 11 on its journey to the moon. To communicate with it, the astronauts in the command module would key in two-digit codes using the verb and noun keys on its display and keyboard unit.
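For the curious, here is a toy Python sketch of the verb/noun idea behind that display and keyboard unit: one two-digit code named the action and another named the data it applied to. The codes and wording below are simplified examples for illustration only, not the actual flight software.

# Toy illustration of the Apollo verb/noun entry scheme (simplified):
# a two-digit "verb" picks the action, a two-digit "noun" picks the data.

VERBS = {16: "monitor and keep updating"}
NOUNS = {36: "mission clock time"}

def dsky_entry(verb, noun):
    action = VERBS.get(verb, "unknown verb")
    data = NOUNS.get(noun, "unknown noun")
    return "V{:02d} N{:02d}: {} -> {}".format(verb, noun, action, data)

print(dsky_entry(16, 36))  # V16 N36: monitor and keep updating -> mission clock time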
Some of the originally built Apple I and II computers are exhibited in the museum, along with the Sinclair ZX80 and many other earlier home computers.
People who visit the Computer History Museum will no doubt find the very first computer model they ever used somewhere on display.
The museum’s collection of computers, along with its software and much of its printed material, was mostly donated by private individuals and corporations.
Some of the hardware stored there includes the first Google data server and three Cray-1 supercomputers designed by “the father of supercomputing,” Seymour Cray, who graduated from the University of Minnesota in 1949.
Not only is the story of computing hardware shown in this museum, but also the story of computing software.
As we all know, computers require software in order for them to perform their functions.
Software doesn’t really have the tangible artifacts which can be seen and held like computer hardware, so it is a bit more challenging to exhibit.
The Computer History Museum’s Software Preservation Group (SPG) collects and preserves early computing software.
The software presentations focus on the stories of the people involved in the software’s creation, which is itself historically noteworthy.
One of these collections includes FORTRAN (Formula Translating System) software developed in the mid 1950s by the late John Backus when he worked for IBM.
The SPG area of the museum contains collections of software design documents and videos which catalog many of the historical operating systems and software programs created over the years.
Explaining the roles they played in computing history, early computing pioneers such as Apple co-founder Steve Wozniak have given public speaking engagements at the museum.
Len Shustek stated, “It’s our goal to create a permanent institution to preserve and to celebrate the history of the information revolution.”
The YouTube channel for the Computer History Museum can be seen at http://tinyurl.com/bxqhn5.
I encourage you to visit this channel and view the many historic and information-packed videos provided by the Computer History Museum.
Begin your journey to The Computer History Museum’s website at http://www.computerhistory.org.
The real inventor of the World Wide Web
By Mark Ollig
(From the archives: December 4, 2006)
Tim Berners-Lee is the person who wrote the programming code we use when we “point and click” our way through the hyperlinks of documents, sounds, videos, and information we access via the World Wide Web portion of the Internet.
Berners-Lee called his creation a “global hypertext system.”
Some people think that the Internet and the Web are the same, but this is not true.
The Internet is basically a network made from computers, routers, gateways and cables used to send around little “packets” of information. A packet is a bit (no pun intended) like a postcard with a simple address on it.
If you put the right address on a packet and gave it to any computer which is connected as part of the Internet, each computer would figure out which cable or path to send it down next so that it would get to its destination. That’s what the Internet does. It delivers packets anywhere in the world using various protocols and it can do this very quickly.
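To make the postcard analogy concrete, here is a tiny Python sketch of that hop-by-hop idea. The network, addresses, and routing tables below are made up for illustration; real Internet routing is far more involved.

# A made-up, four-node network: each node's routing table only says which
# neighbor to hand a packet to next for a given destination.
ROUTES = {
    "A": {"D": "B"},
    "B": {"D": "C"},
    "C": {"D": "D"},
}

def deliver(packet, start):
    node = start
    path = [node]
    while node != packet["dest"]:
        node = ROUTES[node][packet["dest"]]  # each hop decides only the next step
        path.append(node)
    return path

packet = {"dest": "D", "payload": "a little postcard of data"}
print(deliver(packet, "A"))  # ['A', 'B', 'C', 'D']

Each computer along the way only needs to know the next step, not the whole route, which is what lets the Internet deliver packets anywhere in the world so quickly.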
Berners-Lee connects the Internet to the Web by saying “The Web exists because of programs which communicate between computers on the ‘Net. The Web connections are hypertext links. The Web could not be without the Net. The Web made the Net useful because people are really interested in information (not to mention knowledge and wisdom!) and don’t really want to have to know about computers and cables.”
In May of 1998, Tim Berners-Lee wrote a short piece on the history of the World Wide Web in which he says “. . .The dream behind the Web is of a common information space in which we communicate by sharing information.”
Back in 1980, Berners-Lee was working with computer software programs to store information with random links. Nine years later, he was working at the European Particle Physics Laboratory near Geneva, Switzerland, also known as “CERN,” which I found out is the French abbreviation for “Conseil Européen pour la Recherche Nucléaire.”
The only French I remember is from a record album comedian Steve Martin did back in the late 1970s, in which he says that when he was in France, the only French he knew was how to order a ham and cheese omelet in a restaurant. Yep, the ol’ “omelette de jambon et de fromage.”
I went to the language translation link on Google at: http://www.google.com/language_tools and found out that the translation to English of “Conseil Européen pour la Recherche Nucléaire” is: The Council European for the Nuclear Research.
Tim Berners-Lee goes on to say that this is where, in 1989, he first proposed that a global hypertext space be created in which any network-accessible information could be referred to by what he called a UDI, or “Universal Document Identifier.” This would become known as the “Uniform Resource Locator,” or “URL,” that we type when going to a particular website.
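As a small aside, the URL that grew out of that idea is simply structured text a browser picks apart. Here is a short Python sketch, using the standard library’s urlparse, showing the main pieces of one of the W3C addresses mentioned later in this column:

from urllib.parse import urlparse

# Split a web address into the pieces a browser actually uses.
parts = urlparse("http://www.w3.org/History/1989/proposal.html")

print(parts.scheme)  # "http" -> which protocol to speak
print(parts.netloc)  # "www.w3.org" -> which server to contact
print(parts.path)    # "/History/1989/proposal.html" -> which document to request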
He finished the actual client-browser and the point and click hypertext editor he called the “WorldWideWeb Program” in 1990. Some of the very early web browsers had names like Erwise, Viola, Cello and Mosaic. Today we use browsers like Netscape, Mozilla Firefox and Internet Explorer.
Berners-Lee said he was under pressure to define the future evolution of the ‘Web, so he decided to form the World Wide Web Consortium or W3C in September 1994 with a base at the Massachusetts Institute of Technology in the USA, and offices in France and in Japan.
The W3C is a neutral, open forum where companies and organizations can discuss and come to agreement upon new computer protocols that will help see the Web develop to its full potential.
It has been a center for education, issue-raising, and design, and the website says its decisions are made by consensus. If you want to find out about the latest advancements being planned for the continued evolution of the World Wide Web, I highly recommend you visit http://www.w3.org, which states its mission is “To lead the World Wide Web to its full potential by developing protocols and guidelines that ensure long-term growth for the Web.” I noted that under the photo of Tim Berners-Lee, it says he is the director of the W3C and also “The Inventor of the World Wide Web,” which, in this humble columnist’s opinion, adds a bit of respectability to this website!
For those of you who would like to see the original 1989 proposal (including a circles-and-arrows diagram) that Tim Berners-Lee first submitted, and in which he coined the term “WorldWideWeb,” link over to http://www.w3.org/History/1989/proposal.html. There you will see what I call probably the Web’s most historical document when it comes to how we are able to navigate the Internet as we have come to use it today.
The computer Berners-Lee used to write the code for the first web browser, and which also became the first web server, was called the “NeXTcube.” You can see and read about it at: http://en.wikipedia.org/wiki/NeXTcube
And if you go to this link: http://www.w3.org/History/1994/WWW/Journals/CACM/screensnap2_24c.gif you will see the “snapshot” of Tim Berners-Lee’s (and the world’s) very first website that he created and released to the High Energy Physics community.
Mr. Watson, come here, play 'Jeopardy!'
February 7, 2011
by Mark Ollig
Most of us recall the quote, “Elementary, my dear Watson,” which is attributed to the Sherlock Holmes character replying to his friend and associate, Dr. Watson.
It is surprising to most people when they learn the Holmes character never actually spoke this line to Dr. Watson. Holmes did say the words, “Elementary” and “My dear Watson,” but never in one complete sentence.
Spoken March 10, 1876, by Alexander Graham Bell to his assistant Thomas A. Watson, the following quote has special meaning for yours truly: “Mr. Watson! Come here! I want to see you!” In 1931, Watson is reported to have said he recalled the words spoken by Bell as, “Mr. Watson, come here, I want you.”
By now you may be wondering what I am leading up to with all this Watson talk.
It’s elementary, my dear readers.
The giant computer maker IBM has come up with a new supercomputer, which recently competed in a practice round against the two top-ranked players of the popular television game show “Jeopardy!”
This new computer is called “Watson” and is named after IBM founder Thomas J. Watson.
Watson spoke in a relaxed manner during the practice round; the sentences it used were fluid, not choppy or “computer-like” at all.
Watson was able to instantly respond (in the form of a question) to the answers as displayed on the “Jeopardy!” board.
The looks of frustration displayed at times on the faces of the human players standing on either side of Watson had me wondering if this computer should instead have been called “HAL,” in reference to the sci-fi movie “2001: A Space Odyssey.” That futuristic computer, the HAL 9000, combatively challenges and almost outsmarts its human counterparts.
Instead of HAL, IBM says Watson is more like the computer on Star Trek.
Watson’s hardware is made up of 10 racks of IBM POWER 750 servers using the Linux operating system inside 10 refrigerator-sized cabinet bays.
Watson has 15 terabytes of Random Access Memory (RAM), 2,880 processor cores, and operates over a massively parallel, high-performance computing platform.
Watson also has the ability to learn.
Open Advancement of Question Answering (OAQA) is an area of computer science that develops software models able to provide precise and informative answers to questions asked in natural language, such as conversational English.
The IBM Research computer science paper titled “Towards the Open Advancement of Question Answering Systems” discussed this software model.
Dr. Bill Murdock, an IBM computer scientist involved with the algorithms – or sets of rules – used in Watson, co-wrote this paper with other researchers in February 2008.
The paper goes into detail about the challenges that needed to be overcome to successfully develop OAQA algorithms for use in the software.
This is not just a typical software search and display program, but an advanced and revolutionary means of having a computer actually “understand” the question being asked, thus providing a more complete and informative answer.
The answers could be acquired from resources found in scattered or semi-structured Internet web pages and blog postings or from more structured and precise data found in online encyclopedias, and educational, governmental, and similar online or private databases.
One section of the paper talks about developing a Challenge Set Profile for testing the software’s Question Answering solution abilities.
One of the five challenge problems was identified as the “Jeopardy!” game show.
This IBM paper states that in order for the computer program to succeed in this challenge problem, the software coding itself would require high degrees of accuracy, speed, and confidence.
Yes, software having “confidence” is a new one for me, too.
The paper describes the sophistication needed to work out the grammatical structure and phrasing of the sentences used in responding to the sometimes wordy “Jeopardy!” clues. The IBM paper acknowledges that this “natural language parsing” between humans and the computer would be a difficult challenge.
Scientists at IBM worked for four years on this highly advanced Question Answering (QA) system for Watson, calling it DeepQA.
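To illustrate the “confidence” idea in plain terms, here is a toy Python sketch (this is not DeepQA itself): generate candidate answers, score each one, and only respond when the top candidate’s confidence clears a threshold. The candidates, scores, and threshold below are invented for illustration.

def best_answer(candidates, threshold=0.7):
    # candidates: list of (answer_text, confidence_score) pairs, scores 0..1
    answer, confidence = max(candidates, key=lambda pair: pair[1])
    if confidence >= threshold:
        return 'What is {}? (confidence {:.2f})'.format(answer, confidence)
    return None  # not confident enough -- better to stay silent than to guess

candidates = [("the Amazon", 0.86), ("the Nile", 0.31), ("the Mississippi", 0.12)]
print(best_answer(candidates))  # What is the Amazon? (confidence 0.86)

The key design choice is that the system weighs how sure it is before it buzzes in, rather than always blurting out its top guess.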
Watson competed against the two top-ranked “Jeopardy!” contestants in a practice round this past January.
During this practice round, Watson was represented physically by a display screen, its avatar flickering and flashing in a manner IBM described as “a global map projection with a halo of ‘thought rays.’” To the left and right of Watson stood “Jeopardy!” champions Ken Jennings and Brad Rutter.
Watson acted very human as it patiently listened to each question and then buzzed in with a verbal response.
For each question, Watson simultaneously processed thousands of its algorithmic formulas via its sophisticated DeepQA program.
Watson performs 80 trillion calculations or operations per second.
During this practice round, Watson was not directly connected to the Internet, but was instead accessing millions of pages of previously scanned content which included books, entire encyclopedias, databases of literature, and other sources.
Watson ended up defeating its human opponents during this practice round.
This reminded me of when IBM’s Deep Blue computer defeated chess grandmaster Garry Kasparov back in 1997.
Watson participates in an official “Jeopardy!” tournament-style competition airing Monday, Feb. 14 through Wednesday, Feb. 16.
Watch Watson’s Jeopardy! practice round at http://tinyurl.com/6bsusvq.
Also, check out an in-depth interview with a member of IBM’s Algorithms Team at http://tinyurl.com/4rrsv3g.