July 2, 2012
by Mark Ollig
Are you thinking your humble columnist is playing a cat-and-mouse game with you?
Well, the answer to my question is: 16,000 computer processing cores (spread across 1,000 machines) were used to independently identify the image of a cat.
Google’s secretive X laboratory (reportedly located at Google’s California headquarters, known as the Googleplex) has created an artificial brain using 16,000 processing cores and a human-inspired artificial neural network containing more than one billion connections.
So, what does one do with this new artificial brain?
Why, you connect it to the Internet, of course.
The first online venue the artificial brain visited was Google’s very own YouTube.
The artificial brain began absorbing still images taken from a never-ending stream of all those cute kitten videos, which some of us (alright, most of us) love to watch.
Purr-fect.
During testing, Google’s team also fed the computer’s brain random still frames selected from more than 10 million YouTube videos.
As hoped, the artificial brain was able to “learn” when it began to focus on the cat images.
The artificial brain learned, on its own, what a cat looked like.
“We never told it during the training, ‘this is a cat.’ It basically invented the concept of a cat,” said Dr. Jeff Dean, who led the Google team on this project.
Well, isn’t this the cat’s meow?
Researchers on Google’s team reported this artificial brain’s recognition system “obtains 15.8 percent accuracy in recognizing 20,000 object categories from ImageNet.” That may sound modest, but consider: random guessing among 20,000 categories would be right only about 0.005 percent of the time.
ImageNet is an enormous picture database; it currently holds almost 14.2 million images and is located at http://www.image-net.org.
Dr. Andrew Y. Ng, another Google team leader, is also director of the Stanford Artificial Intelligence Lab.
He explained that instead of having research teams “trying to find out how to find edges” (in regard to image recognition), he came up with this concept: “. . . instead, throw a ton of data at the algorithm and you let the data speak and have the software automatically learn from the data.”
The algorithm used is a type of sparse coding, which can be applied to unlabeled, natural images. Because it is a learning algorithm, it is what gave the network’s artificial neurons the ability to learn features directly from the data.
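To give a rough feel for how sparse coding works, here is a minimal, hypothetical sketch written in Python with NumPy. It is not Google’s actual code; the patch sizes, dictionary size, and update rule are all invented for illustration. The program learns a small dictionary of features from unlabeled image patches by keeping only a few strong feature responses per patch:

```python
# A minimal sketch of sparse coding on unlabeled image patches.
# Illustrative only; Google's actual system was a far larger deep network.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for unlabeled data: 500 random 8x8 grayscale patches, flattened.
patches = rng.standard_normal((500, 64))
patches -= patches.mean(axis=1, keepdims=True)  # remove patch brightness

n_features = 32  # size of the feature dictionary to learn (arbitrary here)
dictionary = rng.standard_normal((n_features, 64))
dictionary /= np.linalg.norm(dictionary, axis=1, keepdims=True)

def sparse_codes(x, D, k=4):
    """Greedy sparse encoding: keep only the k strongest feature responses."""
    responses = x @ D.T
    codes = np.zeros_like(responses)
    top = np.argsort(-np.abs(responses), axis=1)[:, :k]
    rows = np.arange(x.shape[0])[:, None]
    codes[rows, top] = responses[rows, top]
    return codes

# Alternate between encoding the patches and nudging the dictionary so each
# feature better reconstructs the patches that activated it.
for step in range(50):
    codes = sparse_codes(patches, dictionary)
    residual = patches - codes @ dictionary    # reconstruction error
    dictionary += 0.01 * (codes.T @ residual)  # small gradient step
    dictionary /= np.linalg.norm(dictionary, axis=1, keepdims=True)

# No labels were used at any point; the rows of `dictionary` are the features.
print("mean reconstruction error:", np.abs(residual).mean())
```

Notice that no label ever enters the loop; the features emerge purely from the statistics of the images, which is the heart of the unsupervised approach.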
“Contrary to what appears to be a widely-held intuition, our experimental results reveal that it is possible to train a face detector without having to label images as containing a face or not,” states the abstract of the paper, “Building High-level Features Using Large Scale Unsupervised Learning,” which Dr. Ng co-authored. The paper was presented during last week’s 2012 International Conference on Machine Learning (ICML) in Edinburgh, Scotland.
In this paper, the cat-face and human-body detectors are explained in thorough detail, complete with some complicated formulas.
Researchers set up two experiments: one for classifying human bodies against random backgrounds, and one for classifying cat faces against random, distracting images.
Using data sets containing both positive and negative images (with and without the cat or human subject), the results confirm the artificial neural network learns not only the concept of faces, but also the concepts of cat faces and human bodies.
These high-level detectors also surpassed standard baseline methods (in terms of recognition rates), achieving 74.8 percent on cat faces and 76.7 percent on human bodies.
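To make “recognition rate” concrete, here is a hedged sketch of how a single learned detector neuron could be scored on a test set, again in Python. The activation values are random stand-ins, not the paper’s data; the idea is simply to try every threshold on the neuron’s output and report the best classification accuracy:

```python
# A hypothetical scoring of one learned "detector" neuron; the activations
# below are made up and do not come from the paper's actual experiments.
import numpy as np

rng = np.random.default_rng(1)

# Pretend activations of a single neuron on held-out test images: higher on
# images that contain a cat face, lower on random distractor images.
pos = rng.normal(1.0, 0.5, 200)  # activations on cat-face images
neg = rng.normal(0.0, 0.5, 200)  # activations on distractor images

activations = np.concatenate([pos, neg])
labels = np.concatenate([np.ones(200), np.zeros(200)])

# Try every activation value as a threshold and keep the best accuracy,
# in the same spirit as reporting a neuron's best recognition rate.
best = max(np.mean((activations >= t) == labels) for t in activations)
print(f"best recognition rate: {best:.1%}")
```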
“Our work shows that it is possible to train neurons to be selective for high-level concepts using entirely unlabeled data. In our experiments, we obtained neurons that function as detectors for faces, human bodies, and cat faces by training on random frames of YouTube videos,” stated the concluding remarks in the ICML paper.
“It is worth noting that our network is still tiny, compared to the human visual cortex, which is a million times larger in terms of the number of neurons and synapses,” said the Google researchers.
It appears Google (or anyone else) won’t be creating computer processors capable of fully emulating human-brain visual processing anytime soon.
For now, then, our human brain still claims visual bragging rights over a 16,000-core computer processing network; however, the machines are slowly catching up.
To read the detailed, eight-page ICML submission paper, go to http://tinyurl.com/6mx8qam.
This is the cat face the computerized neural network "sees" in its artificial brain.