
Thursday, May 29, 2014

2014 White House Science Fair

by Mark Ollig

Science projects created by students from grade school through high school were the focus of this year’s White House Science Fair.
It was the fourth White House Science Fair since its inception in 2010.

Last Tuesday, 105 students from 30 states, representing more than 40 organizations, participated in this year’s event.

The science projects demonstrated made use of STEM (Science, Technology, Engineering, and Mathematics) skills.

Students proudly showcased their science fair projects.

Some of the students were able to present their science projects to the President of the United States.

Bill Nye, “The Science Guy,” also attended this year’s event, interviewing some of the students out on the White House lawn.

Nye’s first interview was with the three student members of “Team Rocket Power,” from Maryland.

Rebecca Chapin-Ridgely, 17, along with Jasmyn Logan and Nia’mani Robinson, both 15, worked after school and on weekends to construct and test their rocket, which was painted a bright purple.

As participants in the Team America Rocketry Challenge, their objective was to launch a rocket carrying a payload of two eggs to a minimum height of 825 feet, with a flight lasting 48 to 50 seconds.

The egg payload needed to return to the ground – unbroken.

When asked if they were successful in returning the eggs to earth without any cracks, one of the students smiled and responded, “Most of the time, we were.”

Nye grinned and asked if they were able to reach 825 feet. One student replied, “We hit it a couple of times; sometimes it [the rocket] was too high.”

Two mentors worked with the students to ensure their rocket experiments progressed safely.

Yes indeed, folks, this project was one of true “rocket science.”

Next to be interviewed was Parker Owen, 20, from Alabama, whose project was a prosthetic leg made entirely from a single, recycled bicycle.

Owen chose this project as a cost-effective solution for making prosthetics more accessible in developing countries.

He noted bicycles are a major mode of transportation in many countries; when they break down or wear out, they are commonly discarded.

In addition to the recycled bicycle, three bolts, three nuts, and a few zip ties were also used.

The prosthetic leg has adjustable muscle fibers and tendons made from the bicycle’s tire tube, which provide the resistance and force needed during strenuous activities.

The synthetic muscles adjust using air pressure.

The prosthetic leg is adjustable to fit any size of person, and is designed to accommodate a person’s growth, and muscle gain and loss, over a lifetime.

Owen mentioned he has patented the process used to make what he calls “the cycle-leg.”

Owen’s design was created as a solution to significantly improve the quality of life of people in the developing world.

Crystal Brockington and Aaron Barron, both 18, researched a way to make an economical, yet sustainable, semiconductor material to increase the efficiency of solar cells.

This new material combination better harnesses the power of the sun.

The research included finding an alternative nanoparticle material that could be used with “quantum dots,” a type of nanocrystal.

They wanted to make a solar cell material that worked without cadmium, a soft, carcinogenic metal harmful to the environment.

The two students discovered that by altering the heat synthesis of the nanoparticles, in conjunction with other materials such as titanium dioxide, they could increase a solar panel’s efficiency.

Aaron was explaining how they were able to obtain a 48 percent efficiency level when Bill Nye interrupted him: “Forty-eight percent?”

“I have a watch that’s solar powered that’s barely 10 percent! I’m not kidding,” Nye exclaimed.

“Wow! You should get some kind of award,” Nye told the smiling students.

While the students were being interviewed, the President’s family dog, Bo, walked by and was greeted enthusiastically.

Back inside the White House, President Barack Obama stopped by the science fair exhibits on display, and talked one-on-one with the excited students about their projects.

Many of us also participated in our school’s science fairs.

Yours truly, while in seventh grade, created an exhibit displaying the planets of this solar system revolving around the sun.

I cleverly named my science fair project: “The Revolution of the Planets.”

And yes, this science project won a first-place blue ribbon, which I was very proud to receive.

Twitter messages about this year’s White House Science Fair can be read using the hashtag #WHScienceFair.

The webpage for the 2014 White House Science Fair is:

Bill Nye interviewing students on the lawn of the White House. 
Source: White House live-streaming screen capture.

Wednesday, May 21, 2014

Search engines: Information concerns

by Mark Ollig     

What are the most dominant gatherers and holders of information in this digital age?

It’s the search engines of the Internet.

Many of us, when wanting to learn more about a politician, private citizen, company, or product, will first turn to a search engine like Google.

We simply type in any name, and see the search results.

Amazing, isn’t it?

People’s opinions are sometimes formed solely on what the search results disclose – whether they are true or not.

Well, what happens when information about someone is incorrect?

What recourse does one have when information about them on the Internet is inaccurate, illegally published, defamatory, or intentionally falsified by someone on a social network?

Suppose your Social Security number, an image of your check signature, or other personal information ended up stored on a non-secured website, and was retrieved by someone using a search engine.

Yours truly is not trying to scare anyone, and the odds are this scenario won’t happen to you; but what if it did?

After typing my name inside quotation marks (which instructs the search engine to return only results containing the exact phrase “Mark Ollig”), Google presented me with 5,120 search results.
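For the curious, you can see how those quotation marks travel inside a search request. This small Python sketch (the query string is just an example) builds a Google search URL for an exact phrase:

```python
from urllib.parse import urlencode

# Quotation marks around a name ask the search engine to match the exact
# phrase, rather than the two words appearing anywhere on a page.
query = '"Mark Ollig"'
url = "https://www.google.com/search?" + urlencode({"q": query})
print(url)  # the quotes survive, percent-encoded as %22
```

The quotation marks are percent-encoded as `%22` but still reach the search engine, which treats everything between them as one phrase.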

I suppose this high number could be interpreted as somewhat flattering; or, it might suggest I spend too much time on the Internet.

The search results also informed me there is more than one Mark Ollig in the world.

Google has had 16 years to search and store Web links containing millions of “bits and bytes” worth of data regarding this humble columnist.

While going through a number of my search result links, I began to notice many were several years old.

Of course, for years, yours truly has been writing columns, blogs, and posting on Twitter, Facebook, and other social media sites scattered throughout the Internet, so there’s a lot of content out there.

A digital trail of my Internet information had been data-mined and gleaned by Google’s search engine bots.

These bots, or Web robots, are also known as spiders and Web crawlers.

The bots are software programs that navigate and collect information from publicly accessible websites.

Search engines use this information to index the content found on the Web.

Bots are constantly gathering information and storing it in data servers.

This stored content is made available for users on Google’s search engine.

Google’s search engine doesn’t create the information; it sends out those information-gathering bots to retrieve content from the Internet.
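The crawl-then-index idea can be illustrated with a toy sketch. This is not Google’s actual code; the three “sites” below are stand-ins for real web pages, each with some text and links to other pages:

```python
import re

# A toy crawler and indexer: follow links from page to page, then build
# an inverted index mapping each word to the pages that contain it.
PAGES = {  # hypothetical web: page -> (text, outgoing links)
    "site-a": ("mark ollig writes a weekly column", ["site-b"]),
    "site-b": ("the column covers internet history", ["site-a", "site-c"]),
    "site-c": ("search engines index the web", []),
}

def crawl(start):
    """Visit every page reachable from `start`, like a web crawler."""
    seen, frontier = set(), [start]
    while frontier:
        page = frontier.pop()
        if page in seen:
            continue
        seen.add(page)
        _, links = PAGES[page]
        frontier.extend(links)
    return seen

def build_index(pages):
    """Map each word to the set of pages containing it."""
    index = {}
    for page in pages:
        text, _ = PAGES[page]
        for word in re.findall(r"[a-z]+", text):
            index.setdefault(word, set()).add(page)
    return index

index = build_index(crawl("site-a"))
print(sorted(index["column"]))  # pages mentioning "column"
```

A search for “column” returns the two pages containing that word; answering a query becomes a fast lookup in the index, rather than a fresh crawl of the Web.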

I suppose one could think of Google as a gigantic library, containing an ever-growing collection of a variety of books.

How long should Google, or any other search engine for that matter, be allowed to store content about someone?

Over the years, many accusations of privacy invasion have been directed at Google via court lawsuits.

In a 2008 lawsuit, one person from Pennsylvania sued Google for $5 billion in a “crimes against humanity” case.

The case was eventually dismissed by the US Court of Appeals for the Third Circuit.

Then, there was a 2005 copyright infringement lawsuit filed against Google by the Authors Guild.

This was during the Google Library Project. Google had begun scanning text from books, making it available online to the public.

This lawsuit accused Google of “massive copyright infringement,” stating they were producing digital copies of copyrighted works.

In 2008, Google reached a reported $125 million settlement. In 2013, the lawsuit was dismissed.

The Google Books Library Project website is:

Two weeks ago, a directive by the Court of Justice of the European Union sparked much attention regarding personal information being stored and displayed by a search engine; namely, Google Spain, which is owned by Google Inc.

The court’s directive ruled European individuals have the right to ask Google to delete personal data, and have it “be forgotten,” when that personal data becomes inaccurate or outdated.

The Court of Justice of the European Union directive points out Google is “processing” the user information obtained by saying: “The Court further holds that the operator of the search engine is the ‘controller’ in respect of that processing.”

You can read their press release here:

One wonders whether this directive could compel Google to remove information when requested to do so by US citizens.

As I understand it, current federal law states websites and search engines can’t be held liable for indexing and providing hyperlinks to third-party or other website content.

A person wanting to have specific information “scrubbed” from a search engine would need to obtain a court-ordered declaration stating the content was unlawfully made public on a website, and is to be removed.

The court’s ruling would then be sent to the owners of the website, and/or responsible parties operating the search engine.

If you’re seeing inaccuracies about you or your company on Google, take action by going to the “Removing Content From Google” webpage on Google at:

The discussion continues . . . what are your thoughts?

Thursday, May 15, 2014

The Internet's future

by Mark Ollig

It began as an experimental network, and evolved into “the network of networks.”

Like the invention of the telephone and the original telecommunications network, the Internet and the devices connected to it enhance how we communicate, manage our personal lives, and conduct our business.

The Internet, for most of Generation Y, or Millennials, who grew up using it, has become part of daily life.

Today, we are witnessing the architecture of the Internet being woven throughout the very fabric of our society.

It’s analogous to the clothing we put on each morning; we have become used to wearing our “digital fabric” to connect with the online social media and applications we routinely use each day.

The Internet’s “Web” is spun around this planet, and high above it. Ultimately, yours truly believes it will reach out to Mars, and beyond.

During the last few years, many of us have been transferring our personal and business data files onto online storage mediums.

The data files we previously kept on our own computers and storage devices now reside in the cloud: those mysterious storage servers located within the Internet’s infrastructure.

How comfortable are we with having our online social activities, much of our communications, personal information, and work data, being managed and stored within the Internet?

We are told by websites that our information is secure and protected. “We have your data encrypted and backed up,” they tell us.

Nevertheless, our concern is justified.

How many Internet website data breaches have we heard about lately?

These data breaches often reveal customer account information being compromised.

Internet sites can be vulnerable to having their security defeated, and their information acquired.

I recall the old adage, “There’s no system foolproof enough.”

The only way to truly keep data safe is to store it on a device or storage medium not connected to the Internet, which is what we were doing back in the 1980s and ‘90s.

Of course, it’s a different world today than it was then.

Many of us, myself included, have become overly trusting when uploading photos, videos, text, and other information files onto what we believe are secured social media, data storage, e-mail servers, and other Internet sites.

Think of our online financial transactions; we routinely trust a website using the “https” seen in the web address bar, as being securely encrypted. It’s a visual assurance the information we enter won’t be seen by others.
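That trust in “https” rests on TLS certificate verification. As a minimal sketch, Python’s standard ssl module shows the checks a default client connection enforces before any encrypted data flows:

```python
import ssl

# A default client-side TLS context, like the one behind a browser's
# "https": the server's certificate must validate against trusted root
# authorities, and its name must match the site we asked for.
context = ssl.create_default_context()
print(context.verify_mode == ssl.CERT_REQUIRED)  # True: certificate required
print(context.check_hostname)                    # True: hostname must match
# context.wrap_socket(sock, server_hostname="example.com") would then
# perform the handshake and encrypt everything sent afterward.
```

Only after both checks pass does the browser show the reassuring “https”; if either fails, the connection is refused before any form data leaves your computer.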

Fear not, dear readers, for we can take comfort in knowing there are highly-skilled Information Technology (IT) programmers managing the cloud portions of the Internet’s infrastructure.

They are safeguarding the gateways to where our information is stored, by using firewalls, algorithms, and data encryption.

It should be noted that, for the most part, our data is well-protected using today’s technology.

Alright, yours truly is finished with his ranting, and is now stepping off his soapbox – for the moment.

The National Science Foundation (NSF) is an independent federal agency created by Congress in 1950, to “promote the progress of science.”

Recently, the NSF made available $15 million in awards to three institutions for specific Internet project research.

These institutions will begin to research and submit proposals for new designs to enhance the technical architectural infrastructure of the future Internet.

These new Internet project designs include planning, developing, testing, and deploying future Internet architectures.

The objective of this year’s award is to test new designs via a pilot program, with the cooperation of academic institutions, non-profit organizations, industrial partners, and cities.

The NSF’s website said these projects will “explore novel network architectures and networking concepts.”

The three projects will be led by researchers at Carnegie Mellon University in Pittsburgh, the University of California in Los Angeles, and Rutgers University in New Jersey.

The NSF also listed several other universities which will be partnering with each of the three institutions leading the research efforts.

I remain optimistic about the future of the Internet.

As the Internet grows and develops into “The Internet of Things,” it needs to become not only more efficient, but increasingly user-friendly, adaptable, reliable, manageable, and, of course, secure.

I expect to see a future Internet using an enhanced, “intelligent” transmission control protocol, and much less physical internal hardware.

The Internet architecture of the future will need to recognize, communicate with, and process information from the billions (if not trillions) of smart devices, electronic sensors, and other yet-to-be-invented gadgets.

As our society and the world grow and evolve, so must the Internet.

People researching and designing new Internet-related technologies, and governments committed to maintaining fair and equal laws and policies regarding its use, keep the Internet’s progressive evolution continuing.

The NSF website is located at

The Future Internet Architecture project website is

A diagram of what a future Internet architecture could look like can be seen on my Photobucket page

Thursday, May 8, 2014

Rooms immersed in SurroundWeb

by Mark Ollig

Why surf the Web using just a smart device, tablet, or computer screen, when you can bring it into an entire room?

SurroundWeb is described by Microsoft as being a “3D Browser,” allowing two-dimensional web page information to be projected and displayed across, and on, multiple surfaces (such as tables, counters, cabinets, and walls) found inside a room.

We can think of it as a greatly enhanced, interactive, Web presentation medium.

The SurroundWeb UI (User Interface) interacts with the objects, surfaces, and people within a room.

A person uses natural hand gestures and their voice when interacting with SurroundWeb.

Kinect Fusion, Microsoft’s 3D object scanning and model creation tool, is one of the programs used for these Web/user/room/object interactions.

The projected images, video, and text content are “conditioned” (via SurroundWeb) to use flat surfaces in a room which are visually suitable for presenting content from the Web.

The Web content is beamed from multiple overhead-mounted projectors, which overlay the content onto physical surfaces.
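The idea of “conditioning” content to a scanned room can be sketched in a few lines. This is a hypothetical illustration, not Microsoft’s API: the surface list and the `place` helper are invented for the example, standing in for what a depth-camera room scan might report:

```python
# Hypothetical surfaces from a room scan: width/height in meters, plus
# whether the surface is flat enough to host projected web content.
detected_surfaces = [
    {"name": "left wall", "w": 1.8, "h": 1.2, "flat": True},
    {"name": "table",     "w": 0.9, "h": 0.6, "flat": True},
    {"name": "sofa",      "w": 1.6, "h": 0.8, "flat": False},
    {"name": "cabinet",   "w": 0.5, "h": 0.7, "flat": True},
]

def place(element_w, element_h, surfaces):
    """Return the largest flat surface that fits the element, or None."""
    usable = [s for s in surfaces
              if s["flat"] and s["w"] >= element_w and s["h"] >= element_h]
    return max(usable, key=lambda s: s["w"] * s["h"], default=None)

# A 0.6 m x 0.5 m chat panel lands on the biggest flat surface that fits.
chat_panel = place(0.6, 0.5, detected_surfaces)
print(chat_panel["name"])
```

The sofa is rejected for not being flat, and the cabinet for being too small, so the panel is projected onto the left wall, the largest usable surface.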

In the Microsoft Research video yours truly watched, I observed a living room and a kitchen using SurroundWeb.

“SurroundWeb is a way to bring immersive room experiences to everyone’s home, without compromising privacy,” said David Molnar, a researcher with Microsoft Research.

He explained how the physical surfaces in a home can become a setting for interaction with the Web.

In one example, Molnar pointed to a living room’s wall, where its left, middle, and right segments (panels) were divided into virtual screens presenting multiple web pages.

These web pages were dedicated to car racing.

A large display monitor sitting on a table showed live video of the actual car race.

The left wall panel had three separate picture-in-a-picture screens showing other video content he was watching.

The middle portion of the wall displayed Wikipedia car racing information obtained from the Web.

The right wall panel screen displayed the live, scrolling, chat-content taking place from his social network about the car race.

Molnar noted SurroundWeb wasn’t limited to screen projectors.

For instance, SurroundWeb can interface with his Microsoft Surface tablet and smartphone to display visual and textual content.

A table became another display screen for information originating from a webpage, shown via a projector’s beam.

SurroundWeb will provide multiple screens for displaying relevant information to a user, from the resources it obtains from the Web.

Each area or frame section of the projected display screens shown in the video was a logical element deployed using current web technologies.

Eyal Ofek, a senior researcher with Microsoft Research, commented how in addition to the screen projections being displayed on the wall or table, SurroundWeb also had the ability to respond to events occurring in the room.

For example, he placed a can of soda pop on a table. The can was scanned, and text information about it was seen on the surface of the table, along with dietary suggestions from the Web.

Another example I viewed was in the kitchen, showing SurroundWeb’s interaction while a recipe was being used from a website.

SurroundWeb monitored the progress of the food cooking in a pot on the stove, and displayed the sequential preparation steps in order. It did this by projecting information onto the kitchen counter next to the stove, easily viewable by the person preparing the food.

To the left of the stove’s hood vent, SurroundWeb projected a display panel, showing a live-streaming video program, onto the face of a wooden kitchen cabinet door.

Under the video stream’s panel, five separate video selection buttons were also projected.

The video selection controls were not physical, and looked almost holographic.

The video display and controls could have been projected onto any flat surface in the kitchen.

I took a screen capture from this segment of the video and uploaded it to my Photobucket page; you can see it here:

Microsoft Research ended its video presentation by reinforcing how the web server out on the Internet retains no individual information about the user during a SurroundWeb session.

The user’s personal information, according to both researchers, is not sent back to the website.

No information about the user’s room, surroundings, or the events which took place is recorded.

The Microsoft researchers stressed the assurance of the user’s privacy while using SurroundWeb.

SurroundWeb has the potential to utilize the resources of any webpage, stream them into the real world, and mediate their interaction with the physical objects and people inside a “Web room.”

Folks will benefit by being able to more fully utilize the resources available on the Web.

Microsoft Research’s detailed, 16-page publication, “SurroundWeb: Least Privilege for Immersive Web Rooms,” which Molnar and Ofek contributed to, can be read here:

So, when will we be able to use SurroundWeb in our own Web room?

The prototype is still in the research and development stage, as testing continues using an experimental Internet Explorer Web Browser, with specially embedded software application controls.

Stay tuned.

Screen capture I took from the video