Wednesday, April 26, 2017

I’ve switched back to Windows

©Mark Ollig


After I talked computers with the smiling, youthful-looking salesperson (whom I’ll call Robert for this column), he said to me: “You talk a lot like my dad did when I was growing up. Many years ago, he worked in the computer IT department at his office.”

I smiled back; however, for a split-second, I felt unsure of how to reply to his comment.

Of course, Robert meant it as a compliment; however, it was the “many years ago” part that distracted me.

I then realized my sprinkling of old-school computing jargon into the conversation reminded Robert of what his father, probably a fellow baby boomer, had told him in his youth.

But, I digress.

You may recall I wrote a column about my experience switching from Microsoft Windows to the Apple OS (operating system) after buying a used MacBook laptop five years ago.

Guess what, folks? I’m moving back to Windows.

Years before using the Apple OS, I was a dedicated (and occasionally very frustrated) Microsoft Windows user.

Microsoft successfully tempted me to move back with its Windows 10 OS, which combines features from Windows 7 and 8.

This edition of Windows reportedly won’t freeze up or crash like its previous versions did, which gives me confidence it’s now a stable operating system.

After finishing my computer shopping at this popular electronics store, I brought my items up to the checkout counter, where Robert rang them up and processed my electronic payment.

He then cheerfully placed my new HP notebook computer, USB Bluetooth adapter, wireless mouse, external speakers, and Kaspersky Internet Security authorization code card into plastic bags.

After thanking Robert, I left the store with my items and walked to the parking lot.

Opening my car’s rear door, I carefully positioned the bags on the backseat.

As I got behind the wheel, a worrying thought came over me. “What do I really know about Windows 10?”

I knew it was released in 2015 under the code name Threshold, and that Microsoft had reinstated the popular Start menu on the taskbar; other than that, not too much.

So, I decided to go old-school and get some books on Windows 10.

Not eBooks, or Kindle; I mean real, honest-to-goodness parchment paper books I could sink myself into – just like back in the old days.

Driving to a nearby, well-known bookstore, I was fortunate to park surprisingly close to its main entrance – I took this as a sign.

Confidently, I strolled into the store and spotted where their row of technical bookshelves was located.

If you’ve ever watched someone standing and staring, bewildered, at hundreds of books on a shelf – you know how I looked.

While gazing at the technically pleasing artwork on the book covers, I suddenly heard a voice from behind me, “Can I help you find something?”

“No, I am just zeroing in on what I need here, thanks for asking,” I confidently replied.

The store assistant gave me a wry smile, and headed down to the next row of folks browsing through books.

In what could be described as a case of divine intervention, one book title with five large letters on its cover immediately caught my eye.

“Windows 10 Anniversary Update BIBLE.” Of course, I had to have this 792-page book.

I also purchased “Windows 10 Step by Step” (607 pages), and “Windows 10 Tips and Tricks” (465 pages).

I now have plenty of old-school reading material to keep me busy.

After leaving the bookstore (it was a beautiful, sunshiny day), I sat in my car, opened the sunroof, and found myself reading a few pages from each book.

After 45 minutes, I knew I couldn’t sit in the parking lot all day, so like a kid excitedly waiting to open a Christmas present, I drove home with my new books and unopened notebook computer.

At home, I carefully placed the computer on my office table, plugged it into the surge-protected AC outlet, and turned the power on.

It only took a few minutes to load Windows 10 for the first time, and for me to answer all the initial questions; everything worked fine.

I liked the clean, new look of the Windows 10 desktop user interface.

Next, I connected the notebook to the internet, and was quickly online using Microsoft Edge.

Yours truly also uses the Google Chrome and Mozilla Firefox web browsers.

I downloaded my new Office 365 Personal subscription from the Microsoft Office website.

The product key I needed to enter to unlock Office 365 was a lengthy 25 characters.

I learned my one-user, one-year subscription also included 1 TB of OneDrive cloud data storage, upgrades, updates, and the ability to use Office 365 on my mobile devices.

My program applications are working well with Windows 10; in fact, this column was written using my new Office 365 Word program on my new HP notebook.

The new computer includes the cloud-based digital assistant Cortana, which is similar to the iPhone’s Siri, Google Assistant, and Amazon’s Alexa.

Cortana quickly recognized my voice commands after becoming familiar with my voice as I read through a few practice sentences.

Microsoft Word opened quickly after I verbally asked Cortana to “open Word.”

With an always-on internet connection, I’ve asked Cortana to report current weather conditions, play my favorite music using my Microsoft Groove Music subscription, open websites, and more.

My greatest relief about the Windows 10 operating system: It has yet to crash.


Thanks to everyone following me on Twitter at @bitsandbytes.



Thursday, April 20, 2017

Watson and AI confront the Internet of Things

©Mark Ollig


“You see things; and you say, ‘Why?’ But I dream things that never were; and I say, ‘Why not?’” said the Serpent to Eve.

This quote is from George Bernard Shaw’s 1921 play cycle “Back to Methuselah,” whose sequence of plays takes us through the past, present, and future.

I begin with this quote because it expresses for me the wonderment of technology’s next stage of evolution.

Have you heard about AI (artificial intelligence) probabilistic programs that rewrite their own code in order to improve their efficiency and cognitive perception?

This probabilistic AI programming is based on the Approximate Bayesian Computation method, which has its roots in the work of 18th-century English theologian and mathematician Thomas Bayes.

He is credited as being among the first to use probability as a method for calculation.

Bayes’ conclusions were recorded in “An Essay Towards Solving a Problem in the Doctrine of Chances,” published in 1763, two years after his death.

Probabilistic programming is being used by AI programs to estimate the likelihood, or probability, of an event happening.

It uses probability as a construct to write or rewrite its own algorithmic code to improve the software programs it runs.
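
For readers who like to peek under the hood, here is a minimal sketch of the Approximate Bayesian Computation idea in Python; the coin-flip example, tolerance value, and uniform prior are my own illustrative choices, not code from any particular AI platform.

```python
import random

# Approximate Bayesian Computation (ABC), in miniature:
# guess a parameter, simulate data with it, and keep the guess
# only if the simulation looks "close enough" to what was observed.

observed_heads = 62      # observed data: 62 heads out of 100 coin flips
flips = 100
tolerance = 3            # how close a simulation must be to get accepted

accepted = []
for _ in range(50_000):
    p = random.random()                                        # candidate bias drawn from a uniform prior
    simulated = sum(random.random() < p for _ in range(flips))
    if abs(simulated - observed_heads) <= tolerance:
        accepted.append(p)                                     # keep candidates that reproduce the data

# The accepted samples approximate the probable values of the coin's bias.
print(f"Estimated bias: {sum(accepted) / len(accepted):.3f} "
      f"(from {len(accepted)} accepted samples)")
```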

Yours truly writes code on an older HP keyboard, typing one line at a time; although I sometimes get inspired to create and run an automated script file.

Probabilistic programming will accelerate the machine-learning techniques used by AI system platforms, smart devices, and the unique AI programs controlling the data gleaned from billions of Internet of Things (IoT) devices, which, of course, are connected to the internet.

Some of my readers may recall a column I wrote six years ago about IBM’s cognitive computer called Watson.

Back then, Watson had become famous for demonstrating its computing intelligence on the television game show “Jeopardy!”

IBM is creating a cognitive computer brain using neurosynaptic computing chips.

This computer brain will be able to reprogram itself based upon interactions within its surroundings and past learning experiences.

IBM recently launched its global Watson cognitive computing and Internet of Things technologies headquarters in Munich, Germany.

This building is staffed with 1,000 IBM employees, and houses joint customer-IBM laboratories called “collaboratories,” along with areas named “Client Experience Zones.”

Today, the Watson IoT platform provides cloud-based analytical software for defining, managing, collecting, analyzing, and responding to data from a variety of IoT devices.
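
As a rough illustration of what “collecting and responding to data from IoT devices” can look like, here is a minimal Python sketch of a pretend device posting telemetry to a cloud analytics endpoint; the URL, device name, and payload fields are placeholders of my own invention, not IBM’s documented Watson IoT interface.

```python
import json
import time
import random
import urllib.request

# A pretend IoT "device" posting temperature readings to a cloud platform.
# The endpoint URL and payload fields are placeholder assumptions.
ENDPOINT = "https://iot-platform.example.com/devices/thermostat-01/events"

for _ in range(3):
    reading = {
        "temperature_c": round(random.uniform(18.0, 24.0), 1),
        "timestamp": int(time.time()),
    }
    request = urllib.request.Request(
        ENDPOINT,
        data=json.dumps(reading).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:  # send one telemetry event
        print("Platform responded with status:", response.status)
    time.sleep(2)
```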

IBM also released interesting information about the future of IoT.

In five years, they predict there will be over 20 billion internet-connected IoT devices, creating an economic value of over $14 trillion.

Currently, over 6,000 customers and partners are working with the IBM Watson IoT platform.

Soon, we will be entering a future whereby AI programs and cognitive computing systems will independently control and improve not only their own programming, but that of the IoT devices they connect with.

Regarding computer artificial intelligence, “My hope is when this ground-breaking technology finally comes to fruition, it will be used wisely, and to everyone’s benefit,” yours truly wrote in 2011.

On the other hand, there’s always the possibility things could go south – and we find ourselves living in Isaac Asimov’s robotically-controlled future.

“Neurosynaptic-core, digital silicon-chip, cognitive computers and robots could eventually develop to the point of self-awareness, take over the planet, and, after finding us humans inferior, decide it would be in their best interest to reprogram our brains in order to be of better service to them,” I prognosticated in 2011.

Of course, this unlikely prediction was just a worst-case scenario on my part.

It all depends on one’s perspective.

But remember, we humans are a clever lot, and I’m sure the “overseeing powers that be” are embedding safeguards into these cognitive AI system platforms – just in case a group of them gang up and try to take over the planet.

One suggestion: Have an unalterable, undeletable, “hard-wired” source code rooted in all future AI robotic programs incorporating Asimov’s “The Three Laws of Robotics.”

I also suggest the software coders creating these cognitive computers and self-learning AI programs be required to watch the original “Star Trek” episode “The Ultimate Computer,” and learn what could happen when humans relinquish authority and put a computer in charge of everything.

But I digress with a bit of a wry smile.

“An Essay Towards Solving a Problem in the Doctrine of Chances,” by Thomas Bayes, can be read here: http://bit.ly/2oni5sM.

The Project Gutenberg organization archives 54,500 readable eBooks at no cost; one of them is “Back to Methuselah.” Read it here: http://bit.ly/2omXJjw.

The IBM website for Watson IoT is: https://www.ibm.com/internet-of-things.

If managed wisely, cognitive computing and artificial intelligence platforms have the potential to provide extraordinary benefits for our society and the planet.

Let’s keep generating new ideas on how technology can assist and enrich everyone’s lives by asking the question, “Why not?”


Follow me on Twitter at @bitsandbytes.














(Above image used with permission via paid license)

Tuesday, April 18, 2017

ICANN-58 meets in Denmark


©Mark Ollig


The beautiful seaside city of Copenhagen played host for the Internet Corporation for Assigned Names and Numbers (ICANN) meeting last week.

This was ICANN’s first meeting ever held in Denmark, and it drew more than 2,000 people.

“The Internet opens our eyes and minds to the world around us,” said Danish Minister of Culture Mette Bock.

“Denmark tops the digital economy society index in the European Union, and is in the top 10 of the UN ICT [information and communication technology] Development Index,” she added.

Bock commented on how Denmark is very adaptive when it comes to new technology, and how Danish citizens are “the most advanced in the use of internet in the European Union, and we are leading the way in terms of adapting tablets and smartphones.”

Jean-Jacques Sahel, ICANN’s vice president, gave the official ICANN 58 keynote welcome, beginning by describing Copenhagen as the “beautiful coastal capital of Denmark.”

Sahel talked about the importance of collaboration and participation within the non-profit ICANN organization.

He also mentioned Denmark is the home of LEGO plastic building blocks.

To this, I drew an immediate association; I’ve personally contributed to LEGO’s accounts receivable by purchasing numerous LEGO block sets for my younger relatives’ building and engineering pleasure.

“ICANN is an independent organization, ready to take on the next challenges in shaping the internet’s future,” said Sahel.

Started in 1998, ICANN is a not-for-profit organization made up of people living in countries around the world.

ICANN is now the recognized controlling authority of all Internet Protocol address space allocation, protocol identifier assignments, and the generic and country code Top-Level Domain name management system.

The US government had been overseeing the Internet Assigned Numbers Authority (IANA) functions – or, as many call it, the internet’s address book.

Six months ago, this authority was transferred to ICANN.

Set up as a private-public partnership, ICANN is committed to “preserving the operational stability of the Internet; to promoting competition; to achieving broad representation of global Internet communities; and to developing policy appropriate to its mission through bottom-up, consensus-based processes.”

I checked the governance bylaws on the ICANN website, and noted article 1, section 1.1 states the mission of ICANN is to ensure the stable and secure operation of the Internet’s unique identifier systems.

ICANN coordinates the allocation and assignment of names in the root zone of the Domain Name System (DNS), and coordinates the development and implementation of policies concerning the registration of second-level domain names in generic top-level domains (gTLDs).

Because of the 2012 ICANN gTLDs expansion program, more than 1,000 new generic top-level domains have been added to the internet.

Examples of well-known gTLDs include: .com, .org, .gov, .mil, and .edu.

We can thank ICANN every time we type in a domain name, as the DNS will translate the name into an IP address, which connects us to the website assigned for that address.

The DNS manages the IP address associated with public website names.

So, when we type www.google.com into our web browser, the DNS takes us to IP address 172.217.6.1, which is Google’s Mountain View, CA website IP address.
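
For the technically curious, here is a minimal Python sketch of that same lookup using the standard socket library; the address returned will vary by location and time, since Google publishes many IP addresses.

```python
import socket

# Ask the DNS resolver for the IP address behind a domain name,
# just as a web browser does before it connects to a website.
hostname = "www.google.com"
ip_address = socket.gethostbyname(hostname)

print(f"{hostname} resolves to {ip_address}")
# Example output (varies by location and time): www.google.com resolves to 172.217.6.4
```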

ICANN’s complete (and lengthy) descriptive bylaws, and various section and subsections can be read at https://go.icann.org/2n616xT.

“One World, One Internet” is the statement next to the ICANN symbol on its webpage, so it makes sense for the organization to host its meetings in locations throughout the world.

ICANN hosts three meetings a year in various parts of the world.

The first ICANN meeting, ICANN 1, was held in Singapore in March 1999.

There have been six meetings in the US: ICANN 4, 7, 11, 30, 40, and 51.

The ICANN 59 meeting will be hosted by Johannesburg, South Africa.

Future meeting locations have been planned through October 2020, when ICANN 69 will tentatively be in a country located in Europe.

ICANN currently has 10 offices located throughout the world.

Its US offices are in Los Angeles, CA and Washington, DC.

The English website for ICANN is https://www.icann.org/en.

Twitter hashtag #ICANN58 was used to send out messages and video originating from the ICANN 58 meeting in Copenhagen.

Videos from ICANN are available from the ICANN News YouTube channel, http://bit.ly/2n75AUW.


Follow my daily tweeted messages from @bitsandbytes on Twitter.


Monday, April 17, 2017

The first online 'coffee cam': XCoffee

© by Mark Ollig


This story begins in 1991 with a group of academic researchers working in a University of Cambridge, England, computer science lab called the Trojan Room.

These researchers spent most nights working on computer and network programming, and they, of course, consumed large amounts of coffee.

The one coffee machine was located in the corridor directly outside the Trojan Room.

This machine was shared by about 15 researchers working throughout the multi-story university building complex.

Of course, having a freshly-brewed pot of coffee available at all times was of the utmost importance.

However, a problem soon developed.

Researchers working in the lower-floor offices of the building needed to walk up several flights of stairs to get to the coffee machine.

During their journey up the stairs, empty coffee cups in hand, they did not know whether there was any coffee in the pot.

Understandably, frustration would set in whenever a researcher discovered an empty coffee pot.

One obvious option would have been to install another coffee machine, but we must remember: these were academic researchers living on a very limited budget.

The researchers gathered, pooled their talents, and came up with an ingenious plan: their computers would tell them how much coffee was left in the pot before they trudged up those stairs.

This innovative plan developed into what became known as XCoffee.

Researchers working in the Trojan Room noticed a number of shelving racks containing inventoried computer servers to be used for maintenance testing of the university’s data networks.

They also discovered one unused Acorn Archimedes computer server with the X Window System installed (hence the name XCoffee).

This computer server also contained a gray-scale video-frame grabber circuit card.

A frame grabber is a device which captures still-frame images and saves them digitally (in this case, it would be used to capture images coming from an analog video camera).

The first frame grabbers could only grab and save one still-frame digital image at a time.
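
The 1991 setup relied on that dedicated frame-grabber circuit card; as a rough modern stand-in (my own illustrative sketch, not the researchers’ code), grabbing and saving a single grayscale frame looks something like this in Python with the OpenCV library.

```python
import cv2  # pip install opencv-python

# Modern rough equivalent of a single-frame grab:
# open the camera, capture one still image, convert it to grayscale, save it.
camera = cv2.VideoCapture(0)        # 0 = the first attached camera
grabbed, frame = camera.read()      # grab a single frame

if grabbed:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)   # grayscale, like the original XCoffee images
    cv2.imwrite("coffee_pot.jpg", gray)              # save the still image to disk
else:
    print("No frame captured; is a camera attached?")

camera.release()
```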

The researchers mounted a camera on a retort stand (normally used for holding scientific equipment) and pointed the lens toward the coffee machine, specifically, at the coffee pot itself.

They then ran all the cabling under the floor from the coffee camera location to the Acorn Archimedes computer server equipped with the frame-grabber in the Trojan Room.

One of the researchers, Paul Jardetzky, wrote the server software, which ran on the frame-grabber-equipped Acorn Archimedes computer and obtained live images of the coffee pot.

The video-frame grabber would then capture these live still-frame images of the coffee pot about once every three seconds.

Quentin Stafford-Fraser, a researcher in the Trojan Room who worked on an ATM (Asynchronous Transfer Mode) communication switching network, wrote the code for the “coffee client” software program.

Stafford-Fraser’s client software program operated on the researchers’ computers in the building connected to the university’s internal data network.

This client software program communicated with the Acorn Archimedes computer’s newly-written server software.

The client program displayed the latest image of the coffee pot onto a corner of the display screen on a researcher’s computer – so they were seeing a near real-time image of how much coffee was left in the pot.
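
To give a feel for the pattern, here is a minimal client/server sketch in Python: the server hands out the most recently captured image, and each client polls for a fresh copy every few seconds. The use of HTTP, the port number, and the file names are assumptions of my own for illustration; the original XCoffee ran over the X Window System and the university’s internal network, not HTTP.

```python
# server.py - hands out the latest coffee-pot image (illustrative sketch only).
from http.server import BaseHTTPRequestHandler, HTTPServer

IMAGE_FILE = "coffee_pot.jpg"   # assumed to be refreshed by the frame grabber

class LatestImageHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        with open(IMAGE_FILE, "rb") as f:
            image = f.read()
        self.send_response(200)
        self.send_header("Content-Type", "image/jpeg")
        self.end_headers()
        self.wfile.write(image)   # reply with the newest image

HTTPServer(("", 8080), LatestImageHandler).serve_forever()
```

And the matching client, which each researcher’s computer would run:

```python
# client.py - polls the server every three seconds for the newest image.
import time
import urllib.request

while True:
    with urllib.request.urlopen("http://localhost:8080/") as response:
        with open("latest_coffee_pot.jpg", "wb") as f:
            f.write(response.read())   # a real client would display this image onscreen
    print("Fetched an updated coffee-pot image")
    time.sleep(3)
```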

It took about a day for the programmers to get the XCoffee camera program fully up and running over the university’s internal local area network.

The software application operated over MSNL (Multi-Service Network Layer), a network-layer protocol designed for ATM networks.

The most current coffee pot image was updated over the university’s internal local area network, and would appear inside a small onscreen dialog box displayed on each researcher’s computer screen.

This was acceptable to everyone because, according to Stafford-Fraser, the coffee pot filled rather slowly, and since they were using grayscale photo capture, the images looked fine.

The researchers working upstairs and in other parts of the building could now see on their computer screens how much coffee remained in the pot.

They could now confidently leave their computer station, and walk to the coffee machine, knowing there was coffee in the pot.

By 1993, a new video frame grabber was being used, and frequently updated coffee pot images were being broadcast onto the internet.

XCoffee became very popular; in fact, it went viral over the Internet.

The coffee-cam internet website informed its online viewers, “The lights in the Trojan Room aren’t always switched on, but we try to leave a small lamp pointing at the coffee pot so you can see it at night.”

By 1994, the idea of remotely monitoring a coffee pot in near real-time became so popular, a reporter from a local BBC radio station interviewed the XCoffee university researchers to discuss how they did it.

Here is the audio of the 1994 BBC interview: http://bit.ly/2ooUg4G.

Sadly, the computer server responsible for providing the visual images of this famous coffee pot to the online world was officially turned off Aug. 22, 2001.

The university’s URL link for the Coffee Cam still exists at http://bit.ly/2o6mKP7.

Your investigative columnist located a photo of the famous coffee pot taken from one of the researcher’s computer screens: http://tinyurl.com/4vh8cxk.

Yours truly was also privileged to correspond via Twitter with one of the researchers, Quentin Stafford-Fraser, who still lives in England.

I expressed my admiration to him in regard to his adventure with XCoffee.

Stafford-Fraser’s final message to me was, “Thanks Mark . . . And best wishes from this side of the pond!”

Enjoy a cup of coffee while reading through my daily Twitter messages at @bitsandbytes.

This column was originally published April 4, 2011, and was recently updated by the writer.

(Below are still-frame images of the coffee pot)






Friday, April 7, 2017

Recognizing hand gestures as a smart controller

© by Mark Ollig 


We interact with our smartphone applications by tapping, pressing, and swiping our fingers on a hard-to-see, tiny screen.

This small screen can sometimes become too cramped an area for comfortably interacting with our programs.

Wouldn’t it be nice, instead, to use the large area above and on the surface of a table for entering finger handwriting and waving hand-gesture commands our devices would understand?

These gestures would be “seen” and interpreted by technology inside a small device, and converted into command actions for controlling smart devices and other electronic gadgets.

Recently, I reviewed a new product coming soon to market called Welle (pronounced vell-uh), which in German means “wave.”

“Welle gives you the ability to use unlimited simple gestures to control your favorite devices, appliances, and apps [applications] – so powerful it even tracks your finger movements and recognizes handwriting. With Welle, the entire surface becomes connected to sonar, allowing you to use hand gestures to control all your smart devices,” the company stated in a press release.

Welle is a small, portable, ancillary device which interfaces via a wireless Bluetooth connection with compatible televisions, lighting, fans, heating/air-conditioning, audio speakers, curtains/blinds, smart computing devices, and IoT (Internet of Things) devices.

Welle technology tracks your fingers when you touch a surface. It also recognizes handwriting motions, transforming them into customized commands for whatever device is linked with it.

If your smart coffee maker has Bluetooth, link it with a Welle.

Then, while you’re seated at the kitchen table and want some coffee, simply draw the letter “C” on the table, and the coffee maker will begin brewing it.

I love the smell of freshly-brewed coffee.

Welle’s unique embedded sonar technology and highly sensitive touchless gesture-control algorithms provide superior tracking of hand gestures and handwriting.

I learned this same technology is also used in drones, automobiles, and some military applications.

The electronic components and digital logic used by Welle are inside a small, black and yellow rectangular box measuring nearly 3 inches by 1.38 inches, and weighing only 3.5 ounces.

Welle is small enough to bring with you to hotels, coffee shops, or to the office for guiding your audience through those important PowerPoint presentations.

Welle is used by placing it on any flat surface and establishing a Bluetooth connection to the device you wish to control.

It uses sensing techniques that transmit signal pulses, collect the energy reflected from our fingers, and logically translate the user’s hand movements into the desired action.
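
To illustrate the basic sonar arithmetic (my own simplified sketch, not Welle’s actual signal processing), the distance to a hand can be estimated from how long an ultrasonic pulse takes to echo back.

```python
# Simplified sonar (time-of-flight) distance estimate - an illustrative sketch,
# not Welle's actual signal-processing pipeline.
SPEED_OF_SOUND_M_PER_S = 343.0   # speed of sound in air at about 20 degrees C

def echo_distance_m(round_trip_time_s: float) -> float:
    """Distance to the reflecting hand, given the pulse's round-trip time."""
    return SPEED_OF_SOUND_M_PER_S * round_trip_time_s / 2   # divide by 2: out and back

# Example: an echo returning after 2.9 milliseconds
print(f"Hand is about {echo_distance_m(0.0029):.2f} m away")   # roughly 0.50 m
```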

“Human hands are one of nature’s most perfect creations. They are highly sensitive and can manipulate objects with fine dexterity. The iPhone ushered in the era of touch control, but its under-glass technology diminishes the hand’s tactile feel, limiting the ability to control devices with a high degree of sensitivity,” said Mark Zeng, CEO of Welle.

Advanced ultrasonic signaling technology, hardware design, and software algorithms inside the Welle device transmit signal pulses, then collect and recognize the energy reflected by the user’s hand motions, gestures, and finger swipes within range of the device’s sonar signals.

Maxus Tech, the company which created Welle, employs scientists, physicists, mathematicians, and engineers who develop human-computer interaction power sensing equipment. The company was founded in 2014, in Shenzhen, China.

Software and hardware developers can use the Welle open API (Application Programming Interface) for redefining hand gestures and creating new command actions for controlling devices and apps.
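
Since the Welle API itself was not publicly documented at the time of this writing, the sketch below is purely hypothetical: it imagines, in Python, how a developer might bind a custom gesture to an action. The WelleDevice class and on_gesture method are invented placeholder names, not Welle’s actual interface.

```python
# Purely hypothetical sketch of binding gestures to actions;
# WelleDevice and on_gesture are invented placeholder names.

class WelleDevice:
    """Stand-in for a connection to a gesture-sensing device."""
    def __init__(self):
        self._handlers = {}

    def on_gesture(self, name, action):
        self._handlers[name] = action      # register a callback for a named gesture

    def simulate(self, name):
        handler = self._handlers.get(name)
        if handler:
            handler()                      # pretend the sensor detected this gesture

def brew_coffee():
    print("Telling the Bluetooth coffee maker to start brewing...")

device = WelleDevice()
device.on_gesture("draw_letter_C", brew_coffee)   # drawing a 'C' starts the coffee
device.simulate("draw_letter_C")
```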

Both Android and iOS users are able to use Welle apps.

Welle is also considering adding support for design tools such as Photoshop and CAD, along with interpreting hand gestures that mimic the use of a physical trackpad and mouse.

Public availability of the Welle device is scheduled for October 2017, at a price of $99.

While reading through the comments on the Welle Kickstarter webpage, I found many favorable reviews, and a lot of interest in this product.

Their original Kickstarter fundraising goal was $20,000, which was achieved on the first day.

As of last Thursday, $43,468 has been pledged for the Welle project by 447 supporters.

Learn more about Welle, and view some demonstration videos, by visiting its homepage: http://getwelle.com.

Just think; with a simple hand gesture, you could be following my daily Twitter messages at: @bitsandbytes.

(Below photo used with permission)