Inventing the Future: Everything Old is New Again (iPhone 6S edition)

I’ve been really excited to see new innovations in interaction with phones, mobile devices, and wearable interfaces lately. Some of these innovations are doubly exciting… because I helped invent them, seven years ago, and these new ways of interacting with data and with devices are only now coming to the mass market. For example, it was fascinating to hear Walt Mossberg sing the praises of the iPhone 6S this week at Apple’s launch event. Here’s what Mossberg said: “Anyone who thought there was no more fundamental innovation to be wrung out of the smartphone is just wrong. The 10-finger multi-touch interface made mainstream by the iPhone eight years ago has now taken a leap forward with Apple’s 3D Touch. This lets you view content in apps...

Read More

In Memory: Aaron Reynolds

I was at Charles Wright school this week. Those friends who worked with me at Kiha/Aro will remember Aaron Reynolds, our first architect (famous for this incident in Windows history). I was saddened, and also very glad, to see his legacy live on in the technology suite at his alma mater (Charles Wright), alongside Paul Allen, Dwight Krossa, Jon Lazarus, Mike Perkowitz, Kevin Eustice, Peter Schwab, and Phil Rogan.

Read More

Vulcan Labs – Presenting ARO

At Kiha Software, a Vulcan-funded startup, I’ve been leading product management alongside a brilliant engineering and design team. Together we created “Aro,” a new semantic application that surfaced entities within communications and provided a quick, easy user interface for taking rapid action on them. We shipped applications built on this patented user interface for iPhone and Android, and I contributed to several patents covering the application’s design and development. Here are some recent videos featuring my team’s stellar work:
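The core interaction is easy to picture: scan a message for entities such as phone numbers, email addresses, and links, then attach quick actions to each one. Here is a minimal, hypothetical Python sketch of that idea; the regex patterns, action lists, and function names are my own stand-ins for illustration, not Aro’s actual (patented) implementation, which used far richer semantic models.

```python
import re

# Hypothetical patterns standing in for a real semantic extraction layer.
PATTERNS = {
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "url":   re.compile(r"\bhttps?://\S+"),
}

# Each entity type maps to the quick actions a UI might surface next to it.
QUICK_ACTIONS = {
    "phone": ["call", "text", "add to contacts"],
    "email": ["compose", "add to contacts"],
    "url":   ["open", "share"],
}

def surface_entities(message: str):
    """Return (entity_type, matched_text, actions) for each entity found."""
    results = []
    for kind, pattern in PATTERNS.items():
        for match in pattern.finditer(message):
            results.append((kind, match.group(), QUICK_ACTIONS[kind]))
    return results

if __name__ == "__main__":
    msg = "Ping me at 206-555-0147 or jane@example.com re https://example.org/demo"
    for kind, text, actions in surface_entities(msg):
        print(f"{kind}: {text} -> {actions}")
```

The point of the sketch is the shape of the pipeline (extract, classify, offer actions), not the extraction itself; a production system would replace the regexes with trained models and context-aware ranking of the actions.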

Read More

Multi-Modal Computing – What it could mean

The multi-modal world, as envisioned back in the day at Adobe. In 1998, research groups were already looking at multi-modality, and by 2000, folks involved in standards creation were thinking about multi-modal inputs. Today, Google has a group devoted to multi-modal inputs, although the wiki is a little bare, and there is new work over at IBM on this topic. CTG, from whom I borrowed the graphic accompanying this post, specializes in multi-modal input computing.

Facial expression and gesture inputs. Now that we have finger inputs, what about facial expressions? My wife said I was “smug” the other day. How did she read that expression? Could a computer read such a subtle expression? Or even just sadness vs. smiling (small children find it difficult to tell...
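Computers can already take a rough pass at the easy end of this question. As a sketch of what “facial expression as input” might look like, here is a small Python example built on OpenCV and the open-source `fer` package (a pre-trained convolutional emotion classifier); the webcam loop is my own framing, and `fer`’s coarse labels are nowhere near subtle enough to catch “smug.”

```python
import cv2
from fer import FER  # pip install fer opencv-python

# FER ships a pre-trained CNN; mtcnn=True swaps in a sturdier face detector.
detector = FER(mtcnn=True)

# Grab a single frame from the default webcam.
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
cap.release()

if not ok:
    print("Could not read from the webcam.")
else:
    # top_emotion returns the most likely label and its score, drawn from a
    # coarse set: angry, disgust, fear, happy, sad, surprise, neutral.
    emotion, score = detector.top_emotion(frame)
    if emotion is None:
        print("No face detected in the frame.")
    else:
        print(f"Detected expression: {emotion} ({score:.0%} confidence)")
```

Seven basic emotions is a long way from the full vocabulary of human expression, which is exactly the gap the post is pointing at: reading “smug” requires context and theory of mind, not just pixels.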

Read More
