Technology – Ideas in Motion

On this page, I post thoughts about technology, context awareness and predictive analytics. You can find out about my career in technology or my writing here as well. (Today, I am a proud Intel employee, and my employer is not accountable or liable for my opinions.)


Recent Blog Posts on these topics: 

Ethics and AI at GlueCon

Posted by on Mar 6, 2018 in Intel, tech, technology, theology | 0 comments

I’m excited to be giving a new talk on ethics, AI and algorithm design at GlueCon this year (May 16-17, 2018). I can’t spell out all the details before you see the talk at the conference — so get ye to GlueCon to find out more and hear a number of great technical talks.

TITLE: Ethics and AI: Thinking about the Implications of Algorithmic Design within Semi-Autonomous Systems

SESSION DESCRIPTION: As we design increasingly autonomous systems, the role of ethical decision-making in real-time system outcomes can’t be ignored. Ned Hayes will use Intel’s existing Computer Vision and Media SDKs to demonstrate the possible implications of autonomous action in vehicle and digital surveillance scenarios. He’ll outline some of the significant open questions in ethics and AI, and venture some thoughts on how to design better algorithms that create ethical outcomes that don’t suck.

VIDEO OF PRESENTATION SLIDES: Slides for the presentation can be found at this post: Final overview of presentation and slides

SPEAKER BIO: Ned Hayes was educated in the business of technology at Xerox PARC, Vulcan Labs, Stanford University and in the startup trenches. He has also studied ethics at the graduate level at the UC Berkeley Graduate Theological Union, Seattle University and Luther Seminary. At Intel, Ned leads a joint solutions team that focuses on autonomous vehicle algorithms and digital surveillance. He is also a member of Intel’s internal ethics working group.


New Patent: Cross-Geo Calendar

Posted by on Feb 24, 2018 in biometrics and context, software, tech, technology, Updates | 0 comments

Good ideas are quick and easy to create. It gets complicated, however, if you want to keep your idea as your own, and if you want to make money from it. If you wish to prevent other people from using your good idea without your permission, it’s wise to protect it by formalizing it as “intellectual property” (I.P.). Books, movies, songs and software code can all be protected.

One great way of keeping your idea protected is to keep it secret. If you never reveal what’s inside your secret box, it’s pretty hard for people to steal it! Examples of this kind of I.P. protection include the exact mix of the Kentucky Fried Chicken batter, and the ingredients for Coca-Cola. A trade secret is proprietary — that’s kind of the point.

However, many people protect their ideas by describing them precisely and submitting them to the Patent Office. The clerks of the Patent Office check whether anyone else has had the exact same idea in the past (known as “prior art”). If your idea is truly original, the office issues a patent for your original good idea, which reserves to you the right to use that idea in any manner you see fit. Fun fact: Einstein worked as a patent office clerk, evaluating patent submissions.

In the United States, the Patent Office has existed almost from the beginning of the country’s existence. The very first patent ever issued was in 1790 — a Patent for Making Potash. By now, over 6 million patents have been issued, for everything from the air conditioner to the pencil eraser to the light bulb to components of the automobile and the modern computer. Today, many patents are issued for design innovations (think: iPhone form factor, stylus design, etc.) and, more controversially, for innovations in software, such as Amazon’s one-click purchase mechanism.

These software patents are controversial because something ephemeral like easily-updated software is quite different from a physical change to a substance, such as the filament in a light bulb or the ingredients of a car battery. In the modern era, though, much of our experience of the world is mediated by software, and so people protect those ideas via the U.S. Patent system. Because of the wealth of ideas, and the complexity of properly evaluating them through the patent office, patents do take a long time to finalize.

Here’s an example. Nearly ten years ago, in 2008, I was working with a great team at Vulcan Labs on designing a completely new operating system for mobile phones (we started in 2007, before the iPhone or Android existed). When we were thinking about the calendar, we were working hard to maximize the tiny screen available on that generation of phones. I came up with a quick little diagram showing how a calendar might reflect two different time zones, or two different locations for a meeting. On the right side would be your meetings at work, while the left side might show your commitments on the home front. Or, if you were operating in different time zones, the right might show meetings in Paris, while the left would show meetings in Seattle. Other more technical team members contributed to...
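The two-column, two-zone calendar idea described above can be sketched in a few lines of Python. To be clear, this is only an illustrative sketch of the display concept — the function name, meeting title, and zones below are mine, not anything from the actual patent:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

def dual_view(title, when_utc, left_tz, right_tz):
    # Render a single meeting in two time zones, side by side,
    # as the two-column calendar concept would show it.
    left = when_utc.astimezone(ZoneInfo(left_tz))
    right = when_utc.astimezone(ZoneInfo(right_tz))
    return f"{left:%H:%M} {left_tz} | {title} | {right:%H:%M} {right_tz}"

# One meeting, stored once in UTC, shown in Seattle time and Paris time.
meeting = datetime(2018, 2, 24, 17, 0, tzinfo=ZoneInfo("UTC"))
print(dual_view("Design review", meeting, "America/Los_Angeles", "Europe/Paris"))
# 09:00 America/Los_Angeles | Design review | 18:00 Europe/Paris
```

Storing the event once in UTC and rendering it per-column is what makes the two sides stay in sync automatically.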


Explainer: Autonomous and Semi-autonomous Vehicles

Posted by on Nov 28, 2017 in Intel, software, tech, technology, Updates | 0 comments

You’ve heard of autonomous vehicles — cars that can drive themselves. But what you may not know is that there are actually multiple levels of possible autonomy, and we’re living with some of those early levels now in newer vehicles that include advanced driver assistance systems (ADAS). These systems help with safety and comfortable driving. The potential systems grow in complexity and autonomy from there. The Society of Automotive Engineers defines six levels of automotive automation, explained here:

Level 0: No Automation — Zero autonomy; the driver performs all the driving, but the vehicle can aid with blind spot detection, forward collision warnings, and lane departure warnings.

Level 1: Driver Assistance — The vehicle may have some driving assist features, but the driver is still in charge. Assist features available in today’s vehicles include adaptive cruise control, automatic emergency braking, and lane keeping.

Level 2: Partial Automation — The driver still must drive and monitor the environment at all times (no texting!) but may let the vehicle’s combined automated functions control acceleration, braking, and steering. Automated functions available today include self-parking and stop-and-go traffic driving.

Level 3: Conditional Automation — The vehicle can itself perform all aspects of the driving task under some circumstances, but the human driver must be ready to take back control at any time, with notice. In all other circumstances, the human performs the driving.

Level 4: High Automation — This is a self-driving vehicle! But it still has a driver’s seat and all the regular controls. Though the vehicle can drive and “see” all on its own, circumstances such as geographic area, road conditions, or local laws might require the person in the driver’s seat to take over.

Level 5: Full Automation — The vehicle is capable of performing all driving functions under all environmental conditions and can operate without humans inside. The human occupants are just passengers and need never be involved in driving. A steering wheel is optional in this vehicle.

Sources: Society of Automotive Engineers (SAE); National Highway Traffic Safety Administration (NHTSA)
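The taxonomy above fits naturally into a small lookup table. Here is a minimal sketch in Python — the one-line summaries and the helper function are my own paraphrase of the SAE definitions, not official wording:

```python
# SAE driving-automation levels, paraphrasing the summaries in the post above.
SAE_LEVELS = {
    0: "No Automation: driver does everything; vehicle may only warn",
    1: "Driver Assistance: single assist features, driver in charge",
    2: "Partial Automation: combined speed/steering control, driver monitors",
    3: "Conditional Automation: system drives sometimes, driver on standby",
    4: "High Automation: self-driving within limits (geography, weather, law)",
    5: "Full Automation: no human needed; steering wheel optional",
}

def requires_human_monitoring(level):
    # At Levels 0-2 the human must monitor the environment at all times;
    # from Level 3 up, the system monitors while it is engaged.
    return level <= 2

print(requires_human_monitoring(2))  # True
print(requires_human_monitoring(4))  # False
```

The "no texting!" rule in the Level 2 description is exactly the boundary the helper encodes: monitoring stops being the human's continuous job only at Level 3 and above.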


The 11 most important lessons I’ve learned at Microsoft

Posted by on Sep 12, 2017 in microsoft, tech, technology, Updates | 0 comments

(Alex Hinrichs, a long-time Microsoft release manager, just announced his retirement. Here are some of the lessons he’s learned, posted on LinkedIn.) I heartily agree with all of these!

1. Engineering is all about people. Big software projects are built by giant human systems, and giant human systems run on relationships. Good relationships require trust. Trust is built via daily face-to-face rituals. When there is trust, then and only then do you have an environment where the best ideas win.

2. Define and agree on the PROBLEM... and then work on the SOLUTION. So often, people will argue and argue about how to solve something, and 95% of the time they can’t agree on the solution because they don’t agree on the problem they are trying to solve. Once you step back and truly define the problem, frequently the solution is obvious.

3. Public accountability is how you drive results. People do not want to let down their boss or peers, so get people to publicly state a commitment. If they don’t deliver, they are judged as unreliable or inconsistent – worse than being wrong.

4. If no one knows who is doing something, then YOU do it. Grow your scope by aggressively taking ownership. Don’t wait for your manager to give it to you – just go take it. When doing this, you risk stepping on someone’s toes, but don’t let this stop you. They’ll yelp and let you know if you over-stepped.

5. Whoever writes it down first wins. Bring clarity to chaos, and then write it down. It’s difficult, time consuming, requires lots of face-to-face time, and often needs re-working, but it’s the way teams make progress with confidence. When YOU are the one to write it down, then YOUR version of what to do is what gets done, and everyone else is an editor – not the author.

6. Idea -> Proposal -> Plan. When talking about what to do, make it clear to everyone what stage of thinking you are in, or else they may mistakenly execute on what they think is a “plan” when it’s really only an “idea”. Idea: we should paint the living room. Proposal: let’s paint it blue this summer. Plan: on June 20th, Acme painting company is painting the living room Nobility Blue. Oh, and every plan must have actions, owners and dates, or else it’s not a plan.

7. If you bring a problem to your boss, you must have a recommendation. When presenting solutions to your boss, give them a menu and tell them your recommendation. “Hey boss, choose solution a, b, c, or d... I recommend a.”

8. Right Decision, Wrong Decision, No Decision: No Decision is the worst. Make a decision and start executing, rather than leaving the team in limbo. If the decision is wrong, then you and the team will learn quickly and course correct.

9. “No a**holes” hiring rule. No matter how smart or talented, they destroy your team. The smarter and more talented, the more destruction. Your team members waste tremendous amounts of time mollifying them, or worse, working around them... and losing faith in management for not fixing the hiring problem.

10. Grow your superstar powers and be known for being excellent at something specific. Being well rounded is overrated. Get awesome at something and people will seek you out as that-thing-you’re-great-at person.

11. Finish – there’s no such thing as a “starter”. If you don’t know how to finish, then you don’t know how to start. And the only way...


Balancing Life in Writing and Tech

Posted by on Jun 16, 2017 in lifehack, tech, technology, writing | 0 comments

I recently had the opportunity to talk at some length with an engaged set of students at Olympia High School. (I talk regularly to school groups, and if you’d like to book me to talk to your school group, you can contact my booking agent here.) We discussed both my high tech career and my writing, and found ourselves discussing technology, the future, and how one balances career aspirations with creative endeavors. Here are some of the resources I shared with them after my talk, as “footnotes” to our conversation.

1) First, I spoke about removing distractions from my creative work and focusing. Here is relevant information about phones and other technology as intentionally addictive and distracting:
Addiction to Phones
How Technology Addicts You — from a Google Design Ethicist
Digital Distraction

2) Second, I spoke about Ira Glass (host of “This American Life”) and his emphasis on the experience of a younger, striving artist:
Ira Glass on Failure as an Artist
Here’s a great animated video with the same quote:

3) Third, here is a link to that study on “luck” that I mentioned:
Is Luck Real?
More details on this study here:

4) Fourth, I talked a bit about writing about autism. Here’s Temple Grandin’s TED Talk about the autistic experience:
Temple Grandin TED talk

5) Fifth, here are a number of articles about how liberal arts degrees make you more successful in the new economy:
Liberal Arts Degree is Tech’s hottest ticket (factual data)
Fortune 500 CEOs with Liberal Arts Degrees (factual data)
Why Tech Needs Liberal Arts Degrees (factual data)
The Future of Work (an opinion piece)

6) Sixth, here’s a series of articles that discuss the coming wave of job losses due to high tech innovation, and AI in particular:
AI and Unemployment
Robots and Work
Jobs Disappearing
AI and Jobs

7) Seventh, my current book-in-progress covers AR/VR futuristic scenarios. Some of the references I’ve used include the following:
Discussion of Augmented Reality and Virtual Reality
Another perspective from Facebook —

8) Eighth, I mentioned my work in high tech. Here are some relevant videos and highlights that might be of interest:
Google I/O – Ned Hayes on the show floor doing a Drone Demo
Intel Developer Forum, on stage with EVP Doug Fisher
GigaOM Mobilize
Intel TV Interview
Early Mobile Phone interface Demo (it’s very cheesy, but please note that this was before the iPhone and Android went big) — I led the UI design and demo team for Paul Allen.

Oh, and here’s some information about that iPhone innovation I mentioned, that Paul Allen’s team invented:

The following information about Silicon Valley is interesting and pertinent.

And here’s a quick Seattle Times article that covered my commute to Silicon Valley.

And finally, I mentioned several additional books during the course of our conversation. Here are some of the relevant books:
AMUSING OURSELVES TO DEATH, by Neil Postman
NEUROTRIBES, by Steve Silberman
Temple...


Intel’s Next Generation Drone Platform with the Intel® Context Sensing SDK

Posted by on Jun 18, 2016 in biometrics and context, geekdad, Intel, tech, technology | 0 comments

The Intel® Context Sensing SDK surfaces the power of the Intel Integrated Sensor Solution and advanced sensor capabilities to power mobile, laptop and wearable apps. What if the Context Sensing SDK could be used to understand a drone’s environment, and even control a drone? That’s the vision we set out to explore in this demo at Intel® Day Zero at Google I/O in 2016. Our demo at the Intel Google I/O Day Zero Pre-Party demonstrated the integration of Intel’s Context Sensing SDK with Intel’s next-generation drone platform for the collection and analysis of telemetry and environment-sensing data. The demo highlighted how data collected by individual sensors on individual drones can be aggregated and processed for different use cases, including authentication and control, environment understanding and proactive maintenance, as well as telemetry for thousands of devices stored and analyzed in the data center.

The demo includes the following:
- A Context Demo Dashboard that demonstrates collection of sensor data across many drones and devices
- Authentication into the “launch” of a drone via multi-device authentication, sensor state recognition and soft sensor inputs (calendar, time of day, etc.)
- A Drone Dashboard that demonstrates collection of sensor data across many drones and devices
- Drone behavior controlled by pre-set rules engines running both in the cloud and on the drones themselves
- Machine learning algorithms, both on drone devices and in the cloud, that monitor thousands of drones and demonstrate how drones can “learn” from their environment and from the inputs given to them via sensors

The sensor SDK used for drones can provide similar benefits on your smaller device and laptop 2:1. All the algorithms we use on drone sensors are cross-compatible with Intel’s Context Sensing SDK for wearables, phones and laptops / 2:1 devices. Get more information about the Context Sensing SDK here >>


Intel in Shenzhen — Context Sensing SDK

Posted by on May 1, 2016 in Intel, tech, technology | 0 comments

Our team has delivered a useful set of context-aware tools that can be seen in several games and in both Android and Windows apps on the show floor at IDF Shenzhen. You can get the “Intel Context Sensing SDK” toolset here: And here’s some more information about the Intel Context Sensing SDK.

The Context Sensing SDK surfaces the power of Intel’s sensor hub and advanced sensor capabilities to power mobile and Web apps. It does this through standard RESTful APIs, dynamic algorithms and state machines, married to Intel’s next-generation Intel Integrated Sensor Solution for Android and for Windows, with all your data aggregated to a world-class managed cloud service.

The Context Sensing SDK is a library, available for both Android and Windows, that helps you easily incorporate services and context-aware capabilities into your mobile and desktop applications. The SDK is flexible, offering several ways to use the services, either independently or in combination. The SDK includes Context APIs, which are useful for creating context-aware applications by taking advantage of many built-in context type providers. What is more, the SDK includes a Rules Engine that enables you to create rules based on context and trigger actions once the conditions are met. You can download and use the Intel Context Sensing SDK here >>
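To make the Rules Engine idea concrete, here is a generic condition/action sketch in Python. To be clear: this is NOT the actual Context Sensing SDK API — every class, field, and rule name below is hypothetical — it just illustrates the pattern of rules that fire actions once context conditions are met:

```python
# Illustrative sketch of a context rules engine. This is NOT the real
# Intel Context Sensing SDK API; all names here are hypothetical.

class Rule:
    def __init__(self, name, condition, action):
        self.name = name            # human-readable rule name
        self.condition = condition  # callable: context dict -> bool
        self.action = action        # callable: context dict -> action string

def evaluate(rules, context):
    # Fire every rule whose condition holds for the current context snapshot.
    return [(r.name, r.action(context)) for r in rules if r.condition(context)]

rules = [
    Rule("meeting-silence",
         lambda ctx: ctx["in_meeting"],
         lambda ctx: "set phone to silent"),
    Rule("evening-commute",
         lambda ctx: ctx["activity"] == "walking" and ctx["hour"] >= 17,
         lambda ctx: "suggest transit route home"),
]

context = {"in_meeting": False, "activity": "walking", "hour": 18}
print(evaluate(rules, context))  # [('evening-commute', 'suggest transit route home')]
```

The same shape applies whether the context comes from hardware sensors (activity, location) or "soft" sensors (calendar, time of day), as described above.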


God and the Robots

Posted by on Oct 21, 2015 in posthuman, technology, theology | 0 comments

I am writing a non-fiction book called God and the Robots. Here’s a first look at my early book proposal.

——————————————–

In the summer of 2015, two stories were on the front page of the New York Times. The stories did not appear to be related. The first was an announcement that a group of technical luminaries – including Elon Musk and Stephen Hawking – had signed a statement calling for governments to outlaw the building of autonomous killing weapons, or so-called “Terminator” machines.[1] The other was a story about a well-meaning robot called “hitchBOT” being destroyed by vandals while hitchhiking across the United States.[2]

The first story was about how intelligent machines should treat us. The second story was about how we should treat intelligent machines. Both stories turn on an unanswered question: what is our ethical relationship to robots? How should we treat each other?

The proposed nonfiction book God and the Robots provides an answer to this important question, from an author with intimate knowledge of current and future machine intelligence, as well as a deep understanding of theological ethics and America’s long engagement with Judeo-Christian ethics. Intel futurist Ned Hayes is an expert in artificial intelligence; he also happens to be a published theologian who has worked on ethical questions for several decades. He thus has a uniquely informed perspective on ethical interactions with machines. With strong endorsements from leading thinkers across the robotics spectrum, God and the Robots is an important and timely update on a topic that has a broad and actively engaged readership, but whose leading New York Times bestselling books are all five to ten years old. God and the Robots fills this market need with an original and deeply informed new book on robotics, ethics, and AI.

Why Theology & Robots

In 2015, nearly 80% of Americans claim some affinity with Judeo-Christian conceptions of God; a strong majority of U.S. adults say they believe in God, heaven and the soul.[3] What will happen to those baseline beliefs when self-aware robots begin to assert themselves? In fact, what is a robot in terms of theological understanding? Does a robot believe in God or have a soul?

Today, these questions may not seem to matter. But very soon, when your 90-year-old grandmother is being cared for by an always-on and seemingly highly compassionate near-sentient robot, people will begin to ask these questions every day. If your grandmother begins to believe that the robot prays for her, and she cares for the robot as a “person,” then these questions will have immediate resonance. And what if the robot does actually pray for her? What happens then? What will your family think of that robot, and of that grandmother?

These scenarios may come to life within the next five to ten years. Readers of this generation will need to engage with robots on a daily basis, and they will need answers to critical questions of machine en-soulment, human differentiation and ethical behavior towards robots. This book sees the coming AI sea-change as a way to extend America’s Judeo-Christian ethic to embrace robotic beings.[4]

Finally, one of the few nonfiction categories still growing dramatically is the “Religious”[5] book category; another category still above water is “Technology.”[6] God and the Robots...


Inventing the Future: Everything Old is New Again (iPhone 6S edition)

Posted by on Sep 14, 2015 in tech, technology | 0 comments

I’ve been really excited to see new innovations in interaction with phone, mobile device and wearable interfaces lately. Some of these innovations are doubly exciting... because I helped invent them, seven years ago, and these new ways of interacting with data and with devices are only now coming to the mass market.

For example, it was fascinating to hear Walt Mossberg trumpet the praises of the iPhone 6S this week at Apple’s launch event. Here’s what Mossberg said:

“Anyone who thought there was no more fundamental innovation to be wrung out of the smartphone is just wrong. The 10-finger multi-touch interface made mainstream by the iPhone 8 years ago has now taken a leap forward with Apple’s 3D Touch. This lets you view content in apps without opening them, quickly perform common actions, and generally manage your smartphone more smoothly by simply pressing a bit harder than usual on the screen. In brief use so far, I found this both highly useful and delightful — exactly the reactions Apple loves best. I expect to use it many times a day.”

Exactly right. And my stellar team at Vulcan Labs (under the Kiha/ARO product name) invented most of these user experience innovations... six or even seven years ago. You see, innovations like this don’t emerge one time or in one place. They percolate around the edges — and in the labs — of various teams of collaborators for many years before, finally, someone gets the budget, the right partners, and the clout to bring them to market in a fully productized and completed form. Here are some examples:

1) Flipping through multiple “tabs” in Apple iOS Safari
2) “Force Touch” / 3D Touch to expose contextually relevant menus
3) Smart contextual information based on in-line text recognition

Let’s go through the details on these “new” innovations, point by point.

1) Flipping through multiple “tabs” in Apple iOS Safari. (You might note that I went to Whitworth for my undergraduate degree... guess who got to write the copy for the Flipper?) Right now, they are using this only on browser pages in the on-device Safari browser. I wouldn’t be at all surprised if they eventually got around to our implementation model, where every recent app on the device can be browsed in this flip-book fashion. Here’s the Apple version, side by side with our original version, as available in press-ready presentations since 2009 or thereabouts.

2) “Force Touch” / 3D Touch to expose contextually relevant menus. Imagine that you could touch LONGER in a certain place on the screen, or have the device recognize that you needed information relevant to that place on your screen or in your text. Well, we did that. In 2009. Happy to see it in broad production on an Apple iPhone 6S in 2015.

3) Smart contextual information based on in-line text recognition. Imagine that your device might know where on your screen or where in your text you were, and provide contextually relevant information as a blossoming “menu” available at a touch. Yes, we did that, in 2009. Happy to see it on the iPhone 6S in 2015.

I’m very happy to have been part of extremely collaborative and innovative teams at Vulcan Labs and Xerox PARC who invented many ideas similar to those...


Intel Developer Forum Showcase

Posted by on Aug 21, 2015 in biometrics and context, Intel, tech, technology | 0 comments

Intel created an awesome week at IDF 2015, Aug 18-20. We had a fantastic session with the new Intel® Context Sensing SDK for Windows, newly released this week. Here’s a great short summary video of all the fun at IDF 2015! Coverage includes announcements from this year’s Intel Developer Forum: Intel® RealSense™ technology extends into new platforms and devices; the ‘America’s Greatest Makers’ reality competition is announced, premiering in 2016; 3D XPoint™ comes to storage and memory solutions with Intel® Optane™ Technology; and much more. Visit for more.

