We’re on the cusp of robotic assistants appearing in homes and offices; not just silent machines vacuuming our floors (or in my experience with the Roomba, getting stuck under our furniture) or working in industrial applications, but interacting with us for daily tasks and presenting themselves in anthropomorphic ways. This is on my mind a lot, but last week’s MIT Technology Review article “Personal Robots: Artificial Friends with Limited Benefits” kept gnawing at me. I’ve got some questions:
- Why is the first wave of personal robotic assistants so cute and kid-friendly?
- Is it necessary to train potential buyers with entertaining ‘bots before they will use serious applications?
- Do cute robots trivialize the potential of these machines?
- What have we learned from other sources about what adults might want, need, and — most importantly in the long run — actually use?
- What does the product roadmap look like between Roomba and Rosie, and beyond?
In my previous work life as a project and product manager at Internet companies, it was important to consider not only the current product my team was building, but the competitive landscape, latest research, and how we hoped to iterate the product in the future. The product roadmap got more speculative the further forward it stretched, and in Internet time, that could mean it was blurry a mere 12 months ahead, but I had some idea where we planned to go. Combined with research, reporting, and user testing, that roadmap would drive the requirements for the next version.
With a number of companies heading in different development directions simultaneously, I wonder what the roadmap looks like to people on the inside. Does Cynthia Breazeal want JIBO to become the Furby of 2016, just as irrelevant years later? Is she counting on more adult applications to come from third-party developers, or does she have a track in mind that goes beyond the lovechild of WALL-E and Siri? I look at the “Future Life with Pepper” video from Aldebaran Robotics (below, in Japanese but very easy to understand) and I find it unimaginative and silly.
Some of my irritation with how Pepper is shown could be cultural; I like kawaii things, but I don’t want an infantilized assistant with a high voice. That might say “non-threatening and friendly” to others, but it says “annoying and dumbed down” to me. I would love to have a moving robot with hands right now, if it could fetch or carry things for me while I’m steering my wheelchair or gripping crutches. Stir onions on the stove while they caramelize. Let the dog out. Pick up the ball of yarn I dropped that rolled across the room. Don’t play peek-a-boo with me when I’m crying, ffs. How useless!
Does the roadmap for personal robotics have to pass through Candyland? Though I find it frustrating for myself as an early adopter, I can see how it could be a viable path. It’s a non-threatening way to get robots into a family home. Children might engage with a cute bot more frequently and naturally than adults with a more serious one, and I suspect that like a digital assistant or a DVR, robots will have more perceived value when used regularly, while that value might be hard to explain to a non-user. Teaching children to comfortably interact with robots could be important to the roadmap in a Wayward Pines First Generation sort of way: they are the future, and when robotic technology has advanced so there are more home and office uses, they will be the programmers, designers, buyers, and users.
Do we have data that could point to what older users want from personal robots in the near future? I’d suggest looking at tablet/phone apps, gadget purchases, and use of digital assistants now. Mail, chat, videos, photography, weather, maps, social media, music, games, search, stock updates, fitness tracking, and news. Communication with other devices on the same network. Notifications delivered in a personalized, prioritized way. Immediate answers to relatively simple questions. Reminders and a calendar. These are all things that are perfectly suited to a stationary, voice-controlled robot with a display screen. If I were designing a bot of that sort for my personal needs I’d add in: can take dictation and save longer notes, can read a piece of text and answer basic questions about it (“How many cups of flour do I need?” when reading a recipe), can send voice/photo/video messages to other bots of the same/similar type, can act as a receptionist for my mobile phone when I’m home, can interact with my accounts on video sites and the Chromecast/future device attached to my TV (“Play season 2 of Archer on the family room television”), and more.
I think that even at that point in the roadmap, a stationary robot with personality, like JIBO rather than the not-very-clever, screenless Amazon Echo, could be exceedingly useful for remote relationships of various types. My family is spread across the country and my friends are around the world, and just from my own life I can think of many use cases. I can also imagine such a bot as an assistant at work. In a few years, with better communication between devices and programs instead of maintaining silos of information, even this level of robot could be a daily helpmate to many people.
When we start to consider a robot with mobility and limbs, however, we need to think in 3D. The Pepper video fails greatly in that regard. The only mobility shown is Pepper moving toward people, and its hands are used only for games and expressive gestures. I doubt that’s all we want, but the development path between that and a fully mobile bot with useful appendages that could do housework, for example, is unclear. Our homes have different floor types, thresholds, stairs, and obstacles that must be overcome before we even start to consider the fine motor control and grip needed for simple tasks. Still, I can imagine a robot not too far off that could operate on one floor of a home or office and handle small manual jobs as well as providing entertainment. At times, most of us could simply use an extra set of hands to hold, stir, open, carry, or balance something. Is that enough to justify the work necessary to make a mobile robot? Probably not. I can see the first viable generation of mobile home robots being developed and marketed for the elderly or disabled, with uses customized to those populations as well as the functionality of the stationary bots. When might that be? 10-15 years from now?
It seems that the next step after that is currently undefined. The technological gap that remains before we reach the dream of a robot butler or housekeeper, able to do physical work in any setting, is huge. Maybe we need to give some thought to the roadmap and where we really want personal robotics to be in 20-30 years. Are charismatic androids the best robotic supplement we can imagine? Maybe there is a fork in the path, where we separate companion bots from more utilitarian bots. Maybe the development curve of smart home/office technology will intersect the robotic curve at a point where the robot can be the control interface, but not need so many skills built in.
Along those lines, I’ve embedded a video below about the characters in the AMC series HUMANS. It’s interesting if you’re watching the series, but even if you’re not, it introduces the androids (“synthetics” or “synths”) as they’re imagined in that parallel present and the interactions that humans have with them. I think that full-service androids like synths are often seen as the endpoint of the personal robotic roadmap. Should they be?