By Mark Ackerman

The SocialWorlds research group is exploring new forms of family memory that are enabled by the Internet of Things. Our goal is to understand what the user experience might be when everyday things have their own memories.

Mark Ackerman

Mark Ackerman is the George Herbert Mead Collegiate Professor of Human-Computer Interaction in the School of Information and the faculty member for the SocialWorlds project. Contact him at ackerm@umich.edu.

Have you noticed that as we become an increasingly digitized society, the possibilities for family memories are changing? On the one hand, we are being inundated with digital photographs and videos. Seemingly everyone captures video at important (and unimportant) events, and parents can easily take pictures at soccer games, birthdays, and recitals. While memories used to be scarce (I have exactly one photograph of my great-grandfather), my grandchildren will have thousands of me.

The problem is that we can't find the memories we already have, and we still capture only certain events. New computational objects, now going under the rubric of the Internet of Things, will afford new kinds of memories. For example:

  • My children are now older, and I can hardly remember what they were like as toddlers. We have videos of family gatherings and vacation trips. But I can't really recall what they sounded like at the dinner table after a day at school, squabbling in front of the television, or playing with friends. Imagine collecting memories when your children are small, captured automatically in the dining room or other semi-public places. We're trying to do this. OneDay is a prototype application that plays randomly selected snippets of audio or video captured over the course of a year. A decorative globe contains those memories; it can be taken off a shelf and played.
  • My grandparents were immigrants from Eastern Europe. They grew up in a small village, came to a U.S. city as young adults, and raised a family. They saw a wide swath of the 20th century, from electricity and airplanes through World Wars and atomic bombs. I can picture them in their living room, but what I don't have is their stories. Often, when grandparents or parents are ready to tell stories, the grandchildren aren't ready. And when the grandchildren are ready, the stories are gone. It was the same for me: I was too young when they died to have asked. We're developing another prototype application, called StoryBall, that allows children to hear their family's stories. You shake the ball, like a Magic 8-Ball, and out comes a story for the grandchildren (a rough sketch of this behavior appears after this list).
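
OneDay and StoryBall share the same core behavior: on a trigger (the globe being lifted, the ball being shaken), pick one recording at random from the family's archive and play it. Neither prototype is described here at the implementation level, so the minimal Python sketch below is only an illustration; the Clip type, the file layout, and the on_trigger entry point are hypothetical stand-ins rather than the actual OneDay or StoryBall code.

    import random
    from dataclasses import dataclass
    from pathlib import Path

    @dataclass
    class Clip:
        path: Path        # captured audio/video snippet on disk
        recorded_on: str  # assumed convention: files named by date, e.g. 2015-03-14.mp4

    def load_clips(archive_dir: Path) -> list[Clip]:
        """Gather the snippets captured over the year from an archive folder."""
        return [Clip(p, p.stem) for p in sorted(archive_dir.glob("*.mp4"))]

    def on_trigger(archive_dir: Path) -> None:
        """Called when the globe is lifted or the ball is shaken."""
        clips = load_clips(archive_dir)
        if not clips:
            return
        clip = random.choice(clips)  # a randomly selected snippet, as OneDay plays
        print(f"Playing {clip.path} (recorded {clip.recorded_on})")
        # Real playback would hand the file to a media player on the device.

The interesting design questions sit above this simple loop: what gets captured in the first place, and how a year of dinner-table audio is trimmed into snippets worth replaying.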

We are also interested in how the everyday objects of the future will have their own memories: what that will mean, and what capabilities and issues it will bring. One prototype we have constructed is a shelf that can track the StoryBalls placed on it. The shelf thus has a rudimentary memory of its own.
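
How the shelf senses the balls is not described here, so the sketch below simply assumes each StoryBall carries a machine-readable tag (RFID, for example) that a reader in the shelf can poll; read_tags is a hypothetical stub for that reader, and the event log is one plausible form the shelf's rudimentary memory could take.

    import time

    def read_tags() -> set[str]:
        """Stub for the shelf's tag reader; returns IDs of the balls currently on the shelf."""
        return set()  # a real driver would query the reader hardware here

    def watch_shelf(poll_seconds: float = 1.0) -> None:
        """Remember which StoryBalls are on the shelf and when they are placed or removed."""
        present: set[str] = set()
        history: list[tuple[float, str, str]] = []  # (timestamp, ball_id, "placed" or "removed")
        while True:
            seen = read_tags()
            for ball in seen - present:
                history.append((time.time(), ball, "placed"))
            for ball in present - seen:
                history.append((time.time(), ball, "removed"))
            present = seen
            time.sleep(poll_seconds)

Even a small log like this would let the shelf notice, for instance, which ball gets picked up most often, the kind of memory an everyday object could accumulate on its own.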

Our SocialWorlds research group does both social-analytic studies and systems building in an iterative cycle. We are currently building two systems. One is the Memory Toolkit, which supports building the applications above as well as others. The other is the Memory Curator, which helps decide which memories are the very best. The Memory Curator never throws anything away; it just decides what to highlight, using a combination of automatic analysis, crowd-sourcing, and input from family members.
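
How the Curator combines those three signals is not spelled out here; one plausible approach is a weighted blend of per-memory scores, sketched below. The Memory fields, the weights, and the highlight function are illustrative assumptions, not the Curator's actual model.

    from dataclasses import dataclass

    @dataclass
    class Memory:
        clip_id: str
        auto_score: float    # from automatic analysis of the clip (assumed 0-1)
        crowd_score: float   # from crowd-sourced ratings (assumed 0-1)
        family_score: float  # from family members' ratings (assumed 0-1)

    # Illustrative weights only; family input is given the most say here.
    WEIGHTS = {"auto": 0.2, "crowd": 0.3, "family": 0.5}

    def highlight(memories: list[Memory], top_n: int = 10) -> list[Memory]:
        """Rank memories by a weighted blend of the three signals.
        Nothing is ever deleted; the Curator only chooses what to surface."""
        def score(m: Memory) -> float:
            return (WEIGHTS["auto"] * m.auto_score
                    + WEIGHTS["crowd"] * m.crowd_score
                    + WEIGHTS["family"] * m.family_score)
        return sorted(memories, key=score, reverse=True)[:top_n]

Keeping everything and only ranking what to show leaves the choice reversible: a snippet that seems unremarkable today can still be surfaced decades later.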

To create useful designs, we also study how family members pass along their memories, perhaps as heirlooms with stories or perhaps just as stories. If you’ve had family memories passed to you in a productive way, we’d like to interview you.

We also plan to put our prototype applications into the field to study them over the next six months.

Jasmine Jones and David Merritt are PhD students, and Xinda Zeng is an MSI student, currently working on this project. Jasmine is working on the Memory Toolkit and conducting studies of how people pass along family memories. David is designing and constructing the Memory Curator. Xinda is responsible for the prototype hardware and software for our applications. Ying-yu Chen and Xiaomu Zhou also worked on the project previously.