An interview with Christine Freeman, Manager, User Experience Services, Cengage Learning.

We’ve all had frustrating experiences with software and websites that appear to have been designed with little or no knowledge of who the customer is – screens full of gadgets that are aesthetically appealing but so cumbersome that they make it difficult for us to achieve our goals. What makes great online teaching and learning tools great? Christine Freeman and her team know that it is all about building products that help instructors and students achieve their goals – improved engagement and outcomes – more quickly and easily. To do that, her team and colleagues use several research techniques to understand the true needs and preferences of instructors and students, which enables the team to move into the design phase and obtain user feedback on early prototypes.

Christine’s team members have conducted user research and designed user experiences across a wide variety of Cengage Learning products and services, including MindTap, MindApps, OWL, SAM, Keyboarding Pro Deluxe Online, CourseMate, Gale products and services, and many others. Jeanne Heston recently caught up with Christine to learn more.

Jeanne Heston (JH): How long have you been working in the area of user research and design? What was it that inspired you to get into this field in the first place?

Christine Freeman (CF): When I joined the company six years ago, I’d been working as an instructional designer for digital learning for several years, and that’s the role I was originally hired for. Our group worked on outcomes-based digital learning in a little corner of the company. I’ve long been fascinated by the growing body of academic research on multimedia learning – how it has produced and supported best-practice principles for delivering content to learners in the optimal way at the optimal time. We worked to apply those principles in developmental mathematics learning.

Our design group became the basis of Cengage’s initial User Experience (UX) teams. I started calling on my experience with information architecture, doing lots of wireframing. Then I joined the early MindTap team and started doing user research and persona development. Now I also supervise people doing visual design and front-end coding, from whom I learn so much.

JH: Tell me a little bit about your research philosophy.

CF: I have learned that what people do is quite often very different from what they say they do, so it is very important to observe them in the real world, in their environments, doing the types of things that they do every day. The type of information we gather from this ethnographic research is very different from the information that we obtain in focus groups or usability testing. It is extremely valuable, but it takes more time and resources, and requires that we be co-located with participants, so we sometimes need to use other research methods, such as interviews, to ensure that we have sufficient data to make good design decisions.

JH: Which user research methods do you use during the design and development process?

CF: In addition to the ethnographic research we just discussed – including a specific form of this called “contextual inquiry” that also involves specific analytical components – we also conduct user interviews, interactive user sessions, and traditional usability testing.

For ethnographic and interview-based research, we may follow up with affinity sessions, in which we organize the research data, identify patterns in the observations, and gather ideas and insights from the stakeholders who participate. We may also use “lean” UX analysis: we glean the high points more informally, with minimal documentation, focusing on design implications.

JH: Can you tell me more about the interactive user sessions? How do they differ from traditional usability testing?

CF: In an interactive user session, we use a webinar hosting environment, like WebEx, to show the user a prototype of the product we are developing. We pass control of the keyboard and mouse to the user and give the user a goal. For instance, when we were developing some of the MindTap learning paths, we asked instructors to move screen elements around to show the order in which they would like to see the learning activities organized. We then asked follow-up questions to determine why they made particular choices. Unlike in a usability test, in an interactive user session we are not gauging the usability of a prototype. Instead, we are trying to learn about the user’s problems and solutions. We provide an environment for the user to work in, give them a goal, observe their actions, and ask them about what they are doing and why.

JH: Can you provide an example of an interesting or unexpected user insight that you were able to obtain and how you were able to obtain it?

CF: In the early stages of MindTap, I was visiting campuses to meet with students. My goal was to find out how they used highlighters to learn and study. I had set up appointments with individual students and had asked them to bring some of their notes. I was waiting in the school cafe for a student when I noticed another student at a nearby table using an elaborate highlighting strategy to color-code her notes. She was kind enough to talk with me and let me videotape a demonstration of her color-coding system. This material added valuable insights to our research about how the most advanced highlighter users employ multiple colors. This was real ethnographic research – being where users are, observing how they work in their everyday environments.

JH: Are there any user research methods you are just starting to experiment with, or planning to use in the near future?

CF: We are planning to try some online, remote user testing, without a live moderator. This testing won’t replace ethnographic research or other user research activities. It will enable us to conduct tests with many users – very quickly – at times when we would like fast feedback on attributes of a new prototype or web page. Users fill out text-box responses, and we get all the feedback within a day or two of posting the test.

We anticipate that this research technique will help us get user feedback on designs quickly. Then we can do more iterative design, which time constraints have often prevented us from doing. In an iterative design process, we can obtain user feedback after a design has been prepared, refine the design, and perhaps repeat that cycle until we get it right, before sending it to developers. Iterative design allows us to provide the developers with a design that more fully meets the needs of instructors and students.

Online remote testing without a live moderator may work effectively for certain issues, though not when we need to understand the “why” behind user responses. For that, we still need to connect with users through observation and conversation.

What problem do you think user research could help you solve? Have you ever participated in an ethnographic research session or a usability test? If so, what did you think of the experience? We would love to hear from you. Drop us a note in the comments below!