An Interview with Cheryl Costantini, VP, Brand Management, Cengage Learning.

When business productivity applications were first launched in the early 1980s for the brand-new personal computers that had been introduced by companies like IBM, Compaq, DEC, and NEC, software was delivered in a box that contained a packet of floppy disks and two manuals – a reference manual and a user manual. Most users learned how to use the products on their own, with both manuals open on their desks, next to their computers. Training classes were available, but many professionals simply did not have time to attend them. The problem with these manuals was that they focused on the features of the product – what it could do – instead of on the tasks and projects that typical users needed to accomplish with it (such as creating forecasts, balance sheets, and income statements).

In 1987, a small start-up, Course Technology, opened its doors in Cambridge, Massachusetts, with the goal of creating books designed to help software users learn the products on their own or in conjunction with classroom-based learning. The company took a tutorial-style approach, taking readers through a series of goal-based exercises that would help them learn the products. At the same time, it negotiated special pricing with software publishers in order to bundle student versions of the software with the books. This model remained largely unchanged, industry-wide, until 1997, when the company introduced its Skills Assessment Manager (SAM). Cheryl Costantini joined the company at about the same time. Jeanne Heston recently caught up with her to learn more about the inspiration for the product.

Jeanne Heston (JH): How long have you been working with content designed for computing courses?

Cheryl Costantini (CC): I joined Course Technology in September 1997, so it’s been over 15 years. Course Technology’s mission was “Helping People Teach with Technology,” which at the time was somewhat uncharted territory.

JH: What was it that first inspired the team to develop SAM?

CC: Instructors were grading student assignments, quizzes, and tests manually by collecting 3.5-inch floppy disks that included completed [Microsoft] Word docs, PowerPoint files, and Excel spreadsheets. One by one, they would open up the files and see if the end result was correct. Not only did this take a lot of time, but it was challenging to determine whether the students had used the correct functions to get the result — for example, if they had used the center align function vs. the tab key. It also seemed strange that instructors had to use very manual ways of grading a technology-based assignment. We asked the question, “Couldn’t technology help them grade the technology?” So, we set out to create SAM. In its original offering, it was focused on having students perform discrete tasks that would be automatically graded for correctness.

JH: Were there other technologies that you and/or competitors offered prior to SAM, along with the books?

CC: No; we offered software (the applications) bundled with books so that students had access to the software tools to perform the work, but we did not offer any digital learning or teaching materials.

JH: How did you determine which features should be part of SAM?

CC: We started by conducting ethnographic research, visiting instructors and observing them – seeing how they prepared for class, taught classes, and did their work after class – because we have found that the best way to gather insights is to watch instructors at work. It was difficult for them to articulate the types of technology that would help them teach technology more easily, but they were able to tell us about the challenges associated with their work, and we were able to observe their workflow and some of their frustrations.

JH: What was the reaction when instructors started using it for their classes?

CC: Well, we knew that it would impact how the classes were taught, but we were not prepared for the degree to which it did so. It essentially transformed the way computing courses were taught. By enabling a more hands-on, applied approach to teaching, instructors were able to flip the classroom. Instead of requiring both a lecture and lab component to the courses, instructors were able to eliminate or reduce the lecture time. SAM also enabled instructors to save a tremendous amount of time collecting and grading assignments, especially for courses that had no teaching assistants assigned to them. And because students received immediate feedback on their assignments, instructors started reporting that outcomes had improved, as well.

Do you use online environments that enable students to obtain immediate feedback when practicing new computing skills? We would love to hear from you. Please share your experiences using the comments section below.

You may also be interested in this recording from the 2013 Course Technology Conference, Best Practices for Teaching/Engaging Online Students, presented by Sandy Keeter and Melinda White.