Online shopping: “Collaborative filtering” might not be a widely recognizable term, but it’s what companies like Netflix and Amazon rely on to recommend products to customers. These rating systems are an efficient and reliable way to break down difficult-to-analyze qualitative data. Ken Goldberg, a professor at UC Berkeley, headed a research team that explored whether collaborative filtering could be used for social good. CITRIS and the Banatao Institute’s Brandie Nonnecke and Eric Martin wrote about this work, and about the technique’s potential for social good, in the Stanford Social Innovation Review.
Stanford Social Innovation Review, March 6, 2018 – Back in 2006, a modest DVD-by-mail company called Netflix offered a $1 million grand prize to any programming team that could improve its ability to recommend films that matched the interests of its customers. Meanwhile, Amazon was betting its future on Prime, a subscription service built on its own recommendation tool. For both companies, the concept was simple: Develop an algorithm that uses the opinions and actions of customers as predictive data about which products other, like-minded customers would want. The algorithm, they hoped, would act like a highly precise, 21st-century version of a dynamic focus group, continuously revealing the detailed wisdom of the crowd—and boosting customers’ consumption, satisfaction, and loyalty.
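The core idea can be sketched in a few lines of code. The following is a minimal, illustrative version of user-based collaborative filtering, using a similarity-weighted average of like-minded users’ ratings; the users, films, and ratings are hypothetical, and this is not Netflix’s or Amazon’s actual algorithm, which is far more sophisticated.

```python
import math

# Hypothetical ratings: each user has rated a handful of films on a 1-5 scale.
ratings = {
    "ana":   {"film_a": 5, "film_b": 3, "film_c": 4},
    "ben":   {"film_a": 4, "film_b": 2, "film_c": 5, "film_d": 5},
    "carla": {"film_a": 1, "film_b": 5, "film_d": 2},
}

def cosine_similarity(u, v):
    """Cosine similarity over the items two users have both rated."""
    shared = set(u) & set(v)
    if not shared:
        return 0.0
    dot = sum(u[i] * v[i] for i in shared)
    norm_u = math.sqrt(sum(u[i] ** 2 for i in shared))
    norm_v = math.sqrt(sum(v[i] ** 2 for i in shared))
    return dot / (norm_u * norm_v)

def predict(user, item):
    """Predict a user's rating of an unseen item as a similarity-weighted
    average of the ratings given by other, like-minded users."""
    sims_and_ratings = [
        (cosine_similarity(ratings[user], ratings[other]), r[item])
        for other, r in ratings.items()
        if other != user and item in r
    ]
    total_sim = sum(s for s, _ in sims_and_ratings)
    if total_sim == 0:
        return None  # no comparable users have rated this item
    return sum(s * r for s, r in sims_and_ratings) / total_sim

# Ana has not seen film_d; estimate her rating from Ben's and Carla's.
print(round(predict("ana", "film_d"), 2))  # → 3.77
```

Because Ana’s ratings track Ben’s much more closely than Carla’s, Ben’s enthusiasm for the unseen film dominates the prediction. This is the “wisdom of the crowd” mechanism the article describes: opinions of similar users become predictive data.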
Today, Netflix credits its algorithms with 80 percent of the hours customers stream, as well as a rock-bottom cancellation rate that saves the company $1 billion a year. Amazon says recommendations account for a stunning 35 percent of its revenue. This is great for them, but it’s also promising news for social change practitioners. That’s because these systems are built on a model called “collaborative filtering,” an approach that’s jumped the rails from commerce to civil society, where it’s shown the potential to surface quicker, cheaper, better data about notoriously hard-to-measure social change.
Qualitative data analysis is grueling. Instead of hard numbers, line graphs, and percentage points, evidence often lies in the linguistic testimony of community members, narrative examples from partners, and other diverse observations. The voluminous and unstructured nature of this data makes it difficult to analyze quickly and accurately. Skilled evaluators scour the evidence manually or with the aid of software, identifying patterns, coding responses, and parsing keywords and expressions. But this approach can be error-prone: bias creeps in, assumptions are made. Which anecdotes or observations verifiably represent the change that may or may not be taking place? How can you validate these insights? How often do we read an organization’s report attesting to complex transformation, only to be disappointed that the offered proof is a list of potentially cherry-picked quotes and examples?
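To see why keyword-driven coding is brittle, consider a toy version of the workflow described above. The codebook and responses here are hypothetical, and real codebooks are far richer; the point is only that evidence which does not match the evaluator’s chosen keywords silently falls through the cracks.

```python
import re

# A hypothetical codebook: each code maps to the keywords an evaluator
# has decided indicate that theme.
codebook = {
    "empowerment": ["confident", "own decision", "speak up"],
    "access":      ["clinic", "transport", "afford"],
}

responses = [
    "I feel more confident and can speak up at village meetings.",
    "The clinic is too far and we cannot afford the transport.",
    "Things are better now.",  # genuine change, but no keyword catches it
]

def code_response(text):
    """Return the set of codes whose keywords appear in the text."""
    text = text.lower()
    return {
        code
        for code, keywords in codebook.items()
        if any(re.search(re.escape(kw), text) for kw in keywords)
    }

for r in responses:
    print(code_response(r) or "uncoded")
```

The first two responses are coded as intended, but the third, which may reflect exactly the change being evaluated, comes back uncoded. Choosing different keywords shifts which evidence counts, which is how bias and assumptions creep into the analysis.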