I’ve been working on several recommendation systems recently, and so I thought I’d write up a post about a nice technique called collaborative filtering. This technique can be leveraged in a great variety of ways, and this post is a quick intro and mini-tutorial on it.

Many retail businesses have opportunities to recommend items to their customers for purchase. E-commerce businesses may display recommended items on a web site for customers to see, and walk-in stores might send personalized advertisements to customers. One way to make recommendations to customers is by looking at the items they have purchased and finding other items that are similar based on category, labels, and other “content-based attributes”.

Such content-based methods of recommendation are good at ensuring that recommended items are similar to items in a user’s purchase history, based on content descriptors. However, they have the drawback that the types of items customers get exposed to are constrained by those content descriptors, potentially missing opportunities to expand a user’s awareness of broader items that they might be interested in.

How can a retailer decide which other items might be of interest to a customer? One good way to find out is by looking at other customers that might share interests with the customer in question, and seeing what other items they tend to be interested in based on their observed preferences. This is exactly what collaborative filtering does.

Arrows represent “likes”. Those who like strawberries also like kiwis.

Collaborative filtering uses a form of crowdsourcing to make predictions about the interests of a customer based on preference information collected from many other customers. These predictions can be used to make automatic, personalized recommendations, either for specific customers or based on specific items. The former is called “customer-based collaborative filtering” and the latter “item-based collaborative filtering”.

Customer-based collaborative filtering provides recommendations for a particular customer, based on the full history of that customer’s preferences and on the other customers who are found to have the most similar preferences. It essentially says: “People who like a lot of the same things Bob likes also like these other things, so we will recommend those to Bob”. These recommendations can be provided to the customer in general contexts, since they take into account the customer’s overall preference profile.

Item-based collaborative filtering provides recommendations for a particular item, based on that item’s similarity to other items in terms of the customers who have liked them. It essentially says: “People who liked item X also liked these other items, so we will recommend those items to a customer who has shown interest in item X”. This type of recommendation can be shown to a customer while they are viewing item X on a website, but it can also be used in a general context by chaining recommendations off of items in the customer’s purchase history.

So how can we perform collaborative filtering to get such recommendations? The basic requirements and process are quite straightforward. All we need to start with are records of customer preferences on items, either in the form of ratings or just a binary indication of whether or not a customer has purchased each item:

With such a record of preference indications, the first step is to create customer-item vectors. For example, going with the simplest case of preference indications of 1 (purchased) or 0 (not purchased), we might have the following:

In this example, Customer A has purchased Items 1, 4, 6, 7, 8, and 9, Customer B has purchased Items 1, 4, 8, and 9, and so on. Using these customer-item vectors, we want to determine the similarity between all customers in terms of their preference indications on items. We can do this by computing a similarity score between all pairs of customer-item vectors. We shall use the cosine similarity score in this example, although other similarity measures, such as the Jaccard index, are possible. The cosine similarity between two vectors A and B is defined as the dot product of the vectors divided by the product of their magnitudes:

similarity(A, B) = (A · B) / (‖A‖ ‖B‖)
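As a sanity check, here is a minimal Python sketch of this similarity computation, using the purchase vectors for Customers A and B described above (items 2, 3, and 5 are assumed unpurchased by both, consistent with the purchases listed):

```python
from math import sqrt

def cosine_similarity(a, b):
    """Cosine of the angle between two preference vectors:
    dot product divided by the product of the magnitudes."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Binary purchase vectors over Items 1-9, from the example above.
a = [1, 0, 0, 1, 0, 1, 1, 1, 1]  # Customer A: items 1, 4, 6, 7, 8, 9
b = [1, 0, 0, 1, 0, 0, 0, 1, 1]  # Customer B: items 1, 4, 8, 9

print(round(cosine_similarity(a, b), 6))  # 0.816497
```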

Applying this similarity score to all pairs of customer vectors gives us a customer-customer similarity matrix:

The higher the similarity score between two customers, the more similar they are in terms of their item vectors. The diagonal holds the maximum similarity score of 1, since every customer is identical to themselves. Using this similarity matrix, we can compute item recommendation scores. We do so by picking the K most similar customers (other than the customer in question) and computing a similarity-weighted average over all items in their item vectors. For example, say we want to compute recommendation scores on items for customer B. Picking K = 2, we see that customers A and D are the most similar. To get the similarity-weighted averages on items, we multiply each neighbor’s item vector entries by their similarity score with B (0.816497 for A and 0.288675 for D), and then simply average the weighted entries down each item column:
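To make this concrete, here is a small Python sketch of the neighbor-weighting step. The vectors for A and B follow the purchases listed earlier; Customer D’s vector is hypothetical, chosen only so that its similarity to B matches the 0.288675 figure in the text:

```python
from math import sqrt

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b)))

# Customer-item vectors over Items 1-9. A and B follow the purchases
# listed in the text; D is an illustrative assumption.
vectors = {
    "A": [1, 0, 0, 1, 0, 1, 1, 1, 1],
    "B": [1, 0, 0, 1, 0, 0, 0, 1, 1],
    "D": [1, 1, 1, 0, 0, 0, 0, 0, 0],
}

target = "B"
neighbors = ["A", "D"]  # the K = 2 customers most similar to B

# Similarity-weighted average of the neighbors' item vectors.
sims = {n: cosine_similarity(vectors[target], vectors[n]) for n in neighbors}
scores = [
    sum(sims[n] * vectors[n][i] for n in neighbors) / len(neighbors)
    for i in range(9)
]

# Recommend the top-scoring items that B has not already purchased.
candidates = [(score, i + 1) for i, score in enumerate(scores)
              if vectors[target][i] == 0]
candidates.sort(reverse=True)
print(candidates[:2])  # items 6 and 7, each with score ~0.4082
```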

This leaves us with a recommendation score for each item, and we can pick the top N items (that customer B does not already have) to recommend to customer B. In this example, the top 2 items to recommend are items 6 and 7, each with a recommendation score of 0.4082.

The process above works just the same if the customer-item vectors hold rating-scale entries instead of binary entries. The process also demonstrated customer-based collaborative filtering. If we wanted to do item-based collaborative filtering instead, the process would be exactly the same, except that we would transpose the customer-item vector table into an item-customer vector table, which would yield an item-item similarity matrix after computing the similarity between all vectors.
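A minimal sketch of that transposition, using a small hypothetical customer-item matrix (the example’s table is not reproduced here):

```python
from math import sqrt

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b)))

# Illustrative customer-item matrix: rows are customers, columns are items.
matrix = [
    [1, 0, 1, 1],
    [1, 1, 0, 1],
    [0, 1, 0, 1],
]

# Transpose: each row is now an item-customer vector.
item_vectors = [list(col) for col in zip(*matrix)]

# The same similarity computation now yields an item-item matrix.
n = len(item_vectors)
item_sim = [[cosine_similarity(item_vectors[i], item_vectors[j])
             for j in range(n)] for i in range(n)]

print([round(s, 3) for s in item_sim[0]])  # item 1's similarity to each item
```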

There are of course many decisions to be made in the exact implementation of the above process, each with various tradeoffs, but the above describes the entire process needed to get personalized recommendations for customers through collaborative filtering. Modifications can also be applied to the process to get the results desired in practice. For example, if we wanted to bias our recommendations toward newer items, we could incorporate content-based information by weighting the final recommendation scores with a decay function on item age.
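As an illustrative sketch of such a recency bias (the exponential form, decay rate, and ages here are assumptions, not values from the text):

```python
from math import exp

# Assumed tuning parameter: how quickly old items are discounted (per day).
DECAY_RATE = 0.01

def age_weighted_score(score, age_days, decay_rate=DECAY_RATE):
    """Discount a recommendation score by an exponential decay on item age."""
    return score * exp(-decay_rate * age_days)

print(round(age_weighted_score(0.4082, 0), 4))    # brand-new item: unchanged
print(round(age_weighted_score(0.4082, 365), 4))  # year-old item: heavily discounted
```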

Countless such modifications, and far more creative ones, exist for the basic collaborative filtering process, and applying the right ones in the right situation can almost be an art. The basic process itself, however, is a straightforward and very useful method for deciding which items to recommend to customers in marketing and sales efforts.
