Recommendation Systems (1): Fundamentals and Core Concepts
Chen Kai

permalink: "en/recommendation-systems-1-fundamentals/"
date: 2024-05-02 09:00:00
tags:
  - Recommendation Systems
  - Collaborative Filtering
  - Introduction
categories: Recommendation Systems
mathjax: true
---

Imagine opening Netflix and seeing a carefully curated row of shows that perfectly match your taste, or scrolling through Amazon and discovering products you didn't even know you needed. Behind these experiences lies one of the most commercially successful applications of machine learning: recommendation systems. From e-commerce giants like Amazon generating 35% of their revenue through recommendations, to streaming platforms like Spotify keeping users engaged with personalized playlists, recommendation systems have become the invisible force driving modern digital experiences.

But what exactly makes a good recommendation system? How do these systems learn your preferences without explicitly asking you? And more importantly, how can you build one from scratch? This article takes you from the fundamental concepts to practical implementations, covering the three major paradigms (collaborative filtering, content-based filtering, and hybrid approaches), evaluation metrics that matter in production, real-world system architectures, and the core challenges that every recommendation engineer must face.

Whether you're a data scientist looking to understand the theory, a software engineer tasked with building a recommender, or simply curious about how Netflix knows you better than your friends do, this guide provides the foundation you need. We'll explore not just the "what" and "how," but critically, the "why" – understanding the trade-offs, failure modes, and design decisions that separate academic toys from production-grade systems handling billions of users.

Why Recommendation Systems Matter

Before diving into algorithms and architectures, let's understand why recommendation systems have become mission-critical infrastructure for modern platforms.

The Information Overload Problem

In 2025, the average user faces an overwhelming amount of choice:

- Netflix hosts over 15,000 titles across all regions
- Amazon lists hundreds of millions of products
- YouTube sees 500 hours of video uploaded every minute
- Spotify contains over 100 million tracks

Without intelligent filtering, users would spend more time searching than consuming. The paradox of choice is real: more options can lead to decision paralysis and user dissatisfaction. Recommendation systems solve this by acting as intelligent filters, surfacing the tiny fraction of content that's actually relevant to each individual user.

Business Impact: The Numbers That Matter

The commercial impact of recommendation systems is staggering:

  1. Revenue Generation: Amazon reports that 35% of their revenue comes from recommendation-driven purchases. For a company with $500+ billion in annual revenue, that's over $175 billion directly attributable to recommendations.

  2. User Engagement: YouTube reports that 70% of watch time comes from recommendations. Without their recommendation engine, user engagement would plummet, directly impacting ad revenue.

  3. Customer Retention: Netflix estimates that their recommendation system saves $1 billion per year by reducing churn. Users who receive good recommendations are significantly more likely to continue their subscriptions.

  4. Conversion Rates: E-commerce platforms see 2-3x higher conversion rates on recommended products compared to organic search results.

These aren't marginal improvements – recommendation systems are often the difference between a thriving platform and a failed one.

Real-World Examples: Where You See Recommendations Daily

Let's examine how different platforms use recommendations:

E-commerce (Amazon, Alibaba)

- "Customers who bought this also bought..." (item-to-item collaborative filtering)
- "Recommended for you" on the homepage (personalized recommendations)
- "Complete the look" fashion bundles (multi-item recommendations)

Streaming Video (Netflix, YouTube)

- Personalized homepage rows organized by taste clusters
- "Because you watched..." explanations
- Auto-playing the next episode (engagement optimization)

Music (Spotify, Apple Music)

- Discover Weekly (cold-start handling for new content)
- Daily Mix playlists (user preference clustering)
- Radio stations (similar-item exploration)

Social Media (Facebook, Instagram, TikTok)

- News feed ranking (engagement prediction)
- "People you may know" (social network analysis)
- Explore page (diversity-aware recommendations)

Professional Networks (LinkedIn)

- Job recommendations (multi-criteria matching)
- Connection suggestions (graph-based recommendations)
- Content feed ranking (expertise-based relevance)

Each of these use cases has different requirements and constraints, which we'll explore throughout this article.

The Three Paradigms of Recommendation

At the highest level, all recommendation systems fall into three fundamental paradigms. Understanding these paradigms is crucial because they represent different philosophies about what makes a good recommendation and what data we use to generate it.

Collaborative Filtering: "Users Like You Liked This"

Collaborative filtering is based on a beautifully simple idea: if two users agreed in the past, they're likely to agree in the future. The system doesn't need to understand what a movie is about or why you liked it – it just needs to find patterns in collective behavior.

Core Intuition

Imagine you and your friend Sarah have watched 50 movies together, and you've rated them almost identically. Sarah is your "taste twin." Now Sarah watches a new movie you haven't seen yet and loves it. There's a good chance you'll love it too – even if the recommendation system has no idea what the movie is about.

This is collaborative filtering: using the collective wisdom of the crowd to make predictions. The "collaboration" happens implicitly through user behavior data.

Two Flavors: User-Based vs. Item-Based

Collaborative filtering comes in two main variants:

  1. User-Based Collaborative Filtering (User-CF)
    • Find users similar to you
    • Recommend items those similar users liked
    • Example: "Users with similar taste also enjoyed..."
  2. Item-Based Collaborative Filtering (Item-CF)
    • Find items similar to ones you liked
    • Recommend those similar items
    • Example: "If you liked Movie A, you'll like Movie B"

The mathematical foundation is the same, but the practical implications differ significantly. Item-based methods tend to be more stable (items' characteristics don't change much over time, but users' tastes evolve) and more scalable (fewer items than users in most systems).

Mathematical Formulation

Let's formalize this. We have:

- \(U = \{u_1, u_2, \dots, u_m\}\): the set of \(m\) users
- \(I = \{i_1, i_2, \dots, i_n\}\): the set of \(n\) items
- \(R \in \mathbb{R}^{m \times n}\): the user-item rating matrix, where \(r_{ui}\) is user \(u\)'s rating for item \(i\)

The matrix \(R\) is sparse – most users have rated only a tiny fraction of items. The goal of collaborative filtering is to predict the missing entries of \(R\).

For user-based CF, we predict user \(u\)'s rating for item \(i\) as:

\[\hat{r}_{ui} = \bar{r}_u + \frac{\sum_{v \in N(u)} \text{sim}(u,v) \cdot (r_{vi} - \bar{r}_v)}{\sum_{v \in N(u)} |\text{sim}(u,v)|}\]

Where:

- \(\bar{r}_u\) is user \(u\)'s average rating (bias correction)
- \(N(u)\) is the set of the \(k\) users most similar to \(u\) who rated item \(i\)
- \(\text{sim}(u,v)\) is the similarity between users \(u\) and \(v\)

The key ingredient is computing \(\text{sim}(u,v)\). Common similarity metrics include:

Cosine Similarity:

\[\text{sim}_{\text{cos}}(u,v) = \frac{\sum_{i \in I_{uv}} r_{ui} \cdot r_{vi}}{\sqrt{\sum_{i \in I_{uv}} r_{ui}^2} \cdot \sqrt{\sum_{i \in I_{uv}} r_{vi}^2}}\]

Where \(I_{uv}\) is the set of items both users have rated.

Pearson Correlation:

\[\text{sim}_{\text{pearson}}(u,v) = \frac{\sum_{i \in I_{uv}} (r_{ui} - \bar{r}_u)(r_{vi} - \bar{r}_v)}{\sqrt{\sum_{i \in I_{uv}} (r_{ui} - \bar{r}_u)^2} \cdot \sqrt{\sum_{i \in I_{uv}} (r_{vi} - \bar{r}_v)^2}}\]

Pearson correlation is centered (it accounts for different rating scales between users), making it more robust in practice.
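Both the similarity measure and the prediction rule above fit in a short NumPy sketch. The nested-dict `ratings` layout and the toy data below are illustrative, not from the text:

```python
import numpy as np

def pearson_sim(r_u: dict, r_v: dict) -> float:
    """Pearson correlation over the items both users rated."""
    common = set(r_u) & set(r_v)
    if len(common) < 2:
        return 0.0
    a = np.array([r_u[i] for i in common], dtype=float)
    b = np.array([r_v[i] for i in common], dtype=float)
    a -= a.mean()
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

def predict_user_cf(u: int, item: int, ratings: dict, k: int = 5) -> float:
    """Mean-centered weighted average over the k most similar users
    who rated `item` (the user-based CF formula above)."""
    r_u_bar = np.mean(list(ratings[u].values()))
    neighbors = []
    for v, r_v in ratings.items():
        if v == u or item not in r_v:
            continue
        s = pearson_sim(ratings[u], r_v)
        if s != 0.0:
            neighbors.append((s, np.mean(list(r_v.values())), r_v[item]))
    neighbors.sort(key=lambda t: abs(t[0]), reverse=True)
    neighbors = neighbors[:k]
    if not neighbors:
        return r_u_bar  # no usable neighbors: fall back to the user's mean
    num = sum(s * (r_vi - r_v_bar) for s, r_v_bar, r_vi in neighbors)
    den = sum(abs(s) for s, _, _ in neighbors)
    return r_u_bar + num / den

# ratings[user][item] = rating (toy data)
ratings = {
    0: {0: 5.0, 1: 3.0, 2: 4.0},
    1: {0: 4.5, 1: 2.5, 2: 4.5, 3: 4.0},
    2: {0: 2.0, 1: 5.0, 3: 1.0},
}
print(round(predict_user_cf(0, 3, ratings), 2))
```

Note how user 2, whose tastes are anti-correlated with user 0's, still contributes signal: a strong negative similarity pushes the prediction away from what that user liked.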

Strengths and Weaknesses

Strengths:

- Domain-free: Works without understanding item content
- Serendipity: Can recommend surprising items you wouldn't have found yourself
- Quality: Leverages collective intelligence, often producing high-quality recommendations

Weaknesses:

- Cold-start problem: Cannot recommend new items with no ratings, or to new users with no history
- Sparsity: Requires sufficient overlap between users/items to compute meaningful similarities
- Scalability: Computing similarities between millions of users/items is computationally expensive
- Grey sheep problem: Users with unique tastes don't have similar neighbors, leading to poor recommendations
- Popularity bias: Tends to recommend popular items because they have more data

Content-Based Filtering: "You'll Like This Because It's Similar to What You Liked"

If collaborative filtering asks "who else liked this?", content-based filtering asks "what is this thing, and have you liked similar things before?"

Core Intuition

Content-based filtering builds a profile of your preferences based on the attributes of items you've interacted with. If you've consistently rated action movies highly, the system learns you like the "action" attribute and recommends more action movies.

This approach is fundamentally different from collaborative filtering: it doesn't care what other users think. It only cares about you and the characteristics of the items.

Feature Representation

The critical step in content-based filtering is representing items as feature vectors. The approach depends on the domain:

Text Content (news, articles, product descriptions)

- TF-IDF vectors: Weight words by how distinctive they are
- Word embeddings: Dense semantic representations (Word2Vec, GloVe, BERT)
- Topic models: Latent topic distributions (LDA)

Images (products, artwork)

- CNN embeddings: Features from pre-trained networks (ResNet, EfficientNet)
- Color histograms: Distribution of colors
- Visual attributes: Tags like "formal," "casual," "vintage"

Audio (music, podcasts)

- Spectral features: MFCCs, chroma features
- Music embeddings: Audio2Vec representations
- Metadata: Genre, mood, tempo, key

Structured Data (movies, products)

- Categorical features: Genre, brand, category
- Numerical features: Price, year, duration
- Graphs: Actor/director networks, knowledge graphs

Mathematical Formulation

Let \(\mathbf{x}_i \in \mathbb{R}^d\) be the feature vector for item \(i\), and \(\mathbf{w}_u \in \mathbb{R}^d\) be user \(u\)'s preference weights. The predicted rating is:

\[\hat{r}_{ui} = \mathbf{w}_u^T \mathbf{x}_i + b_u\]

We learn \(\mathbf{w}_u\) by minimizing the squared error over the items the user has rated:

\[\min_{\mathbf{w}_u, b_u} \sum_{i \in I_u} (r_{ui} - \mathbf{w}_u^T \mathbf{x}_i - b_u)^2 + \lambda ||\mathbf{w}_u||^2\]

Where \(I_u\) is the set of items user \(u\) has rated, and \(\lambda\) is a regularization parameter that prevents overfitting.
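This per-user objective has a closed-form ridge-regression solution. A minimal NumPy sketch, with two simplifications of my own: the bias is set to the user's mean rating, and the genre features are made up:

```python
import numpy as np

def fit_user_profile(X: np.ndarray, r: np.ndarray, lam: float = 0.1):
    """Closed-form ridge solution for one user's preference weights:
    w_u = (X^T X + lam*I)^{-1} X^T (r - b_u), with the bias b_u
    simplified to the user's mean rating."""
    b = r.mean()
    d = X.shape[1]
    w = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ (r - b))
    return w, b

def predict_content_based(x_new: np.ndarray, w: np.ndarray, b: float) -> float:
    return float(w @ x_new + b)

# Toy data: two binary genre features [action, comedy]
X = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0]])
r = np.array([5.0, 5.0, 1.0, 1.0])  # loves action, dislikes comedy
w, b = fit_user_profile(X, r)
print(predict_content_based(np.array([1.0, 0.0]), w, b))  # high (action)
print(predict_content_based(np.array([0.0, 1.0]), w, b))  # low (comedy)
```

With more features than rated items, the regularization term \(\lambda\) is what keeps the solve well-posed – exactly the overfitting concern the objective above addresses.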

Example: Movie Recommendation

Let's say we represent movies with features:

\[\mathbf{x}_{\text{movie}} = [\text{action}, \text{comedy}, \text{drama}, \text{year}, \text{director\_quality}]\]

For "The Dark Knight" (2008):

\[\mathbf{x}_{\text{TDK}} = [1.0, 0.1, 0.3, 2008, 0.9]\]

If your preference vector is:

\[\mathbf{w}_u = [0.8, -0.2, 0.5, 0.01, 0.7]\]

The predicted rating is:

\[\hat{r}_{u,\text{TDK}} = 0.8(1.0) + (-0.2)(0.1) + 0.5(0.3) + 0.01(2008) + 0.7(0.9) = 21.64\]

This high score indicates you'd likely enjoy this movie. (Note how the unnormalized year feature dominates the sum – in practice, features are standardized before fitting so that no single scale overwhelms the others.)
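As a quick numerical check, the dot product can be computed directly:

```python
import numpy as np

x_tdk = np.array([1.0, 0.1, 0.3, 2008, 0.9])  # The Dark Knight features
w_u = np.array([0.8, -0.2, 0.5, 0.01, 0.7])   # user preference weights

print(round(float(w_u @ x_tdk), 2))  # → 21.64
```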

Strengths and Weaknesses

Strengths:

- No cold-start for items: Can recommend new items immediately based on their features
- Transparency: Recommendations are explainable (e.g., "Because it has similar actors")
- No sparsity issues: Doesn't require user-user overlap
- User independence: Can work with a single user's data

Weaknesses:

- Feature engineering: Requires domain expertise to extract meaningful features
- Limited novelty: Tends to recommend similar items (over-specialization)
- New-user cold-start: Still struggles with users who have no history
- Content limitations: Some preferences are hard to capture with features (e.g., a movie's "vibe")

Hybrid Methods: Combining the Best of Both Worlds

Pure collaborative filtering and pure content-based methods each have blind spots. Hybrid methods combine them to leverage the strengths of both while mitigating their weaknesses.

Why Hybrid?

Consider these scenarios:

1. A new movie is released (cold-start for CF) with great features and actors you love (content-based can help)
2. You usually watch sci-fi (content-based predicts more sci-fi) but similar users also enjoy unexpected comedies (CF adds serendipity)
3. An item has few ratings (CF struggles) but rich metadata (content-based provides signal)

Hybrid methods provide graceful degradation: when one approach fails, the other can compensate.

Hybrid Architectures

There are several ways to combine collaborative and content-based approaches:

1. Weighted Hybrid

Simply combine predictions with learned weights:

\[\hat{r}_{ui} = \alpha \cdot \hat{r}_{ui}^{\text{CF}} + (1-\alpha) \cdot \hat{r}_{ui}^{\text{CB}}\]

The weight \(\alpha \in [0,1]\) can be:

- Fixed (e.g., 0.7 for CF, 0.3 for content-based)
- Learned globally from validation data
- Adaptive per user (more CF for users with rich history, more content-based for new users)
- Adaptive per item (more CF for popular items, more content-based for new items)
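A minimal sketch of the per-user adaptive variant. The 50-rating ramp and the 0.8 cap on \(\alpha\) are illustrative choices, not from the text:

```python
def weighted_hybrid(cf_pred: float, cb_pred: float,
                    n_user_ratings: int, full_history: int = 50,
                    max_alpha: float = 0.8) -> float:
    """Blend the two predictions with a per-user adaptive weight:
    alpha ramps linearly from 0 (brand-new user, pure content-based)
    up to max_alpha as the user's rating history grows."""
    alpha = min(n_user_ratings / full_history, 1.0) * max_alpha
    return alpha * cf_pred + (1 - alpha) * cb_pred

print(weighted_hybrid(4.0, 2.0, n_user_ratings=0))   # → 2.0 (all content-based)
print(weighted_hybrid(4.0, 2.0, n_user_ratings=50))  # → 3.6 (mostly CF)
```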

2. Switching Hybrid

Use heuristics to choose which method to apply:

```python
def predict(user, item):
    # Fall back to content-based when either side lacks data
    if item.rating_count < MIN_RATINGS:
        return content_based_predict(user, item)
    elif user.rating_count < MIN_USER_RATINGS:
        return content_based_predict(user, item)
    else:
        return collaborative_filtering_predict(user, item)
```

3. Feature Combination

Augment collaborative filtering with content features:

\[\hat{r}_{ui} = \mathbf{w}_u^T \mathbf{x}_i + \sum_{v \in N(u)} \text{sim}(u,v) \cdot r_{vi}\]

This treats collaborative signals as additional features in a unified model.

4. Cascade Hybrid

Use one method to refine candidates from another:

1. Use CF to generate the top 100 candidates (fast, high recall)
2. Use a content-based model to re-rank these candidates (precise, captures nuances)

This is common in industrial systems where you need to filter millions of items down to a small set efficiently.
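The two-stage cascade can be sketched as follows; the retrieval and re-ranking functions here are stand-ins for whatever models each stage uses:

```python
from typing import Callable, List, Tuple

def cascade_recommend(user_id: int,
                      retrieve: Callable[[int, int], List[int]],
                      rerank_score: Callable[[int, int], float],
                      n_candidates: int = 100,
                      top_n: int = 10) -> List[Tuple[int, float]]:
    """Stage 1: cheap, high-recall candidate retrieval (e.g., item-CF).
    Stage 2: precise re-ranking of the small pool (e.g., content-based)."""
    pool = retrieve(user_id, n_candidates)
    scored = [(item, rerank_score(user_id, item)) for item in pool]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:top_n]

# Stand-in stages: retrieval returns item IDs, re-ranker prefers low IDs
recs = cascade_recommend(user_id=0,
                         retrieve=lambda u, n: list(range(n)),
                         rerank_score=lambda u, i: 1.0 / (1 + i))
print(recs[0])  # highest re-rank score first
```

The design point: the expensive model only ever sees `n_candidates` items, never the full catalog, which is what makes the cascade tractable at industrial scale.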

5. Meta-Level Hybrid

Use content-based model to learn user/item representations, then apply CF on these learned representations:

```python
# Step 1: Learn item embeddings from content
item_embeddings = content_model.encode(item_features)

# Step 2: Apply matrix factorization on the embeddings
R_predicted = matrix_factorization(user_factors, item_embeddings)
```

Modern deep learning approaches often fall into this category, using neural networks to learn representations from both content and behavior.

Example: Netflix's Approach

Netflix uses a sophisticated hybrid approach:

- Content-based: Extract features from video thumbnails, titles, descriptions, cast, genre
- Collaborative filtering: Matrix factorization on viewing history and ratings
- Deep learning: Neural networks that fuse both types of signals
- Context-aware: Time of day, device, viewing patterns
- Business rules: Recency, diversity, freshness constraints

The final ranking is an ensemble of dozens of models, each capturing different aspects of the recommendation problem.

Matrix Factorization: The Workhorse of Collaborative Filtering

While neighborhood-based methods (user-CF and item-CF) are intuitive, matrix factorization (MF) methods have become the dominant approach in modern recommendation systems. They won the Netflix Prize and power many production systems today.

The Core Idea: Latent Factors

Matrix factorization assumes that user preferences and item characteristics can be represented by a small number of latent factors. These factors aren't explicitly defined (like "action" or "comedy") but emerge from the data.

Intuition

Think of movies. Instead of manually defining genres, MF discovers latent dimensions like:

- Factor 1: "Blockbuster vs. Art House"
- Factor 2: "Serious vs. Light-hearted"
- Factor 3: "Classic vs. Modern"

Each movie has a position in this latent space (e.g., "The Dark Knight" is high on Factor 1, medium on Factor 2, high on Factor 3). Each user also has preferences in this space (e.g., you like high Factor 1, medium Factor 2, low Factor 3).

The predicted rating is the dot product of the user and item vectors in this space:

\[\hat{r}_{ui} = \mathbf{p}_u^T \mathbf{q}_i\]

Where:

- \(\mathbf{p}_u \in \mathbb{R}^k\): user \(u\)'s latent factor vector (user embedding)
- \(\mathbf{q}_i \in \mathbb{R}^k\): item \(i\)'s latent factor vector (item embedding)
- \(k\): the number of latent factors (typically 20-200)

Mathematical Formulation

We want to factorize the rating matrix \(R \in \mathbb{R}^{m \times n}\) into two lower-rank matrices:

\[R \approx P Q^T\]

Where:

- \(P \in \mathbb{R}^{m \times k}\): the user latent factor matrix (each row is \(\mathbf{p}_u\))
- \(Q \in \mathbb{R}^{n \times k}\): the item latent factor matrix (each row is \(\mathbf{q}_i\))

We learn \(P\) and \(Q\) by minimizing the squared error on observed ratings:

\[\min_{P,Q} \sum_{(u,i) \in \Omega} (r_{ui} - \mathbf{p}_u^T \mathbf{q}_i)^2 + \lambda (||\mathbf{p}_u||^2 + ||\mathbf{q}_i||^2)\]

Where:

- \(\Omega\) is the set of observed (user, item) pairs
- \(\lambda\) is the regularization parameter that prevents overfitting

Adding Biases

In practice, we add bias terms to capture systematic tendencies:

\[\hat{r}_{ui} = \mu + b_u + b_i + \mathbf{p}_u^T \mathbf{q}_i\]

Where:

- \(\mu\): the global average rating
- \(b_u\): user \(u\)'s bias (some users rate everything high, others rate everything low)
- \(b_i\): item \(i\)'s bias (some items are universally loved, others are polarizing)

The optimization becomes:

\[\min_{P,Q,b} \sum_{(u,i) \in \Omega} (r_{ui} - \mu - b_u - b_i - \mathbf{p}_u^T \mathbf{q}_i)^2 + \lambda (||\mathbf{p}_u||^2 + ||\mathbf{q}_i||^2 + b_u^2 + b_i^2)\]

Optimization: SGD and ALS

Two main approaches exist for optimizing the MF objective:

1. Stochastic Gradient Descent (SGD)

For each observed rating \((u, i, r_{ui})\):

1. Compute the prediction error:

\[e_{ui} = r_{ui} - \hat{r}_{ui}\]

2. Update the parameters in the direction of the gradient:

\[\mathbf{p}_u \leftarrow \mathbf{p}_u + \eta (e_{ui} \mathbf{q}_i - \lambda \mathbf{p}_u)\]
\[\mathbf{q}_i \leftarrow \mathbf{q}_i + \eta (e_{ui} \mathbf{p}_u - \lambda \mathbf{q}_i)\]
\[b_u \leftarrow b_u + \eta (e_{ui} - \lambda b_u)\]
\[b_i \leftarrow b_i + \eta (e_{ui} - \lambda b_i)\]

Where \(\eta\) is the learning rate.

2. Alternating Least Squares (ALS)

ALS alternates between fixing \(P\) and optimizing \(Q\), then fixing \(Q\) and optimizing \(P\).

When \(Q\) is fixed, the problem becomes an independent least squares problem for each user:

\[\mathbf{p}_u = (Q^T Q + \lambda I)^{-1} Q^T \mathbf{r}_u\]

Similarly for each item when \(P\) is fixed:

\[\mathbf{q}_i = (P^T P + \lambda I)^{-1} P^T \mathbf{r}_i\]

Where \(\mathbf{r}_u\) is user \(u\)'s rating vector.

ALS is advantageous when:

- You can parallelize across users/items (each update is independent)
- You have implicit feedback (the objective adapts to weighted least squares)
- You need a GPU-friendly algorithm (the updates are dense matrix operations)
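The alternating closed-form updates translate almost directly into NumPy. A dense toy sketch (production systems use sparse matrices, restrict each solve to that user's observed items as done here, and parallelize the per-user solves):

```python
import numpy as np

def als(R: np.ndarray, mask: np.ndarray, k: int = 2,
        lam: float = 0.05, n_iters: int = 30, seed: int = 0):
    """Alternating Least Squares on an m x n rating matrix.
    mask[u, i] = 1 where r_ui is observed."""
    rng = np.random.default_rng(seed)
    m, n = R.shape
    P = 0.1 * rng.standard_normal((m, k))
    Q = 0.1 * rng.standard_normal((n, k))
    reg = lam * np.eye(k)
    for _ in range(n_iters):
        for u in range(m):                  # fix Q, solve each p_u in closed form
            obs = mask[u] == 1
            Qo = Q[obs]
            P[u] = np.linalg.solve(Qo.T @ Qo + reg, Qo.T @ R[u, obs])
        for i in range(n):                  # fix P, solve each q_i in closed form
            obs = mask[:, i] == 1
            Po = P[obs]
            Q[i] = np.linalg.solve(Po.T @ Po + reg, Po.T @ R[obs, i])
    return P, Q

# Toy 3x3 matrix; zeros mark unobserved entries
R = np.array([[5.0, 3.0, 0.0], [4.0, 0.0, 4.5], [0.0, 3.5, 4.0]])
mask = (R > 0).astype(int)
P, Q = als(R, mask)
print(np.round(P @ Q.T, 2))  # reconstructed rating matrix, including the blanks
```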

Implementation: Basic Matrix Factorization

Let's implement a complete matrix factorization system with SGD:

```python
import numpy as np
from typing import Tuple, List

class MatrixFactorization:
    def __init__(
        self,
        n_users: int,
        n_items: int,
        n_factors: int = 20,
        learning_rate: float = 0.01,
        regularization: float = 0.02,
        n_epochs: int = 20
    ):
        """
        Matrix Factorization with biases.

        Args:
            n_users: Number of users
            n_items: Number of items
            n_factors: Latent dimensionality
            learning_rate: Learning rate for SGD
            regularization: L2 regularization strength
            n_epochs: Number of training epochs
        """
        self.n_users = n_users
        self.n_items = n_items
        self.n_factors = n_factors
        self.lr = learning_rate
        self.reg = regularization
        self.n_epochs = n_epochs

        # Initialize latent factors with small random values
        self.P = np.random.normal(0, 0.1, (n_users, n_factors))
        self.Q = np.random.normal(0, 0.1, (n_items, n_factors))

        # Biases initialized to zero
        self.user_bias = np.zeros(n_users)
        self.item_bias = np.zeros(n_items)
        self.global_mean = 0.0

    def fit(self, ratings: List[Tuple[int, int, float]]) -> 'MatrixFactorization':
        """
        Train the model on observed ratings.

        Args:
            ratings: List of (user_id, item_id, rating) tuples

        Returns:
            Self for method chaining
        """
        # Compute global mean
        self.global_mean = np.mean([r for _, _, r in ratings])

        # Training loop
        for epoch in range(self.n_epochs):
            # Shuffle ratings for SGD
            np.random.shuffle(ratings)

            train_loss = 0.0
            for user_id, item_id, rating in ratings:
                # Forward pass: compute prediction
                pred = self._predict_single(user_id, item_id)
                error = rating - pred

                # Accumulate loss
                train_loss += error ** 2

                # Backward pass: compute gradients and update
                # Cache old values for a symmetric update
                p_u = self.P[user_id].copy()
                q_i = self.Q[item_id].copy()

                # Update latent factors
                self.P[user_id] += self.lr * (error * q_i - self.reg * p_u)
                self.Q[item_id] += self.lr * (error * p_u - self.reg * q_i)

                # Update biases
                self.user_bias[user_id] += self.lr * (error - self.reg * self.user_bias[user_id])
                self.item_bias[item_id] += self.lr * (error - self.reg * self.item_bias[item_id])

            # Compute RMSE
            rmse = np.sqrt(train_loss / len(ratings))
            if (epoch + 1) % 5 == 0:
                print(f"Epoch {epoch+1}/{self.n_epochs}, RMSE: {rmse:.4f}")

        return self

    def _predict_single(self, user_id: int, item_id: int) -> float:
        """Predict the rating for a single user-item pair."""
        prediction = (
            self.global_mean +
            self.user_bias[user_id] +
            self.item_bias[item_id] +
            np.dot(self.P[user_id], self.Q[item_id])
        )
        return prediction

    def predict(self, user_id: int, item_id: int) -> float:
        """
        Predict a rating with bounds checking.

        Args:
            user_id: User ID
            item_id: Item ID

        Returns:
            Predicted rating
        """
        if user_id >= self.n_users or item_id >= self.n_items:
            return self.global_mean

        return self._predict_single(user_id, item_id)

    def recommend(self, user_id: int, top_n: int = 10,
                  exclude_items: set = None) -> List[Tuple[int, float]]:
        """
        Generate top-N recommendations for a user.

        Args:
            user_id: User ID
            top_n: Number of recommendations
            exclude_items: Set of item IDs to exclude (e.g., already rated)

        Returns:
            List of (item_id, predicted_rating) tuples, sorted by rating
        """
        if exclude_items is None:
            exclude_items = set()

        # Predict ratings for all items
        predictions = []
        for item_id in range(self.n_items):
            if item_id not in exclude_items:
                pred = self._predict_single(user_id, item_id)
                predictions.append((item_id, pred))

        # Sort by predicted rating (descending) and return the top N
        predictions.sort(key=lambda x: x[1], reverse=True)
        return predictions[:top_n]

# Example usage
if __name__ == "__main__":
    # Toy dataset: 5 users, 5 items
    ratings = [
        (0, 0, 5.0), (0, 1, 3.0), (0, 2, 2.5),
        (1, 0, 4.0), (1, 2, 4.5), (1, 3, 3.0),
        (2, 1, 3.5), (2, 2, 4.0), (2, 4, 2.0),
        (3, 0, 3.0), (3, 3, 4.0), (3, 4, 4.5),
        (4, 1, 4.0), (4, 2, 3.5), (4, 3, 5.0)
    ]

    # Train the model
    mf = MatrixFactorization(n_users=5, n_items=5, n_factors=10, n_epochs=50)
    mf.fit(ratings)

    # Get recommendations for user 0
    recommendations = mf.recommend(user_id=0, top_n=3,
                                   exclude_items={0, 1, 2})
    print("\nTop 3 recommendations for user 0:")
    for item_id, score in recommendations:
        print(f"  Item {item_id}: {score:.2f}")
```

Output explanation: This implementation demonstrates the core MF algorithm. In practice, you'd add:

- Early stopping based on validation RMSE
- Learning rate decay
- Mini-batch updates instead of per-sample SGD
- More sophisticated initialization (e.g., SVD initialization)

Advanced: Implicit Feedback

Most real-world systems don't have explicit ratings. Instead, they have implicit feedback: clicks, views, purchases, play time. The challenge is that absence of interaction doesn't mean dislike – it might mean the user never saw the item.

Weighted Matrix Factorization for Implicit Feedback

Instead of predicting ratings, we predict preference, weighted by confidence. The objective becomes:

\[\min_{P,Q} \sum_{u,i} c_{ui}(p_{ui} - \mathbf{p}_u^T \mathbf{q}_i)^2 + \lambda (||\mathbf{p}_u||^2 + ||\mathbf{q}_i||^2)\]

Where:

- \(p_{ui} \in \{0,1\}\): binary preference (1 if the user interacted with the item, 0 otherwise)
- \(c_{ui}\): confidence in the observation (higher for repeated interactions)

A common confidence function:

\[c_{ui} = 1 + \alpha \cdot f_{ui}\]

Where \(f_{ui}\) is the interaction frequency (e.g., the number of times the user watched the movie), and \(\alpha\) is a hyperparameter.

Key insight: We optimize over all user-item pairs (even unobserved ones), but with different confidence weights. Observed interactions get high confidence (we're sure the user likes this), while unobserved ones get low confidence (we're less sure the user dislikes this).

This formulation is solved efficiently with ALS, giving us one of the most powerful recommendation algorithms for implicit feedback data.
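The resulting closed-form ALS user update, \(\mathbf{p}_u = (Q^T C_u Q + \lambda I)^{-1} Q^T C_u \mathbf{p}(u)\) with \(C_u = \text{diag}(c_{ui})\), can be sketched in a few lines. The toy item factors and \(\alpha = 40\) below are illustrative:

```python
import numpy as np

def implicit_user_update(Q: np.ndarray, freq_u: np.ndarray,
                         alpha: float = 40.0, lam: float = 0.1) -> np.ndarray:
    """One user's closed-form factor update under the implicit-feedback
    objective: p_u = (Q^T C_u Q + lam*I)^{-1} Q^T C_u pref_u,
    where C_u = diag(1 + alpha * f_ui) and pref_ui = 1 iff f_ui > 0."""
    k = Q.shape[1]
    c = 1.0 + alpha * freq_u             # confidence per item
    pref = (freq_u > 0).astype(float)    # binary preference
    A = Q.T @ (c[:, None] * Q) + lam * np.eye(k)
    b = Q.T @ (c * pref)
    return np.linalg.solve(A, b)

# Toy check: the user watched item 0 three times, never touched items 1-2
Q_items = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
p_u = implicit_user_update(Q_items, np.array([3.0, 0.0, 0.0]))
print(p_u @ Q_items[0], p_u @ Q_items[1])  # interacted item scores higher
```

Note that the sum runs over all items, not just interacted ones: the unobserved zeros enter the solve with confidence 1, which is exactly the "weak negative" signal described above.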

Evaluation Metrics: Measuring What Matters

Building a recommendation system is only half the battle – knowing whether it's actually good is the other half. Unlike supervised learning where accuracy is well-defined, recommendation systems need to balance multiple, often competing objectives.

The Challenge of Evaluation

Consider this: your recommendation system has 95% accuracy in predicting ratings. Is that good? Not necessarily:

- If users only rate movies they like, predicting high ratings for everything might be "accurate" but useless
- If the test set only includes popular movies, you're not measuring the hard cases
- If recommendations lack diversity, users might be satisfied initially but churn in the long run

Effective evaluation requires multiple metrics capturing different aspects of quality.

Classification-Based Metrics: Precision and Recall

When recommendations are binary (recommend vs. not recommend), we can use classic classification metrics.

Setup

For user \(u\), let:

- \(R_u\): the set of items recommended to user \(u\) (top-N recommendations)
- \(T_u\): the set of truly relevant items for user \(u\) (test-set items the user liked)

Precision@N

What fraction of recommendations are relevant?

\[\text{Precision@N} = \frac{|R_u \cap T_u|}{|R_u|}\]

Example: You recommend 10 movies and the user likes 7 of them → Precision@10 = 0.7

Recall@N

What fraction of relevant items did we recommend?

\[\text{Recall@N} = \frac{|R_u \cap T_u|}{|T_u|}\]

Example: The user likes 20 movies in total and you recommended 7 of them → Recall@10 = 0.35

F1@N

Harmonic mean of precision and recall:

\[\text{F1@N} = 2 \cdot \frac{\text{Precision@N} \cdot \text{Recall@N}}{\text{Precision@N} + \text{Recall@N}}\]

When to use: These metrics are intuitive and easy to explain to stakeholders. Use them when:

- You have binary feedback (like/dislike, click/no-click)
- You care about set-level accuracy
- You need metrics that non-technical stakeholders can understand

Limitations:

- They don't consider the ranking order within the top N
- They don't account for the rating scale (if available)
- Precision and recall often trade off (it's hard to optimize both)

Ranking-Based Metrics: Position Matters

In reality, position matters immensely. A relevant item at position 1 is worth much more than at position 10. Ranking metrics capture this.

Mean Average Precision (MAP)

For a single user, Average Precision (AP) is:

\[\text{AP@N} = \frac{1}{|T_u|} \sum_{k=1}^{N} \text{Precision@}k \cdot \text{rel}(k)\]

Where \(\text{rel}(k) = 1\) if the item at position \(k\) is relevant, 0 otherwise.

MAP averages AP across all users:

\[\text{MAP} = \frac{1}{|U|} \sum_{u \in U} \text{AP}_u\]

Intuition: MAP rewards systems that rank relevant items higher. If all relevant items are at the top, MAP is high. If they're scattered, MAP is lower even if Precision@N is the same.

Example:

- Recommendations: [relevant, relevant, irrelevant, relevant, irrelevant]
- \(\text{Precision}@1 = 1.0\), \(\text{Precision}@2 = 1.0\), \(\text{Precision}@4 = 0.75\)
- \(\text{AP}@5 = \frac{1}{3}(1.0 + 1.0 + 0.75) = 0.917\)

Normalized Discounted Cumulative Gain (NDCG)

NDCG is the gold standard for ranking evaluation when you have graded relevance (ratings, not just binary).

Discounted Cumulative Gain (DCG):

\[\text{DCG}@N = \sum_{k=1}^{N} \frac{2^{\text{rel}_k} - 1}{\log_2(k+1)}\]

Where \(\text{rel}_k\) is the relevance (e.g., rating) of the item at position \(k\).

The\(\log_2(k+1)\)discount means items at lower positions contribute less to the score.

Ideal DCG (IDCG):

DCG of the perfect ranking (all items sorted by true relevance):

\[\text{IDCG}@N = \sum_{k=1}^{N} \frac{2^{\text{rel}_k^*} - 1}{\log_2(k+1)}\]

Where \(\text{rel}_k^*\) is the \(k\)-th highest relevance score.

Normalized DCG:

\[\text{NDCG}@N = \frac{\text{DCG}@N}{\text{IDCG}@N}\]

NDCG ranges from 0 to 1, where 1 is a perfect ranking.

Example:

- True ratings: [5, 4, 3, 2, 1]
- Predicted order: [5, 3, 4, 2, 1]

\[\text{DCG}@5 = \frac{31}{1} + \frac{7}{1.585} + \frac{15}{2} + \frac{3}{2.322} + \frac{1}{2.585} \approx 44.60\]
\[\text{IDCG}@5 = \frac{31}{1} + \frac{15}{1.585} + \frac{7}{2} + \frac{3}{2.322} + \frac{1}{2.585} \approx 45.64\]
\[\text{NDCG}@5 = \frac{44.60}{45.64} \approx 0.977\]

When to use: NDCG is ideal when:

- You have graded relevance (ratings, engagement scores)
- Ranking order is critical to the user experience
- You want a metric used widely in research (for comparability)

Implementation: Ranking Metrics

```python
import numpy as np
from typing import List

def precision_at_k(recommended: List[int], relevant: set, k: int) -> float:
    """
    Compute Precision@K.

    Args:
        recommended: Ordered list of recommended item IDs
        relevant: Set of relevant item IDs
        k: Cutoff position

    Returns:
        Precision@K score
    """
    if k == 0:
        return 0.0

    recommended_at_k = recommended[:k]
    num_relevant = sum(1 for item in recommended_at_k if item in relevant)
    return num_relevant / k

def recall_at_k(recommended: List[int], relevant: set, k: int) -> float:
    """Compute Recall@K."""
    if len(relevant) == 0:
        return 0.0

    recommended_at_k = recommended[:k]
    num_relevant = sum(1 for item in recommended_at_k if item in relevant)
    return num_relevant / len(relevant)

def average_precision(recommended: List[int], relevant: set, k: int) -> float:
    """
    Compute Average Precision@K.

    AP@K = (1/|relevant|) * sum(Precision@i * rel(i))
    where rel(i) = 1 if the item at position i is relevant, 0 otherwise.
    """
    if len(relevant) == 0:
        return 0.0

    score = 0.0
    num_relevant = 0

    for i, item in enumerate(recommended[:k], start=1):
        if item in relevant:
            num_relevant += 1
            score += num_relevant / i

    return score / min(len(relevant), k)

def mean_average_precision(all_recommended: List[List[int]],
                           all_relevant: List[set],
                           k: int) -> float:
    """Compute MAP@K across multiple users."""
    return np.mean([
        average_precision(rec, rel, k)
        for rec, rel in zip(all_recommended, all_relevant)
    ])

def dcg_at_k(scores: List[float], k: int) -> float:
    """
    Compute Discounted Cumulative Gain at K.

    Args:
        scores: Relevance scores in recommended order
        k: Cutoff position

    Returns:
        DCG@K score
    """
    scores = np.array(scores[:k])
    if scores.size == 0:
        return 0.0

    # Positions start at 1, so the discounts are log2(2), log2(3), ...
    discounts = np.log2(np.arange(2, scores.size + 2))
    return np.sum((2 ** scores - 1) / discounts)

def ndcg_at_k(true_scores: List[float], k: int) -> float:
    """
    Compute Normalized Discounted Cumulative Gain at K.

    Args:
        true_scores: True relevance scores, listed in the recommended order
        k: Cutoff position

    Returns:
        NDCG@K score
    """
    dcg = dcg_at_k(true_scores, k)

    # Ideal DCG: sort the true scores in descending order
    ideal_scores = sorted(true_scores, reverse=True)
    idcg = dcg_at_k(ideal_scores, k)

    if idcg == 0.0:
        return 0.0

    return dcg / idcg

# Example usage
if __name__ == "__main__":
    # A user's recommendations and ground truth
    recommended = [5, 2, 8, 1, 9, 3, 7, 4, 6, 10]
    relevant = {1, 2, 5, 7}

    print("Classification Metrics:")
    print(f"Precision@5: {precision_at_k(recommended, relevant, 5):.3f}")
    print(f"Recall@5: {recall_at_k(recommended, relevant, 5):.3f}")
    print(f"Precision@10: {precision_at_k(recommended, relevant, 10):.3f}")
    print(f"Recall@10: {recall_at_k(recommended, relevant, 10):.3f}")

    print("\nRanking Metrics:")
    print(f"AP@10: {average_precision(recommended, relevant, 10):.3f}")

    # NDCG example with ratings
    # True ratings of the recommended items, in recommended order
    true_ratings = [5, 5, 2, 5, 1, 3, 4, 2, 1, 1]
    print(f"NDCG@5: {ndcg_at_k(true_ratings, 5):.3f}")
    print(f"NDCG@10: {ndcg_at_k(true_ratings, 10):.3f}")
```

Beyond Accuracy: Coverage and Diversity

High accuracy metrics don't tell the whole story. A system that only recommends the top 10 most popular items might have high precision (because popular items are widely liked) but provides a terrible user experience.

Catalog Coverage

What fraction of items can the system recommend?

\[\text{Coverage} = \frac{|\{i : \exists u,\ i \in R_u\}|}{|I|}\]

Where \(R_u\) is the set of items recommended to user \(u\), and \(I\) is the full item catalog.

Example: If your system only ever recommends 1000 out of 100,000 items, coverage = 0.01. This is the long-tail problem: obscure but relevant items never get surfaced.
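A minimal sketch of this computation (function and variable names are illustrative): take the union of every item that appears in any user's recommendation list and divide by the catalog size.

```python
def catalog_coverage(all_recommendations, catalog_size):
    """Fraction of the catalog that appears in at least one user's list."""
    recommended_items = set()
    for rec_list in all_recommendations:
        recommended_items.update(rec_list)
    return len(recommended_items) / catalog_size

# Three users, catalog of 10 items; only items {1, 2, 3, 5} ever surface
coverage = catalog_coverage([[1, 2, 3], [2, 3], [1, 5]], catalog_size=10)
# coverage == 0.4
```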

Intra-List Diversity

How different are items within a single recommendation list?

\[\text{Diversity}(R_u) = \frac{2}{|R_u|(|R_u|-1)} \sum_{i,j \in R_u,\ i < j} (1 - \text{sim}(i,j))\]

Where \(\text{sim}(i,j)\) is the similarity between items (e.g., cosine similarity of features).

Example: If you recommend 10 sci-fi movies that are all sequels to each other, diversity is low. Users want some variety.
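The formula above is just an average over item pairs; a small sketch using cosine similarity on item feature vectors (names are illustrative):

```python
import numpy as np

def intra_list_diversity(feature_vectors):
    """Average pairwise (1 - cosine similarity) over a recommendation list."""
    n = len(feature_vectors)
    if n < 2:
        return 0.0
    total = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            a, b = feature_vectors[i], feature_vectors[j]
            sim = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
            total += 1.0 - sim
    return 2.0 * total / (n * (n - 1))

# Two identical items -> zero diversity; two orthogonal items -> diversity 1.0
identical = [np.array([1.0, 0.0]), np.array([1.0, 0.0])]
orthogonal = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
```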

Serendipity

How surprising yet relevant are recommendations?

\[\text{Serendipity} = \frac{1}{|R_u|} \sum_{i \in R_u} \text{relevance}(i) \cdot (1 - \text{expected}(i))\]

Where \(\text{expected}(i)\) measures how obvious the recommendation is (e.g., based on popularity or content similarity).

High serendipity means the system finds relevant items the user wouldn't have discovered on their own.
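A direct translation of the formula (the relevance and expectedness scores are assumed to come from your own models; names are illustrative):

```python
def serendipity(relevance, expectedness):
    """Mean of relevance(i) * (1 - expected(i)) over a recommendation list."""
    assert len(relevance) == len(expectedness)
    return sum(r * (1.0 - e) for r, e in zip(relevance, expectedness)) / len(relevance)

# An obvious blockbuster (expected=0.9) contributes little even when relevant;
# a relevant niche pick (expected=0.1) drives the score up.
score = serendipity(relevance=[1.0, 1.0], expectedness=[0.9, 0.1])
# score == 0.5
```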

Trade-offs

These metrics often conflict:

  • High accuracy → Low coverage (recommend safe, popular items)
  • High diversity → Lower accuracy (users prefer some redundancy)
  • High serendipity → Higher risk (unexpected recommendations might be wrong)

Production systems use multi-objective optimization or business rules to balance these trade-offs.

Online Metrics: What Matters in Production

Offline metrics (evaluated on historical data) are useful for development, but online metrics (measured from real users) are what truly matter.

Click-Through Rate (CTR)

\[\text{CTR} = \frac{\text{Number of clicks on recommendations}}{\text{Number of recommendations shown}}\]

Conversion Rate

\[\text{Conversion} = \frac{\text{Number of purchases/views}}{\text{Number of clicks}}\]

Engagement Metrics

  • Watch time (video platforms)
  • Time to click (speed of engagement)
  • Session length (does the user continue browsing?)

Business Metrics

  • Revenue per user
  • Customer lifetime value (LTV)
  • Churn rate

A/B Testing

The gold standard for evaluation is A/B testing:

  1. Split users randomly into control (old system) and treatment (new system)
  2. Run both systems in parallel for a statistically significant period
  3. Compare online metrics
  4. Ship the better system

This accounts for all the complexity that offline metrics miss: user interface effects, temporal dynamics, network effects, etc.
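Deciding whether an observed CTR lift is real or noise is itself a statistical question. A minimal sketch of a two-proportion z-test on CTRs (function name and the 1.96 threshold are illustrative, standard choices):

```python
import math

def two_proportion_z(clicks_a, shown_a, clicks_b, shown_b):
    """z-statistic for the difference between two CTRs (pooled variance)."""
    p_a = clicks_a / shown_a
    p_b = clicks_b / shown_b
    p_pool = (clicks_a + clicks_b) / (shown_a + shown_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / shown_a + 1 / shown_b))
    return (p_b - p_a) / se

# Treatment CTR 5.5% vs control 5.0% on 100k impressions each
z = two_proportion_z(5000, 100_000, 5500, 100_000)
# |z| > 1.96 -> significant at the 5% level (two-sided)
```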

System Architecture: From Million Items to Top-10 Recommendations

Real-world recommendation systems face a daunting challenge: how do you select 10 items to recommend from a catalog of millions or billions, with milliseconds of latency, for hundreds of millions of users?

The answer is a multi-stage architecture that progressively narrows down candidates while applying increasingly expensive models.

The Funnel Architecture

Modern recommendation systems use a 4-stage funnel:

Stage 1: Candidate Generation (Recall)
Millions of items → Thousands of candidates
Latency: < 10ms per source
Goal: High recall, fast

Stage 2: Coarse Ranking
Thousands → Hundreds
Latency: < 50ms
Goal: Balance quality and speed

Stage 3: Fine Ranking
Hundreds → Dozens
Latency: < 100ms
Goal: Highest quality predictions

Stage 4: Reranking & Business Logic
Dozens → Top-10 final list
Latency: < 20ms
Goal: Diversity, freshness, policy compliance

Let's examine each stage in detail.

Stage 1: Candidate Generation (Recall)

Goal: Quickly filter millions of items down to a few thousand candidates that are likely to be relevant.

Methods:

  1. Collaborative Filtering Recall
    • User-CF: Find similar users, retrieve their liked items
    • Item-CF: Find items similar to user's history
    • Matrix Factorization: Nearest neighbor search in embedding space
  2. Content-Based Recall
    • Query user's profile (learned preferences) against item features
    • "More like this" based on last N interactions
  3. Popular Items
    • Global trending items
    • Category-level popular items
    • Time-decayed popularity
  4. Sequential Models
    • Based on last item viewed (session-based)
    • Sequence models predicting next item
  5. Social/Network-Based
    • Items friends liked
    • Following/follower recommendations

Key Property: Each recall source is independent and runs in parallel. You might have 10-20 different recall sources, each retrieving 100-500 candidates. The union of all sources gives you 2000-5000 candidates total.

Efficiency: This stage must be extremely fast. Techniques include: - Pre-computed indexes (e.g., FAISS for embedding search) - Caching popular results - Approximate nearest neighbor search - Inverted indexes for sparse features

Example Architecture:

class CandidateGenerator:
    def __init__(self):
        self.user_cf = UserCFRecall()
        self.item_cf = ItemCFRecall()
        self.mf = MatrixFactorizationRecall()
        self.popular = PopularItemsRecall()
        self.content = ContentBasedRecall()

    def generate(self, user_id: int, n_candidates: int = 2000) -> List[int]:
        """Generate candidates from multiple sources (run in parallel in production)."""
        candidates = set()

        # Each source contributes some candidates
        candidates.update(self.user_cf.recall(user_id, n=500))
        candidates.update(self.item_cf.recall(user_id, n=500))
        candidates.update(self.mf.recall(user_id, n=500))
        candidates.update(self.popular.recall(user_id, n=200))
        candidates.update(self.content.recall(user_id, n=300))

        # Remove items user has already interacted with
        candidates -= self.get_user_history(user_id)

        return list(candidates)[:n_candidates]

Stage 2: Coarse Ranking

Goal: Apply a relatively cheap model to rank the thousands of candidates and filter down to a few hundred.

Methods:

  1. Lightweight Models
    • Logistic regression on hand-crafted features
    • Small neural networks (< 100k parameters)
    • Boosted trees (e.g., XGBoost with limited depth)
  2. Features:
    • User features: demographics, historical preferences, activity level
    • Item features: category, popularity, recency
    • User-item features: CF scores, content similarity, recall source
    • Context features: time of day, device, location

Why not skip this stage? Running a complex ranking model on thousands of items per user, millions of times per second, is prohibitively expensive. Coarse ranking eliminates candidates that clearly won't make the final cut, leaving only the promising ones for expensive processing.

Example:

class CoarseRanker:
    def __init__(self):
        # Lightweight model (e.g., logistic regression)
        self.model = LogisticRegression()
        self.feature_extractor = FastFeatureExtractor()

    def rank(self, user_id: int, candidates: List[int], top_k: int = 500) -> List[int]:
        """Rank candidates with a fast model."""
        # Extract features in batch
        features = self.feature_extractor.extract_batch(user_id, candidates)

        # Score all candidates
        scores = self.model.predict_proba(features)[:, 1]

        # Sort and return top K
        ranked_indices = np.argsort(scores)[::-1][:top_k]
        return [candidates[i] for i in ranked_indices]

Stage 3: Fine Ranking

Goal: Apply your best, most expensive model to precisely rank the remaining few hundred candidates.

Methods:

  1. Deep Neural Networks
    • Wide & Deep models
    • Deep & Cross networks
    • Transformer-based models
    • Multi-task learning (predicting click, conversion, engagement)
  2. Rich Features:
    • Dense embeddings for all categorical features
    • User sequential history (last N interactions)
    • Cross features (2nd or 3rd order interactions)
    • Real-time features (current session behavior)
  3. Example: Wide & Deep Model
import torch
import torch.nn as nn
from typing import List

class WideAndDeep(nn.Module):
    def __init__(self, n_wide_features: int, n_embedding_features: List[int],
                 embedding_dim: int = 16, deep_layers: List[int] = [128, 64, 32]):
        """
        Wide & Deep model for recommendation ranking.

        Wide component: Linear model for memorization
        Deep component: MLP for generalization

        Args:
            n_wide_features: Number of sparse "wide" input features
            n_embedding_features: Vocabulary size for each categorical feature
        """
        super().__init__()

        # Wide component: linear model on sparse features
        self.wide = nn.Linear(n_wide_features, 1)

        # Deep component: embeddings + MLP
        self.embeddings = nn.ModuleList([
            nn.Embedding(num_embeddings, embedding_dim)
            for num_embeddings in n_embedding_features
        ])

        # MLP layers
        deep_input_dim = len(n_embedding_features) * embedding_dim
        layers = []
        for i, hidden_size in enumerate(deep_layers):
            if i == 0:
                layers.append(nn.Linear(deep_input_dim, hidden_size))
            else:
                layers.append(nn.Linear(deep_layers[i - 1], hidden_size))
            layers.append(nn.ReLU())
            layers.append(nn.Dropout(0.3))
        self.deep = nn.Sequential(*layers)

        # Final output: concatenation of wide logit and last deep layer
        self.output = nn.Linear(deep_layers[-1] + 1, 1)
        self.sigmoid = nn.Sigmoid()

    def forward(self, wide_features, embedding_indices):
        """
        Args:
            wide_features: (batch_size, n_wide_features)
            embedding_indices: (batch_size, number of categorical features)
        """
        # Wide component
        wide_out = self.wide(wide_features)

        # Deep component
        embeddings = [emb(embedding_indices[:, i])
                      for i, emb in enumerate(self.embeddings)]
        deep_input = torch.cat(embeddings, dim=1)
        deep_out = self.deep(deep_input)

        # Combine and output
        combined = torch.cat([wide_out, deep_out], dim=1)
        output = self.sigmoid(self.output(combined))
        return output.squeeze()

Training Considerations:

  • Data: Millions to billions of (user, item, label) examples from logs
  • Labels: Click (binary), rating (regression), watch time (regression)
  • Sampling: Negative sampling (sample non-clicked items as negatives)
  • Loss: Binary cross-entropy for clicks, MSE for ratings
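The negative-sampling step deserves care: each positive interaction is paired with a few items the user did not click. A minimal sketch (function and variable names are illustrative):

```python
import random

def build_training_pairs(user_pos_items, all_items, n_neg=4, seed=42):
    """Return (user, item, label) rows: 1 for clicks, 0 for sampled negatives."""
    rng = random.Random(seed)
    rows = []
    for user, pos_items in user_pos_items.items():
        # Candidate negatives: everything the user did NOT interact with
        negatives = list(set(all_items) - set(pos_items))
        for item in pos_items:
            rows.append((user, item, 1))
            for neg in rng.sample(negatives, min(n_neg, len(negatives))):
                rows.append((user, neg, 0))
    return rows

rows = build_training_pairs({1: [10, 11]}, all_items=range(10, 20), n_neg=2)
# Each positive contributes 1 positive row + 2 negative rows -> 6 rows total
```

In production the negative pool is usually popularity-weighted rather than uniform, since uniform sampling mostly picks obscure items the user never saw.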

Stage 4: Reranking & Business Logic

Goal: Finalize the top-N list by applying constraints, diversity requirements, and business rules.

Operations:

  1. Diversity Injection
    • MMR (Maximal Marginal Relevance): iteratively select items that are relevant but dissimilar to already selected items
    • Submodular optimization: maximize relevance subject to diversity constraints
  2. Freshness Boost
    • Boost recently released items
    • Decay scores for items user has seen before
  3. Business Rules
    • Remove out-of-stock items
    • Apply content policy filters (age restrictions, etc.)
    • Ensure paid placements or sponsored content appear in designated positions
    • Fairness constraints (ensure diverse creator representation)
  4. Personalization Calibration
    • Adjust for user's tolerance to niche content
    • Apply explore-exploit trade-offs

Example: MMR for Diversification

def maximal_marginal_relevance(
    candidates: List[int],
    scores: np.ndarray,
    similarities: np.ndarray,
    top_n: int,
    lambda_param: float = 0.5
) -> List[int]:
    """
    Select top-N items using Maximal Marginal Relevance.

    MMR balances relevance and diversity:
    MMR = λ * Relevance(item) - (1-λ) * max Similarity(item, selected)

    Args:
        candidates: List of candidate item IDs
        scores: Relevance scores (higher is better)
        similarities: Pairwise similarity matrix
        top_n: Number of items to select
        lambda_param: Trade-off between relevance (1.0) and diversity (0.0)

    Returns:
        List of selected item IDs
    """
    selected = []
    remaining = list(range(len(candidates)))

    # First item: highest relevance
    first_idx = int(np.argmax(scores))
    selected.append(first_idx)
    remaining.remove(first_idx)

    # Iteratively select items
    for _ in range(top_n - 1):
        if not remaining:
            break

        mmr_scores = []
        for idx in remaining:
            # Relevance component
            relevance = scores[idx]

            # Diversity component: max similarity to already selected items
            max_sim = max(similarities[idx, s] for s in selected)

            # MMR score
            mmr = lambda_param * relevance - (1 - lambda_param) * max_sim
            mmr_scores.append(mmr)

        # Select item with highest MMR
        best_idx = remaining[np.argmax(mmr_scores)]
        selected.append(best_idx)
        remaining.remove(best_idx)

    return [candidates[i] for i in selected]

Complete Pipeline Example

Here's how all stages come together:

class RecommendationSystem:
    def __init__(self):
        self.candidate_generator = CandidateGenerator()
        self.coarse_ranker = CoarseRanker()
        self.fine_ranker = FineRanker()
        self.reranker = Reranker()

    def recommend(self, user_id: int, top_n: int = 10) -> List[Tuple[int, float, str]]:
        """
        Generate top-N recommendations for a user.

        Returns:
            List of (item_id, score, explanation) tuples
        """
        # Stage 1: Candidate generation (millions → thousands)
        candidates = self.candidate_generator.generate(user_id, n_candidates=2000)
        print(f"Stage 1: Generated {len(candidates)} candidates")

        # Stage 2: Coarse ranking (thousands → hundreds)
        coarse_ranked = self.coarse_ranker.rank(user_id, candidates, top_k=500)
        print(f"Stage 2: Coarse ranked to {len(coarse_ranked)} items")

        # Stage 3: Fine ranking (hundreds → dozens)
        fine_ranked, scores = self.fine_ranker.rank(user_id, coarse_ranked, top_k=50)
        print(f"Stage 3: Fine ranked to {len(fine_ranked)} items")

        # Stage 4: Reranking (dozens → top-N)
        final = self.reranker.rerank(user_id, fine_ranked, scores, top_n=top_n)
        print(f"Stage 4: Reranked to final {len(final)} items")

        # Generate explanations
        recommendations = [
            (item_id, score, self._explain(user_id, item_id))
            for item_id, score in final
        ]

        return recommendations

    def _explain(self, user_id: int, item_id: int) -> str:
        """Generate human-readable explanation for recommendation."""
        # This would use explanation models/rules
        return "Based on your interest in similar items"

Core Challenges in Recommendation Systems

Despite decades of research and billions in commercial investment, recommendation systems still face fundamental challenges that have no perfect solutions. Understanding these challenges is crucial for building robust systems.

Challenge 1: The Cold-Start Problem

The cold-start problem comes in three flavors: new users, new items, and new systems.

New User Cold-Start

When a user first signs up, you have no behavioral history. How do you recommend anything?

Solutions:

  1. Onboarding: Explicitly ask users about preferences
    • Show popular items, ask them to select favorites
    • Ask demographic questions (age, location) to use stereotypes
    • Gradual: start with 3-5 items, then refine
  2. Stereotype-based: Use demographic information
    • Region-based (recommend local content)
    • Age/gender-based (statistical preferences)
    • Device-based (mobile users have different behavior patterns)
  3. Popular Items: Show globally popular or trending items
    • Safe bet: most people like popular items
    • Gets early interactions to bootstrap personalization
  4. Content-based: If user gives any signal (e.g., signup context)
    • Came from a blog post → recommend similar topics
    • Clicked on an ad → recommend related products

New Item Cold-Start

You've just added 1000 new movies to your catalog. How do you know who will like them?

Solutions:

  1. Content-based: Use item metadata
    • Genre, actors, director for movies
    • Brand, category, attributes for products
    • Match against users' historical preferences
  2. Explore-Exploit: Actively show new items to learn about them
    • Exploration bonus: boost scores for items with few ratings
    • Thompson Sampling or Upper Confidence Bound (UCB)
    • Example: \(\text{score}(i) = \hat{r}_i + \beta \sqrt{\frac{\log N}{n_i}}\), where \(n_i\) is item \(i\)'s rating count and \(N\) is the total number of ratings
  3. Transfer Learning: Use external signals
    • If it's a movie sequel, transfer ratings from the original
    • If it's a new product, use manufacturer reputation
    • Use features from other platforms (e.g., IMDB ratings for Netflix)
  4. Expert Curation: Human-curated lists for new items
    • "Staff Picks" or "New Releases You Might Like"
    • Feature in prominent positions to gather data quickly
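The UCB-style scoring above is a one-liner in practice; a minimal sketch (function name and β value are illustrative):

```python
import math

def ucb_score(mean_rating, n_item_ratings, n_total_ratings, beta=1.0):
    """Predicted rating plus an exploration bonus that shrinks as data accrues."""
    if n_item_ratings == 0:
        return float("inf")  # Always try never-rated items at least once
    bonus = beta * math.sqrt(math.log(n_total_ratings) / n_item_ratings)
    return mean_rating + bonus

# A new item with 5 ratings gets a far larger bonus than one with 5000,
# so it can outrank an established item despite a lower observed mean.
new_item = ucb_score(3.8, 5, 1_000_000)
established = ucb_score(4.0, 5000, 1_000_000)
```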

Example: Epsilon-Greedy Exploration

def recommend_with_exploration(user_id: int, candidates: List[int],
                               scores: np.ndarray, epsilon: float = 0.1,
                               top_n: int = 10) -> List[int]:
    """
    Recommend with epsilon-greedy exploration to handle cold-start.

    A fraction epsilon of the list is reserved for under-explored items.
    """
    # Separate explored and unexplored items
    unexplored = [c for c in candidates if get_item_rating_count(c) < 10]
    explored_set = set(candidates) - set(unexplored)

    # Determine how many positions to allocate to exploration
    n_explore = int(top_n * epsilon)
    n_exploit = top_n - n_explore

    # Exploitation: top scoring explored items
    exploit_indices = np.argsort(scores)[::-1]
    exploit_items = [candidates[i] for i in exploit_indices
                     if candidates[i] in explored_set][:n_exploit]

    # Exploration: random sample of unexplored items (guard against empty pool)
    if unexplored and n_explore > 0:
        explore_items = np.random.choice(
            unexplored, size=min(n_explore, len(unexplored)), replace=False
        ).tolist()
    else:
        explore_items = []

    # Combine and shuffle so explore items aren't always at the end
    result = exploit_items + explore_items
    np.random.shuffle(result)

    return result[:top_n]

Challenge 2: Data Sparsity

Most users interact with a tiny fraction of items. The rating matrix is > 99.9% sparse. This makes similarity computation and pattern finding extremely difficult.

Magnitude of Sparsity

  • Netflix: ~200M users, ~15K titles → ~3 trillion possible interactions
  • Average user rates ~50 movies → roughly 0.3% density (50 of ~15K titles)
  • YouTube: billions of users, billions of videos → density many orders of magnitude lower still

Consequences:

  • User-user similarity is hard to compute (few common items)
  • Long-tail items have almost no data
  • Statistical estimates have huge variance

Solutions:

  1. Dimensionality Reduction: Matrix factorization
    • Project into low-dimensional latent space
    • Smooths estimates by sharing statistical strength
  2. Implicit Feedback: Expand your signal
    • Don't rely only on explicit ratings
    • Use clicks, views, time spent, mouse hovers
    • More data, but noisier
  3. Multi-domain Learning: Transfer across domains
    • User's music taste might inform movie taste
    • Product purchases inform article interests
    • Shared user embeddings across domains
  4. Regularization: Prevent overfitting to sparse data
    • L2 regularization in matrix factorization
    • Dropout in neural networks
    • Early stopping

Challenge 3: Long-Tail Distribution

Item popularity follows a power law: a few items are extremely popular, most items have very few interactions. This creates multiple problems:

The Curve:

  • Top 1% of items account for 50% of interactions
  • Bottom 50% of items have < 10 interactions each
  • The "tail" extends very far

Problems:

  1. Popularity Bias: Models over-recommend popular items
    • Popular items have more data → better predictions → recommended more → get even more popular (rich get richer)
    • Hurts diversity and user satisfaction over time
  2. Creator Fairness: Niche creators never get discovered
    • New/small creators can't get initial traction
    • Reduces content diversity in ecosystem
  3. User Frustration: Users with niche tastes get poor recommendations
    • Their preferred content is in the tail
    • System doesn't have enough data on those items

Solutions:

  1. Debiasing Techniques:
    • Inverse Propensity Scoring: weight training examples by \(\frac{1}{P(\text{observed})}\)
    • Popularity-dampened Loss: \(L = \sum_i (r_i - \hat{r}_i)^2 / \log(1 + \text{popularity}(i))\), which forces the model to work harder on unpopular items
  2. Diversity Constraints: Explicitly limit popular items
    • "At most 3 blockbusters in top-10"
    • Reserve slots for different popularity tiers
  3. Multi-armed Bandits: Active learning for tail items
    • Intelligently explore tail items
    • Show them to users likely to appreciate them
  4. Social Recommendations: Leverage network effects
    • If someone in your network likes a niche item, you might too
    • Word-of-mouth recommendations
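The popularity-dampened loss from the debiasing list above translates directly into code; a minimal sketch (names illustrative, popularity counts assumed precomputed):

```python
import math

def popularity_dampened_loss(examples):
    """Sum of squared errors, each divided by log(1 + item popularity)."""
    total = 0.0
    for rating, prediction, popularity in examples:
        total += (rating - prediction) ** 2 / math.log(1 + popularity)
    return total

# The same 2-star error on a niche item (popularity 2) costs far more
# than on a blockbuster (popularity 10,000)
niche_only = popularity_dampened_loss([(5.0, 3.0, 2)])
hit_only = popularity_dampened_loss([(5.0, 3.0, 10_000)])
```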

Challenge 4: Temporal Dynamics

User preferences drift over time. Items' popularity changes. Models trained on old data become stale.

Types of Temporal Effects:

  1. User Preference Drift: You liked action movies 5 years ago, now you prefer documentaries
  2. Item Aging: Last year's trending items are old news
  3. Seasonal Patterns: Holiday movies in December, beach reads in summer
  4. Event-driven Spikes: Olympics → sports content, election → political content

Solutions:

  1. Time-Weighted Training:
    • Exponential time decay: recent data weighted higher, \(w(t) = e^{-\lambda (t_{\text{now}} - t)}\)
    • Sliding window: only train on the last N months
  2. Contextual Bandit Approaches:
    • Continuously update models with new data
    • Online learning algorithms (FTRL, online SGD)
  3. Temporal Features:
    • Time since release (freshness)
    • Day of week, hour of day
    • Days until major event (e.g., Christmas)
  4. Session-based Models:
    • Focus on recent session history, not all-time history
    • RNNs or Transformers on sequential interactions

Example: Time-Decayed Matrix Factorization

def fit_with_time_decay(self, ratings: List[Tuple[int, int, float, float]],
                        decay_rate: float = 0.01):
    """
    Matrix factorization with time-based weighting.

    Recent ratings weighted more heavily than old ratings.
    (Method of a matrix factorization class with P, Q, lr, reg, n_epochs attributes.)
    """
    current_time = max(t for _, _, _, t in ratings)

    for epoch in range(self.n_epochs):
        np.random.shuffle(ratings)

        for user_id, item_id, rating, timestamp in ratings:
            # Compute time weight
            time_diff = current_time - timestamp
            weight = np.exp(-decay_rate * time_diff)

            # Prediction error
            pred = self._predict_single(user_id, item_id)
            error = rating - pred

            # Weighted gradient update
            p_u = self.P[user_id].copy()
            q_i = self.Q[item_id].copy()

            self.P[user_id] += self.lr * weight * (error * q_i - self.reg * p_u)
            self.Q[item_id] += self.lr * weight * (error * p_u - self.reg * q_i)

Challenge 5: Scalability

Production systems need to serve millions of users per second with < 100ms latency. Training data grows to petabytes. Models have billions of parameters.

Scale Challenges:

  1. Online Serving: 10M+ requests per second
  2. Model Size: Billions of users × thousands of items = trillions of parameters
  3. Training Data: Petabytes of interaction logs
  4. Feature Computation: Real-time features for each request

Solutions:

  1. Distributed Systems:
    • Sharding: partition users/items across machines
    • Caching: cache popular predictions, user embeddings, item embeddings
    • Load balancing: distribute requests evenly
  2. Model Compression:
    • Quantization: 32-bit floats → 8-bit ints
    • Knowledge distillation: train small model to mimic large model
    • Pruning: remove low-importance parameters
  3. Approximate Algorithms:
    • Approximate nearest neighbor (e.g., FAISS, ScaNN)
    • Sampling negative examples instead of using all
    • Blocking/chunking strategies
  4. Offline/Online Split:
    • Precompute what you can offline (item embeddings, similarity matrices)
    • Online: only compute user-specific and real-time features

Implementing Classic Algorithms from Scratch

Theory is essential, but nothing beats implementing algorithms yourself. Let's build complete, production-quality implementations of the two foundational collaborative filtering approaches.

User-Based Collaborative Filtering

User-based CF finds users similar to you and recommends items they liked. Despite being one of the oldest recommendation algorithms, it's still used in practice for its interpretability and effectiveness.

Algorithm Overview:

The User-CF algorithm follows these key steps:

  1. Build user-item rating matrix: Store which users rated which items and their ratings
  2. Compute user similarities: For each user pair, calculate how similar their rating patterns are
  3. Find neighbors: For a target user-item pair, find the K users most similar to the target user who have rated the target item
  4. Predict rating: Weighted average of neighbors' ratings, where weights are the similarity scores
  5. Recommend: Rank all unrated items by predicted rating and return the Top-N

Why User-CF Works:

The fundamental assumption is that users who agreed in the past will agree in the future. If you and another user gave similar ratings to 50 movies, when that user rates a new movie highly, there's a strong signal you'll like it too.

Implementation Considerations:

  • Sparse data handling: Most users only rate a tiny fraction of items, so we use dictionaries instead of dense matrices
  • Similarity metrics: Pearson correlation is preferred over cosine similarity because it accounts for different rating scales (some users rate 4-5, others rate 2-3)
  • Minimum common items threshold: We only consider two users as neighbors if they've rated at least N items in common (default 3) to ensure statistical significance
  • Top-K neighbors: Using all similar users would be slow and noisy; we only use the K most similar (default 50)
import numpy as np
from typing import List, Dict, Tuple, Set
from collections import defaultdict # For efficient sparse data structures
from scipy.spatial.distance import cosine
import heapq # For efficient top-K selection

class UserBasedCF:
def __init__(self, k_neighbors: int = 50, min_common_items: int = 3,
similarity_metric: str = 'pearson'):
"""
User-based collaborative filtering recommender.

This implementation uses lazy computation: similarities are computed
on-demand rather than precomputed, which is more memory-efficient
for large datasets where we can't afford O(n ²) similarity matrix storage.

Args:
k_neighbors: Number of similar users to consider for prediction.
- Higher K = more stable but slower predictions
- Lower K = faster but potentially less accurate
- Typical range: 20-100
min_common_items: Minimum number of co-rated items to consider two users as neighbors.
- Too low = unreliable similarity estimates
- Too high = fewer neighbors, especially for sparse data
- Typical range: 3-10
similarity_metric: Method for computing user similarity
- 'cosine': Measures angle between rating vectors (ignores magnitude)
- 'pearson': Centered cosine (accounts for rating bias)
- 'adjusted_cosine': Subtracts item mean (less common for user-CF)
"""
self.k = k_neighbors
self.min_common = min_common_items
self.similarity_metric = similarity_metric

# Data structures for efficient lookup
# Using defaultdict avoids key errors and simplifies code
self.user_ratings = defaultdict(dict) # user_id -> {item_id: rating}
self.item_users = defaultdict(set) # item_id -> {user_ids who rated it} (inverted index)
self.user_means = {} # user_id -> mean rating (for bias correction)
self.global_mean = 0.0 # overall rating mean (fallback for cold start)

def fit(self, ratings: List[Tuple[int, int, float]]) -> 'UserBasedCF':
"""
Build the model from rating data.

This method constructs the internal data structures needed for prediction.
Unlike matrix factorization, there's no "training" in the ML sense - we're
just organizing the data for efficient similarity computation and lookup.

Time complexity: O(N) where N is number of ratings
Space complexity: O(N + U + I) where U is users, I is items

Args:
ratings: List of (user_id, item_id, rating) tuples
Example: [(1, 101, 5.0), (1, 102, 4.0), (2, 101, 3.0)]

Returns:
self (for method chaining)
"""
# Build user-item and item-user mappings
# These bidirectional indexes enable efficient queries:
# - user_ratings: "What did user X rate?" (for similarity computation)
# - item_users: "Who rated item Y?" (for finding candidate neighbors)
all_ratings = []
for user_id, item_id, rating in ratings:
self.user_ratings[user_id][item_id] = rating # Forward index
self.item_users[item_id].add(user_id) # Inverted index
all_ratings.append(rating)

# Compute statistics for baseline predictions and bias correction
self.global_mean = np.mean(all_ratings) # Fallback when no info available
for user_id, items in self.user_ratings.items():
# User mean used for:
# 1. Pearson correlation (center ratings)
# 2. Baseline prediction when no neighbors found
self.user_means[user_id] = np.mean(list(items.values()))

print(f"Fitted on {len(ratings)} ratings from {len(self.user_ratings)} users "
f"and {len(self.item_users)} items")

return self

def _compute_similarity(self, user1: int, user2: int) -> float:
"""
Compute similarity between two users based on their rating patterns.

**Why similarity matters**: The quality of recommendations depends critically
on finding truly similar users. A good similarity metric should:
- Be robust to different rating scales (user A rates 4-5, user B rates 2-3)
- Require sufficient overlap (few common items = unreliable estimate)
- Be efficient to compute (will be called millions of times)

**Cosine vs Pearson**:
- Cosine: Measures vector angle, good when ratings are on same scale
- Pearson: Centers ratings first (subtracts mean), better for handling rating bias

        Example:
            User A rates [4, 5, 3] for items [1, 2, 3]
            User B rates [2, 3, 1] for items [1, 2, 3]

            Cosine similarity = high (vectors point in nearly the same direction)
            Pearson correlation = 1.0 (identical preferences after centering)

            Note: a constant vector like [5, 5, 5] has zero variance after
            centering, so the code below returns 0.0 for it (the correlation
            is undefined), not 1.0.

Args:
user1, user2: User IDs to compare

Returns:
Similarity score in [0, 1] where 1 means most similar
"""
# Find common items (co-ratings)
# Only items rated by BOTH users can contribute to similarity
items1 = set(self.user_ratings[user1].keys())
items2 = set(self.user_ratings[user2].keys())
common = items1 & items2 # Set intersection

# Statistical significance check:
# With few common items, similarity estimate is unreliable
if len(common) < self.min_common:
return 0.0

# Extract ratings for common items as parallel arrays
# This enables vectorized numpy operations (much faster than loops)
ratings1 = np.array([self.user_ratings[user1][i] for i in common])
ratings2 = np.array([self.user_ratings[user2][i] for i in common])

if self.similarity_metric == 'cosine':
# Cosine similarity: cos(θ) = (A · B) / (||A|| × ||B||)
# Measures angle between rating vectors
            norm1 = np.linalg.norm(ratings1)  # ||A|| = sqrt(sum(A**2))
            norm2 = np.linalg.norm(ratings2)  # ||B|| = sqrt(sum(B**2))
if norm1 == 0 or norm2 == 0: # Avoid division by zero
return 0.0
return np.dot(ratings1, ratings2) / (norm1 * norm2)

elif self.similarity_metric == 'pearson':
            # Pearson correlation: "centered" cosine similarity
            # Formula: ρ = Σ[(r_ui - r̄_u)(r_vi - r̄_v)] / (||r_u - r̄_u|| × ||r_v - r̄_v||)
#
# Key benefit: Removes user rating bias
# User A always rates 4-5 (optimistic)
# User B always rates 2-3 (pessimistic)
# But their *relative* preferences might be identical
mean1 = self.user_means[user1]
mean2 = self.user_means[user2]

# Center ratings (subtract mean)
ratings1_centered = ratings1 - mean1
ratings2_centered = ratings2 - mean2

# Compute correlation (essentially cosine of centered vectors)
norm1 = np.linalg.norm(ratings1_centered)
norm2 = np.linalg.norm(ratings2_centered)

if norm1 == 0 or norm2 == 0:
return 0.0

correlation = np.dot(ratings1_centered, ratings2_centered) / (norm1 * norm2)
# Pearson ranges [-1, 1], convert to [0, 1] for consistency
# -1 (opposite preferences) -> 0
# 0 (no correlation) -> 0.5
# 1 (identical preferences) -> 1
return (correlation + 1) / 2

else:
raise ValueError(f"Unknown similarity metric: {self.similarity_metric}")

def _find_neighbors(self, user_id: int, item_id: int) -> List[Tuple[int, float]]:
"""
Find K most similar users who rated the given item.

Returns:
List of (neighbor_user_id, similarity) tuples
"""
if item_id not in self.item_users:
return []

# Candidate neighbors: users who rated this item
candidates = self.item_users[item_id]

# Compute similarities
similarities = []
for neighbor_id in candidates:
if neighbor_id == user_id:
continue
sim = self._compute_similarity(user_id, neighbor_id)
if sim > 0:
similarities.append((neighbor_id, sim))

# Return top K
similarities.sort(key=lambda x: x[1], reverse=True)
return similarities[:self.k]

def predict(self, user_id: int, item_id: int) -> float:
"""
Predict rating for a user-item pair.

Uses weighted average of neighbors' ratings.
"""
# Check if user or item is unknown
if user_id not in self.user_ratings:
return self.global_mean

if item_id not in self.item_users:
return self.user_means.get(user_id, self.global_mean)

# Check if user already rated this item
if item_id in self.user_ratings[user_id]:
return self.user_ratings[user_id][item_id]

# Find neighbors
neighbors = self._find_neighbors(user_id, item_id)

if not neighbors:
return self.user_means[user_id]

# Compute weighted average
weighted_sum = 0.0
similarity_sum = 0.0

user_mean = self.user_means[user_id]

for neighbor_id, similarity in neighbors:
neighbor_mean = self.user_means[neighbor_id]
neighbor_rating = self.user_ratings[neighbor_id][item_id]

# Centered rating (removes bias)
centered_rating = neighbor_rating - neighbor_mean

weighted_sum += similarity * centered_rating
similarity_sum += similarity

if similarity_sum == 0:
return user_mean

# Predicted rating = user's mean + weighted average of centered neighbors' ratings
prediction = user_mean + (weighted_sum / similarity_sum)

return prediction

def recommend(self, user_id: int, n: int = 10,
exclude_rated: bool = True) -> List[Tuple[int, float]]:
"""
Generate top-N recommendations for a user.

Args:
user_id: User ID
n: Number of recommendations
exclude_rated: Whether to exclude items user has already rated

Returns:
List of (item_id, predicted_rating) tuples
"""
if user_id not in self.user_ratings:
# Cold start: recommend popular items
item_popularity = [(item, len(users)) for item, users in self.item_users.items()]
item_popularity.sort(key=lambda x: x[1], reverse=True)
return [(item, self.global_mean) for item, _ in item_popularity[:n]]

# Get items to score
rated_items = set(self.user_ratings[user_id].keys())
if exclude_rated:
candidate_items = set(self.item_users.keys()) - rated_items
else:
candidate_items = set(self.item_users.keys())

# Predict ratings for all candidates
predictions = []
for item_id in candidate_items:
pred = self.predict(user_id, item_id)
predictions.append((item_id, pred))

# Sort by predicted rating and return top N
predictions.sort(key=lambda x: x[1], reverse=True)
return predictions[:n]

def evaluate(self, test_ratings: List[Tuple[int, int, float]]) -> Dict[str, float]:
"""
Evaluate the model on test data.

Returns:
Dictionary of metric name -> value
"""
predictions = []
actuals = []

for user_id, item_id, rating in test_ratings:
pred = self.predict(user_id, item_id)
predictions.append(pred)
actuals.append(rating)

predictions = np.array(predictions)
actuals = np.array(actuals)

# Compute metrics
mae = np.mean(np.abs(predictions - actuals))
rmse = np.sqrt(np.mean((predictions - actuals) ** 2))

return {
'MAE': mae,
'RMSE': rmse,
'n_predictions': len(predictions)
}

# Example usage
if __name__ == "__main__":
# Sample dataset
ratings = [
(0, 0, 5), (0, 1, 3), (0, 2, 4), (0, 3, 1),
(1, 0, 4), (1, 1, 2), (1, 3, 2), (1, 4, 4),
(2, 0, 3), (2, 2, 3), (2, 3, 2), (2, 4, 5),
(3, 1, 4), (3, 2, 5), (3, 3, 3), (3, 4, 2),
(4, 0, 2), (4, 1, 5), (4, 2, 3), (4, 4, 1),
]

# Train
cf = UserBasedCF(k_neighbors=3, similarity_metric='pearson')
cf.fit(ratings)

# Predict
print("\nPredictions:")
print(f"User 0, Item 4: {cf.predict(0, 4):.2f}")
print(f"User 1, Item 2: {cf.predict(1, 2):.2f}")

# Recommend
print("\nTop 3 recommendations for User 0:")
recs = cf.recommend(user_id=0, n=3)
for item_id, score in recs:
print(f" Item {item_id}: {score:.2f}")

Item-Based Collaborative Filtering

Item-based CF computes similarity between items and recommends items similar to ones you liked. It's generally more stable and scalable than user-based CF.
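The core item-item similarity computation can also be shown standalone. The sketch below illustrates adjusted cosine similarity (each rating centered by its user's mean) on a small dense matrix; the function name and the NaN-for-missing convention are illustrative choices, not from any particular library:

```python
import numpy as np

def adjusted_cosine(ratings: np.ndarray, i: int, j: int) -> float:
    """Adjusted cosine similarity between item columns i and j.

    `ratings` is a dense user x item matrix with np.nan for missing entries.
    Each rating is centered by its user's mean before taking the cosine,
    which compensates for users with different rating scales.
    """
    user_means = np.nanmean(ratings, axis=1, keepdims=True)
    centered = ratings - user_means
    mask = ~np.isnan(ratings[:, i]) & ~np.isnan(ratings[:, j])  # co-rating users
    a, b = centered[mask, i], centered[mask, j]
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0
```

The full class below does the same thing over sparse dictionaries instead of a dense matrix, which is what you'd want once the user-item matrix no longer fits in memory.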

class ItemBasedCF:
def __init__(self, k_neighbors: int = 50, min_common_users: int = 3,
similarity_metric: str = 'adjusted_cosine'):
"""
Item-based collaborative filtering recommender.

Args:
k_neighbors: Number of similar items to consider
min_common_users: Minimum user overlap to compute similarity
similarity_metric: 'cosine', 'pearson', or 'adjusted_cosine'
"""
self.k = k_neighbors
self.min_common = min_common_users
self.similarity_metric = similarity_metric

# Data structures
self.item_ratings = defaultdict(dict) # item_id -> {user_id: rating}
self.user_items = defaultdict(set) # user_id -> {item_id}
self.user_means = {} # user_id -> mean rating
self.item_similarity_cache = {} # (item1, item2) -> similarity
self.global_mean = 0.0

def fit(self, ratings: List[Tuple[int, int, float]]) -> 'ItemBasedCF':
"""Build the model and precompute item similarities."""
# Build item-user and user-item mappings
all_ratings = []
for user_id, item_id, rating in ratings:
self.item_ratings[item_id][user_id] = rating
self.user_items[user_id].add(item_id)
all_ratings.append(rating)

# Compute means
self.global_mean = np.mean(all_ratings)
for user_id, items in self.user_items.items():
user_ratings = [self.item_ratings[item][user_id] for item in items]
self.user_means[user_id] = np.mean(user_ratings)

# Optionally precompute all pairwise item similarities
# (For large datasets, you'd compute these on-demand or use approximate methods)
print(f"Fitted on {len(ratings)} ratings from {len(self.user_items)} users "
f"and {len(self.item_ratings)} items")

return self

def _compute_similarity(self, item1: int, item2: int) -> float:
"""Compute similarity between two items."""
# Check cache
cache_key = (min(item1, item2), max(item1, item2))
if cache_key in self.item_similarity_cache:
return self.item_similarity_cache[cache_key]

# Find common users
users1 = set(self.item_ratings[item1].keys())
users2 = set(self.item_ratings[item2].keys())
common = users1 & users2

if len(common) < self.min_common:
similarity = 0.0
else:
ratings1 = np.array([self.item_ratings[item1][u] for u in common])
ratings2 = np.array([self.item_ratings[item2][u] for u in common])

if self.similarity_metric == 'cosine':
norm1 = np.linalg.norm(ratings1)
norm2 = np.linalg.norm(ratings2)
if norm1 == 0 or norm2 == 0:
similarity = 0.0
else:
similarity = np.dot(ratings1, ratings2) / (norm1 * norm2)

elif self.similarity_metric == 'adjusted_cosine':
# Adjust by user means (accounts for different user rating scales)
user_means = np.array([self.user_means[u] for u in common])
ratings1_adjusted = ratings1 - user_means
ratings2_adjusted = ratings2 - user_means

norm1 = np.linalg.norm(ratings1_adjusted)
norm2 = np.linalg.norm(ratings2_adjusted)

if norm1 == 0 or norm2 == 0:
similarity = 0.0
else:
similarity = np.dot(ratings1_adjusted, ratings2_adjusted) / (norm1 * norm2)

else:
raise ValueError(f"Unknown similarity metric: {self.similarity_metric}")

# Cache and return
self.item_similarity_cache[cache_key] = similarity
return similarity

def _find_similar_items(self, item_id: int, rated_by_user: Set[int]) -> List[Tuple[int, float]]:
"""
Find K most similar items that the user has rated.

Args:
item_id: Target item
rated_by_user: Set of item IDs the user has rated

Returns:
List of (similar_item_id, similarity) tuples
"""
similarities = []
for rated_item in rated_by_user:
if rated_item == item_id:
continue
sim = self._compute_similarity(item_id, rated_item)
if sim > 0:
similarities.append((rated_item, sim))

# Return top K
similarities.sort(key=lambda x: x[1], reverse=True)
return similarities[:self.k]

def predict(self, user_id: int, item_id: int) -> float:
"""
Predict rating using weighted sum of similar items.
"""
# Handle cold start
if user_id not in self.user_items:
return self.global_mean

if item_id not in self.item_ratings:
return self.user_means.get(user_id, self.global_mean)

# Check if already rated
if user_id in self.item_ratings[item_id]:
return self.item_ratings[item_id][user_id]

# Find similar items that user has rated
user_rated_items = self.user_items[user_id]
similar_items = self._find_similar_items(item_id, user_rated_items)

if not similar_items:
return self.user_means.get(user_id, self.global_mean)

# Weighted sum
weighted_sum = 0.0
similarity_sum = 0.0

for similar_item_id, similarity in similar_items:
rating = self.item_ratings[similar_item_id][user_id]
weighted_sum += similarity * rating
similarity_sum += abs(similarity)

if similarity_sum == 0:
return self.user_means.get(user_id, self.global_mean)

prediction = weighted_sum / similarity_sum
return prediction

def recommend(self, user_id: int, n: int = 10,
exclude_rated: bool = True) -> List[Tuple[int, float]]:
"""Generate top-N recommendations."""
if user_id not in self.user_items:
# Cold start: popular items
item_popularity = [(item, len(users)) for item, users in self.item_ratings.items()]
item_popularity.sort(key=lambda x: x[1], reverse=True)
return [(item, self.global_mean) for item, _ in item_popularity[:n]]

# Candidate items
rated_items = self.user_items[user_id]
if exclude_rated:
candidate_items = set(self.item_ratings.keys()) - rated_items
else:
candidate_items = set(self.item_ratings.keys())

# Predict
predictions = []
for item_id in candidate_items:
pred = self.predict(user_id, item_id)
predictions.append((item_id, pred))

predictions.sort(key=lambda x: x[1], reverse=True)
return predictions[:n]

def get_similar_items(self, item_id: int, n: int = 10) -> List[Tuple[int, float]]:
"""
Find items most similar to the given item.

Useful for "people who liked this also liked..." recommendations.
"""
if item_id not in self.item_ratings:
return []

all_items = set(self.item_ratings.keys()) - {item_id}
similarities = [(other_item, self._compute_similarity(item_id, other_item))
for other_item in all_items]

similarities.sort(key=lambda x: x[1], reverse=True)
return similarities[:n]

Frequently Asked Questions

This section addresses common questions that arise when building recommendation systems.

Q1: When should I use collaborative filtering vs. content-based filtering?

Use collaborative filtering when:

  • You have substantial user-item interaction data
  • Items are hard to describe with features (e.g., what makes a song "good"?)
  • You want serendipitous recommendations (users discover unexpected items)

Use content-based filtering when:

  • You have rich item metadata but sparse interactions
  • Items are new (cold-start problem for CF)
  • You need explainable recommendations ("Because it has similar actors")
  • Privacy is a concern (content-based doesn't need other users' data)

In practice, use both (hybrid approach).

Q2: How do I handle implicit feedback (clicks, views) vs. explicit feedback (ratings)?

Implicit feedback is more abundant but noisier:

  • Positive class: User interacted (clicked, viewed, purchased)
  • Negative class: User didn't interact (but this could mean they never saw it!)

Approaches:

  1. Treat all observations as positive and unobserved pairs as negatives with low confidence (weighted matrix factorization)
  2. Sample negatives from unobserved items (negative sampling)
  3. Use ranking losses instead of rating prediction (BPR, pairwise ranking)
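As an illustration of approach 2, here is a minimal negative-sampling sketch. The `interactions` structure and the 4:1 negative ratio are assumptions for the example, not part of any specific library:

```python
import random
from typing import Dict, List, Set, Tuple

def sample_training_pairs(interactions: Dict[int, Set[int]],
                          all_items: Set[int],
                          negatives_per_positive: int = 4,
                          seed: int = 42) -> List[Tuple[int, int, int]]:
    """Turn implicit feedback into (user, item, label) training triples.

    Each observed interaction becomes a positive (label=1); for each positive
    we sample items the user has NOT interacted with as negatives (label=0).
    """
    rng = random.Random(seed)
    triples = []
    for user, pos_items in interactions.items():
        candidates = list(all_items - pos_items)  # unobserved items for this user
        for item in pos_items:
            triples.append((user, item, 1))
            for neg in rng.sample(candidates, min(negatives_per_positive, len(candidates))):
                triples.append((user, neg, 0))
    return triples
```

The resulting triples can feed any binary classifier or pairwise ranking loss; the key caveat, as noted above, is that a sampled "negative" may simply be an item the user never saw.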

Q3: What's the right dimensionality for matrix factorization?

Typical range: 20-200 latent factors.

  • Too small (< 10): Underfitting, can't capture complexity
  • Too large (> 500): Overfitting, slow, diminishing returns

Find the sweet spot using validation data. Start with 50-100 and tune.
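That tuning loop is simple enough to sketch generically. The helper below just picks the factor count with the lowest validation RMSE; `validation_rmse` stands in for your own train-and-evaluate routine, since no matrix-factorization implementation appears in this article:

```python
from typing import Callable, Iterable, Optional, Tuple

def select_n_factors(candidates: Iterable[int],
                     validation_rmse: Callable[[int], float]) -> Tuple[Optional[int], float]:
    """Return (best factor count, its RMSE) over the candidate values.

    `validation_rmse(k)` is a placeholder: it should train a model with k
    latent factors and return its error on held-out validation data.
    """
    best_k, best_rmse = None, float('inf')
    for k in candidates:
        rmse = validation_rmse(k)
        if rmse < best_rmse:
            best_k, best_rmse = k, rmse
    return best_k, best_rmse
```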

Q4: How often should I retrain my model?

Depends on data velocity and user tolerance for staleness:

  • High velocity (TikTok, news): Every few hours, or online learning
  • Medium velocity (Netflix, Spotify): Daily or weekly
  • Low velocity (Amazon products): Weekly to monthly

Use a freshness monitor: track how often recommendations include recently added items. If freshness drops, retrain more frequently.
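One way to implement such a freshness monitor is to measure what fraction of recommended items were added recently. The function below is a sketch under assumed conventions (Unix-style timestamps, a one-week freshness window):

```python
from typing import Dict, List

def recommendation_freshness(recommended_items: List[int],
                             item_added_at: Dict[int, float],
                             now: float,
                             max_age_seconds: float = 7 * 24 * 3600) -> float:
    """Fraction of recommended items added within the last `max_age_seconds`.

    A sustained drop in this ratio suggests the model is going stale and
    should be retrained more often.
    """
    if not recommended_items:
        return 0.0
    fresh = sum(1 for item in recommended_items
                if now - item_added_at.get(item, float('-inf')) <= max_age_seconds)
    return fresh / len(recommended_items)
```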

Q5: How do I make recommendations explainable?

Explainability builds trust and helps users understand recommendations:

  1. Content-based: "Because it has similar genre/actors/attributes"
  2. Collaborative filtering: "Because users with similar taste enjoyed this"
  3. Hybrid: Combine both explanations
  4. Post-hoc: Train a separate model to generate explanations from features

Example implementation:

def explain_recommendation(user_id: int, item_id: int) -> str:
"""Generate human-readable explanation."""
# Check which signal contributed most
content_score = content_based_model.predict(user_id, item_id)
cf_score = collaborative_model.predict(user_id, item_id)

if content_score > cf_score:
# Find most similar item in user's history
user_history = get_user_history(user_id)
similarities = [(hist_item, item_similarity(hist_item, item_id))
for hist_item in user_history]
most_similar = max(similarities, key=lambda x: x[1])
return f"Because you liked {most_similar[0]}"
else:
return "Because users with similar taste enjoyed this"

Q6: How do I balance exploitation (recommend what users like) vs. exploration (try new things)?

Pure exploitation leads to filter bubbles. Pure exploration annoys users.

Strategies:

  1. Epsilon-greedy: With probability \(\epsilon\), recommend random items
  2. Thompson Sampling: Bayesian approach that naturally balances exploitation/exploration
  3. Upper Confidence Bound (UCB): Boost items with high uncertainty
  4. Diversity in reranking: Reserve 1-2 spots in the top 10 for "exploratory" items

Typical \(\epsilon\): 0.05-0.15 (5-15% exploration).
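An epsilon-greedy slate builder can be sketched in a few lines; treat this as illustrative, not a production policy:

```python
import random
from typing import List, Tuple

def epsilon_greedy_slate(ranked_items: List[Tuple[int, float]],
                         candidate_pool: List[int],
                         n: int = 10,
                         epsilon: float = 0.1,
                         seed: int = 0) -> List[int]:
    """Fill a slate of n items: each slot exploits the top remaining ranked
    item with probability 1 - epsilon, otherwise explores a random candidate."""
    rng = random.Random(seed)
    ranked = [item for item, _ in ranked_items]
    slate: List[int] = []
    while len(slate) < n and (ranked or candidate_pool):
        pool = [i for i in candidate_pool if i not in slate]
        if rng.random() < epsilon and pool:
            slate.append(rng.choice(pool))  # explore
        else:
            # exploit: next best ranked item not already in the slate
            while ranked and ranked[0] in slate:
                ranked.pop(0)
            if ranked:
                slate.append(ranked.pop(0))
            elif pool:
                slate.append(rng.choice(pool))
            else:
                break
    return slate
```

With \(\epsilon = 0\) this degenerates to pure exploitation (the ranker's top N); raising \(\epsilon\) trades some immediate relevance for coverage of under-explored items.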

Q7: What about multi-criteria recommendations (price, location, etc.)?

For items with multiple attributes (price, location, availability):

  1. Filtering: Apply hard constraints first (price range, in-stock only)
  2. Multi-objective optimization: Pareto optimal solutions balancing multiple criteria
  3. Utility function: Learn a function that combines multiple criteria: \(\text{utility}(i) = \alpha \cdot \text{relevance}(i) + \beta \cdot \text{price\_score}(i) + \gamma \cdot \text{location\_score}(i)\)

Q8: How do I prevent recommending the same item repeatedly?

Users don't want to see the same recommendations every time they visit.

Solutions:

  1. Exclude recently seen: Don't recommend items shown in the last N sessions
  2. Decay seen items: Reduce scores for previously shown items
  3. Session diversity: Track what's shown in the current session, ensure variety
  4. Fatigue modeling: Explicitly model the probability that the user is "tired" of an item

def apply_fatigue(item_scores: Dict[int, float], 
seen_counts: Dict[int, int]) -> Dict[int, float]:
"""Apply fatigue penalty based on how often user has seen each item."""
adjusted_scores = {}
for item_id, score in item_scores.items():
seen = seen_counts.get(item_id, 0)
# Exponential decay based on times seen
fatigue_penalty = 0.7 ** seen
adjusted_scores[item_id] = score * fatigue_penalty
return adjusted_scores

Q9: What about group recommendations (e.g., movie for family)?

Recommending for groups is challenging because individuals have different preferences.

Approaches:

  1. Average: Aggregate individual scores: \(\text{score}(i) = \frac{1}{|G|} \sum_{u \in G} \text{score}_u(i)\)

  2. Least misery: Optimize for the least satisfied member: \(\text{score}(i) = \min_{u \in G} \text{score}_u(i)\)

  3. Most pleasure: Maximize the most satisfied member: \(\text{score}(i) = \max_{u \in G} \text{score}_u(i)\)

  4. Fairness: Ensure each member gets some preferred items, e.g., alternate between optimizing for each member
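The first three aggregation rules fit in one small function; `member_scores` maps each group member to their individually predicted score for the item:

```python
from typing import Dict

def group_score(member_scores: Dict[int, float], strategy: str = 'average') -> float:
    """Aggregate per-member predicted scores for one item into a group score."""
    scores = list(member_scores.values())
    if not scores:
        raise ValueError("empty group")
    if strategy == 'average':
        return sum(scores) / len(scores)
    elif strategy == 'least_misery':
        return min(scores)
    elif strategy == 'most_pleasure':
        return max(scores)
    raise ValueError(f"Unknown strategy: {strategy}")
```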

Q10: How do I evaluate a recommendation system without A/B testing?

A/B tests are gold standard, but offline evaluation is useful during development:

  1. Temporal split: Train on data before time \(T\), test on data after
    • Simulates production setting
  2. Leave-one-out: For each user, hide one item they liked, try to recommend it
  3. User-stratified split: Split by users (some users for train, others for test)
    • Tests generalization to new users

Be careful: offline metrics don't always correlate with online business metrics. A model with lower RMSE might have lower engagement. Always validate with A/B tests before shipping.
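A temporal split over the (user, item, rating) tuples used throughout this article can be sketched as follows, assuming each rating also carries a timestamp (which the earlier toy examples omit):

```python
from typing import List, Tuple

Rating = Tuple[int, int, float, float]  # (user_id, item_id, rating, timestamp)

def temporal_split(ratings: List[Rating],
                   split_time: float) -> Tuple[List[Rating], List[Rating]]:
    """Split ratings into train (before split_time) and test (at/after it).

    Unlike a random split, this never trains on interactions that happened
    after the ones being predicted, matching the production setting.
    """
    train = [r for r in ratings if r[3] < split_time]
    test = [r for r in ratings if r[3] >= split_time]
    return train, test
```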

Summary and Next Steps

We've covered the foundational concepts of recommendation systems: the three core paradigms (collaborative filtering, content-based, hybrid), evaluation metrics (precision, recall, NDCG), system architecture (the multi-stage funnel), critical challenges (cold-start, sparsity, the long tail), and practical implementations.

Key Takeaways:

  1. No single best algorithm: The right approach depends on your data, scale, and business goals
  2. Hybrid methods dominate: Combining multiple signals (collaborative + content + context) works best
  3. Architecture matters: Multi-stage systems balance quality and latency at scale
  4. Evaluation is multi-faceted: Accuracy, diversity, coverage, and business metrics all matter
  5. Production is different from research: Real systems must handle cold-start, scale, freshness, and business constraints

Where to Go from Here:

This article focused on fundamentals. The next articles in this series will cover:

  • Part 2: Deep Learning for Recommendations: Neural collaborative filtering, attention mechanisms, graph neural networks, transformer-based models
  • Part 3: Context-Aware Recommendations: Sequential models, session-based recommendations, multi-armed bandits, reinforcement learning
  • Part 4: Advanced Topics: Multi-task learning, fairness & bias, explanation generation, real-time systems, large-scale serving

Resources for Further Learning:

Books:

  • "Recommender Systems: The Textbook" by Charu Aggarwal
  • "Deep Learning for Recommender Systems" by Heng-Tze Cheng et al.

Papers (foundational):

  • Matrix Factorization Techniques (Netflix Prize)
  • BPR: Bayesian Personalized Ranking (implicit feedback)
  • Wide & Deep Learning (Google)
  • Neural Collaborative Filtering (deep learning for CF)

Open-source libraries:

  • Surprise (Python, educational)
  • LightFM (hybrid models)
  • RecBole (comprehensive research toolkit)
  • TensorFlow Recommenders (production-grade)

The field of recommendation systems continues to evolve rapidly, with recent advances in foundation models, multi-modal recommendations, and conversational recommenders. But the fundamentals covered here remain essential – they're the building blocks upon which all modern systems are built. Master these concepts, and you'll have a solid foundation for understanding and building state-of-the-art recommendation systems.
