Problem 2: Heatmap vs Direct Regression for Keypoint Detection
Implement and compare two approaches to keypoint localization: spatial heatmap regression and direct coordinate regression. Quantify the performance difference between these methods.
Part A: Dataset and Data Loading
You will work with synthetic “stick figure” images containing 5 keypoints per figure. The dataset includes keypoint annotations in pixel coordinates.
Create dataset.py implementing data loading for both approaches:
import torch
from torch.utils.data import Dataset
from PIL import Image
import numpy as np
import json
class KeypointDataset(Dataset):
def __init__(self, image_dir, annotation_file, output_type='heatmap',
heatmap_size=64, sigma=2.0):
"""
Initialize the keypoint dataset.
Args:
image_dir: Path to directory containing images
annotation_file: Path to JSON annotations
output_type: 'heatmap' or 'regression'
heatmap_size: Size of output heatmaps (for heatmap mode)
sigma: Gaussian sigma for heatmap generation
"""
self.image_dir = image_dir
self.output_type = output_type
self.heatmap_size = heatmap_size
self.sigma = sigma
# Load annotations
pass
def generate_heatmap(self, keypoints, height, width):
"""
        Generate Gaussian heatmaps for keypoints.
Args:
keypoints: Array of shape [num_keypoints, 2] in (x, y) format
height, width: Dimensions of the heatmap
Returns:
heatmaps: Tensor of shape [num_keypoints, height, width]
"""
# For each keypoint:
        # 1. Create a 2D Gaussian centered at the keypoint location
# 2. Handle boundary cases
pass
def __getitem__(self, idx):
"""
Return a sample from the dataset.
Returns:
image: Tensor of shape [1, 128, 128] (grayscale)
If output_type == 'heatmap':
targets: Tensor of shape [5, 64, 64] (5 heatmaps)
If output_type == 'regression':
targets: Tensor of shape [10] (x,y for 5 keypoints, normalized to [0,1])
"""
        pass

Dataset Properties:
- Images: 128×128 grayscale images
- Keypoints: 5 points (head, left_hand, right_hand, left_foot, right_foot)
- Annotations: (x, y) coordinates in pixel space
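For reference, here is a minimal sketch of the Gaussian heatmap target described above. It is an illustration, not the required implementation: the helper name make_gaussian_heatmap, the unnormalized peak-of-1 convention, and zeroing the map for out-of-bounds keypoints are all choices the spec leaves open. It also assumes keypoint coordinates have already been scaled from the 128×128 image to the 64×64 heatmap grid (a factor of 0.5).

import numpy as np

def make_gaussian_heatmap(x, y, height, width, sigma=2.0):
    """Hypothetical helper: 2D Gaussian with std `sigma` centered at (x, y), peak value 1."""
    xs = np.arange(width, dtype=np.float32)            # column coordinates
    ys = np.arange(height, dtype=np.float32)[:, None]  # row coordinates
    heatmap = np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2.0 * sigma ** 2))
    # One convention for boundary handling: keypoints outside the map get an all-zero target
    if not (0 <= x < width and 0 <= y < height):
        heatmap[:] = 0.0
    return heatmap

generate_heatmap would call something like this once per keypoint and stack the five maps into a [5, 64, 64] tensor; for the regression branch, the targets are simply the pixel coordinates divided by 128 and flattened to a length-10 vector.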
Part B: Network Architectures
Create model.py with both heatmap and regression networks:
import torch
import torch.nn as nn
import torch.nn.functional as F
class HeatmapNet(nn.Module):
def __init__(self, num_keypoints=5):
"""
Initialize the heatmap regression network.
Args:
num_keypoints: Number of keypoints to detect
"""
super().__init__()
self.num_keypoints = num_keypoints
# Encoder (downsampling path)
# Input: [batch, 1, 128, 128]
# Progressively downsample to extract features
# Decoder (upsampling path)
# Progressively upsample back to heatmap resolution
# Output: [batch, num_keypoints, 64, 64]
# Skip connections between encoder and decoder
pass
def forward(self, x):
"""
Forward pass.
Args:
x: Input tensor of shape [batch, 1, 128, 128]
Returns:
heatmaps: Tensor of shape [batch, num_keypoints, 64, 64]
"""
pass
class RegressionNet(nn.Module):
def __init__(self, num_keypoints=5):
"""
Initialize the direct regression network.
Args:
num_keypoints: Number of keypoints to detect
"""
super().__init__()
self.num_keypoints = num_keypoints
# Use same encoder architecture as HeatmapNet
# But add global pooling and fully connected layers
# Output: [batch, num_keypoints * 2]
pass
def forward(self, x):
"""
Forward pass.
Args:
x: Input tensor of shape [batch, 1, 128, 128]
Returns:
coords: Tensor of shape [batch, num_keypoints * 2]
Values in range [0, 1] (normalized coordinates)
"""
        pass

Architecture Specifications:
Encoder (shared between both networks):
- Conv1: Conv(1→32) → BN → ReLU → MaxPool (128→64)
- Conv2: Conv(32→64) → BN → ReLU → MaxPool (64→32)
- Conv3: Conv(64→128) → BN → ReLU → MaxPool (32→16)
- Conv4: Conv(128→256) → BN → ReLU → MaxPool (16→8)
HeatmapNet Decoder:
- Deconv4: ConvTranspose(256→128) → BN → ReLU (8→16)
- Concat with Conv3 output (skip connection)
- Deconv3: ConvTranspose(256→64) → BN → ReLU (16→32)
- Concat with Conv2 output (skip connection)
- Deconv2: ConvTranspose(128→32) → BN → ReLU (32→64)
- Final: Conv(32→num_keypoints) (no activation)
RegressionNet Head:
- Global Average Pooling
- FC1: Linear(256→128) → ReLU → Dropout(0.5)
- FC2: Linear(128→64) → ReLU → Dropout(0.5)
- FC3: Linear(64→num_keypoints*2) → Sigmoid
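The following is a minimal sketch of how this specification could translate into PyTorch. Kernel sizes and padding are not pinned down by the spec; 3×3 convolutions with padding 1 and 2×2 stride-2 transposed convolutions are assumed here, and the class names carry a Sketch suffix to distinguish them from the required HeatmapNet/RegressionNet.

import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Conv -> BN -> ReLU -> MaxPool, halving spatial resolution (assumed 3x3 kernel, padding 1)
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
        nn.MaxPool2d(2),
    )

def deconv_block(in_ch, out_ch):
    # ConvTranspose -> BN -> ReLU, doubling spatial resolution (assumed 2x2 kernel, stride 2)
    return nn.Sequential(
        nn.ConvTranspose2d(in_ch, out_ch, kernel_size=2, stride=2),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class HeatmapNetSketch(nn.Module):
    def __init__(self, num_keypoints=5):
        super().__init__()
        self.conv1 = conv_block(1, 32)     # 128 -> 64
        self.conv2 = conv_block(32, 64)    # 64 -> 32
        self.conv3 = conv_block(64, 128)   # 32 -> 16
        self.conv4 = conv_block(128, 256)  # 16 -> 8
        self.deconv4 = deconv_block(256, 128)       # 8 -> 16
        self.deconv3 = deconv_block(128 + 128, 64)  # 16 -> 32, after concat with conv3
        self.deconv2 = deconv_block(64 + 64, 32)    # 32 -> 64, after concat with conv2
        self.final = nn.Conv2d(32, num_keypoints, kernel_size=1)

    def forward(self, x):
        c1 = self.conv1(x)
        c2 = self.conv2(c1)
        c3 = self.conv3(c2)
        c4 = self.conv4(c3)
        d4 = self.deconv4(c4)
        d3 = self.deconv3(torch.cat([d4, c3], dim=1))  # skip connection from Conv3
        d2 = self.deconv2(torch.cat([d3, c2], dim=1))  # skip connection from Conv2
        return self.final(d2)  # [batch, num_keypoints, 64, 64]

class RegressionNetSketch(nn.Module):
    def __init__(self, num_keypoints=5):
        super().__init__()
        self.encoder = nn.Sequential(
            conv_block(1, 32), conv_block(32, 64),
            conv_block(64, 128), conv_block(128, 256),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),  # global average pooling -> [batch, 256, 1, 1]
            nn.Flatten(),
            nn.Linear(256, 128), nn.ReLU(inplace=True), nn.Dropout(0.5),
            nn.Linear(128, 64), nn.ReLU(inplace=True), nn.Dropout(0.5),
            nn.Linear(64, num_keypoints * 2), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.head(self.encoder(x))  # [batch, num_keypoints * 2], values in [0, 1]

Note that the channel counts after each torch.cat are what force the 256→64 and 128→32 transposed convolutions listed in the decoder table above.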
Part C: Training Implementation
Create train.py to train both models:
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader
import json
def train_heatmap_model(model, train_loader, val_loader, num_epochs=30):
"""
Train the heatmap-based model.
Uses MSE loss between predicted and target heatmaps.
"""
criterion = nn.MSELoss()
optimizer = optim.Adam(model.parameters(), lr=0.001)
# Training loop
# Log losses and save best model
pass
def train_regression_model(model, train_loader, val_loader, num_epochs=30):
"""
Train the direct regression model.
Uses MSE loss between predicted and target coordinates.
"""
criterion = nn.MSELoss()
optimizer = optim.Adam(model.parameters(), lr=0.001)
# Training loop
# Log losses and save best model
pass
def main():
# Train both models with same data
# Save training logs for comparison
pass
if __name__ == '__main__':
    main()

Training Specifications:
- Train both models for 30 epochs
- Use Adam optimizer with lr=0.001
- Batch size: 32
- Save models as heatmap_model.pth and regression_model.pth
- Log training/validation loss to training_log.json
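A minimal sketch of the loop shared by both training functions (checkpoint-on-best-validation-loss; the exact logging format for training_log.json is up to you):

import torch

def run_training(model, train_loader, val_loader, criterion, optimizer,
                 num_epochs=30, checkpoint_path='model.pth', device='cpu'):
    """Generic MSE training loop; returns per-epoch train/val losses."""
    model.to(device)
    history = {'train_loss': [], 'val_loss': []}
    best_val = float('inf')
    for epoch in range(num_epochs):
        model.train()
        train_loss = 0.0
        for images, targets in train_loader:
            images, targets = images.to(device), targets.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), targets)
            loss.backward()
            optimizer.step()
            train_loss += loss.item() * images.size(0)
        train_loss /= len(train_loader.dataset)

        model.eval()
        val_loss = 0.0
        with torch.no_grad():
            for images, targets in val_loader:
                images, targets = images.to(device), targets.to(device)
                val_loss += criterion(model(images), targets).item() * images.size(0)
        val_loss /= len(val_loader.dataset)

        history['train_loss'].append(train_loss)
        history['val_loss'].append(val_loss)
        if val_loss < best_val:  # keep the checkpoint with the lowest validation loss
            best_val = val_loss
            torch.save(model.state_dict(), checkpoint_path)
    return history

train_heatmap_model and train_regression_model can wrap a loop like this with nn.MSELoss() and optim.Adam(model.parameters(), lr=0.001), then merge the two history dicts into training_log.json.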
Part D: Evaluation Metrics
Create evaluate.py to compute PCK (Percentage of Correct Keypoints):
import torch
import numpy as np
import matplotlib.pyplot as plt
def extract_keypoints_from_heatmaps(heatmaps):
"""
Extract (x, y) coordinates from heatmaps.
Args:
heatmaps: Tensor of shape [batch, num_keypoints, H, W]
Returns:
coords: Tensor of shape [batch, num_keypoints, 2]
"""
# Find argmax location in each heatmap
# Convert to (x, y) coordinates
pass
def compute_pck(predictions, ground_truths, thresholds, normalize_by='bbox'):
"""
Compute PCK at various thresholds.
Args:
predictions: Tensor of shape [N, num_keypoints, 2]
ground_truths: Tensor of shape [N, num_keypoints, 2]
thresholds: List of threshold values (as fraction of normalization)
normalize_by: 'bbox' for bounding box diagonal, 'torso' for torso length
Returns:
pck_values: Dict mapping threshold to accuracy
"""
# For each threshold:
# Count keypoints within threshold distance of ground truth
pass
def plot_pck_curves(pck_heatmap, pck_regression, save_path):
"""
Plot PCK curves comparing both methods.
"""
pass
def visualize_predictions(image, pred_keypoints, gt_keypoints, save_path):
"""
Visualize predicted and ground truth keypoints on image.
"""
    pass
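For reference, a minimal sketch of argmax-based decoding and PCK with bounding-box normalization (the 'torso' option and any sub-pixel refinement are omitted; the function names are placeholders). Remember that heatmap-space coordinates (64×64) and the normalized regression outputs both have to be mapped back to 128×128 pixel space before the two methods can be compared.

import torch

def decode_heatmaps(heatmaps):
    """heatmaps: [batch, K, H, W] -> peak locations as (x, y), shape [batch, K, 2]."""
    batch, num_kp, h, w = heatmaps.shape
    flat_idx = heatmaps.reshape(batch, num_kp, -1).argmax(dim=-1)  # index of the peak
    ys = torch.div(flat_idx, w, rounding_mode='floor').float()
    xs = (flat_idx % w).float()
    return torch.stack([xs, ys], dim=-1)

def pck_bbox(predictions, ground_truths, thresholds):
    """PCK using the ground-truth bounding-box diagonal as the normalization length."""
    dists = torch.norm(predictions - ground_truths, dim=-1)                      # [N, K]
    extent = ground_truths.max(dim=1).values - ground_truths.min(dim=1).values   # [N, 2]
    diag = torch.norm(extent, dim=-1, keepdim=True)                              # [N, 1]
    norm_dists = dists / diag
    return {t: (norm_dists <= t).float().mean().item() for t in thresholds}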
Part E: Comparative Analysis
Create baseline.py for additional experiments:
def ablation_study(dataset, model_class):
"""
Conduct ablation studies on key hyperparameters.
Experiments to run:
1. Effect of heatmap resolution (32x32 vs 64x64 vs 128x128)
2. Effect of Gaussian sigma (1.0, 2.0, 3.0, 4.0)
3. Effect of skip connections (with vs without)
"""
# Run experiments and save results
pass
def analyze_failure_cases(model, test_loader):
"""
Identify and visualize failure cases.
Find examples where:
1. Heatmap succeeds but regression fails
2. Regression succeeds but heatmap fails
3. Both methods fail
"""
    pass
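One possible driver for the sigma ablation, as a sketch only: build_loaders and train_fn stand in for whatever Part A/Part C utilities you end up with (train_fn is any callable that trains a model and returns a history dict with a 'val_loss' list), and the output path is an assumption.

import json
from model import HeatmapNet  # the Part B network

def sigma_ablation(image_dir, annotation_file, build_loaders, train_fn,
                   sigmas=(1.0, 2.0, 3.0, 4.0), out_path='results/ablation_sigma.json'):
    """Retrain HeatmapNet once per Gaussian sigma and record the best validation loss."""
    results = {}
    for sigma in sigmas:
        # Rebuild the dataset so the target heatmaps use the current sigma
        train_loader, val_loader = build_loaders(image_dir, annotation_file,
                                                 output_type='heatmap', sigma=sigma)
        history = train_fn(HeatmapNet(), train_loader, val_loader)
        results[sigma] = min(history['val_loss'])
    with open(out_path, 'w') as f:
        json.dump(results, f, indent=2)
    return results

The heatmap-resolution ablation follows the same pattern with heatmap_size varied in place of sigma.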
Deliverables
Your problem2/ directory must contain:
- All code files as specified above
- results/training_log.json with training curves for both methods
- results/heatmap_model.pth and results/regression_model.pth
- results/visualizations/ containing:
  - PCK curves comparing both methods
  - Predicted heatmaps at different training stages
  - Sample predictions from both methods on test images
  - Failure case analysis
Your report must include:
- PCK curves at thresholds [0.05, 0.1, 0.15, 0.2]
- Analysis of why the heatmap approach works better (or worse) than direct regression
- Ablation study results showing effect of sigma and resolution
- Visualization of learned heatmaps and failure cases