Problem 1: Multi-Scale Single-Shot Detector
Build a simplified single-shot object detector that handles multiple object scales through a basic feature pyramid architecture.
Part A: Dataset and Data Loading
You will work with a synthetic shape detection dataset containing three classes of objects at different scales. The dataset is provided in COCO-style JSON format.
Create dataset.py implementing the following data loader:
```python
import torch
from torch.utils.data import Dataset
from PIL import Image
import json


class ShapeDetectionDataset(Dataset):
    def __init__(self, image_dir, annotation_file, transform=None):
        """
        Initialize the dataset.

        Args:
            image_dir: Path to directory containing images
            annotation_file: Path to COCO-style JSON annotations
            transform: Optional transform to apply to images
        """
        self.image_dir = image_dir
        self.transform = transform
        # Load and parse annotations
        # Store image paths and corresponding annotations
        pass

    def __len__(self):
        """Return the total number of samples."""
        pass

    def __getitem__(self, idx):
        """
        Return a sample from the dataset.

        Returns:
            image: Tensor of shape [3, H, W]
            targets: Dict containing:
                - boxes: Tensor of shape [N, 4] in [x1, y1, x2, y2] format
                - labels: Tensor of shape [N] with class indices (0, 1, 2)
        """
        pass
```

The dataset contains:
- Classes: 0: circle (small), 1: square (medium), 2: triangle (large)
- Images: 224×224 RGB images
- Annotations: Bounding boxes in [x1, y1, x2, y2] format with class labels
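For orientation, here is a minimal sketch of how the annotation parsing in `__init__` might look. It assumes the usual COCO-style top-level keys (`images`, `annotations`, `image_id`, `category_id`); per the description above, `bbox` is taken to be already in [x1, y1, x2, y2]:

```python
import json
from collections import defaultdict

def load_annotations(annotation_file):
    # Returns image_id -> file_name and image_id -> list of (bbox, label)
    with open(annotation_file) as f:
        coco = json.load(f)
    id_to_file = {img['id']: img['file_name'] for img in coco['images']}
    anns = defaultdict(list)
    for ann in coco['annotations']:
        # bbox is [x1, y1, x2, y2] per the dataset description above
        anns[ann['image_id']].append((ann['bbox'], ann['category_id']))
    return id_to_file, anns
```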
Part B: Multi-Scale Architecture
Create model.py with a detector that extracts features at multiple scales:
```python
import torch
import torch.nn as nn


class MultiScaleDetector(nn.Module):
    def __init__(self, num_classes=3, num_anchors=3):
        """
        Initialize the multi-scale detector.

        Args:
            num_classes: Number of object classes (not including background)
            num_anchors: Number of anchors per spatial location
        """
        super().__init__()
        self.num_classes = num_classes
        self.num_anchors = num_anchors
        # Feature extraction backbone
        # Extract features at 3 different scales
        # Detection heads for each scale
        # Each head outputs: [batch, num_anchors * (4 + 1 + num_classes), H, W]
        pass

    def forward(self, x):
        """
        Forward pass.

        Args:
            x: Input tensor of shape [batch, 3, 224, 224]

        Returns:
            List of 3 tensors (one per scale), each containing predictions
            Shape: [batch, num_anchors * (5 + num_classes), H, W],
            where 5 = 4 bbox coords + 1 objectness score
        """
        pass
```

Architecture Requirements:
Backbone: 4 convolutional blocks
- Block 1 (Stem): Conv(3→32, stride=1) → BN → ReLU → Conv(32→64, stride=2) → BN → ReLU [224→112]
- Block 2: Conv(64→128, stride=2) → BN → ReLU [112→56] → Output as Scale 1
- Block 3: Conv(128→256, stride=2) → BN → ReLU [56→28] → Output as Scale 2
- Block 4: Conv(256→512, stride=2) → BN → ReLU [28→14] → Output as Scale 3

Detection Heads: For each scale, apply:
- 3×3 Conv (keeping the channel count unchanged)
- 1×1 Conv → num_anchors * (5 + num_classes) channels

Output Format: Each spatial location predicts, for each anchor:
- 4 values: bbox offsets (tx, ty, tw, th)
- 1 value: objectness score
- num_classes values: class scores
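As a point of reference, one way these pieces might be wired up (a sketch, not the required implementation; the ReLU between the two head convolutions is an assumption, since the spec names only the conv layers):

```python
import torch.nn as nn

def conv_bn_relu(in_ch, out_ch, stride):
    # The repeated backbone unit: 3x3 Conv -> BatchNorm -> ReLU
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=stride, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

def detection_head(channels, num_anchors, num_classes):
    # 3x3 conv keeps the channel count; 1x1 conv maps to the
    # per-anchor prediction channels: num_anchors * (5 + num_classes)
    return nn.Sequential(
        nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),  # assumption: spec names only the convs
        nn.Conv2d(channels, num_anchors * (5 + num_classes), kernel_size=1),
    )
```

Block 1 then chains `conv_bn_relu(3, 32, 1)` and `conv_bn_relu(32, 64, 2)`; Blocks 2-4 are single `conv_bn_relu` units whose outputs feed both the next block and that scale's detection head.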
Part C: Anchor Generation and Matching
Create anchor generation utilities in utils.py:
```python
import torch
import numpy as np


def generate_anchors(feature_map_sizes, anchor_scales, image_size=224):
    """
    Generate anchors for multiple feature maps.

    Args:
        feature_map_sizes: List of (H, W) tuples for each feature map
        anchor_scales: List of lists, scales for each feature map
        image_size: Input image size

    Returns:
        anchors: List of tensors, each of shape [num_anchors, 4]
                 in [x1, y1, x2, y2] format
    """
    # For each feature map:
    # 1. Create grid of anchor centers
    # 2. Generate anchors with specified scales and ratios
    # 3. Convert to absolute coordinates
    pass


def compute_iou(boxes1, boxes2):
    """
    Compute IoU between two sets of boxes.

    Args:
        boxes1: Tensor of shape [N, 4]
        boxes2: Tensor of shape [M, 4]

    Returns:
        iou: Tensor of shape [N, M]
    """
    pass


def match_anchors_to_targets(anchors, target_boxes, target_labels,
                             pos_threshold=0.5, neg_threshold=0.3):
    """
    Match anchors to ground truth boxes.

    Args:
        anchors: Tensor of shape [num_anchors, 4]
        target_boxes: Tensor of shape [num_targets, 4]
        target_labels: Tensor of shape [num_targets]
        pos_threshold: IoU threshold for positive anchors
        neg_threshold: IoU threshold for negative anchors

    Returns:
        matched_labels: Tensor of shape [num_anchors]
                        (0: background, 1-N: classes)
        matched_boxes: Tensor of shape [num_anchors, 4]
        pos_mask: Boolean tensor indicating positive anchors
        neg_mask: Boolean tensor indicating negative anchors
    """
    pass
```

Anchor Configuration:
- Scale 1 (56×56): anchor scales [16, 24, 32]
- Scale 2 (28×28): anchor scales [48, 64, 96]
- Scale 3 (14×14): anchor scales [96, 128, 192]
- All scales use aspect ratios: [1:1]
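Under these conventions, a minimal sketch of the three utilities might look like the following. The `*_sketch` names are hypothetical; a full matcher also typically forces each ground-truth box's single best anchor to be positive, which is omitted here, and the matching assumes at least one ground-truth box per image:

```python
import torch

def square_anchors_for_map(fm_h, fm_w, scales, image_size=224):
    # 1:1 anchors centered on every cell of one feature map
    stride_y = image_size / fm_h
    stride_x = image_size / fm_w
    ys = (torch.arange(fm_h, dtype=torch.float32) + 0.5) * stride_y
    xs = (torch.arange(fm_w, dtype=torch.float32) + 0.5) * stride_x
    cy, cx = torch.meshgrid(ys, xs, indexing='ij')
    per_scale = []
    for s in scales:
        half = s / 2
        per_scale.append(torch.stack(
            [cx - half, cy - half, cx + half, cy + half], dim=-1))
    # [H, W, num_scales, 4] -> [H * W * num_scales, 4], [x1, y1, x2, y2]
    return torch.stack(per_scale, dim=2).reshape(-1, 4)

def iou_matrix(boxes1, boxes2):
    # Broadcasted pairwise IoU: [N, 4] x [M, 4] -> [N, M]
    area1 = (boxes1[:, 2] - boxes1[:, 0]) * (boxes1[:, 3] - boxes1[:, 1])
    area2 = (boxes2[:, 2] - boxes2[:, 0]) * (boxes2[:, 3] - boxes2[:, 1])
    lt = torch.max(boxes1[:, None, :2], boxes2[None, :, :2])  # [N, M, 2]
    rb = torch.min(boxes1[:, None, 2:], boxes2[None, :, 2:])  # [N, M, 2]
    wh = (rb - lt).clamp(min=0)
    inter = wh[..., 0] * wh[..., 1]
    return inter / (area1[:, None] + area2[None, :] - inter)

def match_sketch(anchors, gt_boxes, gt_labels,
                 pos_threshold=0.5, neg_threshold=0.3):
    iou = iou_matrix(anchors, gt_boxes)     # [num_anchors, num_targets]
    best_iou, best_gt = iou.max(dim=1)      # best GT box for each anchor
    pos_mask = best_iou >= pos_threshold
    neg_mask = best_iou < neg_threshold
    matched_boxes = gt_boxes[best_gt]
    # Shift class indices by one so that 0 can mean background
    matched_labels = torch.where(pos_mask, gt_labels[best_gt] + 1,
                                 torch.zeros_like(best_gt))
    return matched_labels, matched_boxes, pos_mask, neg_mask
```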
Part D: Loss Implementation
Implement the multi-task loss in loss.py:
```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DetectionLoss(nn.Module):
    def __init__(self, num_classes=3):
        super().__init__()
        self.num_classes = num_classes

    def forward(self, predictions, targets, anchors):
        """
        Compute multi-task loss.

        Args:
            predictions: List of tensors from each scale
            targets: List of dicts with 'boxes' and 'labels' for each image
            anchors: List of anchor tensors for each scale

        Returns:
            loss_dict: Dict containing:
                - loss_obj: Objectness loss
                - loss_cls: Classification loss
                - loss_loc: Localization loss
                - loss_total: Weighted sum
        """
        # For each prediction scale:
        # 1. Match anchors to targets
        # 2. Compute objectness loss (BCE)
        # 3. Compute classification loss (CE) for positive anchors
        # 4. Compute localization loss (Smooth L1) for positive anchors
        # 5. Apply hard negative mining (3:1 ratio)
        pass

    def hard_negative_mining(self, loss, pos_mask, neg_mask, ratio=3):
        """
        Select hard negative examples.

        Args:
            loss: Loss values for all anchors
            pos_mask: Boolean mask for positive anchors
            neg_mask: Boolean mask for negative anchors
            ratio: Negative to positive ratio

        Returns:
            selected_neg_mask: Boolean mask for selected negatives
        """
        pass
```

Loss Weights:
- Objectness loss weight: 1.0
- Classification loss weight: 1.0
- Localization loss weight: 2.0
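With these weights, the combined objective is `loss_total = loss_obj + loss_cls + 2.0 * loss_loc`. For the mining step, one common reading of "hard" is "highest per-anchor loss"; under that assumption, a minimal sketch over a flat 1-D loss tensor:

```python
import torch

def hard_negative_mining_sketch(loss, pos_mask, neg_mask, ratio=3):
    # Keep only the `ratio * num_pos` negatives with the largest loss
    num_pos = int(pos_mask.sum())
    num_neg = min(ratio * num_pos, int(neg_mask.sum()))
    selected = torch.zeros_like(neg_mask)
    if num_neg > 0:
        masked = loss.clone()
        masked[~neg_mask] = float('-inf')  # rule out non-negatives
        _, idx = masked.topk(num_neg)
        selected[idx] = True
    return selected
```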
Part E: Training Script
Create train.py that trains the model:
```python
import torch
import torch.optim as optim
from torch.utils.data import DataLoader
import json


def train_epoch(model, dataloader, criterion, optimizer, device):
    """Train for one epoch."""
    model.train()
    # Training loop
    pass


def validate(model, dataloader, criterion, device):
    """Validate the model."""
    model.eval()
    # Validation loop
    pass


def main():
    # Configuration
    batch_size = 16
    learning_rate = 0.001
    num_epochs = 50
    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
    # Initialize dataset, model, loss, optimizer
    # Training loop with logging
    # Save best model and training log
    pass


if __name__ == '__main__':
    main()
```

The training script must:
- Train for 50 epochs
- Use SGD with momentum=0.9
- Save the best model based on validation loss
- Log training metrics to results/training_log.json
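A minimal sketch of the epoch loop under those settings. The extra `anchors` argument is an assumption (the loss in Part D needs it), and because images have variable numbers of boxes, the DataLoader will typically need a custom collate_fn that returns targets as a list rather than stacking them:

```python
import torch.optim as optim

def train_epoch_sketch(model, dataloader, criterion, optimizer,
                       device, anchors):
    # One pass over the training set, returning the mean total loss
    model.train()
    running = 0.0
    for images, targets in dataloader:
        images = images.to(device)
        predictions = model(images)
        loss = criterion(predictions, targets, anchors)['loss_total']
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        running += loss.item()
    return running / len(dataloader)

# Matching the configuration above:
# optimizer = optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
```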
Part F: Evaluation and Visualization
Create evaluate.py to compute detection metrics and generate visualizations:
```python
def compute_ap(predictions, ground_truths, iou_threshold=0.5):
    """Compute Average Precision for a single class."""
    pass


def visualize_detections(image, predictions, ground_truths, save_path):
    """Visualize predictions and ground truth boxes."""
    pass


def analyze_scale_performance(model, dataloader, anchors):
    """Analyze which scales detect which object sizes."""
    # Generate statistics on detection performance per scale
    # Create visualizations showing scale specialization
    pass
```
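For compute_ap, the usual recipe is: sort predictions by confidence, mark each as a true or false positive against not-yet-matched ground truth at the IoU threshold, then integrate the precision-recall curve. A VOC-style sketch of that final integration step (the input format here is an assumption, and the prediction-to-GT matching is not shown):

```python
import numpy as np

def average_precision_sketch(scores, is_tp, num_gt):
    # scores: confidence of each prediction for one class
    # is_tp:  1 if that prediction matched a previously unmatched GT
    #         box at IoU >= iou_threshold, else 0
    if len(scores) == 0 or num_gt == 0:
        return 0.0
    order = np.argsort(-np.asarray(scores))
    tp = np.asarray(is_tp, dtype=float)[order]
    tp_cum = np.cumsum(tp)
    fp_cum = np.cumsum(1.0 - tp)
    recall = tp_cum / num_gt
    precision = tp_cum / np.maximum(tp_cum + fp_cum, 1e-9)
    # Enforce a monotonically decreasing precision envelope, then
    # integrate precision over recall where recall changes
    mrec = np.concatenate(([0.0], recall, [1.0]))
    mpre = np.concatenate(([0.0], precision, [0.0]))
    for i in range(len(mpre) - 2, -1, -1):
        mpre[i] = max(mpre[i], mpre[i + 1])
    idx = np.where(mrec[1:] != mrec[:-1])[0]
    return float(np.sum((mrec[idx + 1] - mrec[idx]) * mpre[idx + 1]))
```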
Deliverables
Your problem1/ directory must contain:
- All code files as specified above
- results/training_log.json with loss curves and metrics
- results/best_model.pth - saved model weights
- results/visualizations/ containing:
  - Detection results on 10 validation images
  - Anchor coverage visualization for each scale
  - Analysis showing which scales detect which object sizes
Your report must include analysis of:
- How different scales specialize for different object sizes
- The effect of anchor scales on detection performance
- Visualization of the learned features at each scale