Project Deliverables

EE 641: A Computational Introduction to Deep Learning

Deliverables Overview

Deliverable          Weight    Due Date
Initial Proposal       4%      02 Nov, 23:59
Revised Proposal       8%      09 Nov, 23:59
Status Report          8%      22 Nov, 23:59
Presentation          20%      03 Dec, 15:00
Final Report          25%      07 Dec, 23:59
Model Card             3%      07 Dec, 23:59
Video                  2%      07 Dec, 23:59
Source Code           30%      07 Dec, 23:59
Total                100%

GitHub Repository Access

All project code must be maintained in a private GitHub repository. Grant read access to the GitHub user github-share-uscece no later than 02 November (the initial proposal deadline) and maintain this access through 22 December. The instructor will clone your repository directly for evaluation—you do not submit code archives. Your repository should show regular commits from both team members throughout the project period, demonstrating ongoing development and collaboration.


Initial Proposal

See sample template.

The initial proposal articulates the scope, goals, and technical approach for your project. This is an early-stage document that establishes your direction and allows the instructor to provide feedback before you invest significant implementation effort. The proposal is not an immutable commitment—deviations are expected as your understanding evolves through implementation and experimentation.

Content Requirements

Problem Description
State the problem you aim to solve with precision. What specific challenge are you addressing? What makes this problem difficult or interesting from a deep learning perspective? Your problem statement should be concrete enough that success can be measured.
Significance
Explain why this problem matters. Who would benefit from a solution? What gap in understanding or capability does this address? How does solving this problem demonstrate the depth expected for this course?
Technical Approach
Outline your methodology at a high level. What model architectures or techniques will you investigate? What is your overall strategy for addressing the problem? This should demonstrate that you have a viable path forward, not that you’ve solved everything already.
Dataset Description
Describe the data you will use and why it’s appropriate for your problem. If using existing datasets, explain their characteristics and what makes them suitable. If generating or curating data, outline your approach and validation strategy. Address any known limitations or biases in your data.

Evaluation Criteria

The initial proposal is evaluated on clarity of problem definition, feasibility of the proposed approach, and appropriateness of the dataset. You should demonstrate understanding of what you’re undertaking and have a reasonable plan, but you are not expected to have all details finalized.


Revised Proposal

The revised proposal builds on your initial submission by integrating instructor feedback and reflecting any changes in scope or approach that emerged from early implementation work. By this point, you should have begun implementation and have preliminary results or insights that inform your direction.

Content Requirements

Updates to Technical Approach
Document any modifications to your original plan. What changed and why? If your approach remains unchanged, explicitly state this and explain why the original plan still seems optimal. Changes based on early experimental results demonstrate good scientific practice.
Progress Summary
Describe what you have accomplished since the initial proposal. This might include data analysis, baseline implementations, preliminary experiments, or infrastructure setup. Be specific about what works and what doesn’t yet.
Revised Goals
Update your project goals based on what you’ve learned. If early experiments revealed the problem is harder than expected, how are you adjusting scope? If initial results are promising, what deeper questions can you now pursue?
Updated Timeline
Provide a realistic timeline for remaining work. What major milestones remain before the status report, presentation, and final submission?

Evaluation Criteria

The revised proposal is evaluated on how well you’ve integrated feedback, whether your goals are appropriately scoped given early progress, and the quality of your preliminary work. Strong revised proposals show evidence of thoughtful iteration rather than simple resubmission of the initial proposal.


Status Report

The status report provides a snapshot of your project’s progress and any deviations from your planned approach. This is a critical checkpoint that demonstrates you have made substantial progress and are on track to complete a successful project.

Content Requirements

Executive Summary
Provide a brief overview of your project status. What have you accomplished? What remains to be done? Are you on track to meet your goals?
Progress Summary
Detail what you have achieved since the revised proposal. This should include implementation milestones, experimental results, and technical insights. Discuss both successes and setbacks—understanding what didn’t work is valuable.
Results to Date
Present preliminary results with appropriate evaluation. This might include training curves, initial performance metrics, visualizations of learned representations, or generated samples. Even if results are not yet strong, show what you have and analyze why.
Challenges and Solutions
Discuss obstacles you have encountered and how you addressed them. This might include training instability, data quality issues, computational constraints, or unexpected model behaviors. Explain your problem-solving process.
Revised Timeline and Remaining Work
Update your timeline based on current progress. What specific tasks remain before final submission? Are there any risks to completing your planned work? If you need to reduce scope, what would you prioritize?

Evaluation Criteria

The status report is evaluated on the amount of progress demonstrated, the quality of preliminary results, and your ability to analyze and respond to challenges. You should have substantial implementation complete and early experimental results by this deadline.


Presentation

The presentation is your opportunity to share your work with the class and demonstrate your technical findings. Presentations should focus on results, insights, and what you learned rather than exhaustive background or methodology. Attendance and participation during all presentations are mandatory and contribute to your presentation grade.

Format Requirements

Duration: 15 minutes per team, with approximately 5 additional minutes for questions. Practice your timing—presentations that significantly exceed time limits will be stopped.

Slides: Use PowerPoint or equivalent presentation software (Keynote, Google Slides). Submit a draft of your slides by 20:00 the day before the presentations to allow time for technical setup. You may submit revisions after this deadline, but initial submission is required. Submit final slides as PDF.

Content Focus: Your presentation is for a technical audience familiar with deep learning. Do not include table of contents slides. Minimize theoretical background, especially for topics covered in EE 541 or EE 641—if review is necessary, limit it to a single slide as a reminder.

Content Requirements

Problem and Approach: Briefly establish what problem you addressed and your technical approach. This should be concise—most of your time should focus on results and analysis.

Results and Analysis: Demonstrate what you achieved through visualizations, metrics, and examples. Show both successes and interesting failures. This is the core of your presentation.

Demonstrations: If you have interactive demonstrations, pre-record them as video clips to include in your slides. Live demos during presentations often fail due to technical issues and time constraints.

Insights and Conclusions: What did you learn from this project? What worked and why? What surprised you? What questions remain?

Evaluation Criteria

Presentations are evaluated on clarity of communication, quality of technical content, depth of analysis, and effective use of time. You should demonstrate mastery of your work and ability to explain it clearly.

Participation Requirement: Asking thoughtful questions during other presentations is mandatory and contributes to your presentation grade. Engage with your peers’ work critically and constructively.


Final Report

The final report is a comprehensive technical document that captures your complete project. It should provide sufficient depth that an expert unfamiliar with your work can understand what you did, why you did it, how you did it, and what it means. The report is your permanent record of this work and should meet standards appropriate for technical communication in deep learning.

Format Requirements

Use a conference paper format for compact, structured presentation. LaTeX is not required, but your submission must be a PDF. There is no strict length requirement—aim for clarity and completeness rather than hitting a page count. Include quantifiable metrics to justify engineering tradeoffs, and validate all examples before including them.

Content Requirements

Introduction: Introduce your problem and provide relevant background. Motivate why this problem is interesting or important. Clearly state your objectives and what contributions you are making. An expert should understand what you set out to accomplish and why it matters.

Methodology: Describe your technical approach in sufficient detail for replication.

  • Models: Specify your architecture choices, including diagrams where helpful. Explain why these choices are appropriate for your problem.
  • Analytic Decisions: Document your design decisions. Why did you choose these loss functions, optimization strategies, or evaluation metrics? What alternatives did you consider?
  • Architecture and Implementation: Provide implementation details that affect reproducibility. This includes training procedures, hyperparameters, data preprocessing, and augmentation strategies.

Results: Present your findings comprehensively.

  • Outcomes: Report your main results with appropriate metrics, baselines, and statistical measures. Include learning curves, performance tables, and visualizations.
  • Engineering Challenges: Discuss technical obstacles you encountered during implementation and training. How did you diagnose and address issues like training instability, overfitting, or computational constraints?
  • Quantifiable Metrics: Provide measurements that justify your design choices. If you selected one architecture over another, show the performance difference. If you adjusted hyperparameters, show how they affected results.

Discussion: Reflect critically on your work.

  • Milestones, Timeline, and Contributions: Document your project progression and how work was divided between team members. This demonstrates project management and collaboration.
  • Challenges: Discuss difficulties that shaped your project. What didn’t work initially? What required rethinking? How did challenges affect your final approach?
  • Questions Answered: What did you learn from this project? What questions did you answer through your investigation?
  • Remaining Curiosity: What questions remain unanswered? What would you investigate with more time or resources? What unexpected directions emerged?

Extensions: Provide at least one substantive extension or future direction. This should be a concrete proposal for how this work could be extended, not vague statements about “trying other datasets.” Demonstrate that you understand the limitations of your current work and productive paths forward.

Conclusion: Summarize your project and its key takeaways. What should readers remember about your work?

Evaluation Criteria

The final report is evaluated on technical depth, clarity of presentation, thoroughness of evaluation, and quality of critical analysis. Your report should demonstrate that you developed deep understanding of your problem and approach, not just that you produced results.


Model Card

The model card provides essential information about your trained model’s capabilities, limitations, and appropriate use. This document serves as transparent documentation for anyone who might use or build on your model. Model cards are standard practice in responsible machine learning development.

Content Requirements

Model Details: Specify the model name, type, and version. Include architectural details such as number of parameters, layer configurations, and any notable design choices. Document the training framework and key dependencies (PyTorch version, CUDA version, etc.).
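
The framework and dependency details above can be captured programmatically rather than by hand. The following is a minimal sketch, assuming PyTorch; the Linear layer is only a stand-in for your own trained model:

    import torch

    # Stand-in model; substitute your own trained network here.
    model = torch.nn.Linear(128, 10)

    print(f"PyTorch version: {torch.__version__}")
    print(f"CUDA version:    {torch.version.cuda}")  # None on CPU-only builds
    print(f"Parameter count: {sum(p.numel() for p in model.parameters()):,}")

Recording these values at training time, rather than reconstructing them later, keeps the model card consistent with the runs you actually report.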

Training Data: Summarize the data used for training, validation, and testing. Describe dataset characteristics including size, class distributions, and any preprocessing or augmentation applied. Note any known biases or limitations in the training data.

Performance Metrics: Report your model’s performance using metrics appropriate for your task. Include performance on training, validation, and test sets. If applicable, report performance across different subgroups or conditions. Discuss what these metrics reveal about model capabilities and limitations.

Intended Use: Describe scenarios where your model performs well and where it does not. What tasks is the model designed for? What input characteristics lead to good performance? What conditions cause performance degradation? Be specific about both strengths and weaknesses.

Limitations and Failure Modes: Document known failure modes with examples where possible. What types of inputs confuse the model? What edge cases have you identified? What assumptions does the model make that might not hold in all contexts?

Fairness and Bias: Discuss potential biases in model predictions. If your model processes data about people or makes decisions that could affect people, analyze whether performance differs across demographic groups. If bias analysis isn’t applicable to your problem (e.g., abstract mathematical tasks), explain why. Even for non-social applications, discuss what biases might exist (dataset bias, class imbalance effects, etc.).

Ethical Considerations: Note any ethical considerations relevant to your model. Could this model be misused? Are there potential negative consequences of deployment? If your model is purely research-oriented with no deployment intent, state this explicitly.

Evaluation Criteria

Model cards are evaluated on completeness, honesty about limitations, and quality of analysis. Strong model cards demonstrate critical thinking about your model’s capabilities and appropriate use rather than simply advocating for your work.


Video

The video is a concise summary of your project aimed at a broader technical audience than your report. This is your opportunity to explain your work to viewers who are interested in what you accomplished but may not read your full technical report. Think of this as presenting at a research symposium or technical showcase.

Format Requirements

Length: 3-4 minutes. The 4-minute limit is strictly enforced; do not speed up your video to fit more content. Edit it down instead.

File Format: Any video format supported by YouTube (mp4, mov, avi, etc.).

Participants: Team members are not required to appear or speak on camera, but both should contribute to content development.

Distribution: Videos may be distributed or posted for class or academic purposes. Include your names in the video only if you are comfortable with this; names may be omitted. If you prefer to avoid having your face on YouTube, produce your video accordingly.

Content Requirements

Project Summary: Explain what problem you addressed and why it’s interesting. What was your approach? What did you discover or achieve?

Technical Depth for Practitioners: Your audience consists of people with technical background who are interested in understanding your work. Provide enough detail that a knowledgeable viewer understands your implementation and findings. Focus on “why” questions—why did you make specific design choices, why did certain approaches work or fail?

Engaging Presentation: This is not a reading of your report or a replay of your presentation slides. The video should be engaging and teach the viewer something interesting about your problem or approach. Show visualizations, demonstrate results, highlight insights.

What Not to Include: Do not present MLOps, deployment infrastructure, or tangential technical details. Stay focused on your core problem and findings. Assume your viewer is interested in your research contribution, not the machinery around it.

Evaluation Criteria

Videos are evaluated on clarity of communication, technical content quality, and engagement. Strong videos make viewers understand and care about your work within four minutes.


Source Code

Your source code is the implementation artifact of your project and the most heavily weighted deliverable. Code should be well-organized, documented, and reproducible. This is not a software engineering course, but your code should demonstrate care and understanding.

Repository Access

Your code is evaluated through your GitHub repository. Ensure you have granted read access to github-share-uscece by the initial proposal deadline (02 November) and maintain access through 22 December. The instructor will clone your repository for evaluation.

Do not include trained model files or training datasets in your repository. If the instructor needs access to your trained models for evaluation, this will be arranged separately (likely through S3 upload or SCP transfer).

Documentation Requirements

README File: Include a comprehensive README that describes:

  • The major files and directories in your repository
  • Setup and installation requirements (dependencies, environment setup)
  • How to run your code (training, evaluation, inference)
  • Data format and layout expectations
  • Repository structure and organization
  • Any special technical requirements or dependencies

Your README should enable someone familiar with deep learning to understand and run your code without guessing.

Code Documentation: Include docstrings or comments for non-obvious code sections. You don’t need to comment every line, but complex algorithms, architectural decisions, or non-standard approaches should be explained.
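
As an illustration, a docstring for a non-obvious helper should explain the reasoning behind the computation rather than restate it. The sketch below is hypothetical—the function and its rationale are invented purely for illustration:

    import torch

    def masked_mean(x: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        """Average x over positions where mask is 1.

        Padding positions are excluded so that variable-length sequences
        in a batch do not bias the mean toward shorter inputs.
        """
        mask = mask.to(x.dtype)
        return (x * mask).sum(dim=-1) / mask.sum(dim=-1).clamp(min=1.0)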

Code Organization

Organize your code logically with clear separation of concerns. Typical organization might include:

  • Model definitions (architecture implementations)
  • Training scripts with clear entry points
  • Evaluation and analysis code
  • Data loading and preprocessing utilities
  • Configuration files for hyperparameters and settings
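
For the last item above, a configuration might be as simple as a small dataclass (or YAML file) loaded by your training script. A minimal sketch—names and values are illustrative, not prescriptive:

    from dataclasses import dataclass

    @dataclass
    class TrainConfig:
        # Hyperparameters for the reported runs; values here are examples only.
        learning_rate: float = 3e-4
        batch_size: int = 64
        epochs: int = 50
        weight_decay: float = 1e-4
        seed: int = 42

    config = TrainConfig()

Keeping settings in one place like this makes it easy to show exactly which values produced the results in your report.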

Reproducibility

Your code should allow reproduction of your main results. This doesn’t mean perfect bit-for-bit reproduction (random seeds, hardware differences, etc. affect this), but someone should be able to train your model and achieve similar performance.

Include or document:

  • Random seeds used for reported results
  • Hyperparameter configurations
  • Training procedures and schedules
  • Data preprocessing steps
  • Any code used to generate figures or results in your report
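
One common way to document and control the random seeds listed above is a small helper called at the start of every run. A minimal sketch, assuming PyTorch and NumPy (note that full determinism also depends on cuDNN settings and hardware):

    import random

    import numpy as np
    import torch

    def set_seed(seed: int = 42) -> None:
        # Seed the Python, NumPy, and PyTorch generators for a training run.
        random.seed(seed)
        np.random.seed(seed)
        torch.manual_seed(seed)
        torch.cuda.manual_seed_all(seed)  # safe to call on CPU-only machines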

Evaluation Criteria

Source code is evaluated on correctness, organization, documentation quality, and reproducibility. Code that runs correctly and produces results is expected—evaluation focuses on whether your implementation demonstrates understanding of what you built and whether others could build on your work.

Implementation Quality: Does your code correctly implement your approach? Are technical components properly integrated? Does your implementation reflect understanding of the underlying algorithms?

Organization and Clarity: Is your code organized logically? Can someone understand your repository structure? Are file and function names meaningful?

Documentation: Can someone else run your code based on your documentation? Have you explained non-obvious design decisions? Is your README comprehensive?

Scientific Rigor: Can your results be reproduced? Have you documented what affects outcomes? Is your experimental setup clear?