Task design
Turn an ambiguous AI idea into a specific job with boundaries and success criteria.
Build an AI feature with a measurable quality bar, not just a demo.
This track is for people who want to build with models, prompts, retrieval, agents, and evaluations. You will focus on practical AI systems: how to define a task, test outputs, handle failures, and communicate limitations.
6-10 hrs/week
AI workflow, RAG prototype, eval harness, or tool-calling demo.
Async review on task design, eval evidence, grounding, and failure notes.
Use this section as a practical gut check before you apply.
The goal is proof of work, not passive course completion.
Use clear instructions, examples, schemas, and context to reduce avoidable model failure.
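As a sketch of what "instructions, examples, schemas" can mean in practice: the snippet below assembles a triage prompt from an instruction, one worked example, and a JSON output schema. The task, schema, and field names are hypothetical, purely for illustration; they are not tied to any specific model API.

```python
import json

# Hypothetical task: triage a support ticket. Schema and example are illustrative.
OUTPUT_SCHEMA = {
    "type": "object",
    "properties": {
        "category": {"type": "string", "enum": ["billing", "bug", "how-to"]},
        "urgency": {"type": "string", "enum": ["low", "medium", "high"]},
    },
    "required": ["category", "urgency"],
}

FEW_SHOT_EXAMPLE = {
    "input": "I was charged twice this month.",
    "output": {"category": "billing", "urgency": "high"},
}

def build_prompt(ticket_text: str) -> str:
    """Assemble instructions, one worked example, and the output schema
    into a single prompt string."""
    return "\n\n".join([
        "You are a support-ticket triage assistant.",
        "Classify the ticket and respond with JSON matching this schema:",
        json.dumps(OUTPUT_SCHEMA, indent=2),
        "Example input: " + FEW_SHOT_EXAMPLE["input"],
        "Example output: " + json.dumps(FEW_SHOT_EXAMPLE["output"]),
        "Ticket: " + ticket_text,
    ])

print(build_prompt("The app crashes when I open settings."))
```

Pinning the output to a schema makes failures checkable: a response that does not parse, or is missing a required field, is an objective miss rather than a judgment call.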
Add source context when needed and evaluate whether the answer is actually supported.
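One cheap way to start evaluating support is a token-overlap check between the answer and the retrieved sources. This is a crude proxy, assumed here for illustration; real grounding checks typically use entailment models or human review on top of something like this.

```python
import re

def tokens(text: str) -> set[str]:
    """Lowercase alphanumeric tokens from a string."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def support_score(answer: str, sources: list[str]) -> float:
    """Fraction of answer tokens that appear in at least one source.
    A crude groundedness proxy, not a substitute for entailment checks."""
    answer_tokens = tokens(answer)
    if not answer_tokens:
        return 0.0
    source_tokens = set().union(*(tokens(s) for s in sources))
    return len(answer_tokens & source_tokens) / len(answer_tokens)

sources = ["Refunds are processed within 5 business days."]
print(support_score("Refunds take 5 business days.", sources))       # higher
print(support_score("Refunds are instant and automatic.", sources))  # lower
```

Even a proxy this simple can flag answers that drift away from the provided context, which is the failure mode grounding is meant to catch.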
Build a lightweight rubric and sample set so quality can improve through evidence.
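A minimal version of that rubric-plus-sample-set idea, assuming a JSON-output task: a handful of saved (input, output) pairs and named check functions, reported as pass rates. The samples and checks below are made up for illustration.

```python
import json

# Illustrative sample set: saved model outputs for fixed inputs.
SAMPLES = [
    {"input": "I was charged twice.", "output": '{"category": "billing", "urgency": "high"}'},
    {"input": "How do I export data?", "output": '{"category": "how-to"}'},
]

def is_valid_json(output: str) -> bool:
    """Check: the output parses as JSON at all."""
    try:
        json.loads(output)
        return True
    except json.JSONDecodeError:
        return False

def has_required_fields(output: str) -> bool:
    """Check: the parsed output contains every required field."""
    try:
        data = json.loads(output)
    except json.JSONDecodeError:
        return False
    return {"category", "urgency"} <= data.keys()

RUBRIC = {"valid_json": is_valid_json, "required_fields": has_required_fields}

def run_evals(samples, rubric):
    """Pass rate per rubric check, so quality changes show up as numbers."""
    return {
        name: sum(check(s["output"]) for s in samples) / len(samples)
        for name, check in rubric.items()
    }

print(run_evals(SAMPLES, RUBRIC))
```

Rerunning this after each prompt or retrieval change turns "it seems better" into evidence: pass rates that move, per check, over a fixed sample set.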
Builder means you have shipped an AI feature with its prompt, retrieval, or workflow design, evaluation evidence, and a clear explanation of its failure modes.