What this new flow is actually teaching
This program now behaves less like a simple product-exploration course and more like a guided, applied AI builder sprint. Students do more than learn concepts: they set up real tools, build their own APIs, create AI-enabled chatbot products, and then integrate those products into a larger collaborative system.
Week 1: Learn the Stack
Students set up Python, Git, GitHub, Ollama, Streamlit, Gradio, and basic frontend and API concepts while understanding the product they will build.
Week 2: Build APIs and Agent Features
Students build a shared core feature plus their own custom feature, develop APIs, connect tools, and create a working AI-agent style chatbot product.
Week 3: Integrate, Present, Improve
Students merge products, integrate with a second boilerplate, add an advanced layer, present their work, and finish with mentor-led technical review and hackathon-style iteration.
Program Value Projection
Curriculum Weightage
The new structure deliberately shifts from product theory into applied tooling, API construction, and collaborative integration work.
30% Foundations & Setup
Environment setup, Ollama, GitHub, UI/UX fundamentals, Streamlit, Gradio, APIs, and presentation basics.
35% Core Product Build
Custom APIs, common core feature, chatbot setup, feature development, and score-sheet logic implementation.
20% Integration Layer
Connecting products, using peer APIs, advanced AI layer, and integrating with a second boilerplate system.
15% Demo, Review, Hackathon
Project presentation, technical rounds, mentor feedback, and final live hackathon-style polishing.
The 3-Week Program Lifecycle
Students move from setup and understanding into feature building, then into integration and final technical defense.
Week 1 — Foundations
Students understand the product direction, set up their environment, connect Ollama, learn Git/GitHub, understand UI/UX and API basics, and prepare to build.
Week 2 — Product Build
Students build their API-backed chatbot product on a boilerplate base, complete a shared core feature, and add at least one feature of their own.
Week 3 — Integration & Demo
Students integrate peer systems and a second boilerplate, add more advanced AI features, present their work, and improve it through technical rounds and hackathon sessions.
What the final product now looks like
By the end of Week 2, each student should already have a working AI-agent style product: a chatbot connected to an open-source LLM through Ollama, with memory and API tools. By the end of Week 3, that product evolves through integration and collaborative enhancement.
Common Core Feature
Every student implements the score-sheet analysis workflow, loading match data into their models and exposing logic through APIs.
Individual Feature
Each student must also design and build at least one custom feature beyond the common task, with limited direct mentor guidance.
API Tool Layer
Students create and expose their own APIs and later use peer-built APIs through shared documentation containing endpoints, schemas, and ownership.
Integration Outcome
Students combine their original product with a second boilerplate system and collaborative inputs to create a stronger final product.
Final Product Composition
15-Day Delivery Plan
The program now progresses week by week: setup first, then build, then integration and high-pressure refinement.
Program Direction and Product Overview
Students understand what they are building, why it matters, and how the 3-week journey will unfold.
Environment Setup: Python, Git, GitHub
Students set up Python, repositories, version control basics, and prepare a local development workflow.
Ollama, Chatbots, Streamlit, and Gradio
Students install Ollama, understand local LLM usage, and explore how chatbot interfaces work in Streamlit and Gradio.
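As a minimal sketch of what "local LLM usage" means in practice: Ollama exposes a REST endpoint on localhost, and a chatbot frontend simply posts the conversation to it. This example uses only the Python standard library; it assumes Ollama is running on its default port (11434) and that a model such as "llama3" has already been pulled — both are assumptions, not guarantees of the course setup.

```python
import json
import urllib.request

# Default address of a locally running Ollama server (assumed).
OLLAMA_URL = "http://localhost:11434/api/chat"

def build_chat_request(model, history, user_message):
    """Build the JSON payload Ollama's /api/chat endpoint expects."""
    messages = history + [{"role": "user", "content": user_message}]
    return {"model": model, "messages": messages, "stream": False}

def chat(model, history, user_message):
    """POST the conversation to Ollama and return the assistant's reply text."""
    payload = build_chat_request(model, history, user_message)
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["message"]["content"]
```

A Streamlit or Gradio UI then only needs to call `chat(...)` with the running message history; the interface layer and the model layer stay cleanly separated.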
UI/UX, Data Flow, and API Basics
Students study what makes interfaces usable, how data flows through a product, and how APIs are structured and created.
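To make "how APIs are structured" concrete, here is a minimal HTTP API built with only the Python standard library. The route name (`/health`) and response shape are illustrative, not part of the course boilerplate.

```python
import json
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class ScoreAPIHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Each route maps a URL path to a JSON response.
        if self.path == "/health":
            self._send_json(200, {"status": "ok"})
        else:
            self._send_json(404, {"error": "unknown route"})

    def _send_json(self, code, payload):
        body = json.dumps(payload).encode("utf-8")
        self.send_response(code)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep console output quiet

def serve(port=0):
    """Start the API on a background thread; returns (server, bound port)."""
    import threading
    server = ThreadingHTTPServer(("127.0.0.1", port), ScoreAPIHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server, server.server_address[1]
```

The same request-in, JSON-out pattern carries over directly to frameworks like FastAPI or Flask later; only the boilerplate changes.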
Presentation Basics and Guided Practice
Students learn how to explain an MVP, structure a demo, and complete small guided exercises based on the week's concepts.
Boilerplate Project Setup
Students receive the base product and connect their environment, repositories, and initial app structure.
Create the Common Core Feature
All students work on the shared score-sheet analysis task and expose the logic through app screens and APIs.
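A minimal, hypothetical version of the shared score-sheet task might look like this: load per-player match rows and aggregate them into a summary that an API endpoint can return as JSON. The field names (`player`, `runs`, `wickets`) are assumptions for illustration, not the official score-sheet schema.

```python
from collections import defaultdict

def analyze_scoresheet(rows):
    """Aggregate raw score-sheet rows into per-player totals."""
    totals = defaultdict(lambda: {"runs": 0, "wickets": 0, "matches": 0})
    for row in rows:
        player = totals[row["player"]]
        player["runs"] += row.get("runs", 0)
        player["wickets"] += row.get("wickets", 0)
        player["matches"] += 1
    # Plain dict so the result serializes cleanly to JSON.
    return dict(totals)
```

Keeping the analysis a pure function makes it easy to expose the same logic through both an app screen and an API route.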
Custom API Development
Students build their own APIs for logic such as Man of the Match, game analytics, or impact explanations.
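As one example of the kind of logic a custom API could expose, here is a possible "Man of the Match" rule. The weighting (1 point per run, 25 per wicket) is an assumption chosen for illustration, not a prescribed formula — each student defines their own.

```python
def man_of_the_match(stats):
    """Pick the player with the highest weighted impact score."""
    def impact(entry):
        # Assumed weighting: runs count 1x, wickets count 25x.
        return entry["runs"] * 1 + entry["wickets"] * 25
    best = max(stats, key=impact)
    return {"player": best["player"], "impact": impact(best)}
```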
Ollama Integration and AI-Agent Flow
Students connect Ollama-powered LLM behavior, memory, and tool use to form a working chatbot-style AI agent.
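The agent flow described above can be sketched as a small class: conversation memory is a list of messages, and tool use is a dispatch table the model's reply can trigger. The model call is injected as a plain function so the same flow works with any backend (for example, an Ollama wrapper); the `tool:<name> <arg>` trigger convention here is an assumption, not a standard protocol.

```python
class ChatAgent:
    def __init__(self, model_fn):
        self.model_fn = model_fn  # callable: list of message dicts -> reply str
        self.history = []         # conversation memory
        self.tools = {}           # tool name -> callable

    def register_tool(self, name, fn):
        self.tools[name] = fn

    def ask(self, user_message):
        self.history.append({"role": "user", "content": user_message})
        reply = self.model_fn(self.history)
        # If the model asks for a tool, run it and feed the result back.
        if reply.startswith("tool:"):
            name, _, arg = reply[5:].partition(" ")
            result = self.tools[name](arg)
            self.history.append({"role": "tool", "content": str(result)})
            reply = self.model_fn(self.history)
        self.history.append({"role": "assistant", "content": reply})
        return reply
```

Because the model function is injected, the memory and tool-dispatch logic can be tested with a stub before wiring in a real LLM.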
Independent Feature + Peer API Exchange
Students complete at least one feature of their own and receive peer API documentation with schema, endpoint, and ownership details.
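Before wiring a peer's endpoint into their own product, students can sanity-check its responses against the documented schema. A minimal sketch, assuming the shared documentation lists each field with its expected type (that format, and the example fields below, are assumptions):

```python
def matches_schema(payload, schema):
    """Return True if every documented field is present with the right type."""
    return all(
        key in payload and isinstance(payload[key], expected)
        for key, expected in schema.items()
    )

# Hypothetical entry from a peer's API documentation:
PEER_MOTM_SCHEMA = {"player": str, "impact": int}
```

Checking payloads this way turns the shared endpoint/schema/ownership docs into an executable contract rather than just a reference sheet.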
Second Boilerplate + Product Integration
Students start combining their original product with the second boilerplate and align it with peer-built components where suitable.
Advanced AI Layer and Product Consolidation
Students add another AI or agentic layer, refine merged product behavior, and push toward a stronger final system.
Project Presentation and Technical Round
Students present their work, explain product decisions, and go through technical questioning and mentor review.
Live Hackathon Session I
Students improve their product based on mentor feedback, debugging needs, and integration challenges.
Live Hackathon Session II and Final Submission
Students complete final improvements, submit their merged product, and finish the program with a polished working outcome.
Final Submission Package
1. Working AI-Agent Product
A chatbot-based product connected to Ollama, memory, and APIs, built on the provided boilerplate.
2. Custom API Set
Student-built APIs for score-sheet logic, analytics, explanations, or other approved feature paths.
3. Peer API Integration Layer
At least one demonstrated integration using peer documentation, shared endpoints, or the second boilerplate system.
4. Presentation Deck
A concise project story covering the product concept, tools used, feature logic, API architecture, and integration outcome.
5. Final Improved Submission
A revised version of the product incorporating mentor feedback from technical rounds and hackathon sessions.
Evaluation Rubric
This is now a guided applied AI builder sprint.
Students leave with real setup experience, a working AI-agent style product, API creation exposure, integration practice, and the pressure-tested confidence of presenting and improving their work under review.