Every few years, the “best” web stack changes. In 2025, here’s what we’re using for AI-powered applications:
- Backend: FastAPI (Python)
- Frontend: React + TypeScript + Vite
- Styling: Tailwind CSS v4 + shadcn/ui
- State: React Query
This isn’t theoretical. We built a complete YouTube-to-Obsidian pipeline with this stack - 50+ API endpoints, real-time progress updates, background job processing, and a polished UI. Here’s what we learned.
The Architecture
Figure 1 - Full-stack architecture diagram showing the layered structure: React frontend with components and React Query, API layer with FastAPI endpoints, backend services connecting to PostgreSQL, Qdrant, and Anthropic API
The architecture follows a clean separation of concerns:
- Frontend: React components → React Query → Fetch API
- Backend: FastAPI routers → Services → Repositories
- Data: PostgreSQL (relational) + Qdrant (vectors) + External APIs
Part 1: FastAPI Backend
Why FastAPI?
After years of Flask and Django, FastAPI feels like a revelation:
- Type hints everywhere - Pydantic models for request/response validation
- Auto-generated docs - Swagger UI at `/docs` for free
- Async-first - Native `async`/`await` support
- Fast - Built on Starlette and Uvicorn
Project Structure
```
api/
├── main.py              # FastAPI app, CORS, routers
├── models.py            # Pydantic request/response models
├── jobs.py              # Background job management
├── youtube_batch.py     # YouTube processing service
├── vault_batch.py       # Vault processing service
└── repositories/        # Database access layer
    ├── tags.py
    ├── notes.py
    └── batches.py
```

The Main App
```python
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

app = FastAPI(
    title="YouTube Markdown Agent",
    description="Convert YouTube videos to Obsidian notes",
    version="1.0.0",
)

# CORS for local development
app.add_middleware(
    CORSMiddleware,
    allow_origins=["http://localhost:5173", "http://localhost:5174"],
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)
```

Request/Response Models
Pydantic models define your API contract:
```python
from pydantic import BaseModel
from typing import Optional
from enum import Enum


class ProcessingMode(str, Enum):
    SUMMARY = "summary"
    DETAILED = "detailed"


class YouTubeRequest(BaseModel):
    urls: list[str]
    mode: ProcessingMode = ProcessingMode.SUMMARY


class ProcessingStatus(BaseModel):
    job_id: str
    status: str  # "pending", "processing", "completed", "failed"
    progress: int  # 0-100
    current_step: Optional[str] = None
    results: Optional[list[dict]] = None


class YouTubeResponse(BaseModel):
    job_id: str
    mode: str  # "sync" or "batch"
    status: ProcessingStatus
```

FastAPI validates requests automatically. Send invalid JSON, get a clear error message. No manual parsing needed.
Endpoint Patterns
Processing endpoint:
```python
@app.post("/api/youtube/process", response_model=YouTubeResponse)
async def process_youtube(request: YouTubeRequest):
    """Process YouTube videos."""
    if len(request.urls) == 1:
        # Single video: sync processing
        result = await process_single_video(request.urls[0], request.mode)
        return YouTubeResponse(
            job_id="sync",
            mode="sync",
            status=ProcessingStatus(
                job_id="sync",
                status="completed",
                progress=100,
                results=[result],
            ),
        )
    else:
        # Multiple videos: batch processing
        job_id = create_batch_job(request.urls, request.mode)
        return YouTubeResponse(
            job_id=job_id,
            mode="batch",
            status=ProcessingStatus(
                job_id=job_id,
                status="pending",
                progress=0,
            ),
        )
```

Status endpoint:
```python
from fastapi import HTTPException


@app.get("/api/youtube/status/{job_id}", response_model=ProcessingStatus)
async def get_status(job_id: str):
    """Get processing status for a job."""
    job = get_job(job_id)
    if not job:
        raise HTTPException(status_code=404, detail="Job not found")

    return ProcessingStatus(
        job_id=job_id,
        status=job.status,
        progress=job.progress,
        current_step=job.current_step,
        results=job.results if job.status == "completed" else None,
    )
```

Background Jobs
For long-running tasks, we use a simple in-memory job store:
```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Any
import asyncio


@dataclass
class Job:
    job_id: str
    status: str = "pending"
    progress: int = 0
    current_step: str = ""
    results: list[Any] = field(default_factory=list)
    created_at: datetime = field(default_factory=datetime.now)


# Simple in-memory store (use Redis for production)
jobs: dict[str, Job] = {}


async def run_job(job_id: str, urls: list[str], mode: str):
    """Run processing job in background."""
    job = jobs[job_id]
    job.status = "processing"

    try:
        total = len(urls)
        for i, url in enumerate(urls):
            job.current_step = f"Processing video {i + 1}/{total}"
            job.progress = int((i / total) * 100)

            result = await process_single_video(url, mode)
            job.results.append(result)

        job.status = "completed"
        job.progress = 100
    except Exception as e:
        job.status = "failed"
        job.current_step = str(e)
```

Figure 2 - Swagger UI documentation screenshot showing the auto-generated API docs at /docs endpoint
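One thing the job code leaves implicit is how `run_job` actually gets scheduled. A sketch of one option, creating a task on the running event loop so the request handler can return the job ID immediately (the `run_job` body below is a simplified stand-in, not the article's version):

```python
import asyncio
import uuid

# Minimal stand-ins for the article's jobs store and run_job,
# kept self-contained for illustration.
jobs: dict[str, dict] = {}
_tasks: set[asyncio.Task] = set()  # hold references so tasks aren't GC'd


async def run_job(job_id: str, urls: list[str], mode: str) -> None:
    jobs[job_id]["status"] = "processing"
    # ...per-video processing and progress updates would go here...
    jobs[job_id]["status"] = "completed"


def create_batch_job(urls: list[str], mode: str) -> str:
    """Register a job and schedule run_job without blocking the request."""
    job_id = uuid.uuid4().hex
    jobs[job_id] = {"status": "pending", "urls": urls, "mode": mode}
    task = asyncio.get_running_loop().create_task(run_job(job_id, urls, mode))
    _tasks.add(task)
    task.add_done_callback(_tasks.discard)
    return job_id
```

FastAPI's `BackgroundTasks` is an alternative; either way the handler returns the job ID right away while processing continues.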
Part 2: React Frontend
Why React + Vite?
- Vite - Instant hot reload, fast builds, native ESM
- TypeScript - Catch errors before runtime
- React Query - Server state management that just works
Project Structure
```
frontend/
├── src/
│   ├── App.tsx              # Main app shell
│   ├── main.tsx             # Entry point
│   ├── lib/
│   │   └── api.ts           # API client
│   ├── components/
│   │   ├── VideoForm.tsx    # URL input form
│   │   ├── VaultFileList.tsx
│   │   ├── ProcessingStatus.tsx
│   │   └── ui/              # shadcn components
│   └── hooks/
│       └── useProcessing.ts # Processing logic hook
├── tailwind.config.js
└── vite.config.ts
```

The API Client
Type-safe API calls:
```typescript
const API_BASE = "http://localhost:8000/api";

export interface YouTubeRequest {
  urls: string[];
  mode: "summary" | "detailed";
}

export interface ProcessingStatus {
  job_id: string;
  status: "pending" | "processing" | "completed" | "failed";
  progress: number;
  current_step?: string;
  results?: ProcessingResult[];
}

export async function processYouTube(
  request: YouTubeRequest
): Promise<YouTubeResponse> {
  const response = await fetch(`${API_BASE}/youtube/process`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(request),
  });

  if (!response.ok) {
    throw new Error(`API error: ${response.status}`);
  }

  return response.json();
}
```

React Query for Server State
React Query handles caching, refetching, and loading states:
```typescript
import { useMutation, useQuery } from "@tanstack/react-query";
import { processYouTube, getStatus } from "../lib/api";

export function useProcessYouTube() {
  return useMutation({
    mutationFn: processYouTube,
  });
}

export function useJobStatus(jobId: string | null, enabled: boolean) {
  return useQuery({
    queryKey: ["job-status", jobId],
    queryFn: () => getStatus(jobId!),
    enabled: enabled && !!jobId,
    refetchInterval: (data) => {
      // Poll every 5 seconds until complete
      if (data?.status === "completed" || data?.status === "failed") {
        return false;
      }
      return 5000;
    },
  });
}
```

Usage in components:
```tsx
function ProcessingPanel() {
  const [jobId, setJobId] = useState<string | null>(null);
  const [isPolling, setIsPolling] = useState(false);

  const processMutation = useProcessYouTube();
  const { data: status } = useJobStatus(jobId, isPolling);

  async function handleSubmit(urls: string[], mode: string) {
    const response = await processMutation.mutateAsync({ urls, mode });

    if (response.mode === "sync") {
      displayResults(response.status.results);
    } else {
      setJobId(response.job_id);
      setIsPolling(true);
    }
  }

  useEffect(() => {
    if (status?.status === "completed") {
      setIsPolling(false);
      displayResults(status.results);
    }
  }, [status]);

  return (
    <div>
      <VideoForm onSubmit={handleSubmit} />
      {status && <ProgressBar progress={status.progress} />}
    </div>
  );
}
```

Part 3: Tailwind + shadcn/ui
Why Tailwind?
- Utility-first - No context switching to CSS files
- Consistent - Design tokens built in (spacing, colors, etc.)
- Purging - Only ship CSS you actually use
- Dark mode - One class to rule them all
Why shadcn/ui?
shadcn/ui isn’t a component library - it’s a collection of copy-paste components built on Radix UI primitives. You own the code, so you can customize everything.
```bash
npx shadcn@latest add button
npx shadcn@latest add input
npx shadcn@latest add card
npx shadcn@latest add tabs
```

This adds the component source to your project. No npm dependency, no version conflicts.
Dark Mode by Default
We default to dark mode because we have standards:
```tsx
function App() {
  return (
    <div className="min-h-screen bg-background text-foreground dark">
      <AppShell />
    </div>
  );
}
```

The `dark` class on a parent element enables dark mode for all children.
Two-Panel Layout
For productivity apps, the list-detail pattern works well:
```tsx
function AppShell() {
  const [selectedItem, setSelectedItem] = useState<Item | null>(null);

  return (
    <div className="flex h-screen">
      {/* Left panel: List */}
      <div className="w-1/3 border-r border-border overflow-auto">
        <ItemList onSelect={setSelectedItem} selectedId={selectedItem?.id} />
      </div>

      {/* Right panel: Detail */}
      <div className="flex-1 overflow-auto">
        {selectedItem ? (
          <ItemDetail item={selectedItem} />
        ) : (
          <EmptyState message="Select an item to view details" />
        )}
      </div>
    </div>
  );
}
```

Figure 3 - Two-panel layout screenshot showing file list on left, detail/preview panel on right, dark theme with teal accents
Component Example: Video Form
```tsx
import { useState } from "react";
import { Button } from "@/components/ui/button";
import { Textarea } from "@/components/ui/textarea";
import { Card, CardContent, CardHeader, CardTitle } from "@/components/ui/card";

interface VideoFormProps {
  onSubmit: (urls: string[], mode: string) => void;
  isLoading: boolean;
}

function VideoForm({ onSubmit, isLoading }: VideoFormProps) {
  const [urls, setUrls] = useState("");
  const [mode, setMode] = useState<"summary" | "detailed">("summary");

  function handleSubmit(e: React.FormEvent) {
    e.preventDefault();

    const urlList = urls
      .split("\n")
      .map((u) => u.trim())
      .filter((u) => u.length > 0);

    onSubmit(urlList, mode);
  }

  return (
    <Card>
      <CardHeader>
        <CardTitle>Process YouTube Videos</CardTitle>
      </CardHeader>
      <CardContent>
        <form onSubmit={handleSubmit} className="space-y-4">
          <Textarea
            placeholder="Paste YouTube URLs (one per line)"
            value={urls}
            onChange={(e) => setUrls(e.target.value)}
            className="min-h-[120px]"
          />

          <div className="flex gap-2">
            <Button
              type="button"
              variant={mode === "summary" ? "default" : "outline"}
              onClick={() => setMode("summary")}
            >
              Summary
            </Button>
            <Button
              type="button"
              variant={mode === "detailed" ? "default" : "outline"}
              onClick={() => setMode("detailed")}
            >
              Detailed
            </Button>
          </div>

          <Button type="submit" disabled={isLoading} className="w-full">
            {isLoading ? "Processing..." : "Process Videos"}
          </Button>
        </form>
      </CardContent>
    </Card>
  );
}
```

Part 4: Lessons Learned
What Worked Well
- Pydantic models everywhere - Define once, validate everywhere. The auto-generated docs are a bonus.
- React Query for polling - The `refetchInterval` option makes polling trivial. Stop polling when complete, no manual cleanup.
- shadcn/ui components - Copy-paste beats npm dependencies. You can actually read and modify the code.
- Dark mode first - Easier to start dark and add light than the reverse.
- Two-panel layout - List on left, detail on right. Users understand it immediately.
What We’d Change
- Redis for job storage - Our in-memory store works for development, but won’t survive server restarts. Redis would make it production-ready.
- WebSocket for progress - We poll every 5 seconds, but a WebSocket would give instant updates. Worth adding for better UX.
- Better error boundaries - We handle API errors, but React error boundaries would catch rendering crashes.
- E2E tests - Unit tests are great, but Playwright tests would catch integration issues.
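On the Redis point: because `Job` is a plain dataclass, persistence is mostly a serialization problem. A minimal sketch, assuming redis-py and a JSON-friendly `created_at` (stored as an ISO string rather than a `datetime`):

```python
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime


@dataclass
class Job:
    job_id: str
    status: str = "pending"
    progress: int = 0
    current_step: str = ""
    results: list = field(default_factory=list)
    # ISO string (not datetime) so it survives a JSON round trip
    created_at: str = field(default_factory=lambda: datetime.now().isoformat())


def dump_job(job: Job) -> str:
    """Serialize a Job for a Redis string key like jobs:{job_id}."""
    return json.dumps(asdict(job))


def load_job(raw: str) -> Job:
    """Rebuild a Job from its stored JSON."""
    return Job(**json.loads(raw))


# With redis-py this would look roughly like:
#   r = redis.Redis()
#   r.set(f"jobs:{job.job_id}", dump_job(job), ex=86400)
#   job = load_job(r.get(f"jobs:{job_id}"))
```

The expiry (`ex=86400` here) doubles as cleanup for abandoned jobs, something the in-memory dict never does.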
Performance Tips
- Parallel video processing - We process 3 videos concurrently instead of sequentially, roughly a 3x speedup.
- Reduced polling - Poll every 5 seconds, not 2. 60% fewer API calls.
- Deduplicate URLs - The frontend removes duplicate video IDs before submission.
- Incremental indexing - Only re-index notes when content changes.
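The parallel-processing tip can be sketched with `asyncio.Semaphore`. The `process_single_video` below is a stub standing in for the real Anthropic-backed processing:

```python
import asyncio


async def process_single_video(url: str, mode: str) -> dict:
    # stand-in for the real per-video processing call
    await asyncio.sleep(0)
    return {"url": url, "mode": mode}


async def process_batch(urls: list[str], mode: str, limit: int = 3) -> list[dict]:
    """Process videos concurrently, with at most `limit` in flight at once."""
    sem = asyncio.Semaphore(limit)

    async def bounded(url: str) -> dict:
        async with sem:
            return await process_single_video(url, mode)

    # gather preserves input order even though completion order may vary
    return await asyncio.gather(*(bounded(u) for u in urls))


results = asyncio.run(process_batch(["a", "b", "c", "d"], "summary"))
```

With three videos in flight, wall-clock time approaches a third of the sequential time, which is where the rough 3x figure comes from (assuming per-video latency dominates).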
The Full Stack in Action
Here’s how a request flows through the system:
- User enters YouTube URLs in the React form
- Frontend sends POST to `/api/youtube/process`
- FastAPI validates the request (Pydantic)
- Backend creates a job, returns job ID
- Frontend starts polling `/api/youtube/status/{job_id}`
- Backend processes videos (Anthropic API)
- Backend indexes notes (Qdrant)
- Backend updates job status
- Frontend sees “completed”, fetches results
- User reviews and saves notes
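The polling steps in the middle of this flow boil down to one loop. Stripped of React Query, the client logic looks like this; the stub stands in for a real GET against the status endpoint:

```python
import time
from typing import Callable


def poll_until_done(get_status: Callable[[], dict],
                    interval: float = 5.0,
                    timeout: float = 600.0) -> dict:
    """Poll a status callable until the job completes or fails."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status()
        if status["status"] in ("completed", "failed"):
            return status
        time.sleep(interval)
    raise TimeoutError("job did not finish in time")


# Stub standing in for GET /api/youtube/status/{job_id}
_responses = iter([
    {"status": "processing", "progress": 40},
    {"status": "completed", "progress": 100},
])
final = poll_until_done(lambda: next(_responses), interval=0.0)
```

The timeout matters: without it, a job that dies without reaching a terminal status would keep a client polling forever.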
Figure 4 - Request flow diagram showing the complete journey from user input through frontend, API, backend processing, and back to UI display
Key Takeaways
- FastAPI + Pydantic is the modern Python web stack. Type hints, validation, and docs for free.
- React Query eliminates 90% of server state boilerplate. Use it.
- shadcn/ui gives you polished components without dependency hell.
- Dark mode first is easier than retrofitting.
- Poll responsibly - 5 seconds is usually fine. Don’t hammer your own API.
- Two-panel layout works for productivity apps. Don’t overthink it.
Related Articles
- From YouTube to Knowledge Graph - System overview
- Anthropic Batch API in Production - Backend processing
- Building a Semantic Note Network - Vector search integration
This article is part of our series on building AI-powered knowledge management tools. Written with assistance from Claude Code.