About
Hi, I'm Linda! I study computer science and classics at Stanford.
I love thinking about how people think and how machines think.
- How do we discover and consume content?
- How do we draw connections across ideas, people, and things — and how do we hop from one link to the next?
- How do we build systems to organize knowledge at scale?
Recent work:
- Selleb — Implemented a CLIP embedding pipeline using HuggingFace Transformers + FastAPI, deployed on AWS EC2; built Selleb's first production recommender system (top-K cosine similarity on CLIP embeddings) for personalized product and user recommendations; explored GNN-based link prediction architectures; deployed the backend with ECS + nginx for reliable scaling.
- Stanford AI Lab — Trained language models on synthetic data to improve implicit reasoning; implemented a GPT-Neo pretraining pipeline with PyTorch + HuggingFace; generated chain-of-thought datasets.
- Stanford School of Engineering — TA for CS106A/B (intro Python & C++), leading weekly sections, debugging, grading, and teaching core CS concepts (pointers, objects, recursion, algorithmic analysis, data structures, graphs).
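The top-K cosine-similarity recommender mentioned for Selleb can be sketched in a few lines of NumPy. This is a minimal illustration of the general technique, not the production code; the function name and toy embeddings are my own for the example:

```python
import numpy as np

def top_k_similar(query: np.ndarray, catalog: np.ndarray, k: int = 5) -> np.ndarray:
    """Return indices of the k catalog embeddings most cosine-similar to query."""
    # L2-normalize so that a plain dot product equals cosine similarity.
    q = query / np.linalg.norm(query)
    c = catalog / np.linalg.norm(catalog, axis=1, keepdims=True)
    scores = c @ q
    # argpartition selects the top k in O(n); then sort only those k by score.
    top = np.argpartition(-scores, k)[:k]
    return top[np.argsort(-scores[top])]
```

In practice the `catalog` rows would be precomputed CLIP embeddings of products (or users), and `query` the embedding of the item a user just interacted with.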
Research interests: Long-form video generation, RLAIF/RLHF methods for complex reasoning, multimodal retrieval & recommendation, and post-training behavior shaping through synthetic data and in-context learning.
In my free time, you can probably find me reading, riding at SoulCycle (guilty pleasure lol), or hacking on something with friends!