Articles & Insights

Exploring ideas in software engineering, AI systems, and the craft of building great products.

6 posts published
✦ Latest

The NLP Landscape: From 1960 to 2026

How machines learned to understand us — one decade at a time

Mar 15, 2026 · 3 min read · 37 views
AI · NLP · History

I Built an AI That Talks to Your Codebase in 48 Hours — Here's Everything That Went Wrong (and Right)

What happens when you give a third-year CS student 48 hours, a vector database, and a problem every developer hates? CodeMind — an AI tool that lets you ask questions about your codebase in plain English and get cited, context-aware answers in real time. This is the full build story.

Mar 12, 2026 · 9 min read · 156 views

How I Used Ollama Locally Instead of Paying for the OpenAI API

The OpenAI API could do all of that beautifully. But at scale, even a modest number of daily users would rack up a bill I couldn't justify as a third-year CS student building a side project between assignments. So I asked myself: what if I just ran the model myself?

Mar 11, 2026 · 5 min read · 239 views

I Built an App Where You Can Say Anything — Here's Why

There's a conversation most of us have never had. Not because we don't want to have it, but because we don't know how to start it. Maybe it's something you did that you're not proud of. Maybe it's a feeling you've been carrying alone for months. Maybe it's just a thought that would sound strange coming from your mouth, attached to your name, in front of people who know you.

Mar 4, 2026 · 5 min read · 62 views

What Is Machine Learning? A Beginner's Guide to the Technology Shaping Our World

If you've ever asked a virtual assistant a question, gotten a suspiciously accurate product recommendation, or watched a spam filter quietly protect your inbox — you've already benefited from machine learning. But what exactly is it, and why is everyone talking about it?

Feb 26, 2026 · 6 min read · 59 views

The Chinchilla Law of Scaling: Why Bigger Isn't Always Better in AI

For years, the AI industry chased one goal: bigger models. Then a 70-billion-parameter underdog named Chinchilla beat models four times its size — not by being larger, but by being better trained. The 2022 DeepMind paper that introduced it rewrote the rules of scaling, revealing that the field had been systematically leaving performance on the table. Here's what the Chinchilla Law says, why it shook the research world, and what it means for the future of AI development.

Feb 18, 2026 · 5 min read · 545 views