Diving into LLM Applications — From Curiosity to Practice
After a few months of exploring various technologies, one thing has become increasingly clear to me: large language models are reshaping how we work, live, and build software.
The Turning Point
During my early months in grad school, I spent time studying different areas — fuzzing, software testing, generative models. But it was while experimenting with LLM-based tools that something clicked. These models aren't just academic toys. They're being used in real products, real workflows, and real businesses right now.
I'd played around with LLMs before — writing simple chatbots, testing prompt engineering, building toy projects. But looking back, that was all surface-level stuff. I hadn't really understood how these models get integrated into production systems, or how businesses leverage them at scale.
Going Deeper
To bridge that gap, I've been doing a few things:
- Reading extensively — research papers on LLM applications in software engineering, industry case studies, and technical blogs from companies deploying these models
- Building prototypes — small projects that go beyond "call the API and display the result" to understand retrieval, context management, and prompt design
- Talking to people — reaching out to developers and researchers who are working with LLMs professionally
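To make the "beyond call-the-API" idea concrete, here is a minimal retrieval sketch in pure Python: score documents by keyword overlap with the question, then assemble a prompt from the best match. Everything here (the sample docs, `retrieve`, `build_prompt`) is illustrative, not any particular library's API — real systems would use embeddings and a vector store, but the shape of the pipeline is the same.

```python
def retrieve(question: str, docs: list[str], k: int = 1) -> list[str]:
    """Return the k documents sharing the most words with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(question: str, docs: list[str]) -> str:
    """Inline retrieved context so the model answers from it, not from memory."""
    context = "\n".join(f"- {d}" for d in docs)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

docs = [
    "Refunds are processed within 5 business days.",
    "Our office is open Monday through Friday.",
]
question = "How long do refunds take?"
top = retrieve(question, docs)
prompt = build_prompt(question, top)
```

The interesting work is everything around the model call: deciding what context to retrieve, how to fit it into the prompt, and how to keep it up to date.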
Upcoming Internship
I'm also about to join an internship focused on business applications of generative AI. The goal is to get hands-on experience with how companies actually use these technologies — not just the technical side, but the product thinking, the user experience, and the business model.
I think this kind of practical exposure is essential. You can read all the papers you want, but there's no substitute for seeing how things work in a real-world context.
What I've Learned So Far
A few early takeaways:
- LLMs are powerful but messy — they need careful engineering (retrieval, guardrails, evaluation) to be useful in production
- The gap between demo and product is huge — a cool ChatGPT wrapper is not a business
- Domain knowledge matters — the best LLM applications come from people who deeply understand the problem they're solving
- The field moves insanely fast — what was state-of-the-art three months ago might already be outdated
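To illustrate the "careful engineering" point, here is a toy guardrail pass: before a draft answer reaches the user, check it against a few simple rules. The rules and the function name are hypothetical, not from any real framework — production systems layer many more checks (toxicity, PII, factuality scoring), but the pattern of "validate, then ship or reject" is the core idea.

```python
def failed_guardrails(answer: str, context: str) -> list[str]:
    """Return the names of failed checks; an empty list means the answer passes."""
    failures = []
    if len(answer) > 500:
        failures.append("too_long")    # keep answers concise
    if not any(word in context.lower() for word in answer.lower().split()):
        failures.append("ungrounded")  # shares nothing with retrieved context
    if "guaranteed" in answer.lower():
        failures.append("overclaim")   # banned marketing language
    return failures

context = "Refunds are processed within 5 business days."
good = "Refunds are processed within 5 business days."
bad = "Your refund is guaranteed instantly."
```

Even a crude filter like this catches a surprising number of bad outputs; the real engineering is in evaluating how often each check fires and what it misses.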
Looking Ahead
This summer is going to be a big learning period. Between the internship and my own research, I'm hoping to build a much deeper understanding of how LLMs can be applied meaningfully.
I'll share what I learn along the way. If you're also exploring this space, feel free to reach out via email or GitHub.