
Steal My Blueprint to Build and Deploy RAGs [In Minutes].

Most RAGs are built on this stack; why would you redo it every time?

Thuwarakesh Murallie
AI Advances
7 min read · Feb 4, 2025


RAGs make LLMs useful.

Before RAG, LLMs were little more than toys. Beyond trivial tasks like sentiment classification, there weren't many practical applications. The main reason is that an LLM can't learn new information after training, so anything that depended on real-time data was out of reach.

When RAGs came into practice, this changed.

RAGs let us build apps on real-time data, and they let us build intelligent apps around our private data with LLMs.
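The core loop behind what RAG does here — embed the query, retrieve the most relevant chunks from your own data, and augment the prompt before calling the LLM — can be sketched in plain Python. The embeddings below are hypothetical hand-made vectors for illustration; a real app would get them from an embedding model instead.

```python
# Toy illustration of the RAG loop: embed, retrieve, augment.
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# A tiny stand-in "vector store": (text chunk, embedding) pairs.
STORE = [
    ("Our refund window is 30 days.", [0.9, 0.1, 0.0]),
    ("The office is closed on Fridays.", [0.1, 0.8, 0.2]),
    ("Support email: help@example.com.", [0.0, 0.2, 0.9]),
]

def retrieve(query_embedding, k=1):
    """Return the k chunks most similar to the query embedding."""
    ranked = sorted(STORE, key=lambda item: cosine(query_embedding, item[1]),
                    reverse=True)
    return [text for text, _ in ranked[:k]]

def build_prompt(question, query_embedding):
    """Augment the question with retrieved context before calling the LLM."""
    context = "\n".join(retrieve(query_embedding))
    return f"Answer using this context:\n{context}\n\nQuestion: {question}"

prompt = build_prompt("How long do refunds take?", [0.95, 0.05, 0.0])
```

The retrieved context changes with every query, which is exactly how RAG sidesteps the "can't learn on the go" limitation: the model's knowledge is fixed, but the prompt is not.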

My go-to starter app uses the following stack: LangChain (with LlamaIndex being the only comparable alternative), ChromaDB, and OpenAI for both the LLM and the embeddings. I usually develop inside a Docker environment because, among its many other benefits, the results are easy to reproduce on someone else's computer.
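A Docker setup for that stack can be as small as the sketch below. The package list and entry-point name are illustrative assumptions, not the author's exact configuration.

```dockerfile
# Hypothetical dev image for a LangChain + ChromaDB + OpenAI app.
FROM python:3.11-slim
WORKDIR /app
RUN pip install --no-cache-dir langchain langchain-openai chromadb
COPY . .
# The API key is supplied at run time:
#   docker run -e OPENAI_API_KEY=... <image>
CMD ["python", "app.py"]
```

Pinning the package versions in a requirements file would make the reproducibility point even stronger, since the LangChain APIs move quickly.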

For my needs, I rarely package these apps. When I do, I use either Streamlit or Gradio. I used to work with Django before. It's a…


Written by Thuwarakesh Murallie

Data Science Journalist & Independent Consultant
