Published in Towards Data Science · RAG Isn’t Immune to LLM Hallucination · How to measure how much of your RAG’s output is correct · 11h ago
Published in Towards Data Science · Preparing PDFs for RAGs · I created a graph store from dozens of annual reports (with tables) · 3d ago
Published in Towards Data Science · How to Build a Knowledge Graph in Minutes (And Make It Enterprise-Ready) · I tried and failed to create one—but it was when LLMs were not a thing! · Jan 13
Published in Towards Data Science · I Tested Frontline M-LLMs on Their Chart Interpretation Skills · Can multimodal LLMs interpret basic charts accurately? · Nov 5, 2024
Published in Towards Data Science · How Much Stress Can Your Server Handle When Self-Hosting LLMs? · Do you need more GPUs or a modern GPU? How do you make infrastructure decisions? · Oct 19, 2024
Published in Towards Data Science · I Fine-Tuned the Tiny Llama 3.2 1B to Replace GPT-4o · Is fine-tuning worth the effort compared to few-shot prompting? · Oct 15, 2024
Published in Towards Data Science · The Most Valuable LLM Dev Skill Is Easy to Learn, But Costly to Practice · Here’s how not to waste your budget on evaluating models and systems. · Oct 9, 2024
Published in Towards Data Science · Building RAGs Without a Retrieval Model Is a Terrible Mistake · Here are my favorite techniques — one is faster, the other is more accurate. · Sep 17, 2024
Published in Towards Data Science · How I Used Clustering to Improve Chunking and Build Better RAGs · It’s both fast and cost-effective · Sep 4, 2024
Published in Towards Data Science · How to Achieve Near Human-Level Performance in Chunking for RAGs · The costly yet powerful splitting technique for superior RAG retrieval · Aug 26, 2024