Conversational Agent for Medical Question-Answering Using RAG and LLM
This project demonstrates how to build a conversational agent for medical question answering using a locally hosted LLM (Mistral) and Haystack's retrieval-augmented generation (RAG) pipeline. Unlike typical setups that rely on cloud-based models such as OpenAI's GPT, this project runs entirely locally via Ollama, making it cost-effective and privacy-preserving.
May 30, 2025