Welcome
MetricDust hosted the third edition of MetricTalks on Local AI Assistants: a workshop on integrating multiple AI models locally. It concluded successfully on 8th March, 2025. We thank everyone who took the time to join this session.
At the helm of this workshop were our innovators, Rahul & Vivek, who walked us through building a local AI assistant of your own, step by step. Here is what Vivek & Rahul had to say, respectively.
"We designed the Local AI Assistant’s backend with Django as the core framework, integrating Ollama as the local AI engine to eliminate cloud dependencies. When a user submits a request, Django routes it to Ollama, which processes the input using a locally stored LLM and returns the response in real time. This architecture ensures fast, private, and fully controlled AI interactions without relying on external servers."
"As a frontend developer, my goal with LocAi was simple—I wanted a single-page platform where I could seamlessly switch between multiple locally installed LLMs without juggling multiple tabs. Most online AI services come with frustrating limitations like usage caps, time restrictions, or paid subscriptions. By integrating Ollama, I built a solution that runs entirely on my system, giving me full control, unlimited access, and a smooth, uninterrupted AI experience—without extra costs."
If you missed being there, we've got you covered. Watch the complete session
You can access our code to build your own Local AI here: https://github.com/MetricDust/LocalAI Our key takeaway was that you need not depend on a stable internet connection or AI cloud servers to take AI into your own hands.
We are immensely proud of our team, whose dedication made this workshop a success, and we look forward to seeing you at future sessions that are equally interactive, informative, and engaging. Follow us on our LinkedIn page for regular updates, and feel free to let us know what you would like us to take up as the topic for our next webinar.