Semantic Kernel Chatbot with Cosmos DB

A sophisticated AI-powered chatbot built with Microsoft Semantic Kernel that intelligently queries databases using both predefined SQL templates and dynamic query generation. The system includes RAG capabilities, analytics dashboards, and semantic query clustering.

Overview

This project implements an intelligent database query assistant that leverages Large Language Models (LLMs) to interact with data stored in Azure Cosmos DB. The chatbot can understand natural language queries and either use predefined SQL templates or generate custom queries on the fly.

Key Features

- Intelligent Query System
- RAG Implementation
- Analytics Dashboard

Architecture

Architecture diagram: Semantic Kernel Chatbot

Architecture diagram: LangChain RAG + Ollama Chatbot

Technical Stack

Cloud Deployment - Semantic Kernel chatbot backed by Azure Cosmos DB

Local Deployment - LangChain RAG + Ollama chatbot

Core Components

  1. Semantic Kernel Plugins

The system includes custom plugins that enable the LLM to interact with the database:

- Predefined Query Plugin: Contains template SQL queries with parameters that the LLM fills based on user intent (see the sketch after this list)
- Dynamic Query Generator: Allows the LLM to construct SQL queries from scratch for complex requests
- RAG Plugin: Retrieves relevant context from historical queries to improve responses
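As a rough illustration, a predefined-query plugin built with the Python Semantic Kernel SDK and the azure-cosmos client might look like the sketch below. The class, function, parameter, and field names (PredefinedQueryPlugin, orders_by_customer, customerId) are hypothetical placeholders, not code from this repository; the key point is that the LLM only supplies parameter values while the SQL template stays fixed.

```python
import json
from typing import Annotated

from azure.cosmos import CosmosClient
from semantic_kernel.functions import kernel_function


class PredefinedQueryPlugin:
    """Exposes parameterized Cosmos DB SQL templates as kernel functions."""

    def __init__(self, client: CosmosClient, database: str, container: str):
        self._container = client.get_database_client(database).get_container_client(container)

    @kernel_function(
        name="orders_by_customer",
        description="Return the most recent orders for a given customer id.",
    )
    def orders_by_customer(
        self,
        customer_id: Annotated[str, "The customer id to filter on"],
        limit: Annotated[int, "Maximum number of orders to return"] = 10,
    ) -> str:
        # The LLM fills in @limit and @customerId; the SQL shape never changes,
        # which keeps predefined queries safe and predictable.
        query = (
            "SELECT TOP @limit * FROM c "
            "WHERE c.customerId = @customerId ORDER BY c.orderDate DESC"
        )
        items = list(
            self._container.query_items(
                query=query,
                parameters=[
                    {"name": "@limit", "value": limit},
                    {"name": "@customerId", "value": customer_id},
                ],
                enable_cross_partition_query=True,
            )
        )
        return json.dumps(items, default=str)
```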

  2. Query Processing Flow

1. User submits a natural language query
2. Semantic Kernel analyzes intent
3. System decides between (see the sketch after this list):
   - Using a predefined SQL template (parameter filling)
   - Generating a new SQL query dynamically
4. Query executes against Cosmos DB
5. Query and result are stored for analytics and RAG
6. Response is returned to the user
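The sketch below shows one way this flow could be wired up, assuming a recent Python Semantic Kernel SDK with an Azure OpenAI chat service; it reuses the hypothetical PredefinedQueryPlugin from the earlier sketch, and the database, container, and environment variable names are placeholders.

```python
import asyncio
import os

from azure.cosmos import CosmosClient
from semantic_kernel import Kernel
from semantic_kernel.connectors.ai.function_choice_behavior import FunctionChoiceBehavior
from semantic_kernel.connectors.ai.open_ai import (
    AzureChatCompletion,
    AzureChatPromptExecutionSettings,
)
from semantic_kernel.functions import KernelArguments

cosmos_client = CosmosClient(os.environ["COSMOS_ENDPOINT"], credential=os.environ["COSMOS_KEY"])

kernel = Kernel()
kernel.add_service(AzureChatCompletion(service_id="chat"))  # reads AZURE_OPENAI_* env vars

# Register the plugin(s) the model can call; a dynamic-SQL generator plugin
# would be registered the same way as another candidate.
kernel.add_plugin(PredefinedQueryPlugin(cosmos_client, "salesdb", "orders"), plugin_name="predefined")

# Automatic function calling lets the model pick a template (parameter filling)
# or generate a different answer path on its own.
settings = AzureChatPromptExecutionSettings(
    service_id="chat",
    function_choice_behavior=FunctionChoiceBehavior.Auto(),
)


async def ask(question: str) -> str:
    result = await kernel.invoke_prompt(prompt=question, arguments=KernelArguments(settings=settings))
    return str(result)


if __name__ == "__main__":
    print(asyncio.run(ask("Show the five most recent orders for customer 42")))
```

Letting FunctionChoiceBehavior.Auto arbitrate keeps the routing decision in the model rather than in hand-written if/else code, which is what allows the same entry point to serve both the template and dynamic-query paths.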

  3. Analytics & Monitoring

- Query Storage: All queries logged to Cosmos DB with metadata (sketched below)
- Error Monitoring: Track and analyze failed queries
- Semantic Clustering: Queries grouped by semantic similarity using RAG embeddings
- Usage Patterns: Identify common query types and user behaviors
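A minimal sketch of the logging and clustering pieces is shown below, assuming the azure-cosmos client, scikit-learn for k-means, and embeddings already produced by the RAG pipeline; the document shape, container, and function names are illustrative only, not the project's actual implementation.

```python
import uuid
from datetime import datetime, timezone

import numpy as np
from azure.cosmos import ContainerProxy
from sklearn.cluster import KMeans


def log_query(container: ContainerProxy, question: str, sql: str,
              success: bool, embedding: list[float]) -> None:
    """Store one chat turn with enough metadata for analytics and RAG lookups."""
    container.upsert_item({
        "id": str(uuid.uuid4()),
        "question": question,
        "sql": sql,
        "success": success,       # drives the error-monitoring view
        "embedding": embedding,   # reused for RAG retrieval and clustering
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })


def cluster_queries(container: ContainerProxy, n_clusters: int = 8) -> dict[str, int]:
    """Group logged questions by semantic similarity of their embeddings."""
    docs = list(container.query_items(
        query="SELECT c.question, c.embedding FROM c",
        enable_cross_partition_query=True,
    ))
    vectors = np.array([d["embedding"] for d in docs])
    labels = KMeans(n_clusters=n_clusters, n_init="auto").fit_predict(vectors)
    return {d["question"]: int(label) for d, label in zip(docs, labels)}
```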