Milvus is a production-ready vector database that enables semantic search over your documents using dense vector embeddings. Upload documents to collections, and agents will automatically search them using semantic similarity.

Overview

Milvus provides:
  • Semantic Search: Find relevant information using natural language queries
  • Document Collections: Organize documents into searchable collections
  • Custom Embeddings: Configure embedding models and dimensions
  • Fine-grained Control: Tune chunking, retrieval, and search parameters
For cloud storage integration (Google Drive, SharePoint, etc.), see UKnow Cloud Storage.

Configuration

agents:
  - name: "support_agent"
    agent_type: "llm_agent"
    
    knowledge_bases:
      - name: "company_docs"
        knowledge_base_type: "milvus"
        id: "kb_abc123"
        config:
          top_k: 10
          search_ef: 64
          metric_type: "COSINE"

Required Fields

name
string
required
Unique identifier for this knowledge base instance
knowledge_base_type
string
required
Must be set to "milvus"
id
string
required
Knowledge base ID for isolation and multi-tenancy
References a KnowledgeBase record in the database.

Retrieval Configuration

The config object controls how documents are retrieved:
config.top_k
integer
default:"10"
Number of most relevant results to return
Range: 1-1000
config.search_ef
integer
default:"64"
HNSW search parameter controlling the accuracy vs. speed trade-off
Range: 1-512
Higher values = more accurate but slower
config.metric_type
string
default:"COSINE"
Distance metric for similarity search
Options:
  • COSINE - Cosine similarity (recommended)
  • L2 - Euclidean distance
  • IP - Inner product
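To make the choice concrete, here is a small self-contained sketch of the three metrics on toy vectors (the helper function names are illustrative, not part of any Milvus API). Note that cosine similarity ignores vector magnitude, which is why it is the usual default for text embeddings:

```python
import math

def cosine(a, b):
    # Cosine similarity: dot product of the vectors divided by their norms
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def l2(a, b):
    # Euclidean distance: smaller means more similar
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def ip(a, b):
    # Inner product: larger means more similar, sensitive to magnitude
    return sum(x * y for x, y in zip(a, b))

# Two vectors with the same direction but different magnitudes:
a = [1.0, 2.0, 3.0]
b = [2.0, 4.0, 6.0]

print(cosine(a, b))  # 1.0 -- identical direction, magnitude ignored
print(l2(a, b))      # ~3.74 -- penalizes the magnitude difference
print(ip(a, b))      # 28.0 -- grows with magnitude
```

If your embedding model produces normalized vectors, COSINE and IP rank results identically; otherwise COSINE is the safer choice.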
config.offset
integer
default:"0"
Number of results to skip (for pagination)
config.score_threshold
float
Minimum similarity score threshold
Range: 0.0-1.0
Only return results above this score
config.enable_rerank
boolean
default:"false"
Enable reranking of search results for improved relevance
config.rerank_model
string
Reranking model to use (required if enable_rerank is true)
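The filtering parameters above compose in a predictable order: results are ranked by score, filtered by `score_threshold`, then sliced by `offset` and `top_k`. This is not the actual retrieval code, just a minimal sketch of how the parameters interact:

```python
def select_results(hits, top_k=10, offset=0, score_threshold=None):
    """Illustrative post-filtering of search hits.

    hits: list of (doc_id, score) pairs, higher score = more similar.
    """
    # Rank by similarity score, best first
    ranked = sorted(hits, key=lambda h: h[1], reverse=True)
    # Drop anything below the minimum score, if a threshold is set
    if score_threshold is not None:
        ranked = [h for h in ranked if h[1] >= score_threshold]
    # Apply pagination: skip `offset` results, return at most `top_k`
    return ranked[offset:offset + top_k]

hits = [("a", 0.91), ("b", 0.55), ("c", 0.78), ("d", 0.72)]
print(select_results(hits, top_k=2, score_threshold=0.7))
# [('a', 0.91), ('c', 0.78)] -- "b" filtered out, "d" cut by top_k
```

When `enable_rerank` is true, the reranking model re-scores this candidate set before it is returned to the agent.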

Usage Examples

Basic Knowledge Base Agent

agents:
  - name: "support_agent"
    agent_type: "llm_agent"
    
    knowledge_bases:
      - name: "support_kb"
        knowledge_base_type: "milvus"
        id: "kb_support_001"
        config:
          top_k: 5
          metric_type: "COSINE"
    
    system_prompt: |
      You are a customer support agent with access to our support documentation.
      
      Always search the knowledge base first for answers to customer questions.
      Provide accurate information based on our documentation.

Advanced Configuration with Reranking

agents:
  - name: "research_agent"
    agent_type: "react_agent"
    
    tools:
      - "tavily_search"
    
    knowledge_bases:
      - name: "research_kb"
        knowledge_base_type: "milvus"
        id: "kb_research_001"
        config:
          top_k: 20
          search_ef: 128
          metric_type: "COSINE"
          score_threshold: 0.7
          enable_rerank: true
          rerank_model: "cross-encoder"
    
    system_prompt: |
      You are a research assistant with access to internal research papers.
      
      Search the knowledge base for relevant research before using web search.
      Prioritize high-quality, relevant results.

Multiple Knowledge Bases

agents:
  - name: "comprehensive_agent"
    agent_type: "llm_agent"
    
    knowledge_bases:
      - name: "technical_docs"
        knowledge_base_type: "milvus"
        id: "kb_tech_001"
        config:
          top_k: 10
      
      - name: "company_policies"
        knowledge_base_type: "milvus"
        id: "kb_policy_001"
        config:
          top_k: 5
    
    system_prompt: |
      You have access to multiple knowledge bases:
      - Technical documentation
      - Company policies
      
      Search the appropriate knowledge base based on the question type.

Best Practices

Top K Selection: Start with top_k: 5-10 for most use cases. Increase if you need more context.
Search Accuracy: Use search_ef: 64 for balanced performance. Increase to 128+ for higher accuracy needs.
Metric Type: Use COSINE for most semantic search applications as it’s normalized and works well with embeddings.
Score Threshold: Set a score_threshold (e.g., 0.7) to filter out low-relevance results.
Higher search_ef values improve accuracy but increase query latency. Balance based on your performance requirements.
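Taken together, a config block reflecting these recommendations might look like the following (the values are starting points to tune, not fixed requirements):

```yaml
config:
  top_k: 10              # 5-10 covers most use cases
  search_ef: 64          # raise to 128+ when accuracy matters more than latency
  metric_type: "COSINE"  # normalized; works well with text embeddings
  score_threshold: 0.7   # drop low-relevance results
```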

Next Steps