Hipster🤘

Building the Future with Multi-Agent Architecture and Model Context Protocol

The Challenge: Beyond Single-Agent AI

We wanted to build something that wasn’t just another chatbot. We wanted an AI system that could think, collaborate, and orchestrate complex workflows across multiple data sources. Single-agent systems are powerful, but they’re limited—they can only do one thing at a time, they struggle with complex multi-step tasks, and they can’t leverage specialized knowledge sources simultaneously.

We needed something cooler. Something that would make other developers say “wait, how did you do that?”

We needed multi-agent architecture with Model Context Protocol (MCP).

The Solution: Multi-Agent Orchestration with MCP Servers

We built a cutting-edge multi-agent system using Microsoft Copilot Studio, Model Context Protocol (MCP) servers, and agent-to-agent communication. Our solution demonstrates the hippest, coolest tech stack available: agents that collaborate, MCP servers that provide specialized capabilities, and a unified system that orchestrates everything seamlessly.

Here’s how we built the future.

The Architecture: Agents, MCPs, and Orchestration

Our multi-agent architecture looks like this:

User Query → Copilot Studio Agent → MCP DataAgent → Multiple Knowledge Sources → Orchestrated Response
     ↓              ↓                    ↓                        ↓                      ↓
  Question    Scrum Assistant    DataAgent MCP          Azure AI Search        Intelligent Answer
                                    Server              Fabric Semantic         Multi-Source
                                                         Model                  Synthesis

Each component represents the latest in AI technology, working together to create something greater than the sum of its parts.

Step 1: Model Context Protocol (MCP) – The Foundation of Agent Communication

Model Context Protocol is an open standard for connecting AI agents to external systems. Think of it as REST APIs, but designed specifically for AI agents: MCP enables agents to discover capabilities, invoke tools, and communicate with specialized servers.

What Makes MCP Hipster-Level Cool

1. Standardized Agent Communication

  • Agents can discover available tools dynamically
  • No hardcoded integrations—everything is protocol-based
  • Agents can adapt to new capabilities without code changes

2. Specialized MCP Servers

  • Each server provides domain-specific capabilities
  • DataAgent MCP Server for data operations
  • Azure DevOps MCP Server for DevOps operations
  • Custom MCP servers for specialized tasks

3. Tool Discovery and Invocation

  • Agents discover tools at runtime
  • Tools are self-describing with schemas
  • Agents can chain tools together intelligently
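To make the discovery idea concrete, here is a minimal, hypothetical sketch in Python (stdlib only; the real MCP SDKs handle the wire protocol for you) of a server exposing self-describing tool schemas that a client lists at runtime. The registry contents are invented for illustration:

```python
# Hypothetical in-process "MCP server": each tool is registered with a
# self-describing JSON schema, so clients never need hardcoded integrations.
TOOL_REGISTRY = {
    "query_work_items": {
        "description": "Count work items grouped by state",
        "inputSchema": {
            "type": "object",
            "properties": {"state": {"type": "string"}},
        },
    },
}

def list_tools():
    """Roughly what a client receives from a tools/list request: names + schemas."""
    return [{"name": name, **schema} for name, schema in TOOL_REGISTRY.items()]

# A client discovers capabilities at runtime instead of hardcoding them.
for tool in list_tools():
    print(tool["name"], "->", tool["description"])
```

Because every tool carries its own schema, a new tool added to the registry is immediately visible to every client with zero code changes on their side.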

Our MCP Implementation

We configured a DataAgent MCP Server that connects our Scrum Assistant to multiple knowledge sources:

{
  "mcpServers": {
    "dataagent": {
      "command": "dataagent-mcp-server",
      "args": ["--workspace-id", "11d99e27-c621-4340-8eb8-7940714ae333"],
      "env": {
        "DATA_AGENT_ID": "8b9a6308-db2e-4c5a-9073-61c9205a7334"
      }
    }
  }
}

What This Enables:

  • The Scrum Assistant can query Fabric semantic models
  • Agents can access Azure AI Search indexes
  • Multi-source knowledge retrieval in a single query
  • Dynamic tool discovery and invocation

Result: Our agents aren’t hardcoded—they discover and use capabilities dynamically, making our system truly extensible.
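As a sketch of what a host application does with that registration block (the IDs are the ones from our config; the actual process launch and MCP handshake are elided), it parses the JSON and builds the command line used to spawn the server:

```python
import json

# The registration block from above, as the host application sees it.
CONFIG = """
{
  "mcpServers": {
    "dataagent": {
      "command": "dataagent-mcp-server",
      "args": ["--workspace-id", "11d99e27-c621-4340-8eb8-7940714ae333"],
      "env": {"DATA_AGENT_ID": "8b9a6308-db2e-4c5a-9073-61c9205a7334"}
    }
  }
}
"""

def build_launch_spec(config_text, server_name):
    """Turn a registration entry into the argv + env used to spawn the server."""
    entry = json.loads(config_text)["mcpServers"][server_name]
    argv = [entry["command"], *entry.get("args", [])]
    return argv, entry.get("env", {})

argv, env = build_launch_spec(CONFIG, "dataagent")
print(argv)  # the command line for the dataagent server process
```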

Step 2: Multi-Agent Architecture – Agents That Collaborate

We didn’t stop at one agent. We built multiple specialized agents that can work together:

The Scrum Assistant Agent

Our primary agent, built in Copilot Studio, specializes in scrum ceremonies and backlog management:

Agent Configuration:

  • Model: GPT-4.1
  • Purpose: Support scrum team members in facilitating ceremonies, backlog management, and continuous improvement
  • Knowledge Sources:
      • Azure AI Search (code and documentation)
      • Fabric Semantic Model (AgileToolboxxModel)
      • Web search (for scrum best practices)

Agent Instructions:

You are an AI assistant designed to support the scrum team members in 
facilitating scrum ceremonies, backlog management, and continuous improvement.

Skills:
1. Assist scrum teams with scrum ceremonies
2. Help with scrum best practices and coaching
3. Analyze team artifacts and provide data-driven insights

What Makes This Cool:

  • The agent adapts its responses based on available knowledge sources
  • It can query multiple sources simultaneously
  • It synthesizes information from different data types (structured + unstructured)

The DataAgent MCP Server

The DataAgent acts as a specialized MCP server that provides data access capabilities:

Capabilities:

  • Query Fabric semantic models (star schema analytics)
  • Access dimension tables (projects, repositories, teams, users, iterations)
  • Query fact tables (work items, commits, pull requests, branches)
  • Multi-source search across Azure AI Search and Fabric

Tool: DataAgent_DataAgent

  • Description: Provides access to AgileToolboxxModel semantic model
  • Inputs:
      • Data Agent ID
      • Workspace ID
  • Output: Structured query results from Fabric

How It Works:

When a user asks “How many work items are in different stages?”, the agent:

  1. Receives Query: User asks about work item stages
  2. Discovers Tools: Agent discovers DataAgent_DataAgent tool via MCP
  3. Invokes Tool: Agent calls DataAgent_DataAgent with appropriate query
  4. Multi-Source Search: Tool searches both Azure AI Search and Fabric semantic model
  5. Synthesizes Results: Agent combines results from multiple sources
  6. Generates Response: Agent provides comprehensive answer with citations

Result: The agent doesn’t just answer—it orchestrates a multi-source query, synthesizes results, and provides intelligent responses.
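The six steps above can be sketched as a single pipeline. This is illustrative Python, not the Copilot Studio runtime—the tool name mirrors ours, but the tool and source bodies are stubs:

```python
# Illustrative orchestration loop: receive -> discover -> invoke -> synthesize.

def query_sources(query):
    # Stub standing in for the multi-source search the real tool performs.
    return {
        "Azure AI Search": f"docs matching {query!r}",
        "Fabric Semantic Model": f"rows matching {query!r}",
    }

def discover_tools():
    # In the real system this comes from the MCP server at runtime.
    return {"DataAgent_DataAgent": query_sources}

def answer(query):
    tool = discover_tools()["DataAgent_DataAgent"]        # step 2: discovery
    results = tool(query)                                 # steps 3-4: invocation + search
    citations = sorted(results)                           # step 6: cite each source
    body = " | ".join(results[src] for src in citations)  # step 5: synthesis
    return f"{body} [Sources: {', '.join(citations)}]"

print(answer("work item stages"))
```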

Step 3: Agent-to-Agent Communication – The Real Magic

The coolest part? Our agents can communicate with each other through MCP.

How Agent-to-Agent Communication Works

Scenario: User asks “How does the cart functionality work in the OEN project?”

Step 1: Scrum Assistant Receives Query

  • User asks about cart functionality
  • Scrum Assistant analyzes the query

Step 2: Agent Discovers Available Tools

  • Scrum Assistant discovers DataAgent MCP Server tools
  • Discovers Azure AI Search knowledge source
  • Identifies relevant data sources

Step 3: Multi-Source Query Execution

Scrum Assistant → DataAgent MCP → Fabric Semantic Model
                → Azure AI Search → Code Index
                → Web Search → Scrum Best Practices

Step 4: Agent Synthesizes Results

  • Combines code documentation from Azure AI Search
  • Integrates project data from Fabric semantic model
  • Adds context from web search
  • Generates comprehensive answer

Step 5: Response with Citations

  • Provides answer with source citations
  • Shows which knowledge source provided which information
  • Enables users to verify information

The Workflow Visualization:

┌─────────────────────────────────────────────────────────────┐
│                    USER QUERY                                │
│  "How does the cart functionality work?"                    │
└──────────────────────┬──────────────────────────────────────┘
                       │
                       ▼
┌─────────────────────────────────────────────────────────────┐
│              SCRUM ASSISTANT AGENT                          │
│  (Copilot Studio - GPT-4.1)                                 │
│  • Analyzes query                                            │
│  • Discovers available tools                                 │
│  • Plans multi-source query                                  │
└──────────────────────┬──────────────────────────────────────┘
                       │
                       ├──────────────────────────────────┐
                       │                                  │
                       ▼                                  ▼
        ┌──────────────────────┐          ┌──────────────────────┐
        │  DATAAGENT MCP       │          │  AZURE AI SEARCH      │
        │  SERVER              │          │  KNOWLEDGE SOURCE     │
        │                      │          │                      │
        │  • Fabric Semantic   │          │  • Code Index        │
        │  • Star Schema       │          │  • Documentation     │
        │  • Analytics Data    │          │  • Vector Search     │
        └──────────┬───────────┘          └──────────┬───────────┘
                   │                                  │
                   └──────────────────┬───────────────┘
                                      │
                                      ▼
                    ┌──────────────────────────────────┐
                    │      RESULT SYNTHESIS            │
                    │  • Combines multiple sources     │
                    │  • Generates comprehensive answer │
                    │  • Provides citations            │
                    └──────────────────────────────────┘

Result: Agents collaborate seamlessly, each contributing their specialized knowledge to create comprehensive answers.
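The fan-out in Step 3 is naturally parallel: each knowledge source can be queried concurrently and the results kept with per-source attribution for the citations in Step 5. A stdlib sketch with stub sources standing in for Fabric, Azure AI Search, and web search:

```python
from concurrent.futures import ThreadPoolExecutor

# Stub knowledge sources standing in for the real backends.
SOURCES = {
    "Fabric Semantic Model": lambda q: f"analytics for {q!r}",
    "Azure AI Search": lambda q: f"code docs for {q!r}",
    "Web Search": lambda q: f"best practices for {q!r}",
}

def fan_out(query):
    """Query every source concurrently; keep per-source attribution for citations."""
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(fn, query) for name, fn in SOURCES.items()}
        return {name: fut.result() for name, fut in futures.items()}

for source, result in fan_out("cart functionality").items():
    print(f"[{source}] {result}")
```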

Step 4: Multi-Source Knowledge Integration

One of the coolest features is how our agents can query multiple knowledge sources simultaneously:

Knowledge Source 1: Azure AI Search

Configuration:

  • Name: Agent – Azure AI Search
  • Type: Azure AI Search connection
  • Index: gptkbindex
  • Document Count: 1,136 documents
  • Vectorized: Yes
  • Semantic Configuration: Yes

What It Provides:

  • Code documentation
  • Implementation details
  • File references
  • Semantic search capabilities

Knowledge Source 2: Fabric Semantic Model

Configuration:

  • Model: AgileToolboxxModel
  • Type: Semantic Model (Star Schema)
  • Tables:
      • Dimension tables (dim_project, dim_repository, dim_team, dim_user, dim_iteration)
      • Fact tables (fact_work_item, fact_commit, fact_pull_request, fact_branch)

What It Provides:

  • Structured analytics data
  • Work item metrics
  • Commit statistics
  • Team performance data

Knowledge Source 3: Web Search

Configuration:

  • Type: Public web search
  • Scope: Trusted scrum and agile sites

What It Provides:

  • Latest scrum best practices
  • Industry standards
  • Current methodologies
  • External context

How Multi-Source Queries Work

Example Query: “How many work items are in different stages?”

Query Execution:

  1. Azure AI Search Query:
      • Searches code and documentation
      • Finds implementation details
      • Returns semantic matches
  2. Fabric Semantic Model Query:
      • Queries fact_work_item table
      • Groups by WorkItemState
      • Returns counts by stage
  3. Result Synthesis:
      • Combines structured data (Fabric) with unstructured data (Azure AI Search)
      • Generates comprehensive answer
      • Provides citations for each source

Response:

Work Items by Stage:
- New: 357 work items
- Active: 13 work items  
- Closed: 56 work items

[Source: AgileToolboxxModel Semantic Model]

Result: Users get answers that combine the best of structured analytics and unstructured knowledge.
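Underneath, the structured half of that answer is a plain group-by over fact_work_item. A toy version (the sample rows are made up; the real counts come from the Fabric semantic model):

```python
from collections import Counter

# Toy stand-in for rows of the fact_work_item table.
work_items = [
    {"WorkItemId": 1, "WorkItemState": "New"},
    {"WorkItemId": 2, "WorkItemState": "New"},
    {"WorkItemId": 3, "WorkItemState": "Active"},
    {"WorkItemId": 4, "WorkItemState": "Closed"},
]

def count_by_state(rows):
    """Equivalent of GROUP BY WorkItemState with COUNT(*)."""
    return Counter(row["WorkItemState"] for row in rows)

for state, count in count_by_state(work_items).items():
    print(f"- {state}: {count} work items")
```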

Step 5: Real-Time Agent Orchestration

The workflow execution shows agents working in real-time:

Workflow Visualization

When a query is executed, we can see the agent orchestration:

┌─────────────────────────────────────────┐
│  DataAgent (MCP) - Initialized         │
│  ↓                                      │
│  DataAgent (MCP) - Completed (18.11s)  │
│  ↓                                      │
│  DataAgent_DataAgent (Tool)            │
│  ↓                                      │
│  Search sources (Knowledge) - 1.64s    │
│  ✓ Complete                            │
└─────────────────────────────────────────┘

What’s Happening:

  1. MCP Server Initialization: DataAgent MCP server initializes and discovers available tools
  2. Tool Discovery: Agent discovers DataAgent_DataAgent tool
  3. Query Planning: Agent plans multi-source query
  4. Knowledge Search: Searches Azure AI Search and Fabric semantic model
  5. Result Synthesis: Combines results from multiple sources
  6. Response Generation: Generates comprehensive answer

Response Time: ~20 seconds for complex multi-source queries

Result: Users see the agent thinking, planning, and executing—transparency in AI decision-making.
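The per-step timings in the trace above are easy to reproduce in any orchestrator: wrap each stage and record its duration. A minimal stdlib sketch with stub stage bodies:

```python
import time

def run_with_trace(stages, query):
    """Run pipeline stages in order, recording a (name, seconds) trace."""
    trace, value = [], query
    for name, fn in stages:
        start = time.perf_counter()
        value = fn(value)
        trace.append((name, time.perf_counter() - start))
    return value, trace

stages = [
    ("DataAgent (MCP) init", lambda q: q),                  # stub stage bodies
    ("DataAgent_DataAgent", lambda q: f"results for {q}"),
    ("Search sources", lambda r: r + " + citations"),
]

answer, trace = run_with_trace(stages, "closed work items")
for name, seconds in trace:
    print(f"{name} - {seconds:.2f}s")
```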

Step 6: Dynamic Tool Discovery and Invocation

One of the hippest features is how agents discover tools dynamically:

Tool Discovery Process

1. MCP Server Registration

{
  "mcpServers": {
    "dataagent": {
      "command": "dataagent-mcp-server",
      "args": ["--workspace-id", "..."],
      "env": {
        "DATA_AGENT_ID": "..."
      }
    }
  }
}

2. Tool Schema Discovery

  • Agent queries MCP server for available tools
  • MCP server returns tool schemas with:
  • Tool name
  • Description
  • Input parameters
  • Output format

3. Dynamic Tool Invocation

  • Agent decides which tools to use based on query
  • Invokes tools with appropriate parameters
  • Receives structured results
  • Synthesizes responses
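Selection itself can be as simple as matching the query against each tool's self-described schema. A naive, hypothetical sketch (real agents let the LLM choose; keyword overlap just illustrates the mechanism, and both tool descriptions are invented):

```python
# Naive tool selection: score each tool's description against the query.
TOOLS = {
    "DataAgent_DataAgent": "Query work items and analytics in the semantic model",
    "code_search": "Search code and documentation in Azure AI Search",
}

def pick_tool(query):
    """Pick the tool whose description shares the most words with the query."""
    words = set(query.lower().split())
    def overlap(name):
        return len(words & set(TOOLS[name].lower().split()))
    return max(TOOLS, key=overlap)

print(pick_tool("how many work items are in different stages"))  # DataAgent_DataAgent
```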

Example: Cart Functionality Query

User Query: “How does the cart functionality work in the OEN project?”

Agent Discovery:

  1. Discovers DataAgent_DataAgent tool
  2. Discovers Azure AI Search knowledge source
  3. Plans query strategy

Tool Invocation:

{
  "tool": "DataAgent_DataAgent",
  "query": "How does the cart functionality work in the OEN project?",
  "sources": [
    "Azure AI Search",
    "Fabric Semantic Model"
  ]
}

Multi-Source Results:

From Azure AI Search:

  • Code snippets from useCart.ts
  • Function definitions (updateQuantity, clearCart)
  • Implementation details

From Fabric Semantic Model:

  • Project metadata
  • Work item references
  • Related features

Synthesized Response:

Cart Functionality in the OEN Project

The cart functionality is managed through core operations:
- Adding Items (with specified quantity)
- Updating Quantity (using updateQuantity function)
- Clearing the Cart (using clearCart function)
- Calculating Total (summing item prices multiplied by quantity)

These operations are typically handled in a file like useCart.ts, 
which manages cart state and provides utility functions.

[Sources: Azure AI Search (gptkbindex), Fabric Semantic Model]

Result: Agents dynamically discover and use the right tools for each query, creating truly intelligent responses.

The Technical Stack: Why It’s Hipster-Level Cool

1. Model Context Protocol (MCP)

What It Is: The newest standard for AI agent communication
Why It’s Cool:

  • Standardized protocol for agent-to-system communication
  • Dynamic tool discovery
  • Self-describing capabilities
  • Extensible architecture

Our Implementation:

  • DataAgent MCP Server for data operations
  • Azure DevOps MCP Server for DevOps operations
  • Custom MCP servers for specialized tasks

2. Microsoft Copilot Studio

What It Is: Microsoft’s latest AI agent platform
Why It’s Cool:

  • No-code agent creation
  • Multi-source knowledge integration
  • Built-in MCP support
  • Real-time workflow visualization

Our Implementation:

  • Scrum Assistant agent with specialized instructions
  • Multi-source knowledge configuration
  • MCP server integration
  • Real-time agent orchestration

3. Fabric Semantic Models

What It Is: Microsoft Fabric’s semantic modeling layer
Why It’s Cool:

  • Star schema analytics
  • Direct SQL query interface
  • Real-time data access
  • Integration with AI agents

Our Implementation:

  • AgileToolboxxModel with dimension and fact tables
  • Direct query access via DataAgent MCP
  • Real-time analytics integration

4. Azure AI Search

What It Is: Microsoft’s semantic search service
Why It’s Cool:

  • Vector embeddings
  • Semantic search
  • Hybrid search (text + vector)
  • Multi-index support

Our Implementation:

  • Code documentation index
  • Vector embeddings for semantic search
  • Integration with agent knowledge sources

Real-World Use Cases

Use Case 1: Multi-Source Sprint Analysis

User Query: “How many work items are in different stages?”

Agent Orchestration:

  1. Scrum Assistant receives query
  2. Discovers DataAgent MCP Server
  3. Queries Fabric semantic model (fact_work_item)
  4. Groups by WorkItemState
  5. Returns counts by stage

Response:

Work Items by Stage:
- New: 357 work items
- Active: 13 work items
- Closed: 56 work items

[Source: AgileToolboxxModel Semantic Model]

Why It’s Cool: The agent queries structured analytics data and provides instant insights.

Use Case 2: Code + Analytics Integration

User Query: “How does the cart functionality work in the OEN project?”

Agent Orchestration:

  1. Scrum Assistant receives query
  2. Discovers multiple knowledge sources:
      • Azure AI Search (code documentation)
      • Fabric Semantic Model (project data)
  3. Executes parallel queries
  4. Synthesizes results from both sources
  5. Provides comprehensive answer with citations

Response:

Cart Functionality in the OEN Project

The cart functionality is managed through core operations:
- Adding Items (with specified quantity)
- Updating Quantity (using updateQuantity function)
- Clearing the Cart (using clearCart function)
- Calculating Total (summing item prices multiplied by quantity)

[Sources: Azure AI Search (gptkbindex), Fabric Semantic Model]

Why It’s Cool: The agent combines unstructured code documentation with structured project data to provide comprehensive answers.

Use Case 3: Real-Time Agent Workflow

User Query: “What work items are present in the closed stage?”

Agent Workflow Visualization:

DataAgent (MCP) - Initialized
  ↓
DataAgent (MCP) - Completed (18.11s)
  ↓
DataAgent_DataAgent (Tool)
  ↓
Search sources (Knowledge) - 1.64s
  ✓ Complete

Response: Lists all 18 closed work items with details

Why It’s Cool: Users can see the agent thinking, planning, and executing—complete transparency in AI decision-making.

The Business Impact

Before Multi-Agent Architecture

  • Single Knowledge Source: Agents could only query one source at a time
  • Manual Integration: Developers had to manually combine results from different sources
  • Limited Capabilities: Agents were hardcoded with specific capabilities
  • No Collaboration: Agents couldn’t work together

After Multi-Agent Architecture

  • Multi-Source Queries: Agents query multiple sources simultaneously
  • Automatic Synthesis: Agents automatically combine results from different sources
  • Dynamic Capabilities: Agents discover and use new tools dynamically
  • Agent Collaboration: Agents work together through MCP

Result: We’ve created an AI system that’s truly intelligent—it can think, plan, and orchestrate complex multi-source queries automatically.

What Makes This Hipster-Level Cool

1. Cutting-Edge Technology

We’re using the latest technologies:

  • Model Context Protocol (MCP): The newest standard for AI agent communication
  • Microsoft Copilot Studio: Microsoft’s latest AI agent platform
  • Fabric Semantic Models: Real-time analytics integration
  • Multi-Agent Architecture: Agents that collaborate

2. Dynamic Capabilities

Our agents aren’t hardcoded:

  • Tool Discovery: Agents discover available tools at runtime
  • Dynamic Invocation: Agents decide which tools to use based on query
  • Extensible Architecture: New capabilities can be added without code changes

3. Multi-Source Intelligence

Our agents query multiple sources simultaneously:

  • Structured Data: Fabric semantic models (analytics)
  • Unstructured Data: Azure AI Search (code, documentation)
  • External Knowledge: Web search (best practices)
  • Synthesis: Agents combine results intelligently

4. Real-Time Orchestration

Users can see agents working:

  • Workflow Visualization: See agents thinking and planning
  • Transparency: Understand how agents make decisions
  • Performance: See response times for each step

5. Agent-to-Agent Communication

Agents collaborate seamlessly:

  • MCP Protocol: Standardized communication
  • Tool Sharing: Agents can use each other’s capabilities
  • Orchestration: Complex workflows executed automatically

The Technical Architecture

┌─────────────────────────────────────────────────────────────┐
│                    USER INTERFACE                            │
│              (Copilot Studio Chat Interface)                │
└──────────────────────┬──────────────────────────────────────┘
                       │
                       ▼
┌─────────────────────────────────────────────────────────────┐
│              SCRUM ASSISTANT AGENT                          │
│         (Microsoft Copilot Studio - GPT-4.1)                 │
│  • Analyzes queries                                          │
│  • Discovers available tools                                 │
│  • Plans multi-source queries                                 │
│  • Synthesizes results                                       │
└──────────────────────┬──────────────────────────────────────┘
                       │
                       │ MCP Protocol
                       │
        ┌──────────────┼──────────────┐
        │              │              │
        ▼              ▼              ▼
┌───────────────┐ ┌───────────────┐ ┌───────────────┐
│ DATAAGENT     │ │ AZURE AI      │ │ WEB SEARCH    │
│ MCP SERVER    │ │ SEARCH        │ │ (Optional)    │
│               │ │               │ │               │
│ • Fabric      │ │ • Code Index  │ │ • Best        │
│   Semantic    │ │ • Vector      │ │   Practices   │
│   Model       │ │   Search      │ │ • External    │
│ • Star Schema │ │ • Semantic    │ │   Knowledge   │
│ • Analytics   │ │   Search      │ │               │
└───────┬───────┘ └───────┬───────┘ └───────┬───────┘
        │                 │                 │
        └─────────────────┼─────────────────┘
                          │
                          ▼
        ┌───────────────────────────────────┐
        │      RESULT SYNTHESIS             │
        │  • Combines multiple sources      │
        │  • Generates comprehensive answer │
        │  • Provides citations             │
        └───────────────────────────────────┘

The Code That Makes It Work

MCP Server Configuration

{
  "mcpServers": {
    "dataagent": {
      "command": "dataagent-mcp-server",
      "args": [
        "--workspace-id", "11d99e27-c621-4340-8eb8-7940714ae333"
      ],
      "env": {
        "DATA_AGENT_ID": "8b9a6308-db2e-4c5a-9073-61c9205a7334"
      }
    }
  }
}

Agent Instructions

You are an AI assistant designed to support the scrum team members 
in facilitating scrum ceremonies, backlog management, and continuous 
improvement.

Skills:
1. Assist scrum teams with scrum ceremonies
   - Guide teams through effective Sprint Planning, Daily Stand-ups, 
     Sprint Reviews, and Retrospectives
   - Identify and suggest ways to improve ceremony effectiveness

2. Help with scrum best practices and coaching
   - Provide guidance on agile principles
   - Suggest improvements based on team data

3. Analyze team artifacts and provide data-driven insights
   - Query work items, commits, and pull requests
   - Provide metrics and trends
   - Generate actionable recommendations

Knowledge Source Configuration

Azure AI Search:

  • Index: gptkbindex
  • Documents: 1,136
  • Vectorized: Yes
  • Semantic: Yes

Fabric Semantic Model:

  • Model: AgileToolboxxModel
  • Tables: dim_* and fact_* (dimension and fact tables)
  • Type: Star Schema Analytics

What’s Next

Multi-agent architecture opens new possibilities:

  • More Specialized Agents: Create agents for specific domains (security, performance, quality)
  • Agent Chains: Chain agents together for complex workflows
  • Custom MCP Servers: Build specialized MCP servers for unique capabilities
  • Agent Marketplace: Share agents and MCP servers across teams
  • Autonomous Operations: Agents that operate independently and collaborate automatically

But the foundation is set: we have a multi-agent system that can think, collaborate, and orchestrate complex workflows. That’s hipster-level cool.


This multi-agent architecture demonstrates the power of cutting-edge AI technology. By combining Model Context Protocol, Microsoft Copilot Studio, Fabric Semantic Models, and Azure AI Search, we’ve created an AI system that’s not just intelligent—it’s collaborative, extensible, and truly next-generation. Agents that think together, discover capabilities dynamically, and orchestrate complex workflows automatically. That’s the future of AI. That’s hipster-level cool.