Use what others have created! And search the web!

Sometimes you already have what you need, or you can build it yourself. Other times, someone out there has already created exactly what you’re looking for. So why not just use what’s openly available?

In our reporting solution, we use a Copilot agent that retrieves data and information from open sources — including kartverket.no, ngu.no, and geonorge.no. The combination of these datasets gives us a solid foundation for assembling a complete picture of the areas we’re analyzing, as well as generating a high‑quality report for our customers 🤑

To convert Markdown to HTML, we use the open‑source NuGet package Markdig. This is used in the solution to present text in a clean and readable format within our app.

To send SMS confirmations to our customers, we use the SMS gateway from LinkMobility.

Crawl through the data.

So our Creeper Commentator Copilot Agent is connected to Dataverse via the Microsoft Dataverse MCP Server.

We want it to generate many different things based on the data inside the tables we keep there.

So it crawls and crawls,

And it adds valuable insights to the data for the participants.

Project & Bids – Recipe Crawler 

This document describes the Recipe functionality developed as part of the Project solution. The focus of this feature is to help the business quickly identify Minecraft items, understand their properties, and prepare accurate printed material using an innovative search-based approach. 

Business Problem 

When working with Projects, users often need to reference Minecraft items for recipes. Searching for items manually or relying on external websites is time-consuming and slows down project preparation. 

Solution Overview 

To solve this, we built a custom PCF (PowerApps Component Framework) control using Fluent UI React v9. The control integrates with the Minecraft API to dynamically retrieve item data such as: 
– Item name 
– Item description 
– Item icon 
 
All data is presented directly inside the Model Driven App, eliminating the need to leave the application. 

Technical Implementation 

The Recipe control is implemented as a PCF control built with: 
– Fluent UI React v9 for a modern, consistent user interface 
– Minecraft API as the external data source for items and recipes 
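As a rough sketch of what the control’s data layer might look like: the post doesn’t specify which Minecraft API is used, so the response field names below are assumptions for illustration. The pure mapping function is the part the control’s UI would consume when rendering item name, description, and icon.

```typescript
// View model the PCF control renders (name, description, icon per the list above).
interface MinecraftItem {
  name: string;
  description: string;
  iconUrl: string;
}

// Hypothetical response shape from the external Minecraft item API.
// Field names ("name", "description", "icon") are assumptions, not the
// documented contract of any specific API.
function toItemViewModel(apiJson: { name?: string; description?: string; icon?: string }): MinecraftItem {
  return {
    name: apiJson.name ?? "(unknown item)", // fall back gracefully on partial data
    description: apiJson.description ?? "",
    iconUrl: apiJson.icon ?? "",
  };
}
```

Keeping the mapping as a pure function makes it easy to unit-test outside the Model Driven App host.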

Processing geodata in Fabric

Finding resources in nature is not a simple task, and it requires enormous amounts of data to locate all sources of all types. Luckily, Norway offers free and open geodata through “Geonorge”, available to everyone. There are many different suppliers of this kind of data, but as the core data source for our solution we went with the “N50 Kartdata” dataset supplied by “Kartverket”.

Using this as a source, we decided to use a Lakehouse in Fabric to upload the XML file with over 430,000 lines of data, and then, using a pipeline and dataflow in Fabric, converted it into a table within a SQL analytics endpoint. Additional CSV files with descriptive support data were also uploaded and merged in the dataflow, so that all the data lives in one place and is easier to search.

Within the Fabric workspace we also created an AI-driven data agent, specialized in the imported dataset and available as a supporting agent for other AI agents, such as those created in Copilot Studio. Since we planned to use this data within the Power Platform ecosystem, we added some very detailed instructions to the agent to make sure its output is available to every resource that needs it. This makes it very effective at finding the necessary data in the table, and a fast way to search for the resources needed at any time.

I don’t know how to SQL as well as my AI does

I hate creating SQL queries for each new question I want answers to, but my AI doesn’t (mostly because AI doesn’t have feelings).

With a Fabric data agent I can search my data through conversations with the agent. The queries are based on the specific question you ask, with no repetitive template query. It understands what fields are available, and acts accordingly.

Crawler Badge : How we are teaching a Minecraft Bot to Understand Intent

When a user sends a vague request like “collect 20 stone blocks”, how does a Minecraft bot know what to actually do?

Answer: We let AI figure it out.

We’ve integrated Azure OpenAI  into our automation pipeline to break down high-level goals into executable task sequences. The bot doesn’t just blindly follow commands… it plans!

The Planning Pipeline

When a goal arrives via our Logic App:

  1. Azure Function receives the goal
  2. OpenAI breaks it into step-by-step tasks
  3. Service Bus queues the planned order
  4. Minecraft Bot executes using A* pathfinding

User: "collect 20 stone blocks"
   ↓
AI Planner → tasks: [{ type: "collect", params: { itemType: "stone", quantity: 20 }}]
   ↓
Bot executes with A* navigation algorithm
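That hand-off can be sketched as a small validation step: the LLM’s raw JSON is parsed into typed tasks before anything is queued on the Service Bus. The `Task` shape follows the `{ type, params }` example above; the team’s real schema may differ.

```typescript
// A planned task as produced by the AI planner: a type ("collect", etc.)
// plus free-form parameters. Shape assumed from the example above.
type Task = { type: string; params: Record<string, unknown> };

// Parse and validate the LLM's output before queuing it.
// LLMs occasionally return malformed JSON, so fail loudly rather than
// letting a broken plan reach the bot.
function parsePlan(llmOutput: string): Task[] {
  const raw = JSON.parse(llmOutput);
  if (!Array.isArray(raw)) throw new Error("plan must be a JSON array of tasks");
  return raw.map((t, i) => {
    if (typeof t.type !== "string" || typeof t.params !== "object" || t.params === null) {
      throw new Error(`task ${i} is malformed`);
    }
    return { type: t.type, params: t.params };
  });
}
```

Each validated task can then be serialized onto the queue in planned order.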

Our bot implements the classic A* (A-star) search algorithm to navigate the Minecraft world:

  • A* finds the optimal path to destinations
  • BFS (Breadth-First Search) locates nearby blocks to mine
  • Stuck detection with automatic recovery maneuvers
  • Self-preservation system monitors health, oxygen, and hostile mobs

The AI plans the what. The algorithms handle the how.
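To make the “how” concrete, here is a minimal, self-contained A* sketch on a 2D grid. The real bot navigates a 3D Minecraft world with a much richer cost model, so treat this as an illustration of the algorithm, not the team’s actual code.

```typescript
// Minimal A* on a 2D grid: 0 = walkable, 1 = wall.
type Point = { x: number; y: number };

function astar(grid: number[][], start: Point, goal: Point): Point[] | null {
  const key = (p: Point) => `${p.x},${p.y}`;
  // Manhattan distance: an admissible heuristic for 4-directional movement.
  const h = (p: Point) => Math.abs(p.x - goal.x) + Math.abs(p.y - goal.y);
  const open: Point[] = [start];
  const cameFrom = new Map<string, Point>();
  const g = new Map<string, number>([[key(start), 0]]); // cost from start

  while (open.length > 0) {
    // Pick the open node with the lowest f = g + h (a heap would be faster).
    open.sort((a, b) => (g.get(key(a))! + h(a)) - (g.get(key(b))! + h(b)));
    const current = open.shift()!;
    if (current.x === goal.x && current.y === goal.y) {
      // Reconstruct the path by walking the cameFrom links backwards.
      const path = [current];
      while (cameFrom.has(key(path[0]))) path.unshift(cameFrom.get(key(path[0]))!);
      return path;
    }
    for (const [dx, dy] of [[1, 0], [-1, 0], [0, 1], [0, -1]]) {
      const n = { x: current.x + dx, y: current.y + dy };
      if (n.y < 0 || n.y >= grid.length || n.x < 0 || n.x >= grid[0].length) continue;
      if (grid[n.y][n.x] === 1) continue; // wall
      const tentative = g.get(key(current))! + 1;
      if (tentative < (g.get(key(n)) ?? Infinity)) {
        cameFrom.set(key(n), current);
        g.set(key(n), tentative);
        if (!open.some(p => p.x === n.x && p.y === n.y)) open.push(n);
      }
    }
  }
  return null; // no path exists
}
```

The heuristic never overestimates the true cost, which is what guarantees A* returns an optimal path.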

Instead of hardcoding every possible command, we let the LLM interpret natural language and generate valid task sequences. The bot becomes smarter without more code.

We’ve officially unlocked portal search mode in the CCCP Factory Portal

During the first night we turned on Dataverse Search in the environment and wired it straight into our Power Pages experience. Instead of hiding search away on a default page, we built a custom search entry point right in the UI, a proper “Search CCCP Portal” field that feels like part of the site, not a bolt-on feature. By refactoring and implementing a clean-looking modal with search entry, the user gets a modern search experience.

Now you can type something like “wheat” and instantly jump to a results page that scans across the portal and surfaces the relevant pages and content. It’s already useful for finding resources, dashboards, and anything mentioning current stock or production status, AND it sets us up nicely for the next step: narrowing results into actual resource cards and order-related data as we keep hooking more of the Minecraft facility into Dataverse.
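For anyone wiring up something similar, the portal-side call might look roughly like this. The endpoint path and body shape follow the Dataverse search query API, but the exact API version and option names used by the CCCP portal are assumptions here.

```typescript
// Sketch of the request the custom "Search CCCP Portal" field might send.
// Dataverse exposes a search endpoint relative to the environment URL;
// the version segment and option names below should be verified against
// the Dataverse search API docs for your environment.
function buildSearchRequest(term: string, top = 10) {
  return {
    method: "POST",
    url: "/api/search/v1.0/query", // relative to the Dataverse environment URL
    body: { search: term, top },   // optionally add entity and field filters
  };
}
```

The response can then be rendered as a results page, or later narrowed into resource cards as described above.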

Crawler badge? We’re taking it. Search that solves a real problem: finding what you need in the portal, fast.

Crawling the Web for Magic: The PowerPotters’ Quest for the Crawler Badge

Greetings, magical technologists! ✨ At Team PowerPotters, we’ve combined the power of Bing Search API, OpenAI GPT-4, and Power Automate to create a truly innovative web-crawling solution. Our workflow dynamically discovers, analyzes, and integrates external APIs, transforming how potion-related data is sourced and utilized. Here’s how we earned the Crawler Badge by blending search, AI, and automation into one seamless process.


The Crawler Workflow: Step-by-Step Magic

  1. Discovering APIs with Bing Search API
  • Purpose: To dynamically find public APIs related to Harry Potter spells or magical data.
  • Execution:
    • Query: "public API Harry Potter spells".
    • Filters: Results are restricted to recent entries with keywords like /api or “documentation”.
  2. Analyzing APIs with OpenAI GPT-4
  • Purpose: To validate URLs as APIs and extract relevant schemas, field mappings, and example data.
  • Execution: For each URL, OpenAI determines if the URL links to an API or its documentation. If valid, it provides:
    • API schema.
    • Example JSON response.
    • Field mappings for key data (e.g., Name for spell name, Description for spell effects).
  3. Integrating with Power Automate
  • Purpose: To process, validate, and integrate the data into our system.
  • Workflow Steps:
    • Parse Bing Results: Extract relevant URLs using JSON parsing.
    • Validate URLs: OpenAI determines if the URL links to a valid API and provides field mappings.
    • Dynamic Integration: Call validated APIs and use extracted data to:
      • Create new product entries in D365FO.
      • Enrich existing products with spell names (Name) and effects (Description).
    • Automation: Run schema validations dynamically, ensuring data consistency.
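The “Parse Bing Results” filtering step can be sketched as a pure function. The result shape loosely follows the Bing Web Search v7 response (`webPages.value` entries carrying `url` and `name`); how the actual flow parses it is an assumption.

```typescript
// One web result as surfaced by the Bing Search API (simplified).
interface BingResult {
  url: string;
  name: string;
}

// Keep only results that look like API endpoints or API documentation,
// mirroring the "/api" and "documentation" keyword filters described above.
function filterApiCandidates(results: BingResult[]): string[] {
  return results
    .filter(r => /\/api\b/i.test(r.url) || /documentation/i.test(r.name + " " + r.url))
    .map(r => r.url);
}
```

The surviving URLs are the ones handed to GPT-4 for validation and schema extraction.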

A Scenario in Action

A potion master requests information on new magical spells for potion research. Using this workflow:

  1. Search: Bing Search API identifies APIs like Potterhead API.
  2. Validation: OpenAI ensures the API provides valid spell data, extracting fields like Name (e.g., Accio) and Description (e.g., Summons an object).
  3. Integration: Power Automate dynamically updates the potion master’s research database with enriched spell information, saving hours of manual effort.

Why This Deserves the Crawler Badge

  1. Innovative Use of Search:
    • Bing Search API dynamically finds and filters public APIs, extending its use beyond static results.
  2. AI-Powered Validation:
    • OpenAI GPT-4 dynamically analyzes URLs, validates APIs, and generates schemas and field mappings for seamless integration.
  3. Solving Real Business Problems:
    • Potion masters gain enriched, real-time product data without manual intervention, enabling informed decisions.
  4. Scalability:
    • The workflow is adaptable for future needs, such as integrating potions, artifacts, or even non-magical domains.

Crawling the Web for Magical Insights

This dynamic web-crawling solution exemplifies how search, AI, and automation can revolutionize the way data is discovered and integrated. With this innovation, we humbly submit our case for the Crawler Badge, showcasing how Team PowerPotters continues to push boundaries: acdc.blog/category/cepheo25.

#ACDC2025 #CrawlerBadge #PowerPotters #SearchAndAIInnovation

Everyone needs AI – Witches and Wizards too!

The era of AI and Copilot has arrived, and it has finally reached the wizarding world. The Ministry of Magic is using several AI-based features to automate their letter-heavy processes.

And now it’s also time that schools like Hogwarts use AI to support students and prospective students with their application process.

This follows certain guidelines to protect magical users’ privacy and ensure the right handling of sensitive data.
We don’t want to spill the tea about Death Eaters, Unforgivable Curses or You-Know-Who, do we?

First of all, we want to protect the privacy of our users and our internal business data, such as the information about our students and their applications.

This sensitive data is excluded from our externally accessible HermioneAI. We only make it available as a knowledge base for specific teachers on Microsoft Teams.

Additionally, the authentication itself is based on Microsoft Entra, limiting the risk of leaking sensitive data.

This makes general and internal knowledge available to selected teachers of the school:

In this example, we use AI to search for specific information from Dataverse and display internal data that has been summarized and collected for easy access to the End User Wizard.

The handling of information and authentication is different on our web application: https://owlexpress.app/chat

As this is publicly available, mainly for students going through the application process, no authentication is necessary. These users also don’t have access to internal data and rely only on Hermione’s general knowledge. Which is already huge.

Similar to the internal chat experience, the external one relies on security measures to avoid certain topics and harmful content. This covers, for example, Death Eaters, You-Know-Who and the dark side of magic in general, but also jailbreak attempts.

So, if a potential student or any other user of the chatbot on the website asks about joining the dark side, forbidden spells or details on You-Know-Who, we won’t help.

LOW CODE category and Crawler for social data

As a team, we have a primary goal: We help magic happen by utilizing modern AI approaches. 

We also understand that every student needs a mentor, but the number of available mentors today is minimal. Our idea is to introduce digital twins of the available mentors, which would allow us to help far more students. To achieve that, we use all available data about the mentors. But to keep it fresh and reliable, we must have the latest events in our OneLake.

So, we implemented social crawlers for the mentors’ LinkedIn profiles to track all their activities and events, and to suggest new potential mentors.

We do that in the most modern and most straightforward way, by utilizing Power Automate Desktop:

The data is then collected and stored in an Excel spreadsheet, making it available for further processing with our Data Factory.

The primary consumers of the data about upcoming events are the students, who use complex search requests across different fields to find the right event. Participation in community events potentially increases the chances of successful onboarding to the new school.

The relevant search API covers 98% of the required functionality. Unfortunately, this API is unavailable on the Power Pages side. To resolve that, we implemented a Power Automate flow, called from the portal, as a wrapper around the original Dataverse API.

On the portal side, we are using the Select2 component to implement autocomplete functionality.
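As an illustration of that last step, the hits returned through the wrapper flow have to be reshaped into the `{ id, text }` items Select2 expects for its dropdown. The hit field names below are hypothetical; the real flow response will have its own schema.

```typescript
// One search hit as returned (hypothetically) by the Power Automate
// wrapper around the Dataverse search API.
interface SearchHit {
  id: string;
  title: string;
}

// Select2 renders options from objects with "id" and "text" properties,
// so the portal script maps each hit accordingly.
function toSelect2Items(hits: SearchHit[]): { id: string; text: string }[] {
  return hits.map(h => ({ id: h.id, text: h.title }));
}
```

With a mapping like this, the autocomplete field can feed Select2 directly from each flow response.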