Earning the Retro Tech Badge: What Jelly Bean Flavor Are You?

In a world of cutting-edge technology and modern SharePoint capabilities, sometimes it’s fun—and rewarding—to take a step back in time. For our quest to earn the coveted Retro Tech Badge, we decided to embrace some retro magic by implementing a nostalgic throwback: an early 2000s-style personality quiz!

The result? A delightful HTML-based quiz embedded in SharePoint that answers the all-important question: “What Jelly Bean Flavor Are You?”

Why Retro?

As part of our journey to explore deprecated or legacy technologies, we wanted to celebrate the quirks of simpler times while showing off some classic techniques that still work today. Think back to those vibrant, personality-packed quizzes from the early 2000s—quirky, colorful, and full of charm. By creating one of these in SharePoint, we combined a blast from the past with modern-day SharePoint flexibility.

Code:

<!DOCTYPE html>
<html>
<head>
  <title>What Jelly Bean Flavor Are You?</title>
</head>
<body style="font-family: Arial, sans-serif; text-align: center; margin: 20px; background-color: #fef5e7; color: #5c3d2e;">
  <h1><font size="5">What Jelly Bean Flavor Are You?</font></h1>
  <p>Answer the questions to discover your inner jelly bean flavor!</p>

  <table align="center" style="margin-top: 20px;">
    <tr>
      <td>
        <p><b>1. What is your favorite time of day?</b></p>
        <input type="radio" name="q1" value="Coconut"> Morning<br>
        <input type="radio" name="q1" value="Watermelon"> Afternoon<br>
        <input type="radio" name="q1" value="Chocolate"> Evening<br>
        <input type="radio" name="q1" value="Cinnamon"> Late Night<br>
      </td>
    </tr>
    <tr>
      <td>
        <p><b>2. What is your go-to weekend activity?</b></p>
        <input type="radio" name="q2" value="Cinnamon"> Party with friends<br>
        <input type="radio" name="q2" value="Chocolate"> Movie night<br>
        <input type="radio" name="q2" value="Coconut"> Hiking or outdoors<br>
        <input type="radio" name="q2" value="Watermelon"> Relaxing at home<br>
      </td>
    </tr>
    <tr>
      <td>
        <p><b>3. Pick a color:</b></p>
        <input type="radio" name="q3" value="Watermelon"> Green<br>
        <input type="radio" name="q3" value="Coconut"> White<br>
        <input type="radio" name="q3" value="Cinnamon"> Red<br>
        <input type="radio" name="q3" value="Chocolate"> Brown<br>
      </td>
    </tr>
  </table>

  <button style="padding: 10px 20px; margin-top: 20px; background-color: #ffcc00; color: #5c3d2e; border: none; cursor: pointer;" onclick="showResult()">Find Out!</button>

  <p id="result" style="margin-top: 30px; font-weight: bold;"></p>

  <script>
    function showResult() {
      var answers = {};
      var answeredCount = 0;
      for (var i = 1; i <= 3; i++) {
        var radios = document.getElementsByName('q' + i);
        for (var j = 0; j < radios.length; j++) {
          if (radios[j].checked) {
            var flavor = radios[j].value;
            answers[flavor] = (answers[flavor] || 0) + 1;
            answeredCount++;
          }
        }
      }

      var topFlavor = null;
      var maxCount = 0;
      for (var flavor in answers) {
        if (answers[flavor] > maxCount) {
          topFlavor = flavor;
          maxCount = answers[flavor];
        }
      }

      var resultDiv = document.getElementById('result');
      if (topFlavor && answeredCount === 3) {
        resultDiv.innerHTML = '<font size="4">You are <b>' + topFlavor + '</b>! Sweet and full of flavor!</font>';
      } else {
        resultDiv.innerHTML = '<font size="4">Please answer all the questions to get your flavor!</font>';
      }
    }
  </script>
</body>
</html>

What Makes This Retro?

  1. Use of <font> Tags: Instead of modern CSS, the <font> tag is used for styling, which has been deprecated for years.
  2. Inline Styles: Styling is applied directly to elements instead of through a <style> block or external stylesheet.
  3. Table-Based Layout: Questions are wrapped in a <table> for structure, which was a common practice in the early days of web development before <div>-based layouts became standard.
  4. Direct DOM Manipulation: Instead of modern querySelector, it uses document.getElementsByName for gathering inputs.
  5. Global JavaScript Functions: Functions like showResult are globally scoped, which was common before modern practices like modules or IIFEs (Immediately Invoked Function Expressions).

Badge-Claim Justification

  • The code directly incorporates deprecated HTML elements and practices.
  • The use of inline styles and table-based layout aligns with retro web development techniques.
  • It adds charm and nostalgia to our SharePoint implementation while meeting the criteria for retro tech.

Resco Integration: Enhancing Production and Installation at Hogwarts

Greetings, wizarding innovators and hackathon enthusiasts! ✨

At Team PowerPotters, we understand that every great potion not only needs the right ingredients but also a seamless production and installation process. To achieve this, we’ve integrated the Resco PCF component into our Finance and Operations (F&O) platform, bringing magical synergy to production workflows. Here’s how this integration streamlines operations and elevates efficiency in potion-making.


How Resco Was Integrated into F&O

Our initial goal was to integrate the Resco PCF component directly into a Canvas App for potion production approvals. However, platform compatibility challenges led us to innovate:

  1. Embedding Resco in F&O:
    • The Resco component is embedded in the Installation Details section of production orders in F&O.
    • This ensures that production and installation processes are centrally managed within the F&O interface.
  2. Bridging Resco and Canvas Apps:
    • To address compatibility issues, we developed an additional PCF control that retrieves forms from a Model-Driven App and displays them in the Canvas App.
    • This solution bridges the gap between Resco technology and Canvas Apps, ensuring cross-platform functionality without losing F&O context.

Key Benefits of Resco Integration

  1. Streamlined Operations:
    • Embedding Resco functionality in the Installation Details ensures that production workers and installers can collaborate directly within F&O, eliminating the need for external integrations or disconnected tools.
  2. Seamless User Experience:
    • The Resco PCF component provides an intuitive interface for scheduling and coordination, aligning perfectly with potion production workflows.
  3. Cross-Platform Collaboration:
    • Integrating forms from Model-Driven Apps into Canvas Apps demonstrates the flexibility of Resco tools, showcasing how they can extend and enhance F&O capabilities.
  4. Increased Productivity:
    • By reducing unnecessary integration layers and centralizing all production and installation details in one place, teams can focus on value-adding tasks.

The Impact of Resco in Our Potion Production System

This integration is a game-changer for potion production and installation:

  • Production Workers: Gain a clear overview of scheduling and installation details directly in F&O.
  • Installers: Collaborate effortlessly with production teams through streamlined workflows, reducing errors and delays.
  • Business Value: The flexibility of Resco tools, combined with the innovative PCF control, creates a scalable solution adaptable to any business-critical process.

Why This Earns the Resco Badge

Our solution highlights the innovative use of Resco components in a Finance and Operations context, demonstrating:

  1. Technical Expertise: Bridging Resco, Canvas Apps, and Model-Driven Apps to create a cohesive experience.
  2. Operational Efficiency: Centralizing production and installation workflows for seamless collaboration.
  3. Flexibility and Scalability: Adapting Resco tools to overcome platform limitations, ensuring their full potential is realized in F&O.

A Magical Collaboration of Tools

With Resco at the heart of our potion production workflows, we’ve turned challenges into opportunities, creating a solution that embodies innovation and efficiency. We humbly submit our case for the Resco Badge and invite you to explore the magic of integration with Team PowerPotters: acdc.blog/category/cepheo25.

#ACDC2025 #RescoBadge #PowerPotters #PotionProductionInnovation

Enchanting Interfaces: Claiming the Glossy Pixels, Chameleon, and Plug N’ Play Badges

Greetings, fellow wizards and witches of the digital realm!

This afternoon, we are thrilled to announce that our magical journey through the enchanted forest of user interfaces has led us to claim three prestigious badges: Glossy Pixels, Chameleon, and Plug N’ Play. 🪄✨

Glossy Pixels: Just like the shimmering surface of the Mirror of Erised, our Power App user interfaces are crafted with a spellbinding glossiness that captivates the eye. These interfaces are not only visually stunning but also resilient, ensuring they won’t shatter like fragile glass on smaller screens. Whether you’re viewing them on a Muggle’s smartphone or a wizard’s enchanted tablet, the glossy charm remains unbroken.

Chameleon: Much like the Animagus who can transform at will, our solutions are incredibly responsive. They adapt seamlessly to all devices and screen sizes, from the smallest of handheld devices to the grandest of desktop monitors. This badge signifies our commitment to creating interfaces that are as versatile as a Polyjuice Potion, ensuring a smooth and consistent experience for all users, regardless of their device.

With these badges, we continue to push the boundaries of digital enchantment, creating user experiences that are as magical as a Patronus charm. Stay tuned for more spellbinding updates as we continue our quest to bring a touch of magic to the world of technology.

Plug N’ Play: In the spirit of the Weasleys’ Wizard Wheezes, we have conjured an app that seamlessly integrates with both Microsoft Teams and SharePoint, bringing a touch of magic to our daily business operations. This app is not just a mere spell; it is a powerful tool designed to solve a crucial business need, ensuring our workflows are as smooth as a well-brewed potion.

Our app, much like the Room of Requirement, adapts to the needs of its users. It enhances collaboration and communication within Teams, allowing us to pull information from various systems, have meaningful conversations about it, and take action—all within the enchanted walls of Teams. Additionally, we have woven our magic into SharePoint, creating a unified experience that ensures seamless access to documents, data, and collaboration tools.

Mischief managed! 🧙‍♂️✨

Everyone needs AI – Witches and Wizards too!

The era of AI and Copilot has arrived, and it has finally reached the wizarding world. The Ministry of Magic is using several AI-based features to automate their letter-heavy processes.

And now it’s also time that schools like Hogwarts use AI to support current and prospective students with their application process.

This follows certain guidelines to protect magical users’ privacy and ensure the right handling of sensitive data. We don’t want to spill the tea about Death Eaters, Unforgivable Curses, or You-Know-Who, do we?

First of all, we want to protect the privacy of our users and internal business data, such as the information about our students and their applications.

This sensitive data is excluded from our externally accessible HermioneAI. We only make it available as a knowledge base for specific teachers on Microsoft Teams.

Additionally, authentication itself is based on Microsoft Entra, to limit the risk of leaking sensitive data.

This makes general and internal knowledge available to selected teachers of the school:

In this example, we use AI to search for specific information from Dataverse and display internal data that has been summarized and collected for easy access to the End User Wizard.

The handling of information and authentication is different on our web application: https://owlexpress.app/chat

As this is publicly available, mainly for students going through the application process, no authentication is necessary. These users don’t have access to internal data and rely only on Hermione’s general knowledge, which is already huge.

Similar to the internal chat experience, the external one relies on security measures to avoid certain topics and harmful content. This covers, for example, Death Eaters, You-Know-Who, and the dark side of magic in general, but also jailbreak attempts.

So, if a potential student or any other user of the chatbot on the website asks about joining the dark side, forbidden spells, or details on You-Know-Who, we won’t help them there.

ONE FLOW

We are using the OneFlow custom connector in Power Automate to create and sign the contract when a request to change schools is approved.

A copy of the signed contract is also uploaded to SharePoint.

LOW CODE: Power Pages implementation

We have built a portal for students using Power Pages, utilising the best no-code features available.

On the landing page, we did a quick and easy setup of the header with our academy’s logo and added links to the pages.

The student first opens the portal and clicks the “Find me a school” button to start the process of finding a better school for them. The student is then redirected to the Submit Request page:


The Submit Request page contains a form that the student should fill in:

On submit, the student is redirected to the My Requests page:


On the backend, the managers see the request:

The activities are then assigned to the student using Power Automate (triggered on student request create/update).

The student will see all the activities in the list on the My Activities page in Power Pages:

When a survey is assigned to the student, the user navigates to the Survey tab and the first question appears on the screen:

Fabric Fera Verto

“It’s EVIDIosa, not Leviosaaaa”: a contribution to the delivery for the Fabric Fera Verto category. Here we explain our Fabric setup for this category.

Workspaces

We are using three workspaces: dev for development, test for UAT testing, and prod for production.

  • Dev: The Room of Requirement
  • Test: The Restricted Section
  • Prod: The Great Hall

Deployment Pipelines

We have implemented deployment pipelines for deploying changes between workspaces so that the reports can be tested and verified before going into production.

Hogverse Deployment Pipelines for deploying items between workspaces.

Medallion Architecture

We use the medallion architecture to organize data into three layers: Bronze (raw, unprocessed data), Silver (cleaned and enriched data), and Gold (aggregated and analytics-ready data), enabling a structured, scalable approach to data processing and analytics.

Bronze layer
The bronze or raw layer of the medallion architecture is the first layer of the lakehouse. It’s the landing zone for all data, whether it’s structured, semi-structured, or unstructured. The data is stored in its original format, and no changes are made to it.

Silver layer
The silver or validated layer is the second layer of the lakehouse. It’s where you’ll validate and refine your data. Typical activities in the silver layer include combining and merging data and enforcing data validation rules like removing nulls and deduplicating. The silver layer can be thought of as a central repository across an organization or team, where data is stored in a consistent format and can be accessed by multiple teams. In the silver layer you’re cleaning your data enough so that everything is in one place and ready to be refined and modeled in the gold layer.

Gold layer
The gold or enriched layer is the third layer of the lakehouse. In the gold layer, data undergoes further refinement to align with specific business and analytics needs. This could involve aggregating data to a particular granularity, such as daily or hourly, or enriching it with external information.

Data ingestion

We used Data pipelines and Dataflows Gen2 to retrieve data into Fabric and store it in our bronze lakehouse. Below is an image of the items we use to ingest data into the platform.

Dataflows Gen2

We are using Dataflows Gen2 to retrieve data from another Dataverse tenant using a service principal that allows “Accounts in any organizational directory (Any Microsoft Entra ID tenant – Multitenant)” to connect, since there was an error creating a free Fabric capacity in our CDX tenant. The data gets stored in our bronze lakehouse.

Example:

Step 1: Retrieving data using the Dataverse connector in Dataflow Gen2.

Step 2: Storing the data in our Bronze lakehouse.

Data pipeline

We are using a Data pipeline to retrieve data from open Harry Potter APIs and store it in our bronze lakehouse as well.

Example:

Step 1: Using a copy data step in the Data Pipeline to retrieve data from an external source:

Step 2: Retrieves data from the external source using the endpoint
https://api.potterdb.com/v1/potions

Step 3: Stores the data in our lakehouse in a table called Potions.

Step 4: Maps all fields from the external source to the destination table.
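
For reference, the endpoint returns JSON:API-style documents where each record’s fields sit under data[].attributes, which is exactly what the Silver-layer code later flattens (data.id, data.type, data.attributes.name, and so on). A small sketch that previews that shape (plain JavaScript, runnable anywhere fetch is available):

// Preview the shape of the Potter DB response that the copy activity lands in bronze.
async function previewPotions() {
  var response = await fetch("https://api.potterdb.com/v1/potions");
  var body = await response.json();

  // Each record is a JSON:API resource: an id, a type, and the fields under "attributes"
  body.data.slice(0, 3).forEach(function (potion) {
    console.log(potion.id, potion.type, potion.attributes.name, potion.attributes.difficulty);
  });
}

previewPotions();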

Data is then retrieved from the Potions table in our lakehouse, which has been updated from the external source, and later imported to Dataverse.

Medallion implementation

We are using Notebooks and PySpark to implement the data transformation between the medallion layers. Below we go through some examples for each layer.

Bronze layer

The bronze layer consists of raw data. This is data imported “as is”, without any transformation. In the image below we can see how the data retrieved from the Harry Potter API looks without any transformation.👇

Silver layer

In the silver layer, we remove and rename columns and store a cleaner table in the Silver Lakehouse. Below is the code for transforming the Potions table from the bronze layer shown in the picture above. The rest of the tables use the same structure to transform the columns and data.

# Table Potions

import pyspark.sql.functions as F

# 1. Read Bronze table
bronze_df = spark.read.table("Lakehouse_Bronze.potions")

# 2. Flatten & rename columns (and remove unneeded ones)
silver_df = bronze_df.select(
    F.col("`data.id`").alias("Id"),
    F.col("`data.type`").alias("Type"),
    F.col("`data.attributes.name`").alias("Name"),
    F.col("`data.attributes.slug`").alias("Slug"),
    F.col("`data.attributes.difficulty`").alias("Difficulty"),
    F.col("`data.attributes.effect`").alias("Effect"),
    F.col("`data.attributes.ingredients`").alias("Ingredients")
)

# 3. Write to Silver
silver_df.write \
    .format("delta") \
    .mode("overwrite") \
    .saveAsTable("Lakehouse_Silver.potions")

After the transformation, the Silver Lakehouse and the Potions table look like this👇

Gold layer

In the gold layer we are creating dimension tables, fact tables, and aggregated data.

Below is an image of the code for creating the fact table for the Potions table as seen in previous examples:

Fact Table

# Create a Fact Table (FactPotions)
# 1. Join potions data to dimDifficulty so each potion references a numeric DifficultyID.
# 2. (Optional) add a PotionKey if you want a unique fact table key.
# 3. Write the result to a Gold “fact” table.

import pyspark.sql.functions as F
from pyspark.sql.window import Window

# Re-read silver potions to keep original schema
silver_potions_df = spark.read.table("Lakehouse_Silver.potions")

# Read the newly created dimension to get DifficultyID
dimDifficulty_df = spark.read.table("Lakehouse_Gold.dimDifficulty")

# 1) Join on the Difficulty column
factPotions_df = (silver_potions_df.join(dimDifficulty_df, on="Difficulty", how="left"))

# 2) (Optional) add an auto-increment surrogate key for each row
factPotions_df = factPotions_df.withColumn("PotionKey", F.row_number().over(Window.orderBy("Id")))

# Reorder columns for clarity
factPotions_df = factPotions_df.select( \
    "PotionKey", \
    "Id", \
    "Name", \
    "Slug", \
    "DifficultyID", \
    "Difficulty", \
    "Effect", \
    "Ingredients", \
    "Type" \
)

# 3) Write to Gold as a fact table
factPotions_df.write \
    .format("delta") \
    .mode("overwrite") \
    .option("overwriteSchema", "true") \
    .saveAsTable("Lakehouse_Gold.factPotions")

Aggregated Table

# Aggregated Summary Table

import pyspark.sql.functions as F

# Read the Gold fact and dimension tables created in the other gold-layer steps
factPotions_df = spark.read.table("Lakehouse_Gold.factPotions")
dimDifficulty_df = spark.read.table("Lakehouse_Gold.dimDifficulty")

# Count the number of potions per difficulty
agg_potions_df = factPotions_df.groupBy("DifficultyID").agg(F.count("*").alias("CountOfPotions"))

# Join to the dimension to get the difficulty name
agg_potions_df = agg_potions_df.join(dimDifficulty_df, on="DifficultyID", how="left")

# Write as a separate summary table
agg_potions_df.write \
    .format("delta") \
    .mode("overwrite") \
    .option("overwriteSchema", "true") \
    .saveAsTable("Lakehouse_Gold.potionsByDifficulty")

Difficulty Dimension

# Create a Difficulty Dimension (DimDifficulty)
# 1. Read the silver potions.
# 2. Extract unique difficulty values.
# 3. Assign a numeric DifficultyID.

import pyspark.sql.functions as F
from pyspark.sql.window import Window

# Read Silver Potions
silver_potions_df = spark.read.table("Lakehouse_Silver.potions")

# 1) Create a distinct list of difficulties
dimDifficulty_df = (silver_potions_df.select("Difficulty").distinct().filter(F.col("Difficulty").isNotNull()))

# 2) Generate a numeric key (DifficultyID) using row_number
windowSpec = Window.orderBy("Difficulty")
dimDifficulty_df = dimDifficulty_df.withColumn("DifficultyID", F.row_number().over(windowSpec))

# 3) Reorder columns so the ID is first
dimDifficulty_df = dimDifficulty_df.select("DifficultyID", "Difficulty")

# 4) Write to Gold dimension table
dimDifficulty_df.write \
    .format("delta") \
    .mode("overwrite") \
    .option("overwriteSchema", "true") \
    .saveAsTable("Lakehouse_Gold.dimDifficulty")

After the transformation we have dimension, fact, and aggregated tables👇

Semantic Model

The semantic model for reporting is built on the Gold layer. The best result would be a star schema, but unfortunately that was not fully implemented.

Aaand the report with the aggregated data and the transformed columns ended up looking like this.

Let’s show and tell about our KnowItAll chat

Let’s be honest—Harry and Ron wouldn’t have made it through their years at Hogwarts without Hermione. Her knowledge of spells and magical theory saved the day more times than we can count. But let’s not forget, it’s not just about knowing the spell—it’s about saying it correctly too.

Here comes the KnowItAll chat available to all students at Hogwarts. Now everyone can have their own Hermione at hand in desperate, and not so desperate times.

We have utilized the Azure OpenAI endpoint to deliver the user’s message to the “magificial” Hermione, and we send the response on to the Azure AI Text-to-Speech model. The speech model provides the voice returned to the canvas app, in a British, lady-like Hermione voice. This is explained in more detail here.
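
The exact wiring runs through our flows and the canvas app, but conceptually the chain is: user message → Azure OpenAI → text reply → Azure Speech text-to-speech → audio. Below is a minimal JavaScript sketch of that chain; the resource names, keys, API version, system prompt, and the en-GB-SoniaNeural voice are placeholders and assumptions rather than our actual configuration.

// Sketch only: endpoint, deployment, region, keys and voice are placeholders.
var AOAI_ENDPOINT = "https://<your-resource>.openai.azure.com";
var AOAI_DEPLOYMENT = "<your-gpt-deployment>";
var AOAI_KEY = "<azure-openai-key>";
var SPEECH_REGION = "<speech-region>";
var SPEECH_KEY = "<speech-key>";

// 1) Ask "Hermione" (Azure OpenAI chat completions) for an answer.
async function askHermione(userMessage) {
  var url = AOAI_ENDPOINT + "/openai/deployments/" + AOAI_DEPLOYMENT + "/chat/completions?api-version=2024-02-01";
  var response = await fetch(url, {
    method: "POST",
    headers: { "api-key": AOAI_KEY, "Content-Type": "application/json" },
    body: JSON.stringify({
      messages: [
        { role: "system", content: "You are Hermione Granger, a helpful know-it-all tutor at Hogwarts." },
        { role: "user", content: userMessage }
      ]
    })
  });
  var data = await response.json();
  return data.choices[0].message.content;
}

// 2) Turn the reply into audio with the Azure Speech text-to-speech REST API.
async function speakLikeHermione(text) {
  var ssml = "<speak version='1.0' xml:lang='en-GB'>" +
             "<voice name='en-GB-SoniaNeural'>" + text + "</voice></speak>";
  var response = await fetch("https://" + SPEECH_REGION + ".tts.speech.microsoft.com/cognitiveservices/v1", {
    method: "POST",
    headers: {
      "Ocp-Apim-Subscription-Key": SPEECH_KEY,
      "Content-Type": "application/ssml+xml",
      "X-Microsoft-OutputFormat": "audio-16khz-128kbitrate-mono-mp3"
    },
    body: ssml
  });
  return await response.blob(); // MP3 audio; in a browser: new Audio(URL.createObjectURL(blob)).play()
}

// Usage: askHermione("How do I pronounce Wingardium Leviosa?").then(speakLikeHermione);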

While a lot of it already works, we reeeeeally wanted to get audio into the Canvas App so that the user is told how to pronounce the spells. However, we must admit defeat: we haven’t made it work. Maybe if the Hackathon was a little bit longer…

PRO CODE category: Azure Speech Recognition in Power Pages

Conducting surveys often involves tedious typing, which can be challenging, especially for students. To make the process easier, we’re leveraging Azure Speech Recognition in Power Pages to transcribe spoken responses directly into text fields. Students can simply speak their answers instead of typing them:

How It Works

Connecting Azure Speech SDK
To enable speech recognition, we connect the Azure Cognitive Services Speech SDK to our Power Pages with a small script.
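
A minimal version of such a script, assuming the browser bundle of the Speech SDK and placeholder key/region values:

<!-- Load the Speech SDK browser bundle (it exposes a global SpeechSDK object) -->
<script src="https://aka.ms/csspeech/jsbrowserpackageraw"></script>
<script>
  // Placeholder values: use your own Speech resource key and region
  var speechConfig = SpeechSDK.SpeechConfig.fromSubscription("<speech-key>", "<speech-region>");
  speechConfig.speechRecognitionLanguage = "en-GB";
</script>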

HTML Setup for Speech Input
We added a microphone button and a text area to capture and display the transcribed response: clicking the microphone button starts recording, and the spoken response is transcribed into the text area.
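
A stripped-down sketch of that interface, reusing the speechConfig from the snippet above (the element IDs and the sample question text are illustrative placeholders, not our production markup):

<p id="questionText"><b>How do you feel about your current school?</b></p>
<textarea id="answerBox" rows="4" cols="60" placeholder="Your answer will appear here..."></textarea><br>
<button id="micButton">🎤 Speak your answer</button>

<script>
  document.getElementById("micButton").onclick = function () {
    // Capture one utterance from the default microphone and transcribe it
    var audioConfig = SpeechSDK.AudioConfig.fromDefaultMicrophoneInput();
    var recognizer = new SpeechSDK.SpeechRecognizer(speechConfig, audioConfig);

    recognizer.recognizeOnceAsync(function (result) {
      // Write the recognized text into the answer box
      document.getElementById("answerBox").value += result.text + " ";
      recognizer.close();
    });
  };
</script>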

Saving Responses and Navigating
Once a student provides their answer, clicking the Save & Next button saves the response and moves to the next question. Here’s how it works:
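
Continuing from the sketch above, a simplified version of the Save & Next handler; the in-memory question list and the saveResponse() helper stand in for our actual survey records and the call that persists the answer:

<button id="saveNextButton">Save &amp; Next</button>

<script>
  // Illustrative only: in the real portal the questions and answers live in Dataverse.
  var questions = [
    "How do you feel about your current school?",
    "Which subjects interest you the most?"
  ];
  var currentQuestion = 0;

  function saveResponse(questionIndex, answerText) {
    // Placeholder: persist the answer here (for example via the Power Pages Web API)
    console.log("Saving answer for question " + questionIndex + ": " + answerText);
  }

  document.getElementById("saveNextButton").onclick = function () {
    var answerBox = document.getElementById("answerBox");
    saveResponse(currentQuestion, answerBox.value);

    // Show the next question and clear the box for the next spoken answer
    currentQuestion++;
    if (currentQuestion < questions.length) {
      document.getElementById("questionText").innerHTML = "<b>" + questions[currentQuestion] + "</b>";
      answerBox.value = "";
    } else {
      document.getElementById("questionText").innerHTML = "<b>Thank you! Your survey is complete.</b>";
    }
  };
</script>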

Benefits

  • Ease of Use: Students can focus on their answers without worrying about typing.
  • Efficiency: Responses are saved automatically, and the survey flows smoothly.
  • Accessibility: Ideal for students with typing difficulties or those who prefer speaking.

By combining Azure Speech Services with Power Pages, we’re simplifying the survey process and improving the overall experience for users. Speech technology makes surveys faster, easier, and more engaging!

Nasty Hacker: The Art of Superdirty Hacks for Super Cool Maps

Some call it hacking; we call it creative problem-solving. With our interactive map powered by Azure Blob Storage, we’ve embraced the Nasty Hacker badge by employing ingenious, slightly “dirty” hacks to achieve unparalleled awesomeness. 🚀

Here’s the story of how we turned Power Apps, Azure Blob Storage, and polling magic into a real-time tracking solution for our villainous missions.


🗺️ The Interactive Map: Where the Magic Happens

Our interactive map isn’t just another pretty visual—it’s a real-time tracker that:

  • Displays the location of targets and hitmen.
  • Shows reward information for completed missions.
  • Updates seamlessly across PCs, tablets, and mobile devices.

But the secret sauce lies in how we use Azure Blob Storage to make it all work.


📂 Hacking Azure Blob Storage

Here’s where things get delightfully dirty:

  1. Location Overwrites 🖊️
    Every time a target or hitman moves, the Power App writes their updated location to a JSON file stored in Azure Blob Storage.
    • The JSON file is overwritten on every update, ensuring that only the latest information is available.
    • By replacing rather than appending, we avoid bloating the file and keep things lightweight.
  2. Polling for Updates 🔄
    The interactive map uses a polling mechanism to continuously fetch the latest JSON file from Blob Storage.
    • The map checks for updates every few seconds to maintain near real-time accuracy.
    • This allows users to track movements live without needing a complex back-end setup.
  3. Dirty Hack Magic 🧙‍♂️
    • Instead of a sophisticated event-driven architecture, we rely on frequent overwrites and polling.
    • Is it the cleanest solution? No.
    • Does it work like a charm? Absolutely.

🔧 How It All Fits Together

Power Apps:

  • Sends location data (targets, hitmen, rewards) to Azure Blob Storage via Power Automate flows.
  • Provides the front-end for mission tracking and user interaction.

Azure Blob Storage:

  • Acts as the central repository for real-time location data.
  • Efficiently handles frequent overwrites without breaking a sweat.

Interactive Map:

  • Fetches the JSON file from Blob Storage at regular intervals.
  • Renders the updated data using Google Maps API, ensuring a visually stunning and responsive experience.
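
A condensed sketch of that polling loop, assuming the Google Maps JavaScript API is already loaded on the page; the storage account, container, file name, and JSON field names below are placeholders, not our real configuration:

<div id="map" style="height: 400px;"></div>

<script>
  // Placeholder URL: the real storage account and container names differ.
  var BLOB_URL = "https://<storageaccount>.blob.core.windows.net/tracking/locations.json";

  var map = new google.maps.Map(document.getElementById("map"), {
    center: { lat: 59.91, lng: 10.75 },
    zoom: 6
  });
  var markers = [];

  async function refreshMap() {
    // The blob is overwritten on every update, so a plain GET always returns the latest state.
    var response = await fetch(BLOB_URL + "?t=" + Date.now()); // cache-buster
    var locations = await response.json();

    // Drop the old markers and redraw targets and hitmen from the fresh JSON
    markers.forEach(function (marker) { marker.setMap(null); });
    markers = locations.map(function (item) {
      return new google.maps.Marker({
        position: { lat: item.lat, lng: item.lng },
        map: map,
        title: item.name + " (" + item.role + ", reward: " + item.reward + ")"
      });
    });
  }

  // Poll every few seconds for near real-time accuracy
  refreshMap();
  setInterval(refreshMap, 5000);
</script>

For the browser to read the blob directly like this, the container needs anonymous read access (or a SAS URL) and CORS enabled on the storage account.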

💡 Why This Hack Works

Our “nasty” solution solves a business need while keeping development quick and lightweight:

  • Real-Time Updates: Track movements and rewards without complex integrations.
  • Scalability: Azure Blob Storage handles frequent writes and reads effortlessly.

⚠️ Lessons from the Dark Side

While we’re proud of our hack, it’s not without its caveats:

  • Polling is Resource-Intensive: Fetching the blob every few seconds costs more requests and bandwidth than an event-driven push would.
  • Overwrite Risks: A race condition could occur if multiple users update the same JSON file simultaneously.

These risks remind us that even the best hacks require thoughtful implementation.


🏆 Why Nasty Hacks Rock

Sometimes, the dirtiest hacks lead to the most magical solutions. By combining Azure Blob Storage, Power Apps, and a dash of creativity, we’ve created a real-time interactive map that delivers on both functionality and flair.

“Hacks might be nasty, but the results are downright awesome.” 🧙‍♀️✨