Automating the Magic: PowerPotters’ Shell-Powered ESP32 Firmware Deployment

Greetings, fellow tech wizards and hackathon enthusiasts! ✨

At Team PowerPotters, we understand the power of automation—especially when it comes to managing hardware like the ESP32 microcontroller in our potion-making IoT system. To streamline firmware updates and ensure seamless operation, we’ve created a GitHub Actions workflow that automates the entire process of flashing firmware to the ESP32.

This submission demonstrates how the Power of the Shell has enabled us to simplify complex processes, saving time and reducing errors while showcasing our technical wizardry.


🪄 The ESP32 Firmware Deployment Workflow

Our GitHub Actions workflow, named “Deploy ESP32 Firmware,” automates the process of flashing firmware to an ESP32 microcontroller whenever code changes are pushed to the main branch. Here’s how it works:

1. Triggering the Workflow

The workflow kicks off automatically when a commit is pushed to the main branch of our repository. This ensures that the firmware on the ESP32 is always up to date with the latest code.

2. Code Checkout

Using the actions/checkout@v3 action, the workflow pulls the latest code from the repository and prepares it for deployment.

3. Installing Dependencies

The workflow installs the esptool Python package, a critical tool for communicating with the ESP32 and flashing firmware. This step ensures that the deployment environment is ready for the flashing process.

4. Flashing the ESP32

The magic happens here! The final step of the workflow runs an esptool shell command to upload the firmware to the ESP32. Here is the complete workflow:

name: Deploy ESP32 Firmware

on:
  push:
    branches:
      - main

jobs:
  deploy:
    runs-on: self-hosted
    steps:
      - name: Checkout code
        uses: actions/checkout@v3

      - name: Install Python dependencies
        run: python -m pip install esptool

      - name: Flash ESP32
        run: |
          python -m esptool --chip esp32 --port COM3 --baud 115200 write_flash -z 0x1000 .pio/build/esp32dev/firmware.bin
  • Chip Selection: Specifies the ESP32 chip for flashing.
  • Port Configuration: Uses the COM3 port to communicate with the ESP32.
  • Baud Rate: Sets the communication speed to 115200 for efficient data transfer.
  • Firmware Location: Specifies the path to the firmware binary to be flashed onto the device.

The process ensures the firmware is deployed quickly, accurately, and without manual intervention.


Why This Workflow is a Game-Changer

  1. Automation: By automating the firmware deployment, we’ve eliminated manual steps, reducing the risk of errors and saving time for potion production magic.
  2. Reliability: The workflow runs on a self-hosted runner, ensuring direct access to the ESP32 hardware and a stable deployment environment.
  3. Efficiency: The esptool command streamlines the flashing process, enabling us to quickly update the firmware as new features or fixes are developed.
  4. Scalability: This workflow can be adapted for other IoT devices or expanded to handle multiple ESP32 units, making it a versatile solution for hardware automation.

🧙‍♂️ Why We Deserve the Power of the Shell Badge

The Power of the Shell badge celebrates the creative use of shell scripting to automate technical processes. Here’s how our workflow meets the criteria:

  • Command-Line Expertise: We leveraged the power of esptool and shell scripting to automate the firmware flashing process, showcasing our technical proficiency.
  • Automation Innovation: The integration with GitHub Actions ensures that every code push triggers an immediate, accurate firmware update.
  • Hardware Integration: By running on a self-hosted runner with direct hardware access, our solution bridges the gap between software and IoT hardware seamlessly.

🔮 Empowering Potion Production with Automation

From potion-making to IoT innovation, our ESP32 firmware deployment workflow ensures that our hardware is always ready for the next magical task. We humbly submit our case for the Power of the Shell badge, showcasing the efficiency and power of shell scripting in modern hackathon solutions.

Follow our magical journey as we continue to innovate at ACDC 2025: acdc.blog/category/cepheo25.

#ACDC2025 #PowerOfTheShell #ESP32 #AutomationMagic #PowerPotters

The benefits of mixing magic with ancient magic

Our OwlExpress solution needs to get the list of possible future wizards from several different data sources. We merge these data into a Power BI report so that segmentation can be done easily and more intuitively using slicers.

This report is embedded in a model-driven app (MDA) using a PCF control created by House Elves Limited, which saves the segment chosen by the user via the Export Segment command.

This triggers a Power Automate flow that creates the applications for all future students in the chosen segment.

To integrate the Power BI report, the following APIs are consumed:

The PCF component has been developed using Test-Driven Development and Storybook, which lets us fail fast and often and reach success faster.

The component is a React virtual component, which greatly simplifies the separation of concerns into different components.

The code has been structured using the best practices for PCF components. We are using a Solution Project to simplify the build of the Solution that contains the component.

To standardize the build and deployment process, and to ensure that the PCF version is always kept in sync with the version of the containing Solution, we have written a couple of PowerShell scripts. This also helps with the Azure DevOps pipelines, because they can be simplified to execute the exact same scripts the developer runs on their machine.
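Roughly, such a version-sync script boils down to something like the sketch below (the file paths assume the standard ControlManifest.Input.xml and Solution.xml layout; the exact paths and version scheme will differ per project):

# Sync-Version.ps1 – sketch of keeping the PCF control version and the
# containing Solution version aligned (paths below are assumptions).
param(
    [Parameter(Mandatory = $true)]
    [string]$Version   # e.g. "1.2.0"
)

# Bump the version attribute in the PCF control manifest
$manifestPath = "./PCF/ControlManifest.Input.xml"
[xml]$manifest = Get-Content $manifestPath
$manifest.manifest.control.version = $Version
$manifest.Save((Resolve-Path $manifestPath).Path)

# Bump the Solution version (Solution.xml uses a four-part version)
$solutionPath = "./Solution/src/Other/Solution.xml"
[xml]$solution = Get-Content $solutionPath
$solution.ImportExportXml.SolutionManifest.Version = "$Version.0"
$solution.Save((Resolve-Path $solutionPath).Path)

Write-Host "Control and Solution are both at version $Version"

The pipeline can then call the same script with the release version as its only parameter, so a local build and a pipeline build behave identically.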

CI/CD like a boss

What’s better than automatic deployment pipelines? Not much, really. When we merge to main, our React app automatically deploys to Azure Static Web Apps. Pretty neat! All secrets are of course stored in GitHub Secrets.

Our GitHub workflow:

name: Azure Static Web Apps CI/CD

on:
  push:
    branches:
      - main
  pull_request:
    types: [opened, synchronize, reopened, closed]
    branches:
      - main

jobs:
  build_and_deploy_job:
    if: github.event_name == 'push' || (github.event_name == 'pull_request' && github.event.action != 'closed')
    runs-on: ubuntu-latest
    name: Build and Deploy Job
    steps:
      - uses: actions/checkout@v3
        with:
          submodules: true
          lfs: false
      - name: Set up API Key
        run: echo "API_KEY=${{ secrets.DATAVERSE_API_KEY }}" >> $GITHUB_ENV
      - name: Build And Deploy
        id: builddeploy
        uses: Azure/static-web-apps-deploy@v1
        with:
          azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN_THANKFUL_HILL_0CC449203 }}
          repo_token: ${{ secrets.GITHUB_TOKEN }} # Used for Github integrations (i.e. PR comments)
          action: "upload"
          ###### Repository/Build Configurations - These values can be configured to match your app requirements. ######
          # For more information regarding Static Web App workflow configurations, please visit: https://aka.ms/swaworkflowconfig
          app_location: "/" # App source code path
          api_location: "" # Api source code path - optional
          output_location: "build" # Built app content directory - optional
          dataverse_api_key: ${{ secrets.DATAVERSE_API_KEY }}
          ###### End of Repository/Build Configurations ######


  close_pull_request_job:
    if: github.event_name == 'pull_request' && github.event.action == 'closed'
    runs-on: ubuntu-latest
    name: Close Pull Request Job
    steps:
      - name: Close Pull Request
        id: closepullrequest
        uses: Azure/static-web-apps-deploy@v1
        with:
          azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN_THANKFUL_HILL_0CC449203 }}
          action: "close"

Simplifying Backend Deployment with Terraform: Seamless Updates and Feature Implementations

In today’s fast-paced software development environment, managing infrastructure and deploying new features seamlessly is critical. As applications grow, so do the complexities of deploying and maintaining them. Fortunately, tools like Terraform provide a powerful solution for managing backend deployments in a consistent and automated way.

By using Terraform to deploy and manage the backend of our solution, we enable seamless updates and feature implementations.

Security is one of the most crucial components of any solution. Azure Key Vault serves as the centralized service for storing sensitive information such as API keys, secrets, and certificates. Using Terraform, we automate the creation and management of the Key Vault, making it easy to maintain and secure our application’s secrets.

Once the Key Vault is in place, the next service to deploy is our web app service, which hosts the main web application of our solution. Using Terraform, we ensure that the latest version of the web application is deployed automatically whenever new changes are committed to the code repository.
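As a rough sketch, a deployment run comes down to a handful of Terraform CLI calls (the variable names below are illustrative placeholders, not our actual configuration):

# Sketch of a non-interactive Terraform run that creates or updates the
# Key Vault and the web app defined in the configuration.
# The -var names are placeholders.
terraform init -input=false
terraform plan -input=false -out=tfplan `
    -var "environment=prod" `
    -var "web_app_name=backend-webapp"
terraform apply -input=false tfplan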

Faruk the Fabricator: Fun with Fabric

Do you remember the Fabricator and his efforts described in the earlier post?

So now the Fabricator has created deployment pipelines to keep his changes in sync across multiple workspaces. No change will go unnoticed, no pipeline will flow alone, no developer will overwrite another. That was the promise the Fabricator gave... and he delivered.

We are using deployment pipelines inside Fabric to deploy Fabric components between workspaces.

PS: this article is part 1 of our Power of the Shell badge claim, covering the Fabric deployment pipelines. We have also started working on CI/CD pipelines for Dataverse solutions and Power Pages; a second article is coming when we finish the remaining ALM work.

Real-time Dataverse data for real-time business overview

What good is improved customer communication through chatbots and forums if the plumbers can’t get notified of relevant cases in real time? Moreover, Mario and Luigi, as CEO and CTO respectively, want real-time data to improve decision support (e.g. plumber allocation) and to analyse PlumbQuest trends.

Dataverse Webhook on Incident reports

To extract real-time data, we created a webhook using the plugin toolbox for Dataverse, which calls our Azure Function whenever a new PlumbQuest is made.

XrmToolBox used to add a webhook to Dataverse for real-time PlumbQuest analysis

To ensure secure access, function-level authentication is applied: the toolbox allows HTTP query parameters, so the function key can be passed safely when calling our Function, which uses a traditional HTTP trigger.
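For illustration, a call to a function-key protected HTTP trigger looks roughly like this; the webhook registration simply passes the key in the code query parameter (the function name, key and body below are placeholders):

# Placeholder example of calling an HTTP-triggered Function that uses
# function-level authentication: the key travels in the "code" query parameter.
$functionUrl = "https://plumbquest-func.azurewebsites.net/api/OnPlumbQuestCreated?code=<function-key>"
Invoke-RestMethod -Uri $functionUrl -Method Post -ContentType "application/json" `
    -Body (@{ title = "Leaky pipe in sector 7"; severity = 2 } | ConvertTo-Json)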

However, here is the hacky part. The webhook payload is too large, which corrupts the traditional JSON payload, since each PlumbQuest has a highly dynamic length and content. We therefore had to do some custom string manipulation to extract the values of most business value, de-corrupt the JSON, and prepare it for analysis. Almost a complete ETL pipeline (*cough*)!
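The sketch below (in PowerShell for brevity; the field names are invented) shows the kind of extraction we mean: pull out the fields of interest with regular expressions and rebuild a clean object from them:

# Illustrative only: recover a few business-critical fields from a partially
# corrupted JSON payload with regular expressions, then rebuild a clean object.
param([string]$RawPayload)

$title    = [regex]::Match($RawPayload, '"title"\s*:\s*"([^"]*)"').Groups[1].Value
$severity = [regex]::Match($RawPayload, '"severity"\s*:\s*(\d+)').Groups[1].Value

[pscustomobject]@{
    Title    = $title
    Severity = [int]$severity
} | ConvertTo-Json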

To access this real-time data in an analytics environment, Fabric is the way to go (riding Microsoft’s huge hype wave). We created a Custom App source for an Event Stream in Fabric with an Event Hub output binding, which can then map to many different destinations, including a Lakehouse for historisation and trend analysis, as well as Data Activator reflexes for reactive actions in real time.

With Data Activator’s Reflexes directly on the stream, one can e.g. trigger additional flows for highly acute PlumbQuest from members in distress, or highlight plumbers who did not provide proper service according to the PlumbQuest review.

Our Fabric Event Stream with the Custom app as Source and the Lakehouse for historisation and down-the-line processing and analysis

In addition, we set up a Dataverse Shortcut (Link) to Fabric, allowing for direct access to Dataverse without ETL or ingestion, providing ease of access and down-the-line deeper analysis on key business metrics, trends and community engagement.

Our PlumbQuests in Fabric Lakehouse using a Dataverse Connection for e.g. a more complete 365 customer view using Fabric items

Reproducible deployment

Although we are nasty hackers, we are reproducible hackers. As these were the only Azure resources used (directly), we deployed them using Bicep and the Azure CLI. Sensitive variables are marked as secure and are not included in the scripts, but parameterised.

The main Bicep deployment definitions for our Azure Function app and related resources; the resource group naturally had a separate Bicep definition.
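The deployment itself is a couple of Azure CLI calls along these lines (resource group, file and parameter names below are placeholders):

# Placeholder sketch: deploy the resource-group Bicep at subscription scope,
# then the main template at resource-group scope, passing sensitive values
# as parameters instead of hard-coding them in the scripts.
az deployment sub create `
    --location westeurope `
    --template-file ./infra/resource-group.bicep

az deployment group create `
    --resource-group rg-plumbquest `
    --template-file ./infra/main.bicep `
    --parameters eventHubConnection=$env:EVENTHUB_CONNECTION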

So if you want to do it hacky, at least make it traceable.

Automating potentially uncomfortable situations and (green) shells

Automation galore! A lot of processes are jumpstarted when a banana is detected, and the poor first banana responders need all the help they can get. Notifying next of kin can be a tough task, and for Road Authority personnel with limited people skills, automation is key.

We have set up a workflow that works like this: when a banana is detected, we receive an HTTP request. We then register the incident in Dataverse while also looking up next-of-kin data, and send that to Outlook for automatic notifications.

Our solution has a lot of bits and pieces, and having control over how these are deployed is essential. All our Azure resources are handled nicely using ARM template deployment.

Code control

We have implemented pipelines for Azure infrastructure and for publishing the Azure Functions.

YAML and PowerShell scripts ensure CI/CD.

Infrastructure as code: 

We have created a pipeline for our Bicep code to handle the infrastructure in Azure. 

Currently the infrastructure consists of: 

  • Storage account
  • Web App
  • Function App
  • Application Insights

Pipeline to build and deploy the Azure Functions: 

We have created a pipeline in Azure DevOps to build and publish the functions we are using in the canvas app. The build is triggered on a code push to the “main” branch.
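A rough sketch of what such a build-and-publish stage can look like (assuming a .NET Functions project; the paths, resource group and app name below are placeholders):

# Placeholder sketch of a pipeline step that builds the Functions project,
# zips the output and pushes it to the Function App with the Azure CLI.
dotnet publish ./src/Functions -c Release -o ./publish
Compress-Archive -Path ./publish/* -DestinationPath ./functionapp.zip -Force

az functionapp deployment source config-zip `
    --resource-group rg-canvas-backend `
    --name func-canvas-api `
    --src ./functionapp.zip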

Empowering Seamless Updates: Unveiling Our Project’s Azure Pipelines Integration!

We claim badges:

  • Power of the Shell – for building shell scripts for the Azure DevOps pipelines covering Azure Function deployment, Dataverse solutions, Power Pages and the PCF control

We’re thrilled to share a major stride in our project’s evolution! Building upon our commitment to seamless communication, we’ve successfully implemented a CI/CD process for deploying updates to Azure Functions through Azure Pipelines.

This enhancement ensures that our real-time communication capabilities remain at the forefront of efficiency and reliability. With the newly established CI/CD pipeline, deploying updates to our Azure Functions is now a smooth and automated process. This means quicker turnaround times for improvements and features, allowing us to adapt swiftly to the dynamic needs of our users.

The Azure Pipelines integration not only amplifies the agility of our development process but also guarantees a consistent and reliable deployment of updates. As we continue to innovate and refine our project, this CI/CD implementation represents a pivotal step towards maintaining a cutting-edge and user-friendly experience.

We also covered the CI/CD for the Dataverse solutions, the portal and the PCF control:
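As a rough illustration of what such pipelines execute under the hood, the Power Platform CLI steps can look like this (environment URL, solution name, portal path and credentials below are placeholders):

# Placeholder sketch: authenticate with a service principal, import the built
# solution (which contains the PCF control), and upload the Power Pages site.
pac auth create --url $env:DATAVERSE_URL `
    --applicationId $env:CLIENT_ID `
    --clientSecret $env:CLIENT_SECRET `
    --tenant $env:TENANT_ID

pac solution import --path ./out/CommunicationSolution.zip --async

pac paportal upload --path ./portal/communication-site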