CI/CD OCD

Badge claim: Power of the Shell

With a dashboard (and QR code gathering) web app running as an Azure Static Web App, linked to a separate Azure Functions app for the backend that handles all communication with Dataverse, we needed a proper setup for efficient development, collaboration and deployment.

With both repos hosted in Azure DevOps, we used Azure Pipelines for CI/CD.

The backend runs .NET 10 in Linux, using the DotNetCoreCLI@2 and AzureFunctionApp@2 tasks for build and deployment.

trigger:
  branches:
    include:
      - main

pr:
  branches:
    include:
      - main

variables:
  buildConfiguration: "Release"
  azureSubscription: "Microsoft Azure Sponsorship(30b24b6e-ef03-42c4-bba5-20a33afd68e4)"
  functionAppName: "itera-scope-creepers-api"

stages:
  - stage: BuildAndDeploy
    displayName: "Build & Deploy"
    jobs:
      - job: Build
        displayName: "Build API Function App"
        pool:
          vmImage: "ubuntu-latest"

        steps:
          - checkout: self

          - task: UseDotNet@2
            displayName: "Use .NET SDK 10.x"
            inputs:
              packageType: "sdk"
              version: "10.x"

          - task: DotNetCoreCLI@2
            displayName: "Restore NuGet packages"
            inputs:
              command: "restore"
              projects: "API.csproj"

          - task: DotNetCoreCLI@2
            displayName: "Build"
            inputs:
              command: "build"
              projects: "API.csproj"
              arguments: "--configuration $(buildConfiguration)"
              publishWebProjects: false

          - task: DotNetCoreCLI@2
            displayName: "Test (all *Tests.csproj projects if present)"
            inputs:
              command: "test"
              projects: "**/*Tests.csproj"
              arguments: "--configuration $(buildConfiguration)"
              publishTestResults: true

          - task: DotNetCoreCLI@2
            displayName: "Publish function app"
            inputs:
              command: "publish"
              projects: "API.csproj"
              arguments: "--configuration $(buildConfiguration) --output $(Build.ArtifactStagingDirectory)/publish"
              publishWebProjects: false
              zipAfterPublish: true

          - task: AzureFunctionApp@2
            displayName: "Deploy Azure Function App"
            inputs:
              azureSubscription: "$(azureSubscription)"
              appType: "functionAppLinux"
              appName: "$(functionAppName)"
              package: "$(Pipeline.Workspace)/**/*.zip"

The frontend is a client-side rendered React app, using Vite as the bundler and pnpm as the package manager for increased security. It is both built and deployed using the AzureStaticWebApp@0 task.

trigger:
  branches:
    include:
      - main

pool:
  vmImage: ubuntu-latest

variables:
  NODE_VERSION: '22.22.0'

steps:
  - task: NodeTool@0
    displayName: 'Use Node.js $(NODE_VERSION)'
    inputs:
      versionSpec: '$(NODE_VERSION)'

  - script: |
      corepack enable
      pnpm config set node-linker hoisted
      pnpm install --frozen-lockfile
    displayName: 'Install dependencies with pnpm'

  - script: pnpm build
    displayName: 'Build app'

  - task: AzureStaticWebApp@0
    displayName: 'Deploy to Azure Static Web App (itera-scope-creepers)'
    inputs:
      azure_static_web_apps_api_token: '$(AZURE_STATIC_WEB_APPS_API_TOKEN)'
      app_location: '/'
      api_location: ''
      output_location: 'dist'

For local development, the SWA CLI is used to emulate the linked backend.

CI/CD FTW! 🤓

Power of the Shell: Scripting a Complete Minecraft Infrastructure

How we used Azure DevOps, Docker Compose, and Bash to create a fully automated deployment pipeline for a cloud-connected Minecraft server.

The Challenge

We needed to deploy and manage a multi-service Minecraft ecosystem that includes:

  • A Fabric Minecraft server with custom mods
  • A Node.js backend API
  • An AI-powered Mineflayer bot
  • An Azure Service Bus listener for cloud integration

All of this running on a Raspberry Pi, with zero manual intervention after code commits.

Infrastructure as Code

Docker Compose: The Foundation

Our entire infrastructure is defined in a single docker-compose.yml file. No clicking through portals, no manual container creation—just declarative YAML:
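As an illustration, a minimal compose file for a stack like this might look as follows (the image, service names, ports, and volumes here are assumptions, not the team's actual configuration):

```yaml
# Illustrative sketch only; names, images, and ports are placeholders.
services:
  minecraft:
    image: itzg/minecraft-server
    environment:
      TYPE: "FABRIC"
      EULA: "TRUE"
    ports:
      - "25565:25565"
    volumes:
      - mc-data:/data
  backend:
    build: ./backend
    env_file: .env
    depends_on:
      - minecraft
  bot:
    build: ./bot
    depends_on:
      - minecraft
volumes:
  mc-data:
```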

Every service, network, volume, and environment variable is version-controlled. Need to spin up the entire stack? One command:


docker-compose up -d

Environment-Driven Configuration

Secrets and configuration are externalized through environment variables and .env files:
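For example (variable and service names here are illustrative only, not the actual configuration):

```yaml
# A git-ignored .env file might contain:
#   RCON_PASSWORD=change-me
#   SERVICEBUS_CONNECTION=Endpoint=sb://...
services:
  servicebus-listener:
    build: ./listener
    env_file: .env
    environment:
      SERVICEBUS_CONNECTION: ${SERVICEBUS_CONNECTION}
```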

This separation means the same compose file works across development, staging, and production—only the environment changes.

CI/CD Pipeline: Azure DevOps

Our Azure DevOps pipeline automates the entire deployment lifecycle in three stages:

Key automation features in our Bash scripts:

Automatic Backups with Rotation:
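A sketch of what such a backup step could look like in Bash; the paths and retention count are assumptions, not the team's actual script:

```shell
#!/usr/bin/env bash
# Hypothetical sketch: archive the data directory, then keep only the
# three newest backups. Paths and KEEP count are placeholders.
set -euo pipefail

BACKUP_DIR="${BACKUP_DIR:-./backups}"
DATA_DIR="${DATA_DIR:-./data}"
KEEP=3

mkdir -p "$BACKUP_DIR" "$DATA_DIR"

# Create a timestamped archive of the data directory
stamp="$(date +%Y%m%d-%H%M%S)"
tar -czf "$BACKUP_DIR/backup-$stamp.tar.gz" -C "$DATA_DIR" .

# Rotation: list newest first, delete everything after the first $KEEP
ls -1t "$BACKUP_DIR"/backup-*.tar.gz | tail -n +$((KEEP + 1)) | xargs -r rm -f
```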




Health Checks with Retry Logic:
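A generic retry loop in Bash might look like this; the health-check command itself is a placeholder, not the team's actual check:

```shell
#!/usr/bin/env bash
# Hypothetical sketch: retry a command until it succeeds or attempts run out.
retry() {
  local attempts=$1 delay=$2
  shift 2
  local i
  for ((i = 1; i <= attempts; i++)); do
    if "$@"; then
      echo "healthy after $i attempt(s)"
      return 0
    fi
    sleep "$delay"
  done
  echo "still unhealthy after $attempts attempts" >&2
  return 1
}

# In the real pipeline the command could be something like:
#   retry 10 5 curl -fsS http://localhost:3000/health
retry 3 0 true
```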






Stage 3: Verification

Post-deployment verification ensures everything is running correctly:
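One way to sketch such a verification step in Bash; the service names and the source of the running list are assumptions, not the actual script:

```shell
#!/usr/bin/env bash
# Hypothetical sketch: compare expected services against a "running" list.
# In the real pipeline the list could come from:
#   running=$(docker compose ps --services --filter status=running)

verify_services() {
  local running="$1"
  shift
  local missing=0 svc
  for svc in "$@"; do
    if ! grep -qx "$svc" <<<"$running"; then
      echo "NOT RUNNING: $svc" >&2
      missing=1
    fi
  done
  return "$missing"
}

# Illustrative service names; the actual compose service names may differ.
running=$'minecraft\nbackend\nbot\nservicebus-listener'
verify_services "$running" minecraft backend bot servicebus-listener && echo "All services up"
```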

The Complete Picture

Benefits of This Approach

Aspect          | Traditional              | Our Scripted Approach
Deployment      | Manual SSH, copy files   | Automatic on git push
Rollback        | Hope you have a backup   | Last 3 backups auto-retained
Configuration   | Scattered across servers | Version-controlled in git
Reproducibility | “Works on my machine”    | Identical every time
Audit Trail     | Who changed what?        | Full git history

Key Takeaways

  1. Everything is Code: Docker Compose defines infrastructure, YAML defines pipelines, Bash scripts handle orchestration.
  2. Self-Hosted Agents: Running an Azure DevOps agent on the Raspberry Pi eliminates SSH complexity and firewall issues.
  3. Defensive Scripting: Every script handles failures gracefully with continueOnError, retry loops, and fallback options.
  4. Preserve What Matters: The rsync exclusions protect world data, databases, and secrets during deployments.
  5. Automated Verification: Don’t just deploy—verify. Health checks and resource monitoring catch issues before users do.

With the power of the shell, our entire Minecraft infrastructure deploys itself. Push to main, grab a coffee, and come back to a running server.

Automating Solutions ALM with Github Actions and AI

Our developers should spend as little time as possible on repetitive tasks: deployment, release notes, updating technical documentation, tracking who did what and when, and the list goes on. It's important work, but it keeps eating our valuable time…

Our answer is a solution designed to streamline Dataverse solution management while enforcing ALM best practices. It combines Power Platform Pipelines, GitHub Actions, AI-powered documentation, and Teams notifications to deliver fully automated, auditable, and governed deployments.

Deployment Stage record in Power Platform Pipelines

Automated Solution Management

  • Power Platform Pipelines – Developer triggers the deployment for the respective solution.
  • Cloud flows – Triggered on pre-deployment step & integrates with Github & Microsoft Teams.
  • GitHub Actions export, unpack, commit & create pull requests for the solution.
  • PR outcome triggers a cloud flow in order to notify users and continue/stop the deployment.
Triggered on pre-deployment step
#StairwayToHeaven

Governance & Deployment Control

  • GitHub PR review acts as a pre-deployment approval step, giving teams control over which solutions can reach the target environments.
  • Deployment outcomes are sent back to Dataverse and Teams, providing real-time feedback.
  • Branch strategy (dev for development, main for production) keeps production stable and auditable.
Triggered from Github Action (pr-feedback-to-dataverse.yml)
Deployment Stage Run is updated with link to Github PR for more details

AI-Powered Documentation

  • GitHub Copilot analyzes every PR and generates technical documentation automatically.
  • Changelogs, impact analysis, and test recommendations are included, making knowledge transparent and up-to-date.
  • Documentation is versioned and stored alongside solutions for easy reference.


Benefits

  • Faster Deployments: Automation reduces manual steps and errors.
  • Full Governance: PR workflow enforces approvals and branch protection.
  • Better Transparency: Teams see real-time deployment status and AI-generated documentation.
  • Audit-Ready: Every change, approval, and deployment is logged and version-controlled.

Power of the shell: Automating Minecraft Plugin Deployment with Azure DevOps

Deploying Minecraft plugins for PaperMC servers manually gets tedious fast. You need to build your code, copy the .jar file into the server's plugins folder, and finally restart the server by killing the Java process and starting it back up.

This creates pain points for logged-in users, who get disconnected. Because let's be real: if you're doing this manually, you don't want to send a server message saying you are restarting in X minutes and wait before uploading the new version of the plugin.

You probably already use some sort of version control for managing the plugin code.

So in this post I will talk about setting up a build pipeline using Azure DevOps to build and store the .jar artifact.

First, some details about the plugin and server host:
The plugin is written in Java and uses Gradle as its build system. I'm using Git for version control, pushing the code to an Azure DevOps repo.
The server is hosted on a Windows virtual machine in Azure, protected by a virtual network.

That is why this was the architecture I landed on:

  • Azure DevOps Pipeline – Builds the plugin JAR on every push to master
  • PowerShell Script – Runs on your Windows server, polls for new builds
  • mcrcon – Sends RCON commands to the PaperMC server announcing shutdowns to players.

Java project setup

First I created an Azure Pipelines file to run the Gradle build command. It also needs to be set up to build with JDK 21, trigger when a new version is pushed to the master branch, and upload the artifact.

# azure-pipelines.yml
trigger:
  - master

pool:
  vmImage: 'ubuntu-latest'

variables:
  GRADLE_USER_HOME: $(Pipeline.Workspace)/.gradle

steps:
  - task: JavaToolInstaller@0
    inputs:
      versionSpec: '21'
      jdkArchitectureOption: 'x64'
      jdkSourceOption: 'PreInstalled'
    displayName: 'Set up JDK 21'

  - task: Cache@2
    inputs:
      key: 'gradle | "$(Agent.OS)" | **/build.gradle*'
      restoreKeys: |
        gradle | "$(Agent.OS)"
      path: $(GRADLE_USER_HOME)
    displayName: 'Cache Gradle packages'

  - task: Gradle@3
    inputs:
      gradleWrapperFile: 'gradlew'
      tasks: 'clean build'
      publishJUnitResults: false
    displayName: 'Build with Gradle'

  - task: CopyFiles@2
    inputs:
      sourceFolder: '$(Build.SourcesDirectory)/build/libs'
      contents: '*.jar'
      targetFolder: '$(Build.ArtifactStagingDirectory)'
    displayName: 'Copy JAR to staging'

  - task: PublishBuildArtifacts@1
    inputs:
      pathToPublish: '$(Build.ArtifactStagingDirectory)'
      artifactName: 'plugin-jar'
      publishLocation: 'Container'
    displayName: 'Publish JAR artifact'

Configuring the Windows server

I needed to set a couple of system environment variables: AZURE_DEVOPS_PAT, which stores a personal access token for Azure DevOps so the script can download the artifact, and RCON_PASSWORD, which is used to connect to the PaperMC server.

Which reminds me that these properties need to be set in the server.properties file:

enable-rcon=true
rcon.port=25575
rcon.password=your-secure-password

Then I created the deployment script. It checks Azure DevOps for the latest successful build; if a new build is found, it announces a shutdown 120 seconds in advance, stops the server gracefully, downloads the plugin artifact, extracts it into the plugins folder, and restarts the server.


# poll-artifacts.ps1
$org = "pt"
$project = "ACDC%202026"
$pipelineId = "000"
$pat = $env:AZURE_DEVOPS_PAT
$lastBuildFile = ".\lastBuild.txt"

# Server configuration
$serverDir = "C:\Minecraft\Server"
$pluginsDir = "$serverDir\plugins"
$rconHost = "localhost"
$rconPort = 25575
$rconPassword = $env:RCON_PASSWORD
$shutdownDelaySeconds = 120

# Function to send RCON command to PaperMC server
function Send-RconCommand {
    param([string]$Command)
    
    # Using mcrcon or similar tool (install separately)
    # Alternative: Use a PowerShell RCON module
    & mcrcon -H $rconHost -P $rconPort -p $rconPassword $Command
}

function Announce-Shutdown {
    param([int]$Seconds)
    
    Send-RconCommand "say §c[SERVER] Restarting in $Seconds second(s) for plugin update!"
    
    for ($i = $Seconds; $i -gt 0; $i-=5) {
        if ($i -le 15) {
            Send-RconCommand "say §c[SERVER] Restarting in $i seconds!"
        }
        Start-Sleep -Seconds 5
    }
    
    Send-RconCommand "say §c[SERVER] Restarting NOW!"
    Start-Sleep -Seconds 5
}

function Stop-MinecraftServer {
    Send-RconCommand "save-all"
    Start-Sleep -Seconds 3
    Send-RconCommand "stop"
    
    # Wait for server process to exit
    $timeout = 60
    $elapsed = 0
    while ((Get-Process -Name "java" -ErrorAction SilentlyContinue | Where-Object { $_.Path -like "*$serverDir*" }) -and $elapsed -lt $timeout) {
        Start-Sleep -Seconds 2
        $elapsed += 2
    }
    
    # Force kill if still running
    Get-Process -Name "java" -ErrorAction SilentlyContinue | Where-Object { $_.Path -like "*$serverDir*" } | Stop-Process -Force
    Get-Process -Name "cmd" -ErrorAction SilentlyContinue | Where-Object {
        $_.MainWindowTitle -match "paper|minecraft"
    } | Stop-Process -Force
}

function Start-MinecraftServer {
    Set-Location $serverDir
    # cmd.exe needs /c to execute the command passed in ArgumentList
    Start-Process -FilePath "cmd.exe" -ArgumentList "/c java -Xms2048M -Xmx2048M -jar paper.jar nogui" -WorkingDirectory $serverDir
}

# Main polling logic
$headers = @{
    Authorization = "Basic " + [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$pat"))
}

$buildsUrl = "https://dev.azure.com/$org/$project/_apis/build/builds?definitions=$pipelineId&statusFilter=completed&resultFilter=succeeded&`$top=1&api-version=7.0"
$response = Invoke-RestMethod -Uri $buildsUrl -Headers $headers

if ($response.count -gt 0) {
    $latestBuild = $response.value[0]
    $buildId = $latestBuild.id

    $lastProcessed = if (Test-Path $lastBuildFile) { Get-Content $lastBuildFile } else { "0" }

    if ($buildId -ne $lastProcessed) {
        Write-Host "New build detected: $buildId"
        
        # Announce and shutdown
        Announce-Shutdown -Seconds $shutdownDelaySeconds
        Stop-MinecraftServer
        
        # Download new artifact
        $artifactUrl = "https://dev.azure.com/$org/$project/_apis/build/builds/$buildId/artifacts?artifactName=plugin-jar&api-version=7.0"
        $artifact = Invoke-RestMethod -Uri $artifactUrl -Headers $headers
        
        Invoke-WebRequest -Uri $artifact.resource.downloadUrl -Headers $headers -OutFile "plugin.zip"
        Expand-Archive -Path "plugin.zip" -DestinationPath $pluginsDir -Force
        Remove-Item "plugin.zip"
        
        # Start server
        Start-MinecraftServer
        
        $buildId | Out-File $lastBuildFile
        Write-Host "Update complete, server restarted"
    }
}

Then it was just a matter of scheduling the script in Windows Task Scheduler to run every 15 minutes. And now we have a fully automated deployment pipeline using Azure DevOps, PowerShell and ALM.

Output of the PaperMC server console. The “[SERVER] Restarting…” messages are visible to the players logged in to the server.

Things are coming together

It’s been quiet from us for a while now, but that doesn’t mean we haven’t been working. Behind the scenes, the solution is slowly starting to take shape.

We’ve set up an agent for automatically generating reports based on soil types, rocks, and minerals.
To store the report in SharePoint, we’ve created an Azure Function (provisioned using Bicep templates) that transforms the agent’s output into a format that the somewhat picky SharePoint connector in Power Automate will accept.

Bicep template for Azure function deployment

Azure function, PowerShell flavoured

In-agent flow

Did anyone say “I want 50 agents in 2 min”?

Sometimes I think to myself: “Can't code just understand what I want to make happen?” By that I mean I want a folder that reflects the desired state of my entire infrastructure: one folder with sub folders for the services I want hosted in Azure, and another folder containing my JSON definitions of the agents, AI models, and Foundry workflows.

Then I realized I know Terraform, PowerShell, and how to use a computer. So I made exactly that.

Let's say I now want a Container App running some specific API I made in Python. The only thing I need to do is create a new sub folder with the requirements file, the Dockerfile, and the Python code. The rest is handled by Terraform (including the Docker push). And if you need custom configuration on your container app, there is an easy-to-manage YAML file for that.

If I want to deploy an AI model I don't already have, I just add a JSON file in the models folder, and Terraform handles the rest.
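A minimal Terraform sketch of how such a JSON-driven model folder could work; the resource type, JSON keys, and surrounding resources are assumptions, not the author's actual code:

```hcl
# Hypothetical sketch: one model deployment per JSON file in ./models.
locals {
  model_files = fileset("${path.module}/models", "*.json")
  models = {
    for f in local.model_files :
    trimsuffix(f, ".json") => jsondecode(file("${path.module}/models/${f}"))
  }
}

resource "azurerm_cognitive_deployment" "model" {
  for_each             = local.models
  name                 = each.value.name
  cognitive_account_id = azurerm_cognitive_account.foundry.id

  model {
    format  = "OpenAI"
    name    = each.value.model
    version = each.value.version
  }

  sku {
    name = "Standard"
  }
}
```

Adding a new file to the models folder then makes the next terraform apply pick it up automatically.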

The same goes for the agent definitions (including connections to Fabric agents).

Now the craziest part: if I want to build a Microsoft Foundry workflow and host an API in a Container App, I have a script that automatically figures out how to configure the container app to host the new workflow as an API.

The result is that just by creating the YAML definition of a Microsoft Foundry workflow, it can be served to me as an API simply by running terraform apply --auto-approve.

Scalable AI madness just through config files.

Automating the Magic: PowerPotters’ Shell-Powered ESP32 Firmware Deployment

Greetings, fellow tech wizards and hackathon enthusiasts! ✨

At Team PowerPotters, we understand the power of automation—especially when it comes to managing hardware like the ESP32 microcontroller in our potion-making IoT system. To streamline firmware updates and ensure seamless operation, we’ve created a GitHub Actions workflow that automates the entire process of flashing firmware to the ESP32.

This submission demonstrates how the Power of the Shell has enabled us to simplify complex processes, saving time and reducing errors while showcasing our technical wizardry.


🪄 The ESP32 Firmware Deployment Workflow

Our GitHub Actions workflow, named “Deploy ESP32 Firmware,” automates the process of flashing firmware to an ESP32 microcontroller whenever code changes are pushed to the main branch. Here’s how it works:

1. Triggering the Workflow

The workflow kicks off automatically when a commit is pushed to the main branch of our repository. This ensures that the firmware on the ESP32 is always up to date with the latest code.

2. Code Checkout

Using the actions/checkout@v3 action, the workflow pulls the latest code from the repository and prepares it for deployment.

3. Installing Dependencies

The workflow installs the esptool Python package, a critical tool for communicating with the ESP32 and flashing firmware. This step ensures that the deployment environment is ready for the flashing process.

4. Flashing the ESP32

The magic happens here! The workflow runs the following shell command to upload the firmware to the ESP32:

on:
  push:
    branches:
      - main

jobs:
  deploy:
    runs-on: self-hosted
    steps:
      - name: Checkout code
        uses: actions/checkout@v3

      - name: Install Python dependencies
        run: python -m pip install esptool

      - name: Flash ESP32
        run: |
          python -m esptool --chip esp32 --port COM3 --baud 115200 write_flash -z 0x1000 .pio/build/esp32dev/firmware.bin

  • Chip Selection: Specifies the ESP32 chip for flashing.
  • Port Configuration: Uses the COM3 port to communicate with the ESP32.
  • Baud Rate: Sets the communication speed to 115200 for efficient data transfer.
  • Firmware Location: Specifies the path to the firmware binary to be flashed onto the device.

The process ensures the firmware is deployed quickly, accurately, and without manual intervention.


Why This Workflow is a Game-Changer

  1. Automation: By automating the firmware deployment, we’ve eliminated manual steps, reducing the risk of errors and saving time for potion production magic.
  2. Reliability: The workflow runs on a self-hosted runner, ensuring direct access to the ESP32 hardware and a stable deployment environment.
  3. Efficiency: The esptool command streamlines the flashing process, enabling us to quickly update the firmware as new features or fixes are developed.
  4. Scalability: This workflow can be adapted for other IoT devices or expanded to handle multiple ESP32 units, making it a versatile solution for hardware automation.

🧙‍♂️ Why We Deserve the Power of the Shell Badge

The Power of the Shell badge celebrates the creative use of shell scripting to automate technical processes. Here’s how our workflow meets the criteria:

  • Command-Line Expertise: We leveraged the power of esptool and shell scripting to automate the firmware flashing process, showcasing our technical proficiency.
  • Automation Innovation: The integration with GitHub Actions ensures that every code push triggers an immediate, accurate firmware update.
  • Hardware Integration: By running on a self-hosted runner with direct hardware access, our solution bridges the gap between software and IoT hardware seamlessly.

🔮 Empowering Potion Production with Automation

From potion-making to IoT innovation, our ESP32 firmware deployment workflow ensures that our hardware is always ready for the next magical task. We humbly submit our case for the Power of the Shell badge, showcasing the efficiency and power of shell scripting in modern hackathon solutions.

Follow our magical journey as we continue to innovate at ACDC 2025: acdc.blog/category/cepheo25.

#ACDC2025 #PowerOfTheShell #ESP32 #AutomationMagic #PowerPotters

The benefits of mixing magic with ancient magic

Our OwlExpress solution needs to get the list of possible future wizards from several different data sources. We merge these data into a PowerBI report so that segmentation can be done more intuitively using slicers.

This report is integrated into an MDA using a PCF control, created by House Elves Limited, that can save the segment chosen by the user via the Export Segment command.

This will trigger a Power Automate flow that will create the applications for all future students of the chosen segment.

To integrate the PowerBI report, the following APIs are being consumed:

The PCF component has been developed using Test Driven Development and Storybook which allows you to fail fast and often and reach success faster.

The component is a React virtual component, which greatly simplifies the separation of concerns into smaller components.

The code has been structured using the best practices for PCF components. We are using a Solution Project to simplify the build of the Solution that contains the component.

To standardize the build and deployment process, and to ensure that the PCF version is always kept in sync with the containing solution's version, a couple of PowerShell scripts have been written. This also helps with the Azure DevOps pipelines, because they can be simplified to execute the exact same scripts that developers run on their machines.

CI/CD like a boss

What's better than automatic deploy pipelines? Not much, really. When merging to main, our React app automatically deploys to Azure Static Web Apps. Pretty neat! All secrets are of course stored in GitHub Secrets.

Our GitHub workflow:

name: Azure Static Web Apps CI/CD

on:
  push:
    branches:
      - main
  pull_request:
    types: [opened, synchronize, reopened, closed]
    branches:
      - main

jobs:
  build_and_deploy_job:
    if: github.event_name == 'push' || (github.event_name == 'pull_request' && github.event.action != 'closed')
    runs-on: ubuntu-latest
    name: Build and Deploy Job
    steps:
      - uses: actions/checkout@v3
        with:
          submodules: true
          lfs: false
      - name: Set up API Key
        run: echo "API_KEY=${{ secrets.DATAVERSE_API_KEY }}" >> $GITHUB_ENV
      - name: Build And Deploy
        id: builddeploy
        uses: Azure/static-web-apps-deploy@v1
        with:
          azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN_THANKFUL_HILL_0CC449203 }}
          repo_token: ${{ secrets.GITHUB_TOKEN }} # Used for Github integrations (i.e. PR comments)
          action: "upload"
          ###### Repository/Build Configurations - These values can be configured to match your app requirements. ######
          # For more information regarding Static Web App workflow configurations, please visit: https://aka.ms/swaworkflowconfig
          app_location: "/" # App source code path
          api_location: "" # Api source code path - optional
          output_location: "build" # Built app content directory - optional
          dataverse_api_key: ${{ secrets.DATAVERSE_API_KEY }}
          ###### End of Repository/Build Configurations ######


  close_pull_request_job:
    if: github.event_name == 'pull_request' && github.event.action == 'closed'
    runs-on: ubuntu-latest
    name: Close Pull Request Job
    steps:
      - name: Close Pull Request
        id: closepullrequest
        uses: Azure/static-web-apps-deploy@v1
        with:
          azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN_THANKFUL_HILL_0CC449203 }}
          action: "close"

Simplifying Backend Deployment with Terraform: Seamless Updates and Feature Implementations

In today’s fast-paced software development environment, managing infrastructure and deploying new features seamlessly is critical. As applications grow, so do the complexities of deploying and maintaining them. Fortunately, tools like Terraform provide a powerful solution for managing backend deployments in a consistent and automated way.

By using Terraform to deploy and manage the backend of our solution, we enable seamless updates and feature implementations.

Security is one of the most crucial components of any solution. Azure KeyVault serves as the centralized service for storing sensitive information such as API keys, secrets, and certificates. Using Terraform, we can automate the creation and management of KeyVault, making it easy to maintain and secure our application’s secrets.

Once the KeyVault is in place, the next service we need to deploy is our web app service. This service hosts the main web application of our solution. Using Terraform, we can ensure that the latest version of our web application is deployed automatically whenever new changes are committed to the code repository.
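As a sketch of what such Terraform definitions can look like (all names, SKUs, and surrounding resources here are placeholders, not our actual configuration):

```hcl
# Illustrative sketch only; names and SKUs are placeholders.
resource "azurerm_key_vault" "main" {
  name                = "kv-solution-dev"
  location            = azurerm_resource_group.main.location
  resource_group_name = azurerm_resource_group.main.name
  tenant_id           = data.azurerm_client_config.current.tenant_id
  sku_name            = "standard"
}

# Store an application secret in the vault
resource "azurerm_key_vault_secret" "api_key" {
  name         = "api-key"
  value        = var.api_key
  key_vault_id = azurerm_key_vault.main.id
}

# The web app that hosts the main application
resource "azurerm_linux_web_app" "main" {
  name                = "app-solution-dev"
  location            = azurerm_resource_group.main.location
  resource_group_name = azurerm_resource_group.main.name
  service_plan_id     = azurerm_service_plan.main.id

  site_config {}
}
```

With this in place, a terraform apply after each commit keeps the deployed services in sync with the code repository.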