ISV Package – the missing link

We’ve shown you the concept. The roles. The flows. The tech stack. Now let’s talk about how this actually lands in enterprise.

Simple Steve logs into an external portal. Rents recipes. Publishes tenders to print from manufacturing vendors. Monitors production and delivery status. All from the portal interface.

Works great.

But enterprise Steve, the one working at Equinor, IKEA, or Siemens, doesn’t just browse marketplaces. Enterprise Steve has:

  • An ERP system
  • A CRM system
  • Procurement workflows
  • Approval chains
  • Compliance requirements
  • IT policies

Enterprise Steve needs CraftPortal connected to his world. His tenant. His systems. His processes.

The ISV package bridges these two worlds. Customer data stays in customer tenant. CraftPortal handles the marketplace, IP Owners, Manufacturers, the recipe catalog.

No workflow fragmentation. No copy-paste between systems. No “let me check the other portal.” One flow. Connected.

What Customers Get

Power Platform Components

| Component | CraftPortal Examples |
| --- | --- |
| Tables | Tenders, Recipes, Vendors, Manufacturers, Bids, Projects, Parts |
| Dashboards | Tender Overview, Production Status, Vendor Performance |
| PCF Widgets | Recipe Viewer, 3D Model Preview, Status Tracker |
| Power Automate templates | “New Tender → Notify Vendors”, “Bid Awarded → Create Project”, “Part Printed → Update Inventory” |
| Security Model | Roles: Procurement Manager, Vendor, Manufacturer, Viewer |

What We Get

  • AppSource listing = discoverability, Microsoft co-sell, enterprise credibility
  • Do this right and CraftPortal becomes invisible infrastructure — always there, impossible to replace

That’s not a customer. That’s a permanent relationship.

The tech stuff

Our ISV package includes two data integration flows:

  • Custom Connector for on-demand requests — direct calls to the CraftPortal API when you need real-time actions.
  • Cached Core Data for near real-time sync — we push core data into the customer’s environment via Azure Event Grid into Azure SQL and Dataverse.

Why both? Cached data enables full delegation in Power Apps. No query limits. Instant performance. Citizen developers query local tables instead of external APIs. CraftPortal data that feels like their own.
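As a minimal sketch of the cached-sync side, the event processor only has to apply incoming events to the local copy. The envelope fields (`eventType`, `recordId`, `payload`) are our illustration here, not the actual CraftPortal contract:

```python
# Sketch: apply a CraftPortal core-data event to a local cache keyed by
# record id. Field names are illustrative assumptions.

def apply_core_data_event(cache: dict, event: dict) -> dict:
    """Upsert or delete a cached record based on an Event Grid event."""
    record_id = event["data"]["recordId"]
    if event["eventType"].endswith("Deleted"):
        cache.pop(record_id, None)                    # drop stale record
    else:
        cache[record_id] = event["data"]["payload"]   # create or update
    return cache
```

In the real flow the "cache" is an Azure SQL table or Dataverse rows, but the upsert/delete logic is the same.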

The Code

Azure Function Trigger & Interface

Trigger Type

1) HTTP-triggered Azure Function (API Gateway)

  • Used for synchronous operations and for publishing events
  • Secured via Azure AD authentication

2) Event Grid–triggered Azure Function (Event Processor)

  • Subscribes to Event Grid topic events
  • Processes vendor integration asynchronously
  • Updates Dataverse with final status/result

Interface Characteristics

  • REST-style endpoints
  • JSON request/response payloads
  • Versioned route (example):

/api/v1/vendor/operation
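A sketch of the gateway’s request handling for that route, assuming a JSON body with `vendorId` and `operation` fields (both names are illustrative, not the actual contract):

```python
import json

# Sketch of validation for POST /api/v1/vendor/operation. The real
# function runs as an HTTP-triggered Azure Function; here we show only
# the payload handling.

def handle_vendor_operation(body: str) -> tuple:
    """Validate the JSON payload and return (status_code, response)."""
    try:
        payload = json.loads(body)
    except json.JSONDecodeError:
        return 400, {"error": "invalid JSON"}
    missing = [f for f in ("vendorId", "operation") if f not in payload]
    if missing:
        return 400, {"error": f"missing fields: {missing}"}
    # The real function would publish an event to Event Grid here and
    # let the event processor finish the work asynchronously.
    return 202, {"status": "accepted", "vendorId": payload["vendorId"]}
```

Returning 202 rather than 200 signals that processing continues asynchronously via the Event Grid–triggered function.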

The Release

Power Platform Environment Strategy

Source control strategy

Repo structure (example)

  • /solutions/<SolutionName>/ (exported source using Power Platform CLI/PAC)
  • /pipelines/ (YAML for CI/CD)
  • /tests/ (integration test scripts, Postman collections, Playwright scripts, etc.)
  • /docs/ (release notes templates, runbooks)

CI Pipeline

CD Pipeline

Building the ISV package deliverable

An ISV-style deliverable usually includes:

Managed solution ZIP(s)

  • Core solution (managed)

Installation guide

  • Required licenses and prerequisites
  • Import steps
  • How to set environment variables
  • How to create/bind connections
  • Security roles to assign

Configuration workbook

  • List of env vars, defaults, required values
  • Connection references mapping
  • Any URLs/endpoints

Release notes + known issues

  • What changed, what to verify

Support / troubleshooting

  • Common import errors
  • How to re-run failed flows
  • Health check steps

Infrastructure as Code

In our solution, we use both a managed Power Platform ISV package and dedicated cloud infrastructure for each customer.

This means each customer deploys their own Azure infrastructure to unlock the Power Platform solution distributed via AppSource.

We enable this through a one-click Azure infrastructure deployment process. The model-driven app includes a dedicated admin app that lets the user run the deployment themselves after installing the main package.

Reference: https://learn.microsoft.com/en-us/azure/azure-resource-manager/templates/deploy-to-azure-button?WT.mc_id=IoT-MVP-5002324
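Per the referenced docs, the “Deploy to Azure” button is just a portal deep link containing the URL-encoded location of the ARM template. A small sketch (the template URL below is a placeholder, not our real repository location):

```python
from urllib.parse import quote

# Sketch: build the "Deploy to Azure" portal deep link for a publicly
# reachable ARM template, as described in the Microsoft docs.

def deploy_to_azure_link(template_url: str) -> str:
    """Portal link that opens the custom-template deployment blade."""
    return ("https://portal.azure.com/#create/Microsoft.Template/uri/"
            + quote(template_url, safe=""))

link = deploy_to_azure_link(
    "https://raw.githubusercontent.com/example/repo/main/azuredeploy.json")
```

The admin app can open this link (or call the template deployment API directly) after the main package is installed.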

Deployment

Note: The selection of the resource group is not part of the Bicep / ARM template; it expects a resource group to be available. This selection is provided by the portal.

To support this scenario, we need to provide all the required Bicep files, which are the blueprints for the Azure services.

Benefits of Bicep

Bicep provides the following advantages:

  • Support for all resource types and API versions
  • Orchestration
  • Repeatability

The following examples show the difference between a Bicep file and the equivalent JSON template. Both examples deploy a storage account:
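As a sketch, a minimal Bicep file for a storage account looks like this (names and SKU are illustrative):

```bicep
param location string = resourceGroup().location
param storageName string = 'st${uniqueString(resourceGroup().id)}'

resource storage 'Microsoft.Storage/storageAccounts@2023-01-01' = {
  name: storageName
  location: location
  sku: {
    name: 'Standard_LRS'
  }
  kind: 'StorageV2'
}
```

The equivalent ARM JSON template is noticeably more verbose, which is exactly the readability and repeatability argument for Bicep.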

Directory Structure

Deployment

We use a couple of Docker Compose files for production and development deployment.

This allows us to simplify the infrastructure for the local development setup.

Keep in mind the main rule of using docker-compose: no sensitive data should be hardcoded inside; all environment-specific details must be placed in the .env file.

We follow a classic approach with a private Azure container registry (ACR) to store the frontend and backend Docker images.
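A sketch of a compose file following that rule, with all environment-specific values pulled from .env (service names, registry variable, and ports are illustrative):

```yaml
# docker-compose.yml — base file; a dev override file can swap images
# for local builds. No secrets are hardcoded; everything comes from .env.
services:
  backend:
    image: ${ACR_LOGIN_SERVER}/craftportal-backend:${IMAGE_TAG}
    env_file: .env
  frontend:
    image: ${ACR_LOGIN_SERVER}/craftportal-frontend:${IMAGE_TAG}
    ports:
      - "8080:80"
    depends_on:
      - backend
```

Running `docker compose --env-file .env up` then resolves the variables per environment without touching the compose file itself.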

How to convince IP Owners?

The idea is that IP Owners receive all the benefits of owning the infrastructure, plus extra features from the platform owner (LogiQraft), such as dedicated, specifically designed AI services, a proactive monitoring system, and more.

To make it work, we are using Azure Lighthouse + Managed Identity.

How it works:

  1. The customer delegates a scope (subscription / resource group) to you via Azure Lighthouse.
  2. Your identity in your tenant (a Managed Identity, or a service principal/app) is granted a role on their resources through that delegation.
  3. You then query their Application Insights / Log Analytics data using your identity, and Azure enforces the delegated RBAC.

Why Lighthouse is ideal:

  • No need to create/maintain identities in every customer tenant
  • Customers can revoke access easily 
  • Scales across many tenants
  • We assume IP Owners are cautious about sharing their IP; by following this approach, we address their concerns
Categories: Redstone Realm , Governance & Best Practices , Code Connoisseur , Digital Transformation

Badges:  Power Of The Shell, ACDC Craftsman, Plug N’ Play

Building EcoCraft the ACDC Way

Badge Claimed: ACDC Craftsman

Version 2 – Further detail added on our versioning, building, deployment and testing approaches.

It’s been a fun, interesting few days for our team at ACDC, as we dived into new and familiar topics. But the importance of the fundamentals to ensure our development and deployment process was as smooth as possible carried us through the days and long nights. Here are a few highlights we can point towards:

  • All developers need clearly defined tasks and a development workflow and, although some may scoff at our retro take on managing it, the results speak for themselves with our very complete-looking Kanban board:
  • We made sure to setup our foundational elements for effective application lifecycle management; not only within the Power Platform, but also in ensuring we have an Azure DevOps Git repository, and the associated deployment pipelines configured. Find out more about what we have in Azure here and see below examples of what we’ve configured in the Power Platform.
  • Separating out our environments between development, testing and production ensures we can develop safely, test things in the proper way and not affect our live users intentionally. And our Power Platform Pipelines ensures we can deploy everything out automatically, and also handle proper versioning of our solution:
  • We don’t hardcode things. Especially important things like API keys and secrets. For this reason, we are using Azure Key Vault to store all of our secrets and then have them accessed directly via the Function App. Thanks to Managed Identities, this process becomes much easier and more secure as well:
  • A continuous feedback loop ensures a more effective development workflow. Thanks to our incorporation of Application Insights capabilities, developers have a rich array of datapoints to monitor the performance of our solution, and we can even use this to initiate new work items directly into Azure DevOps
  • Artificial Intelligence presents a huge opportunity, but without considering best practice approaches when building system prompts, things can go awry very quickly. As you’ll see from the work we’ve done with our custom agent, we’ve ensured our prompts have been tested and hardened appropriately, to prevent potential misuse.
  • Our YAML pipeline handles all of our builds centrally, to ensure our Bicep templates function as we expect and that we are handling versioning correctly:
  • Our plug-ins incorporate both logical and integration testing at the class level, to ensure things can be properly tested before they reach our end users:

These are just a few examples from our 3-day experience, where we have all learned a lot and collaborated well as a group. And we think this close teamwork represents what ACDC is all about – because developers can only do things well when they are working together openly and collaboratively in a team. We hope to be back again next year!

ALM implemented

We have implemented ALM to deploy between solutions using GitHub Actions. We have three workflows:
– Export solutions PR: Exports chosen solutions from dev and creates a PR to main.

– Deploy Solutions (Test): Triggers manually or on merge to main. On merge, it decides which solutions to deploy based on the contents of the PR.

– Export Solutions (Prod): Runs manually on selected solutions and deploys to Prod

For the Power Of The Shell badge, we’ve leveraged PowerShell to determine the deploy package based on the triggering PR. This script reads the contents of the last PR, determines which solutions have changed, and packages and deploys them based on this information:
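The selection logic can be sketched like this: given the file paths changed in the triggering PR, work out which solution folders were touched. (The real implementation is a PowerShell script; this is an illustrative Python equivalent assuming a `solutions/<Name>/…` repo layout.)

```python
# Sketch: map changed file paths from a PR to the solution folders that
# need to be packaged and deployed.

def changed_solutions(changed_files: list) -> set:
    """Return the set of solution names touched by the changed files."""
    solutions = set()
    for path in changed_files:
        parts = path.split("/")
        # Only files *inside* a solutions/<Name>/ folder count.
        if len(parts) > 2 and parts[0] == "solutions":
            solutions.add(parts[1])
    return solutions
```

The result then drives which solutions get packed with `pac solution pack` and imported into the target environment.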

Automating Solutions ALM with Github Actions and AI

Our developers should spend as little time as possible on repetitive tasks like deployment, release notes, updating technical documentation, tracking who did what and when, and so on. It’s a really important job, but it keeps eating our valuable time…

Our answer is a solution designed to streamline Dataverse solution management while enforcing ALM best practices. It combines Power Platform Pipelines, GitHub Actions, AI-powered documentation & Teams notifications to deliver fully automated, auditable, and governed deployments.

Deployment Stage record in Power Platform Pipelines

Automated Solution Management

  • Power Platform Pipelines – Developer triggers the deployment for the respective solution.
  • Cloud flows – Triggered on pre-deployment step & integrates with Github & Microsoft Teams.
  • GitHub Actions export, unpack, commit & create pull requests for the solution.
  • PR outcome triggers a cloud flow in order to notify users and continue/stop the deployment.
Triggered on pre-deployment step
#StairwayToHeaven

Governance & Deployment Control

  • Github PR review acts as a pre-deployment approval step, giving teams control over which solutions can reach the target environments.
  • Deployment outcomes are sent back to Dataverse and Teams, providing real-time feedback.
  • Branch strategy (dev for development, main for production) keeps production stable and auditable.
Triggered from Github Action (pr-feedback-to-dataverse.yml)
Deployment Stage Run is updated with link to Github PR for more details

AI-Powered Documentation

  • GitHub Copilot analyzes every PR and generates technical documentation automatically.
  • Changelogs, impact analysis, and test recommendations are included, making knowledge transparent and up-to-date.
  • Documentation is versioned and stored alongside solutions for easy reference.


Benefits

  • Faster Deployments: Automation reduces manual steps and errors.
  • Full Governance: PR workflow enforces approvals and branch protection.
  • Better Transparency: Teams see real-time deployment status and AI-generated documentation.
  • Audit-Ready: Every change, approval, and deployment is logged and version-controlled.

Deployment: ACDC Craftsman

By day three of ACDC, craftsmanship is no longer about proving that something works, but about proving that it works the right way. For our Minecraft Production Order module, this shows most clearly in how the solution is deployed, as a ZIP file, to Dynamics 365 Finance & Operations.

In Finance & Operations, the Application Object Server is the source of truth. If a module is not present under AosService\PackagesLocalDirectory, it simply does not exist from the platform’s perspective. Because of this, our deployment approach follows the same principles Microsoft uses internally: the module is delivered as a ZIP and installed directly into the PackagesLocalDirectory as its own folder.

The PowerShell installer script is intentionally simple but deliberate. It runs relative to its own location, automatically detects where Finance & Operations is installed, and extracts the module into a folder named exactly after the ZIP file. This removes environment-specific assumptions, avoids hard-coded paths, and ensures the module identity is always consistent.

Just as important, the script is safe by default. It will not overwrite an existing module unless this is explicitly requested with -Force. This reflects production thinking: destructive actions should be intentional, not accidental.
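The installer’s two safety rules (target folder named after the ZIP, no overwrite without an explicit force) can be sketched as follows. The real script is PowerShell and auto-detects the F&O install location; this Python equivalent is purely illustrative:

```python
import os
import zipfile

# Sketch of the installer rules: extract a module ZIP into
# PackagesLocalDirectory/<zip name>, refusing to overwrite unless forced.

def install_module(zip_path: str, packages_dir: str, force: bool = False) -> str:
    """Extract the module ZIP; raise if the target exists and force is False."""
    module_name = os.path.splitext(os.path.basename(zip_path))[0]
    target = os.path.join(packages_dir, module_name)
    if os.path.exists(target) and not force:
        raise FileExistsError(f"{target} exists; pass force=True to overwrite")
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(target)
    return target
```

Naming the folder after the ZIP keeps the module identity consistent across environments, and the force guard keeps destructive actions intentional.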

It’s deployable, repeatable, and trustworthy. It behaves like a native Finance & Operations module, can be installed the same way across environments, and is ready for real DevOps scenarios.

When it comes to deploying to the Raspberry Pi:

We treat it exactly like any other production environment by using Azure DevOps pipelines instead of manual SSH or copy-paste workflows. The pipeline validates the Docker configuration, connects to the Raspberry Pi as a self-hosted agent, stops any running containers, deploys the updated files, and then rebuilds and starts the containers in a controlled sequence.

Posted on Saturday 😉

Owl Express – Application Lifecycle Magic

Howl to do ALM.

OwlExpress365 – ALM Manifest 

Development and Deployment Approach  

We have defined rules for how development should happen and be planned in general, all around ALM (Application Lifecycle Management). There can be differences for each product depending on its complexity and circumstances. 

Technical and development-specific rules: 

  1. Solutions and components should all have the same publisher.  
    There should be only one publisher for all products, used across all components and solutions. 
     
  2. The technology approach and implementation are crucial for the scalability of the product and should be chosen wisely at the beginning of the project. 
    Deprecated functionality needs to be replaced as soon as possible and should not be considered in the first place.  
     
  3. Every solution must be available as unmanaged in its own environment, which is part of the tenant. Each solution should only contain changed and newly developed components and no dependencies on non-product-related solutions.  
    The environment must grant access to global administrators and the development team.  
  4. Solution components must follow internal naming conventions (e.g., icons: hel_'name of the icon'_icon.svg). 
  5. Solutions contain a changelog providing the latest changes and updates made to the solution.  
  6. The solution is connected to a Git repository. Changes are tracked automatically, while commits happen after crucial changes and before updates.  
    All commits must contain a meaningful message to provide context for recent changes.  

Deployment 

Our target is to continuously improve our deployment process with fewer manual steps. We distinguish today in three different processes of deployment: 

  • Code Component / Custom Control Deployment 
  • Solution Deployment  
  • Release Distribution 

Each of these processes comes with a different scope, restrictions, and requirements. But all of them should be based on the ALM (Application Lifecycle Management) Approach.  

Code Component / Custom Control Deployment 
Accessed by: Developers 
Purpose: Deploy changes to Dev
Functionality: Build the desired branch version of the code repository (e.g., Custom Control, JavaScript, Plugin, Application) and update the target dev environment with the selected version. 

Solution Deployment 
Accessed by: Maker, Developers, Product Owners 
Purpose: Build and deploy the desired Solution version to UAT And Production Environments 
Handled by: Power Platform Pipelines  
Functionality: Check-in changes in the Repository, Export the latest version from DEV Environment, Quality Check on Export (Solution Checker), Build Solution via Build Environment (unmanaged), potential Automated Testing, Export Solution as managed, potential Quality Checks, Store Artifact as managed in Repository, Deploy managed Solution to QA / Test Environment 

(Example pot. Solution Deployment)  

Release Deployment 
Accessed by: Product Owner 
Purpose: Build and store release-candidate Solution version  
Handled by: Build and Power Platform Pipelines  
Functionality: Build Solution via Build Environment (unmanaged) / direct by code stored in repository, Automated Testing, Export Solution as managed, Quality Checks, Store Artifact as managed in Repository 
Long-Term: Build AppSource Update Package and deploy 

Automated deploying of solutions minimizes the risk of human errors during that precise process. Especially with a larger team and multiple versions and branches, we want to be more than sure that all the functionality, features, and fixes we brought into the product are part of the solution we deploy.  
It opens accessibility as well for Testers and QA’s who do not need to have a full understanding of the manual deploying process.  

Product Artifact 

The final artifact we produce at the end of each development and deployment cycle is not only the technical product but our idea of how to leverage all the different and innovative aspects of the Power Platform for our customers and their end-users.  
 

OwlExpress365 – Tech Stack & Environments 

Power Platform 
Environments 

  • HOST
    Owl Express Host Environment: https://owlhost.crm.dynamics.com/ 
  • DEV 
    Owl Express Dev: https://owlexpressdev.crm.dynamics.com/ 
  • UAT 
    Owl Express UAT: https://owlexpressuat.crm.dynamics.com/

All Environments are organized in the same Environment Group “Owl Express.” They are also managed and organized with Security Groups to organize access.  

With a security group and role-specific permissions as security roles, we have full control over access to each environment and don’t interfere with other environments on the tenant.   

Solution Concept 

We currently drive a single-solution approach for customizations and development, that includes all custom components and changes we perform on top of Dynamics and Power Platform. AI and Copilot development is not included here, there is a separate solution to enable deployments and development in parallel.  

All changes are tracked with the Git integration to Azure DevOps and pushed to the repository with major changes and deployments.  

https://dev.azure.com/owl-express/_git/Owl%20Express

Changes are deployed to separate Dev branch. Via Pull Request and additional review by at least one other person than the author, changes can be merged into the main branch for productive usage.  

Deployment  

All dev solutions are transported automatically from the dev environment as managed to the UAT and Production environment. With the Power Platform Pipelines, we can trigger a UAT only deployment or a full deployment, which covers UAT and Production.  

The pipeline is triggered from the Dev Environment. During the validation and publishing, we generate a new artifact in the Host environment and increase the version number. The final deployment happens through SPN, which requires additional approval (posting to Teams) for the production environment.  

The version number is set up to meet the following criteria: 

MajorVersion.MinorVersion.Date.NumberOfDeployments 
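As a sketch, the scheme can be generated like this (the yyyymmdd date format is our assumption; adjust to the actual convention):

```python
from datetime import date
from typing import Optional

# Sketch of the version scheme MajorVersion.MinorVersion.Date.NumberOfDeployments.

def build_version(major: int, minor: int, deployments: int,
                  on: Optional[date] = None) -> str:
    """Build a solution version string like 2.1.20250125.7."""
    d = on or date.today()
    return f"{major}.{minor}.{d:%Y%m%d}.{deployments}"
```

Embedding the date and deployment count makes any deployed version traceable back to its pipeline run at a glance.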

Release notes are AI-generated, while we maintain custom release notes via a configuration page; this provides functional context and technical details.

Further Solution Distribution 

The Power Platform Catalog is an additional source to distribute our solutions within the tenant. With each crucial change, the developers are publishing a new update to the catalog. The request will be verified and approved if all criteria are met.  

Afterwards the new update is available in the catalog for makers and low code developers.  

Dependent solution deployment 

OwlExpress uses some third-party solutions. These will be deployed as managed solutions to the different environments. The deployment happens via Azure DevOps Pipelines: 

After discussing internally between Azure DevOps Pipelines and GitHub Actions, we decided to stay centralized in Azure DevOps.  

The following third-party solutions are used: 

  • dhino 
    Purpose: Authentication of external users against Dataverse Data (OwlExpress app) 
  • Solutions 
    – dhino_API_Core_1_0_0_23_managed.zip 
    – dhino_Authenticate_1_0_0_27_managed.zip 
    – dhino_Fetch_Core_1_0_0_58_managed.zip 
    – dhino_Forms_Core_1_0_0_2_managed.zip 
  • PCF Tag Control 
    Purpose: Add Characteristics to a Student via a tag control on the form directly  

     
  • Solutions 
    – OPC.PowerApps.PCFControls.zip 
  • Resco PCF Control: File Uploader 
    Purpose: Used to attach files to a Student Application  

  • Solutions
    – pcf-file-uploader-1.0.0.0.zip 

PCF Development 

PCFs are being created using a Test-Driven Development approach. We are also using cdsproj based solutions to simplify the build and deployment to the target environments. The cdsproj enables us to create the solution zip file easily and import that using the “pac cli” or the Power Platform Tools from Azure DevOps. 

The full source code is also stored in the Azure DevOps repo, aiming for a fully centralized development environment.  

Frontend / Static Web Apps 

The frontend app “Owlexpress.app” is developed as a custom static web app using HTML / CSS / JSX. It is checked in to a GitHub repository with automated deployment triggers for the connected Netlify sites. Netlify was chosen because it offers the best support for static web apps when using them in a hybrid mode with server-side rendering which we need for the authentication part. 

Tech Stack 

  • Astro Framework 
    Base Framework for static web apps 
  • TailwindCSS 
    CSS compiler 
  • Daisy UI 
    TailwindCSS component library 
  • Dhino  
    Integration with Dataverse with authenticated external users 

Environments 

  • DEV 
    Locally on machine 

Deployments 

Deployments are triggered by pull-requests on the main branch and are executed within the Netlify deployment system. This was chosen because Netlify supports all the server-side rendering capabilities needed for the authentication part of the static web app. All secrets are set up as environment variables to keep them outside the repository. 

Fabric / Power BI 

We maintain alignment with all Dataverse environments by having the following Power BI workspaces configured: 

All development activities are carried out within the DEV workspace. We then leverage Pipelines in Fabric to deploy changes into each downstream environment. This allows us granular control when deploying components and to view potential “diffs” between environments: 

AI and Copilot  

Copilot related components are structured in a separate solution to the already existing customizations. Components used across multiple agents are organized in solution-aware component collections.  

Because creating a new agent generates a large number of components, we decided to split them out. This also lets us potentially deliver our solutions without AI for certain regions. Europe, looking at you.  

As all components are part of the Power Platform solution that is on the Dev / Test and Production structure of the Power Platform we can use existing Power Platform Eco-System including Pipelines for the deployment and also publish the AI components to the Power Platform Catalog.  

Virtual potion ingredients: XR in PCF components

We already have access to MR (aka Magic Reality) components in Canvas Apps. Implementation is straightforward, but as expected they come with a limited set of features. The components are based on Babylon.js, and make for a quick and easy way to place and view a 3D model in an augmented reality context.

For our solution, we wanted the user to also be able to interact with the virtual objects in front of them, which is not an OOB feature, so by expressing our power-user love, we decided to explore the possibilities around custom XR-enabled PCF components.

Being ACDC craftsmen, knowing the potential issues of going too far down the wrong path, we decided to do some proof of concepts, creating custom PCF components with third party XR libraries, acting like proper thieving bastards on the way.

First off, we had a look at AR.js, which is built on ARToolkit, a relatively old library. This library could provide us with wide device support, which really didn’t have that much value, considering the component would be running inside the Power Apps mobile app. We would also be forced to use either image target or marker tracking, with no modern AR spatial tracking.

Looking closer at the OOB components, we tried to find a way to leverage the OOB Babylon.js logic, hopefully being able to hook into the React Native part of the implementation, which would give great benefits in terms of access to device specific XR features (ARCore for Android and ARKit for iOS). We did, however, decide to leave this path, and focus elsewhere.

Crafting Excellence: Weasley Twins’ Development Best Practices

Environments

Our solution utilizes three distinct environments within Power Platform: Development, Testing, and Production. The Development environment is where initial coding and development take place, supported by robust version control to ensure code integrity and traceability. The Testing environment is used for rigorous testing to ensure code quality and expected functionality. Finally, the Production environment hosts the live code that is actively used by students.

Power Platform Pipelines

To streamline the movement of code between these environments (DEV – TEST – PROD), we have implemented Power Platform Pipelines. This ensures a smooth and efficient transition of code through each stage, maintaining consistency and reducing the risk of errors. Automation tools and scripts are used to facilitate this process, enhancing efficiency and minimizing manual intervention.

Environment variables

We leverage environment variables to manage our SharePoint sites and lists across different environments. This approach provides us with precise control and flexibility, ensuring that each environment operates with the correct configurations. Environment variables support dynamic configuration, allowing for easy updates and changes without altering the codebase.

Connectors

Our solution employs a smart naming standard for connectors. This naming convention simplifies the process of tracking and reusing the appropriate connectors, enhancing maintainability and clarity. Comprehensive documentation of the naming standards and usage guidelines ensures consistency and ease of understanding for new team members.

Service Account

We have established a dedicated service account for the twins to use with connectors, enhancing security by isolating permissions and reducing credential exposure risks. It simplifies auditing and ensures consistent configuration across environments. Adhering to the principle of least privilege, it grants only necessary permissions, providing clear accountability and preventing identity spoofing. Monitoring tools track the service account’s activities to ensure compliance with security policies.

ALM CATEGORY: With the ALM Lumos spell, we illuminate a path to error-free, efficient lifecycle management. 

“Building this solution has been a journey of passion and precision, where every element has been designed with purpose and care. We’re excited to showcase our work as a testament to quality and innovation in pursuit of the ACDC Craftsman badge.

We have three environments, DEV, TEST, and PROD, with different approval flows. 

So, the production environment can be deployed only after the release manager has approved it. 

We implemented an Export pipeline to simplify the contribution process.  

Users can decide which solution to export and place in the GIT to track the change’s history. 

For the functional consultants, we implemented the following flow: 

The export procedure includes exporting the managed and unmanaged solution packages. All changes from the selected solution will be included in the PR, and the Solution Checker will start the validation process. A clean report from the Solution Checker is a prerequisite for the next step of the PR review, which requires the real human review step. 

In the case of pro code customization, each component type has its steps for the PR validation, such as: 

  • Run build to produce the artifacts: 
  • Run unit test 
  • Scan the code base with SonarQube (Quality Gate) 

The Import pipeline will deploy the package with a specific type based on the environment, so deployment to PROD will always include only the managed version of the packages. 

The import pipeline also includes extra steps to activate the flows after deployment, including calling the custom Power Shell script to activate the disabled flows in specific solutions. 

We also use a custom version that reflects the build date, with the build number at the end: 2025.01.25.13; from our previous experience, this is more meaningful to the users. 

Branching strategy: 

We are using a trunk-based branching strategy. Short-lived branches contain changes that are as small as possible, to keep the review and validation process simple. 

Crafting Perfection: PowerPotters’ Quest for the ACDC Craftsman Badge

Greetings, wizarding developers and tech sorcerers! ✨

At Team PowerPotters, we take pride in our meticulous craftsmanship. From modular code design to robust CI/CD pipelines and seamless hardware integration, every aspect of our solution reflects a dedication to excellence. Today, we unveil the full spectrum of our development practices as we humbly submit our case for the ACDC Craftsman badge.


🪄 The Cornerstones of Craftsmanship

1. Modular Python Code for Seamless Integration

Our potion production system relies on a trio of well-structured Python scripts, each dedicated to a specific task:

  • sensor_script.py: Captures real-time data from ultrasonic sensors, seamlessly integrating with our IoT platform.
  • voice_script.py: Powers AI-driven voice recognition using the OpenAI Whisper API, enabling potion masters to command the system verbally.
  • integration_script.py: Acts as the conductor, tying sensor data and voice commands into cohesive workflows with Power Automate.

This modular structure ensures clarity, maintainability, and scalability, allowing individual components to be refined or extended independently.

Best Practices in Python Development:

  • Error Handling and Logging:
    Detailed try...except blocks ensure stability by gracefully handling errors, while comprehensive logging captures critical information for troubleshooting.
  • Mocking for Testability:
    Using a GPIO mock module, we simulate sensor behavior during testing, enabling rapid iteration without needing access to physical hardware.
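Mocking GPIO can be as simple as registering a stub module before the sensor code imports it. A minimal sketch of the idea, assuming the sensor code uses the common `RPi.GPIO` import (the canned pin value and constants below are illustrative):

```python
import sys
import types
from unittest import mock

# Stand-in for RPi.GPIO so sensor code can run without physical hardware.
# Attribute names mirror the real library; the canned input value is ours.
fake_gpio = types.SimpleNamespace(
    BCM=11, IN=1, OUT=0,
    setmode=mock.Mock(),
    setup=mock.Mock(),
    input=mock.Mock(return_value=1),  # pretend the echo pin reads HIGH
    output=mock.Mock(),
)
sys.modules["RPi"] = types.SimpleNamespace(GPIO=fake_gpio)
sys.modules["RPi.GPIO"] = fake_gpio

# Any module that now does `import RPi.GPIO as GPIO` receives the mock:
import RPi.GPIO as GPIO  # noqa: E402

GPIO.setmode(GPIO.BCM)
GPIO.setup(24, GPIO.IN)
print(GPIO.input(24))  # prints the canned value 1
```

With this in place, tests can also assert on the recorded calls (e.g. that the pin was configured as an input) without ever touching a Raspberry Pi.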

🧙‍♂️ 2. Automating the Magic with GitHub Actions

To ensure our hardware remains ready for action, we created a GitHub Actions workflow that automates ESP32 firmware updates.

ESP32 Firmware Deployment Workflow:

  • Triggering the Workflow: Commits pushed to the main branch automatically initiate the firmware update process.
  • Code Checkout: The workflow uses the actions/checkout@v3 step to pull the latest code onto the runner.
  • Installing Dependencies: The esptool Python package is installed to enable communication with the ESP32.
  • Flashing the ESP32: The firmware is flashed using a shell command:
on:
  push:
    branches:
      - main

jobs:
  deploy:
    runs-on: self-hosted
    steps:
      - name: Checkout code
        uses: actions/checkout@v3

      - name: Install Python dependencies
        run: python -m pip install esptool

      - name: Flash ESP32
        run: |
          python -m esptool --chip esp32 --port COM3 --baud 115200 write_flash -z 0x1000 .pio/build/esp32dev/firmware.bin

This process ensures fast, accurate, and automated firmware updates, reducing the risk of errors and saving time.

Why This Workflow Matters:

  • Automation: Eliminates manual steps, ensuring reliable and repeatable firmware updates.
  • Hardware Integration: Bridges the gap between software and IoT devices using a self-hosted runner for direct ESP32 access.
  • Scalability: Adaptable for multiple ESP32 units, making it a powerful tool for future growth.

3. CI/CD Pipelines: From Development to Deployment

Our CI/CD pipelines exemplify best practices in automation, ensuring smooth transitions from development to production:

  • Pre-Build Pipeline:
    A smart script starts the build box and waits for the service to become available before initiating the express build, which performs a simplified build without syncing.
  • Full Build Pipeline:
    If the express build succeeds, the full build pipeline is triggered. Success activates a webhook to:
    • Notify a Teams channel about the new release.
    • Trigger the upload of the package to the target environment.
    • Automatically deploy the package, ensuring rapid access to new features.

This pipeline streamlines the entire development lifecycle, ensuring efficiency and reliability at every step.
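The Teams notification step can be sketched with a plain incoming-webhook POST; this is our minimal illustration of the idea, not the pipeline's actual webhook code (the URL placeholder and function names are assumptions):

```python
import json
import urllib.request

def release_payload(release: str) -> bytes:
    """Build the JSON body for a Teams incoming-webhook message."""
    return json.dumps({"text": f"New release deployed: {release}"}).encode()

def notify_teams(webhook_url: str, release: str) -> int:
    """POST the release message to a Teams incoming webhook; returns HTTP status."""
    req = urllib.request.Request(
        webhook_url,
        data=release_payload(release),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

# Usage (webhook URL comes from the channel's connector configuration):
# notify_teams("https://example.webhook.office.com/...", "2025.01.25.13")
```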


🧙‍♂️ 4. ALM Magic: Consistency in Customizations

Our Application Lifecycle Management (ALM) practices ensure clarity and structure across all customizations:

  • Naming Standards: We prefix all artifacts with Ad (Application Development) to maintain consistency. Examples:
    • AdCustGroup_Frm_Extension: Form extension for CustGroup.
    • AdCustGroup_Frm_dsCustGroup_fldGroupId_Extension: Field extension for CustGroup data source.

These conventions create an organized and easily navigable codebase, critical for large-scale projects.


🔮 Why We Deserve the ACDC Craftsman Badge

Our development practices embody the principles of the ACDC Craftsman badge:

  1. Modular, Maintainable Code: Python scripts are well-structured and adhere to best practices, ensuring long-term scalability and clarity.
  2. Automation Excellence: CI/CD pipelines and GitHub Actions workflows eliminate manual steps, streamline deployment, and reduce errors.
  3. Consistency and Standards: Rigorous ALM practices, including naming conventions and structured customization, reflect our commitment to professionalism.

🐍 Crafting a Legacy of Excellence

From clean code to seamless deployment workflows, we’ve poured our hearts into creating a solution that reflects true craftsmanship. With every detail carefully considered, we humbly submit our case for the ACDC Craftsman badge.

Follow our journey as we continue to innovate and enchant ACDC 2025: acdc.blog/category/cepheo25.

#ACDC2025 #ACDCCraftsman #PowerPotters #AutomationMagic #CI/CDPipelines