Real-time Dataverse data for real-time business overview

What good is improved customer communication through chatbots and forums if the plumbers can't be notified in real time about relevant cases? Moreover, Mario and Luigi, as CEO and CTO respectively, want real-time data to improve decision support (e.g. plumber allocation) and to analyse PlumbQuest trends further.

Dataverse Webhook on Incident reports

To extract real-time data, we created a webhook for Dataverse using XrmToolBox, which calls our Azure Function whenever a new PlumbQuest is made.

XrmToolBox used to add a webhook to Dataverse for real-time PlumbQuest analysis

To ensure safe access, function-level authentication is applied: the toolbox allows HTTP query parameters, so the function key can be passed along when calling our Function, which uses a traditional HTTP trigger:

However, here is the hacky part: the webhook payload is too large, which corrupts the traditional JSON payload, and the length and content of each PlumbQuest are highly dynamic. We therefore had to do some custom string manipulation to extract the most business-critical values, de-corrupt the JSON and prepare it for analysis – almost a complete ETL pipeline (*cough*)!

But to access this real-time data in an analytics environment, Fabric is the way to go (riding Microsoft's huge hype wave). We created a custom app source for an Event Stream in Fabric, fed from the Function through an Event Hub output binding. The stream can then map to many different destinations, including a Lakehouse for historisation and trend analysis, as well as Data Activator Reflexes for reactive actions in real time.

With Data Activator's Reflexes directly on the stream, one can e.g. trigger additional flows for highly acute PlumbQuests from members in distress, or highlight plumbers who did not provide proper service according to the PlumbQuest review.

Our Fabric Event Stream with the custom app as source and the Lakehouse for historisation and down-the-line processing and analysis

In addition, we set up a Dataverse shortcut (link) to Fabric, allowing direct access to Dataverse without ETL or ingestion, providing ease of access and enabling deeper down-the-line analysis of key business metrics, trends and community engagement.

Our PlumbQuests in the Fabric Lakehouse, using a Dataverse connection for e.g. a more complete 365 customer view with Fabric items

Reproducible deployment

Although we are nasty hackers, we are reproducible hackers. As these were the only Azure resources used (directly), we deployed them using Bicep and the Azure CLI. Sensitive variables are marked as secure and are not included in the scripts, but parameterised.

The main Bicep deployment definitions for our Azure Function app and related resources; the resource group naturally had a separate Bicep definition.
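
To give an idea of the pattern (a sketch only – the service connection, resource names and parameters below are made-up placeholders, not our actual values), such a Bicep deployment boils down to a single Azure CLI call with the secure values supplied as parameters at deploy time, shown here wrapped in a pipeline step:

```yaml
# Illustrative Azure Pipelines step – all names are placeholders
steps:
  - task: AzureCLI@2
    displayName: Deploy Function app resources from Bicep
    inputs:
      azureSubscription: acdc-service-connection   # hypothetical service connection
      scriptType: pscore
      scriptLocation: inlineScript
      inlineScript: |
        # Values for the @secure() parameters come from secret variables,
        # so nothing sensitive is checked into the scripts
        az deployment group create `
          --resource-group rg-plumbquest `
          --template-file main.bicep `
          --parameters functionAppName=plumbquest-func `
          --parameters webhookKey=$(WebhookKey)
```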

So if you want to do it hacky, at least make it traceable.

Automating potentially uncomfortable situations and (green) shells

Automation galore! A lot of processes are jumpstarted when a banana is detected, and the poor first banana responders need all the help they can get. Notifying next of kin can be a tough task, and for Road Authority personnel with limited people skills, automation is key.

We have set up a workflow that works like this: when a banana is detected, we receive an HTTP request. We then register the incident in Dataverse while also looking up next-of-kin data, and send that to Outlook for automatic notifications.

Our solutions have a lot of bits and pieces, so having control over how they are deployed is essential. All our Azure resources are handled nicely using ARM template deployment.

Code control

We have implemented pipelines for the Azure infrastructure and for publishing the Azure Functions.

YAML and PowerShell scripts to ensure CI/CD.

Infrastructure as code: 

We have created a pipeline for our Bicep code to handle the infrastructure in Azure. 

Currently the infrastructure consists of: 

  • Storage account
  • Web app
  • Function App
  • Application Insights

Pipeline to build and deploy the Azure Functions: 

We have created a pipeline in Azure DevOps to build and publish the functions we are using in the canvas app. The build is triggered on code push to the “main” branch.
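
A trimmed sketch of what such a pipeline can look like (the service connection, project glob and app name are placeholders, not our actual configuration):

```yaml
trigger:
  branches:
    include:
      - main

stages:
  - stage: Build
    jobs:
      - job: BuildFunctions
        pool:
          vmImage: ubuntu-latest
        steps:
          # Build and zip the Functions project for deployment
          - task: DotNetCoreCLI@2
            inputs:
              command: publish
              publishWebProjects: false
              projects: '**/Functions/*.csproj'   # placeholder
              arguments: '--configuration Release --output $(Build.ArtifactStagingDirectory)'
              zipAfterPublish: true
          - publish: $(Build.ArtifactStagingDirectory)
            artifact: functions

  - stage: Deploy
    dependsOn: Build
    jobs:
      - job: DeployFunctions
        pool:
          vmImage: ubuntu-latest
        steps:
          - download: current
            artifact: functions
          - task: AzureFunctionApp@2
            inputs:
              azureSubscription: our-service-connection   # placeholder
              appType: functionApp
              appName: our-function-app                   # placeholder
              package: $(Pipeline.Workspace)/functions/*.zip
```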

Empowering Seamless Updates: Unveiling Our Project’s Azure Pipelines Integration!

We claim badges:

  • Power of the Shell – for building shell scripts for the Azure DevOps pipelines for Azure Function deployment, Dataverse solutions, Power Pages and PCF control

We’re thrilled to share a major stride in our project’s evolution! Building upon our commitment to seamless communication, we’ve successfully implemented a CI/CD process for deploying updates to Azure Functions through Azure Pipelines.

This enhancement ensures that our real-time communication capabilities remain at the forefront of efficiency and reliability. With the newly established CI/CD pipeline, deploying updates to our Azure Functions is now a smooth and automated process. This means quicker turnaround times for improvements and features, allowing us to adapt swiftly to the dynamic needs of our users.

The Azure Pipelines integration not only amplifies the agility of our development process but also guarantees a consistent and reliable deployment of updates. As we continue to innovate and refine our project, this CI/CD implementation represents a pivotal step towards maintaining a cutting-edge and user-friendly experience.

We also covered the CI/CD for the Dataverse solutions, portal and PCF control:

The Orange Bandits – Approach to CI/CD

With the rise of citizen development and the low-code/no-code approach in Power Platform, organizations are facing new challenges regarding correct application lifecycle management.

Citizen developers, often without a professional IT background, frequently lack the knowledge needed to work comfortably with Power Platform solutions, not to mention Git source control.

In our team, we decided to implement the Center of Excellence (CoE) Application Lifecycle Management kit.

This allowed us, in the span of a few hours, to have robust, controllable and reviewable history and version tracking for our solution.

We have decided to implement a simplified version, with only two environments: Development and Production.

Installation of the CoE ALM kit is described quite well on the MS Learn platform – https://learn.microsoft.com/en-us/power-platform/guidance/coe/setup-almacceleratorpowerplatform-preview – however, in a few places the documentation was a little outdated. This is expected, as the solution is still in preview.

Also, you need to pay special attention to any manual changes you make in the DevOps repository or configuration, as these can lead to unexpected results when running the pipelines.

One limitation we faced is that Microsoft assumes the Azure-hosted build agents will be used and does not offer one-place customization of the agent pool.

The CoE ALM kit installs two applications in the Power Platform environment:

The second one is used for setting up the process, and the first one for managing the releases. Both offer a user-friendly visual interface for configuring the ALM process:

Triggering a solution deployment is just as user-friendly and hides the complexity of the processes happening in the background:

In the background, multiple DevOps pipelines are created, and the flow of a solution is as follows:

The unpacked solution is available and versioned in Git source control:

For each solution deployment request, a PR is created and the changes can be reviewed.

If this topic seems to be interesting for you, feel free to stop by our table for a chat! 🙂

Our deployment solution uses YAML and PowerShell scripts to ensure CI/CD

🤖 Automatically Priming our SPA cannons with React and .NET ⛵️

Sailing is tough, so you need to rig your ship to your advantage, preferably automating your sails and having people walk the plank for writing bad code. We chose to create a project with an ASP.NET Core application running with React.js. The application runs on .NET 7 and uses several technologies, both hip and retro.

Screen capture of the .NET SPA application setup

Mapbox for Charting the Seven Seas

Our application uses Mapbox – a provider of custom online maps for websites and applications. The company provides APIs and SDKs to create custom maps, add markers and other map controls, and handle user interaction with maps. This ease of use and their clean, well-structured documentation (docs.mapbox.com) made Mapbox our top dog for creating, editing and manipulating maps in our service.

Map created using Mapbox’s API

Protecting your Shared Booty from the Crew

Any pirate needs a secure lair, a pirate code and the ability to help other pirates in need. For this reason we created a repository in GitHub and implemented best practices for collaborating as developers, like proper branching, build pipelines, linting and pre-deploy checks – for example, blocking ourselves from bloat, because we can't have unused code in the repository.

Snippet showing a lint error for an unused variable

Branching Strategy for Division and Conquering

The master branch contains the production code. All development code is merged from develop into master. Cutting corners in true hacker style, we opted not to use feature branches since, to be frank, the entire thing is a feature at this point.

What we all said when discussing feature branches

All branches are protected with protection rules. Due to the limited size of the project we can automatically merge feature and bug branches directly to develop, but we can't push directly to develop. Even that is too risky for this group of mad lads and girl. If you want to merge code from develop into master, you need approval from at least one other developer and no changes requested on your merge. If all flags are green, GitHub forcefully collides the code together and sees what sticks (it's actually more sophisticated than this, but you judges know a lot more about this than we ever will, so we humbly just call it “magic” – just like magnets).

Parchment showing the Law of Protection for our Master Branch

Feel free to have a gander at our code repository to appreciate all the shiny details: github.com/TommyBakkenMydland/CaptaIN-Hack-Sparrow

Smooth Rigging with CI/CD and Github Actions

GitHub Actions is a CI/CD platform that allows developers to automate their software development workflows. This, for us, is a huge win because we want to do as little configuration as possible!

CI/CD with GitHub Actions

The actions reduce the time and effort required for manual processes and enable us to release new features and bug fixes quickly and confidently – harr harr, got you matey, you thought we would fix bugs! … But … for the sake of argument … if we were to fix bugs, this powerful tool for streamlining development workflows would enable us to deliver high-quality software more efficiently and effectively at lower cost – better use of time, reduced waste of time (WoT) and better business value for the client. We used YAML to tell GitHub Actions what to create:

YAML file describing what GitHub Actions should create
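
Since the screenshot may be hard to read, here is a minimal workflow in the same spirit (a sketch only – the app name, secret name and deploy target are placeholders, not our exact file):

```yaml
name: Build and deploy

on:
  push:
    branches: [master]

jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Set up .NET 7
        uses: actions/setup-dotnet@v4
        with:
          dotnet-version: 7.0.x

      # Publishes the ASP.NET Core host; the React client is built as part of the publish
      - name: Publish
        run: dotnet publish -c Release -o publish

      - name: Deploy to Azure Web App
        uses: azure/webapps-deploy@v2
        with:
          app-name: captain-hack-sparrow                         # placeholder
          publish-profile: ${{ secrets.AZURE_PUBLISH_PROFILE }}  # placeholder secret
          package: publish
```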

Creating Overly Complex Gadgets to Avoid Simple Tasks
(aka How we used ChatGPT to give us documentation on Pulumi)

Nobody likes to know stuff. Everyone loves AI. What about AI that knows stuff?! Sold!! We used the AI service to provide us with documentation on how to deploy infrastructure in Azure with code.

Asking the oracle about stuff we probably already should know

From this we were able to write code that could be executed in our CI/CD.

Pulumi snippet for creating our environment: Infrastructure as code #retro

Resulting in a beautiful environment and ample time for coffee ☕️

Screen grab showing the deployed resources in Azure
Screen grab showing ChatGPT making a meme about making memes about using ChatGPT. Very meta.
Meme created from ChatGPT's instructions

⚒️ Dev and tooling ⚒️

As part of our solution we are developing a Teams app for staffing raids, which uses an Azure Function App to get Dataverse data. The toolchain for this consists of Visual Studio Code, Azure DevOps Pipelines and Azure services.

To speed up our delivery we have implemented a CI/CD pipeline that deploys the infrastructure and backend code to an Azure Function App. The process works like this:

  • We develop a new feature with Visual Studio Code
  • We push the feature to the main branch
  • The Azure pipeline is triggered and executes two stages:
      • Build stage: the .NET code is built as a deployable zip
      • Dev stage: the infrastructure is deployed with Azure Bicep, and the zip is deployed to the newly created Azure Function App

We have modularized the stage jobs into templates, so we only need nine lines of code to add another environment to our application.
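
For illustration, adding an environment then boils down to a block like this in the main pipeline (the template path and parameter values are placeholders standing in for ours):

```yaml
stages:
  # One stage template reference per environment
  - template: templates/deploy-stage.yml          # placeholder path
    parameters:
      environmentName: test                       # placeholder
      azureServiceConnection: our-connection      # placeholder
      resourceGroupName: rg-teamsapp-test         # placeholder
```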

To deploy our infrastructure with Bicep we use a DevOps Service Connection to authorize operations with the Azure CLI.

Take a look at line 49 – here we read the output variables from the Bicep deployment in order to retrieve the name of the Function App. We later use this DevOps pipeline variable in the job that deploys the code to the Azure Function App.

The infrastructure is defined in the .bicep file above, which also utilizes other Bicep modules. To avoid explicitly including secrets, we feed them from an Azure Key Vault: a link to the Key Vault secret is passed to the Azure az group create command in a JSON file, as sketched below.
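
A condensed sketch of that mechanism (the resource group, file paths and output names are placeholders, not our exact line 49):

```yaml
steps:
  - task: AzureCLI@2
    name: DeployBicep
    inputs:
      azureSubscription: our-service-connection   # placeholder
      scriptType: pscore
      scriptLocation: inlineScript
      inlineScript: |
        # The parameters file references the Key Vault secret,
        # so no secret value lives in source control
        $functionAppName = az deployment group create `
          --resource-group rg-teamsapp-dev `
          --template-file infra/main.bicep `
          --parameters '@infra/parameters.json' `
          --query properties.outputs.functionAppName.value -o tsv
        # Surface the Bicep output as a pipeline variable for later jobs
        Write-Host "##vso[task.setvariable variable=functionAppName;isOutput=true]$functionAppName"
```

A later job can then map the variable in through dependencies (e.g. $[ dependencies.<job>.outputs['DeployBicep.functionAppName'] ]) and hand it to the Function App deployment step.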

We deploy a frontend app as well. The build is made and artifacts are created for the SPFx app and for the Teams manifest. We then have a release pipeline that utilizes npm and the m365 CLI to deploy the app to the SharePoint app catalog and to Teams. The scripts are written in PowerShell.
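
A condensed sketch of such a release step, with PowerShell driving the CLI for Microsoft 365 (the credentials, package path and names are placeholders, not our actual script):

```yaml
steps:
  - pwsh: |
      # Sign in to Microsoft 365 (credentials come from secret pipeline variables)
      m365 login --authType password --userName $(M365User) --password $(M365Password)

      # Upload the SPFx package to the tenant app catalog and deploy it
      m365 spo app add --filePath ./sharepoint/solution/pirates.sppkg --appCatalogScope tenant --overwrite
      m365 spo app deploy --name pirates.sppkg --appCatalogScope tenant
    displayName: Deploy SPFx package to the app catalog
```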

For now it only displays basic pirate data from Dataverse.

Branch policies, infrastructure as code with Pulumi, and CI/CD as code with YAML

👾🤓✋🏻😎✌🏻👾

We have created two branches in our GitHub repo: a develop branch holding the code for the dev environment, and a master branch holding the code for the test and prod environments. Rules are in place requiring pull requests, and to merge code into develop another developer must approve it. A build check has also been implemented that starts automatically when a PR is opened, so we avoid merging code that would fail in CI/CD (sketched below).

GitHub branch policy
PR to master with build check
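
A minimal sketch of such a build check as workflow code (the solution path is a placeholder):

```yaml
name: PR build check

on:
  pull_request:
    branches: [develop, master]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-dotnet@v4
        with:
          dotnet-version: 6.0.x
      # The PR cannot be merged unless this build succeeds
      - run: dotnet build nINjas.sln --configuration Release   # placeholder path
```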

Folder structure

/nINjas

Contains our SPA application with React and .NET 6

/nINjas.Infrastructure

Contains ASP.NET with Pulumi

/python

Contains Python code for taking pictures with a Raspberry Pi and analysing the images

Infrastructure as code with Pulumi

Pulumi with ASP.NET

CI/CD as code with YAML


Deployment with the power of the shell

We have been working on running CI/CD in Azure DevOps, but ran into some permission issues, so deployment is done with PowerShell:

Deploying infrastructure with Pulumi commands in PowerShell
Failed deployment in Azure DevOps – with more time it would have worked!
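
The deployment itself is just a couple of Pulumi commands; sketched below as the pipeline step we were aiming for (the stack name, working directory and secret variable are placeholders):

```yaml
steps:
  - pwsh: |
      # Select the stack and deploy without interactive prompts
      pulumi stack select dev --cwd ./nINjas.Infrastructure
      pulumi up --yes --cwd ./nINjas.Infrastructure
    displayName: Deploy infrastructure with Pulumi
    env:
      PULUMI_ACCESS_TOKEN: $(PulumiAccessToken)   # secret pipeline variable
```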

The app runs on an App Service in Azure: https://ninjas.azurewebsites.net/