Dash It Out: In the spirit of the Marauder’s Map, we have conjured a dashboard that is both visually stunning and incredibly informative. This dashboard is not just a collection of graphs and KPIs; it is a powerful tool designed to provide valuable insights.
Current Total Points Handed Out for the Semester: Much like the House Points Hourglasses in the Great Hall, this graph shows the total points awarded throughout the semester, giving us a clear view of the academic achievements and contributions of our students.
Who Awarded the Most Points: This chart reveals the professors who have awarded the most points. It highlights the dedication and encouragement provided by our esteemed faculty members, fostering a competitive yet supportive environment.
Sum of Points Awarded by House: This graph, reminiscent of the House Cup standings, displays the total points awarded to each house. It provides a visual representation of the friendly rivalry between Gryffindor, Hufflepuff, Ravenclaw, and Slytherin, motivating students to strive for excellence.
Statistics About the Professors Who Awarded Points: This report, much like the meticulous notes of Hermione Granger, details the statistics of the professors who have awarded points. It includes insights into their teaching styles, frequency of awarding points, and the impact of their encouragement on student performance.
To create this dashboard, we utilized our preferred data visualization framework, leveraging its capabilities to build a solution that is both robust and user-friendly.
We used a Resco Kanban board PCF to visualize the Activities assigned to the students in the Model-Driven Apps.
A Kanban board PCF control visualizes workflow progress by displaying tasks in different stages of completion. It can be used for project management, process optimization, and enhancing team collaboration.
We added a new tab to the Activity Overview form.
Advantages of Kanban Board PCF Components:
Visual Task Management
A Kanban board provides a clear and intuitive visual representation of tasks and their statuses. This helps users quickly understand workload distribution and progress at a glance.
Drag-and-Drop Functionality
PCF components support interactive features like drag-and-drop, making it easier to update task statuses without manually editing fields or navigating between forms.
Real-Time Updates
The Kanban board can fetch and display data in real-time from the Dataverse, ensuring that users are always working with the latest information.
Improved Collaboration
Teams can use the board to assign, track, and prioritize tasks collaboratively, leading to better alignment and accountability.
Increased Efficiency
By reducing the need for context-switching (e.g., switching between forms or views), a Kanban board improves task management efficiency within the Power Apps environment.
Enhanced User Experience
The interactive and user-friendly interface of a PCF-based Kanban board enhances user engagement and adoption, especially for non-technical users.
Task Prioritization and Tracking
The ability to sort tasks into columns (e.g., “To Do,” “In Progress,” “Done”) helps prioritize work and ensures nothing falls through the cracks.
Supports Agile Methodologies
Ideal for teams using Agile or Scrum methodologies, allowing them to visualize backlogs, sprints, and task progress directly in the Dataverse.
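For the curious: a PCF control like this implements the standard ComponentFramework lifecycle. A minimal TypeScript skeleton (illustrative only; the Resco control's internals are its own) looks roughly like this:

import { IInputs, IOutputs } from "./generated/ManifestTypes"; // generated by the PCF tooling

// Minimal PCF control skeleton; a Kanban board would render its columns
// inside `container` and redraw them on every updateView call.
export class KanbanBoard implements ComponentFramework.StandardControl<IInputs, IOutputs> {
  private container!: HTMLDivElement;

  // Called once when the control is initialized: set up the DOM for the board.
  public init(
    context: ComponentFramework.Context<IInputs>,
    notifyOutputChanged: () => void,
    state: ComponentFramework.Dictionary,
    container: HTMLDivElement
  ): void {
    this.container = container;
  }

  // Called whenever bound data changes: group the bound dataset records
  // by status and redraw the columns here.
  public updateView(context: ComponentFramework.Context<IInputs>): void {}

  // No bound outputs for a display-only board.
  public getOutputs(): IOutputs {
    return {} as IOutputs;
  }

  // Clean up DOM nodes and event listeners.
  public destroy(): void {}
}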
We also need things to happen after our build, but it is more complicated than a standard deployment: we have a server running that needs to receive a message and then make sure all new dependencies are installed and the code is restarted.
We have our usual deploy.yaml in our GitHub Actions. It makes a POST request to our own API, which needs to run some OS commands on-device for everything to update properly.
In addition, we need our IoT device to capture this request and ensure it is updated. Also, please don’t take more time than what is required!
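As a sketch (endpoint, paths, token, and the pm2 process manager are illustrative assumptions, not our exact setup), the on-device listener boils down to something like this:

import { createServer } from "http";
import { execSync } from "child_process";

// Illustrative on-device update listener: deploy.yaml POSTs here after a
// successful build. Paths, port and pm2 usage are assumptions for the sketch.
const DEPLOY_TOKEN = process.env.DEPLOY_TOKEN; // shared secret checked on each request

createServer((req, res) => {
  if (req.method !== "POST" || req.headers["x-deploy-token"] !== DEPLOY_TOKEN) {
    res.writeHead(403).end();
    return;
  }
  try {
    // Pull the new code, refresh dependencies, and restart the managed process.
    execSync("git pull && npm ci", { cwd: "/opt/app", stdio: "inherit" });
    execSync("pm2 restart app", { stdio: "inherit" });
    res.writeHead(200).end("updated");
  } catch (err) {
    res.writeHead(500).end("update failed");
  }
}).listen(8080);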
House Points for Slytherin: Helping “The Name Who Must Be Named” with Link Mobility 📞⚡
In the dark corridors of tech, where even the most powerful wizards and witches face challenges, it’s not just about pulling off the most daring magic, but about offering a hand to those in need. Sometimes, the most unexpected of allies can step in to help, and in this case, Slytherin has found itself not only achieving greatness but helping others do the same.
One of the lesser-known struggles for The Name Who Must Be Named was obtaining the right phone number from Link Mobility—a crucial piece for triggering their desired process. But as with many such hurdles, it’s not about the problem but how you solve it. That’s where we, the Slytherins, stepped in.
Lending a Helping Hand
When we discovered that The Name Who Must Be Named did not have the necessary phone number from Link Mobility, we saw this as an opportunity to lend a hand. Rather than just letting them continue their journey without the proper tools, we reached out to Link Mobility and provided our own phone number and code number. This seemingly small act paved the way for something much larger.
The Magic Behind the Flow
But, of course, magic can’t just stop there. Once we shared the phone number and code number, the next step was ensuring the flow worked seamlessly in their environment. This is where the true magic happened. 🔮
We designed and triggered a cloud flow within our tenant. Here’s how it works (a rough sketch follows the list):
Triggering the Process: Once an incoming message hits the phone number, our cloud flow takes over. It acts as a proxy between Link Mobility and The Name Who Must Be Named’s flow, ensuring that all data is properly transmitted.
Sending the Body: The body of the received message from Link Mobility is forwarded to their flow via the cloud flow, making sure everything runs smoothly and automatically.
Achieving Greatness: With the data now flowing effortlessly, they are able to achieve what they set out to do. What once appeared to be a stumbling block has now been transformed into an opportunity for success. ⚡
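Stripped of its robes, the proxy logic amounts to no more than this (sketched here as a plain HTTP handler; the real implementation is a cloud flow, and the target URL would be their flow’s HTTP trigger):

import { createServer } from "http";

// Conceptual sketch of the proxy: receive the Link Mobility callback on our
// number and forward the body unchanged to the other team's flow trigger URL.
const TARGET_FLOW_URL = process.env.TARGET_FLOW_URL!; // their HTTP-trigger URL (hypothetical)

createServer((req, res) => {
  const chunks: Buffer[] = [];
  req.on("data", (chunk) => chunks.push(chunk));
  req.on("end", async () => {
    // Forward the incoming message body as-is.
    await fetch(TARGET_FLOW_URL, {
      method: "POST",
      headers: { "content-type": req.headers["content-type"] ?? "application/json" },
      body: Buffer.concat(chunks),
    });
    res.writeHead(202).end();
  });
}).listen(8080);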
A Lesson in Collaboration
This small act serves as a reminder that the world of tech and magic isn’t just about competition or looking out for yourself. It’s about collaboration, and lending a helping hand when others need it. By sharing our phone number and providing the cloud flow proxy, we helped The Name Who Must Be Named overcome their challenge and achieve their goals. After all, Slytherin doesn’t just win—it helps others win too.
House Points for Slytherin! 🐍✨
In the end, it wasn’t just about solving a problem. It was about showing that even in a world where magical and technological solutions reign supreme, it’s always the people who make the most difference. By using cloud flows and proxy magic, we’ve not only enhanced our own capabilities but helped our fellow wizards and witches soar.
Just remember: It’s not about the victory, but about making sure everyone’s journey is as seamless as possible. And Slytherin is always here to help. 🖤
We already have access to MR (a.k.a. Magic Reality) components in Canvas Apps. Implementation is straightforward, but as expected they come with a limited set of features. The components are based on Babylon.js, and make for a quick and easy way to place and view a 3D model in an augmented reality context.
For our solution, we wanted the user to also be able to interact with the virtual objects in front of them, which is not an OOB feature, so, expressing our power user love, we decided to explore the possibilities around custom XR-enabled PCF components.
Being ACDC craftsmen, knowing the potential issues of going too far down the wrong path, we decided to do some proof of concepts, creating custom PCF components with third party XR libraries, acting like proper thieving bastards on the way.
First off, we had a look at AR.js, which is built on ARToolkit, a relatively old library. This library could provide us with wide device support, which really didn’t have that much value, considering the component would be running inside the Power Apps mobile app. We would also be forced to use either image target or marker tracking, with no modern AR spatial tracking.
Looking closer at the OOB components, we tried to find a way to leverage the OOB Babylon.js logic, hopefully being able to hook into the React Native part of the implementation, which would give great benefits in terms of access to device specific XR features (ARCore for Android and ARKit for iOS). We did, however, decide to leave this path, and focus elsewhere.
In our solution, users will be gathering ingredients using object detection in a Canvas App. The AI model used for this has been trained on objects around the conference venue, and so we wanted to enhance the connection between the app and the real world. Already having access to the user’s geolocation through the geolocation web API inside the Canvas App and any PCF components, we decided to use these data to place the active users on a 3D representation of the venue, expressing our power user love by merging 3D graphics with the OOB Canvas App elements.
We were able to find a simple volume model of the buildings on the map service Kommunekart 3D, but these data seem to be provided by Norkart and are not freely available.
Like the thieving bastards we are, we decided to scrape the 3D model off the site by fetching all the resources that looked like binary 3D data. We found the data was in B3DM format and located the buildings in one of these files. We used Blender to clean up the model, removing surrounding buildings, and exported it to the glTF 3D file format for use in a WebGL 3D context.
We decided to render the 3D model with Three.js, which let us create an HTML canvas element inside the PCF component and use its WebGL context to render the model in 3D. The canvas is continuously rendered using requestAnimationFrame under the hood, making it efficient in a browser context. The glTF model was loaded using a data URI, as a workaround for the web resource file format restrictions.
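A condensed sketch of that setup (canvas size and the model data URI are placeholders):

import * as THREE from "three";
import { GLTFLoader } from "three/examples/jsm/loaders/GLTFLoader.js";

// Sketch of the rendering setup inside the PCF component.
const canvas = document.createElement("canvas");
const renderer = new THREE.WebGLRenderer({ canvas, antialias: true });
renderer.setSize(800, 600);

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(60, 800 / 600, 0.1, 2000);
camera.position.set(0, 150, 250);
scene.add(new THREE.AmbientLight(0xffffff, 0.8));

// The glTF model is loaded from a data URI to work around
// web resource file format restrictions.
const modelDataUri = "data:model/gltf-binary;base64,..."; // placeholder
new GLTFLoader().load(modelDataUri, (gltf) => scene.add(gltf.scene));

// Continuous render loop via requestAnimationFrame under the hood.
renderer.setAnimationLoop(() => renderer.render(scene, camera));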
The coordinates from the user’s mobile device come in as geographic coordinates: longitude, latitude and altitude. The next step was to map these values relative to a known coordinate in the building, which we chose to be the main entrance. Using the main entrance’s geographic coordinates, we could convert them to Cartesian coordinates (X, Y and Z), do the same with the real-time coordinates from the user, and subtract the origin to get the offset in meters. The conversion from geographic to geocentric coordinates was done like so:
// eslint-disable-next-line @typescript-eslint/consistent-type-definitions
export type CartesianCoordinates = { x: number; y: number; z: number };
// eslint-disable-next-line @typescript-eslint/consistent-type-definitions
export type GeographicCoordinates = { lat: number; lon: number; alt: number };
// Conversion factor from degrees to radians
const DEG_TO_RAD = Math.PI / 180;
// Constants for WGS84 Ellipsoid
const WGS84_A = 6378137.0; // Semi-major axis in meters
const WGS84_E2 = 0.00669437999014; // Square of eccentricity
// Function to convert geographic coordinates (lat, lon, alt) to ECEF (x, y, z)
export function geographicToECEF(coords: GeographicCoordinates): CartesianCoordinates {
// Convert degrees to radians
const latRad = coords.lat * DEG_TO_RAD;
const lonRad = coords.lon * DEG_TO_RAD;
// Calculate the radius of curvature in the prime vertical
const N = WGS84_A / Math.sqrt(1 - WGS84_E2 * Math.sin(latRad) * Math.sin(latRad));
// ECEF coordinates
const x = (N + coords.alt) * Math.cos(latRad) * Math.cos(lonRad);
const y = (N + coords.alt) * Math.cos(latRad) * Math.sin(lonRad);
const z = (N * (1 - WGS84_E2) + coords.alt) * Math.sin(latRad);
return { x, y, z };
}
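Subtracting the entrance’s ECEF coordinates from the user’s then gives the offset in meters. Illustratively (the entrance coordinates below are placeholders, not the venue’s actual position):

// Hypothetical origin: the main entrance (coordinates are illustrative only).
const entrance = geographicToECEF({ lat: 63.43, lon: 10.4, alt: 10 });

export function offsetFromEntrance(user: GeographicCoordinates): CartesianCoordinates {
  const p = geographicToECEF(user);
  // Subtracting the origin gives the offset in meters (in the ECEF frame;
  // over short distances this is a reasonable local approximation).
  return { x: p.x - entrance.x, y: p.y - entrance.y, z: p.z - entrance.z };
}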
This gave us fairly good precision, but not without the expected inaccuracy caused by being indoors.
In our solution the current position is then represented by an icon moving around the 3D model based on the current GPS data from the device.
To connect this representation to realtime data from all the currently active users, we decided to set up an Azure SignalR Service, with an accompanying Azure Storage and Azure Function App for the backend, bringing it all to the cloud, almost like a stairway to heaven. With this setup, we could use the @microsoft/signalr package inside the PCF component, receiving connection, disconnection and location update messages broadcast from all other users, showing where they are right now.
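A sketch of the client side, assuming the @microsoft/signalr package and reusing the GeographicCoordinates type from above (URL and message names are illustrative):

import * as signalR from "@microsoft/signalr";

// Client-side sketch inside the PCF component. The Function App exposes the
// SignalR negotiate endpoint; the URL and message names below are placeholders.
const connection = new signalR.HubConnectionBuilder()
  .withUrl("https://<function-app>.azurewebsites.net/api") // negotiate endpoint (placeholder)
  .withAutomaticReconnect()
  .build();

// Broadcasts from all other active users: move or remove their icon on the 3D model.
connection.on("locationUpdate", (userId: string, coords: GeographicCoordinates) => {
  // update that user's position marker
});
connection.on("userDisconnected", (userId: string) => {
  // remove that user's marker
});

connection.start().catch(console.error);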
In our journey to streamline our development and deployment processes, we embraced the principles of low-code development. Our goal was to align our CI/CD processes with these principles, and we found the perfect solution in Microsoft Power Platform Pipelines, enhanced by the new Git integration feature. This approach provided us with business value, offering an easily maintainable ALM solution that ensured compliance, security, and privacy protection.
Setting Up Our Environment
To begin, we created a dedicated production environment for hosting our pipeline app. This step was crucial in segregating our production workloads from development and testing activities, ensuring a stable and secure environment for our live applications.
Next, we installed the Power Platform Pipelines Model-Driven App and started configuring our pipeline:
Once the pipeline was fully configured, we verified our solution by checking its connection to the pipeline. This verification step was essential to ensure that our setup was correct and that we could initiate the first deployment to our TEST environment without any issues.
Leveraging Source Control
One of the standout features of our implementation was the integration with Git. By utilizing the source control menu, we could easily view the change log and track modifications made to our solutions. This transparency was invaluable for facilitating change validation and code reviews among our developers.
In Azure DevOps, we created a new project with branches related to the respective environments.
Establishing Naming Conventions
To maintain clarity, we used prefixes on our canvas app screens and for other components within the solution. This practice helped ensure that each component was easily identifiable and organized, facilitating better management and reducing confusion.
For example, as illustrated in the image below, we aimed to standardize the naming convention for screens, containers and other controls in general. The purpose was to make it easier to reference these components later using Power Fx. We applied this practice to our canvas app, as it is a common best practice elsewhere when working with Dynamics 365 modules and related components (e.g., forms, security roles).
Adhering to Best Practices
Throughout this process, we adhered to several best practices for ALM in Power Platform:
Environment Strategy: We used separate environments for development and production, ensuring that changes were tested in DEV before deployment.
Solutions: We utilized managed solutions for production environments and unmanaged solutions for development, aligning with industry guidelines.
Source Control: Our integration with Git and the implementation of a branching strategy ensured effective version control and collaboration.
Automation: By configuring Power Platform Pipelines, we automated our deployment processes, reducing manual errors and ensuring consistency.
Governance and Security: We implemented role-based access control and ensured compliance with security protocols, protecting our data and applications.
14:41: Updated to include additional information around embedding the map in a mobile app and searching in multiple ways.
You want to see if other professors are around in the school? We found a magical map in the School form of our OwlExpress app. The Marauder’s Map uses WebSocket magic to track everyone on the school premises Right Now.
It also uses device-embedded voice recognition to understand your spells: if you want to see the map, you need to say “I solemnly swear that I’m up to no good.”
When you are done and want to hide the map, you must say: “Mischief Managed!”
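We won’t spoil all the magic, but assuming the browser’s SpeechRecognition (Web Speech) API as the mechanism, the listening charm could be as simple as this sketch (showMap/hideMap are hypothetical helpers):

// Voice-toggle sketch, assuming the Web Speech API; the actual
// implementation may differ. showMap/hideMap are hypothetical helpers.
declare function showMap(): void;
declare function hideMap(): void;

const SpeechRecognitionImpl =
  (window as any).SpeechRecognition ?? (window as any).webkitSpeechRecognition;

const recognition = new SpeechRecognitionImpl();
recognition.continuous = true; // keep listening for spells
recognition.lang = "en-US";

recognition.onresult = (event: any) => {
  const phrase = event.results[event.results.length - 1][0].transcript.toLowerCase();
  if (phrase.includes("i solemnly swear that i'm up to no good")) {
    showMap(); // reveal the Marauder's Map
  } else if (phrase.includes("mischief managed")) {
    hideMap(); // hide it again
  }
};

recognition.start();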
Do you like the glossy pixels of this map?
Mobile Map
We have also managed to embed this into a canvas app for a mobile delivery for sneaky students up to no good.
Searching in many ways
We have also introduced a glossy new feature for searching our student database in multiple magical ways: via standard text boxes, by scanning a business card, by drawing a student’s name, and by utilizing Copilot.
It’s EVIDIosa, not Leviosaaaa: thoughts on ALM and the way of work.
Claims the badge ACDC Craftsman.🚀
Claims the badge Power Of The Shell.🚀 – Please have a look at the “Azure DevOps Pipelines for CRM Solution” section
Workflow – Azure DevOps Boards
Using boards to keep track of badges and tasks. For better collaboration, progress and overview.
Scrum
Given the very short length of the project, there has been a need for tight collaboration and check-ins. In a way, we have practised a super-compressed variation of Scrum, with several daily standups each day to secure progress. In a project like this, it is particularly important to avoid banging your head against the wall for too long on one task. Regular meetings have made sure that the team has been able to maintain the best possible progress at all times, while also being a platform for contributing new ideas.
Naming conventions
Tables (Entities)
General
Change tracking should be enabled by default for all “business” entities where history might be needed.
This is because change tracking is often indispensable for troubleshooting.
Change tracking can be disabled for specific fields if necessary, e.g., when automation frequently changes irrelevant fields.
Forms
Never modify the standard form. Clone it (save as) and hide the original form from users.
Standard forms can be updated by the vendor, which may overwrite changes or disrupt updates.
Having the original form for comparison is also useful for troubleshooting.
This does not apply to custom tables.
Ensure fields like status, status reason, created by/on, modified by/on, and owner are visible on the form.
These are useful for both administrators and end-users.
Option Sets
Do not reuse “Default” option set values by giving them new names.
When Should I Use Configuration, Business Rules, Processes, Flows, JavaScript, Plugins, or Other Tools?
It is important to choose the right tool for the job.
Considering administration, maintenance, and documentation, we prioritize tools in the following order:
Standard functionality/configuration
Many things can be solved with built-in functionality, e.g., setting a field to read-only doesn’t require a Flow (a sketch after this list shows what the code alternative would cost). 😉
Business Rules
Processes/Workflows
Can run synchronously.
JavaScript
Flows
Suitable for querying Dataverse data.
Plugins
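To illustrate the point from above: making a field read-only is a single checkbox in the form designer, while the code alternative is a form script. A hypothetical sketch using the Client API (the field name is made up):

// Illustrative form script using the Dataverse Client API: the code
// equivalent of a single "read-only" checkbox in the form designer.
// The field name new_housepoints is hypothetical.
export function onLoad(executionContext: Xrm.Events.EventContext): void {
  const formContext = executionContext.getFormContext();
  const control = formContext.getControl("new_housepoints") as Xrm.Controls.StandardControl;
  control?.setDisabled(true);
}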
Solutions
Unmanaged in development:
We have not yet decided on a solution model but will likely use a base package initially and then move to feature-based solutions.
Managed in test and production.
Deployment
All changes are deployed via pipelines in DevOps.
NO MANUAL STEPS/ADJUSTMENTS IN TEST OR PRODUCTION.
All data referenced by automation (e.g., Flow) must be scripted and inserted programmatically (preferably via pipeline) to ensure GUID consistency across environments.
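A sketch of what that means in practice, assuming the Dataverse Web API (table name and GUID are made up): a PATCH against an explicit ID performs an upsert, so the record gets the same GUID in every environment.

// Illustrative: upserting reference data with a fixed GUID via the Dataverse
// Web API so that Flows can rely on the same ID in every environment.
// The table (new_houses) and GUID are made up for the example.
const HOUSE_ID = "11111111-1111-1111-1111-111111111111";

export async function upsertHouse(baseUrl: string, token: string): Promise<void> {
  // PATCH against an explicit ID is an upsert: create if missing, update if present.
  const response = await fetch(`${baseUrl}/api/data/v9.2/new_houses(${HOUSE_ID})`, {
    method: "PATCH",
    headers: {
      Authorization: `Bearer ${token}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ new_name: "Slytherin" }),
  });
  if (!response.ok) throw new Error(`Upsert failed: ${response.status}`);
}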
Application Lifecycle Management
Why ALM?
Application Lifecycle Management (ALM) enables structured development, testing, and deployment processes, ensuring quality and reducing risks.
It is a consistent way of deploying features
You get equal environments
Better collaboration
History
Common way of working
The whole team is always up to date with changes in development
Overall solution
A diagram showing the overall deployment cycle for our CRM Solution.
Solution Strategy
We use a single-solution strategy: all changes that are ready are added to the solution. Fewer dependencies and less complexity are preferable when working within such a time span, and allow for smoother collaboration. The solution is still not overly complex, so it makes sense to gather all components in a single solution. As mentioned earlier, moving to feature-based solutions should be considered over time.
Service Principals
Service Principals are used between Azure DevOps and each environment to ensure an industry-standard connection to Dataverse without exposing credentials in the code. These are the names of our Service Connections:
hogverse-dev
hogverse-validation
hogverse-test
hogverse-prod
In honor of the great ALM Wizard 👉Benedikt Bergmann👈, the Service Connections are configured with the new Workload Identity federation feature for ADO connections. This eliminates the need for managing and renewing secrets, reducing the risk of pipeline failures due to expired credentials. Credentials are not exposed in code or stored in repositories.
Setup of Workload Identity federation for ADO Connection
App Registration: Registering app registrations in Microsoft Entra ID.
Dataverse: Adding the app registration as an application user in our target Dataverse environment, assigning it the System Administrator role.
ADO Service Connection: Creating a service connection in Azure DevOps, linking it to the Dataverse instance using Workload Identity Federation.
Adding Federated Credentials: Configuring the app registration to recognize the ADO service connection by setting up federated credentials with the correct issuer and subject identifier.
Entra ID:
Environments
DEV: The Room of Requirement – the development environment for Hogverse.
Validation: The Chamber of Truth – the validation environment to check changes before merging to our main branch.
TEST: The Restricted Section – the UAT test environment for Hogverse.
PROD: The Great Hall – the production environment for Hogverse.
Pipelines
Our ALM solution for deploying changes in Power Platform and CRM is the following:
Export – The Export Pipeline retrieves the changes from the “DEV: The Room of Requirement” environment and creates a pull request.
Build – The Build Pipeline packs the changes in the pull request branch and deploys them to “Validation: The Chamber of Truth” to see if something breaks before merging to our main branch. When this completes successfully, it creates a Release that can be deployed with the Release Pipeline.
Release – The Release Pipeline deploys the changes to “TEST: The Restricted Section” and “PROD: The Great Hall” after a user in the team has approved the release, using Approvals and checks in Azure DevOps Pipelines.
Approval and checks for “TEST: The Restricted Section”
Approval and checks for “PROD: The Great Hall”
Repository Settings:
Settings for the Build Service User.
Settings and requirements for merging changes to our main branch that go to production.
Azure DevOps Pipelines for CRM Solution
Claims the badge Power Of The Shell.🚀
Release Pipeline
The Release Pipeline is triggered by the Build Pipeline and deploys the solution to “TEST: The Restricted Section” and “PROD: The Great Hall”. Below is the YAML code for the release pipeline.
trigger: none
pr: none

resources:
  pipelines:
    - pipeline: Build # reference to the build pipeline
      source: Build # name of the build pipeline to trigger from
      trigger:
        branches:
          include:
            - main

stages:
  - stage: DeployTest
    displayName: "Deploy to Test"
    jobs:
      - deployment: deployTest # use a deployment job for the environment reference
        environment: "The Restricted Section - Test" # the TEST environment in Azure DevOps
        pool:
          vmImage: "windows-latest"
        strategy:
          runOnce:
            deploy:
              steps:
                - checkout: self
                - download: Build # pipeline resource identifier
                  artifact: drop
                - task: PowerPlatformToolInstaller@2
                  displayName: "Power Platform Tool Installer"
                - task: PowerPlatformImportSolution@2
                  inputs:
                    authenticationType: "PowerPlatformSPN"
                    PowerPlatformSPN: "hogverse-test"
                    SolutionInputFile: '$(Pipeline.Workspace)\Build\drop\Solutions\HogverseBasis.zip'
                    AsyncOperation: true
                    MaxAsyncWaitTime: "60"
                - task: PowerPlatformPublishCustomizations@2
                  inputs:
                    authenticationType: "PowerPlatformSPN"
                    PowerPlatformSPN: "hogverse-test"
                    AsyncOperation: true
                    MaxAsyncWaitTime: "60"

  - stage: DeployProd
    displayName: "Deploy to Production"
    dependsOn: DeployTest # depends on a successful deployment to Test
    condition: succeeded()
    jobs:
      - deployment: deployProd # use a deployment job for the environment reference
        environment: "The Great Hall - Prod" # the PROD environment in Azure DevOps
        pool:
          vmImage: "windows-latest"
        strategy:
          runOnce:
            deploy:
              steps:
                - checkout: self
                - download: Build # pipeline resource identifier
                  artifact: drop
                - task: PowerPlatformToolInstaller@2
                  displayName: "Power Platform Tool Installer"
                - task: PowerPlatformImportSolution@2
                  inputs:
                    authenticationType: "PowerPlatformSPN"
                    PowerPlatformSPN: "hogverse-prod"
                    SolutionInputFile: '$(Pipeline.Workspace)\Build\drop\Solutions\HogverseBasis.zip'
                    AsyncOperation: true
                    MaxAsyncWaitTime: "60"
                - task: PowerPlatformPublishCustomizations@2
                  inputs:
                    authenticationType: "PowerPlatformSPN"
                    PowerPlatformSPN: "hogverse-prod"
                    AsyncOperation: true
                    MaxAsyncWaitTime: "60"
Issues
We had this problem during the event, and unfortunately we did not get to run our pipelines the way we wanted.
We tried a workaround, but that ended up with… a Self-Hosted Agent.
It looked good all the way through the installation aaaaand in the final step it was blocked by admin rights at the company…. You need admin rights…
ALM for Fabric
We have implemented deployment pipelines for deploying changes between workspaces so that the reports can be tested and verified before going into production.
Hogverse Deployment Pipelines for deploying items between workspaces.
ALM for Svelte and Front end
The Svelte project is for now hosted in a private GitHub repository shared between the developers. Each developer creates their own branch for each new feature. When a feature is ready, a pull request is created and approved by others on the team. On approval, the branches are merged and the feature branch is normally deleted to ensure a clean project at all times.
With more time on our hands, we would have preferred to import the repository to Azure DevOps and create pipelines for Dev, Validation, Test and Prod, as for the CRM solution.
Bicep honourable mention and hot tip
The Azure resources used in the project have mostly been created on the fly as part of experimentation, but we would most definitely have created Bicep files for deploying them to each environment as well. Microsoft MVP 👉Jan Vidar Elven👈 has created a super useful public repository with templates for deploying resources on his GitHub account: https://github.com/JanVidarElven/workshop-get-started-with-bicep
As part of the gamification experience of our app, we wanted to incorporate the ability to cast spells. The purpose of these spells was to gain an advantage over a seemingly superior opponent. Spellcasting enabled the user to be creative and test various spells to see their impact on the game.
This feature was solved by creating a Power Automate flow. The flow utilized the Azure Cognitive Services API and its speech-to-text service. The user is prompted to use the phone microphone to record a spell, and the flow sends back a text version and a response based on the chosen spell. In addition, the flow utilized an open API for converting the sound file to the appropriate format for further processing. Below is the outlined flow:
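As a rough sketch of the recognition step (region, key and file handling are our illustrative assumptions; the actual call is made from the flow):

import { readFile } from "fs/promises";

// Illustrative: the speech-to-text step as a single REST call to the Azure
// Cognitive Services short-audio endpoint. Region, key and audio source are
// placeholders; our flow makes the equivalent HTTP request.
async function recognizeSpell(audioPath: string): Promise<string> {
  const audio = await readFile(audioPath); // expects WAV, 16 kHz mono PCM
  const response = await fetch(
    "https://westeurope.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?language=en-US",
    {
      method: "POST",
      headers: {
        "Ocp-Apim-Subscription-Key": process.env.SPEECH_KEY ?? "",
        "Content-Type": "audio/wav; codecs=audio/pcm; samplerate=16000",
      },
      body: audio,
    }
  );
  const result = (await response.json()) as { DisplayText?: string };
  return result.DisplayText ?? ""; // e.g. "Expelliarmus"
}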
Below is also a demo video, illustrating the feature in effect: