Conducting surveys often involves tedious typing, which can be challenging, especially for students. To make the process easier, we're leveraging Azure Speech Recognition in Power Pages to transcribe spoken responses directly into text fields: students can simply speak their answers instead of typing them.
How It Works
Connecting Azure Speech SDK

To enable speech recognition, we connect the Azure Cognitive Services Speech SDK to our Power Pages site using the script below.
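The snippet below is a minimal sketch: it loads the Speech SDK browser bundle from Microsoft's CDN and creates a speech configuration. The subscription key and region are placeholders for the values from your Azure Speech resource.

```html
<script src="https://aka.ms/csspeech/jsbrowserpackageraw"></script>
<script>
  // Subscription key and region are placeholders for your Azure Speech resource.
  const speechConfig = SpeechSDK.SpeechConfig.fromSubscription(
    "YOUR_SPEECH_KEY",
    "YOUR_REGION"
  );
  speechConfig.speechRecognitionLanguage = "en-US";
</script>
```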
HTML Setup for Speech Input

We added a microphone button and a text area to capture and display the transcribed response; a sketch of the interface follows the list below.
– Clicking the microphone button starts recording.
– The spoken response is transcribed into the text area.
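Here is a rough sketch of that interface, assuming the speechConfig created above; the element IDs are illustrative:

```html
<textarea id="responseText" rows="4" placeholder="Your answer..."></textarea>
<button id="micButton" type="button">🎤 Speak</button>

<script>
  document.getElementById("micButton").addEventListener("click", () => {
    // Capture audio from the default microphone.
    const audioConfig = SpeechSDK.AudioConfig.fromDefaultMicrophoneInput();
    const recognizer = new SpeechSDK.SpeechRecognizer(speechConfig, audioConfig);

    // Recognize a single utterance and append it to the text area.
    recognizer.recognizeOnceAsync((result) => {
      if (result.reason === SpeechSDK.ResultReason.RecognizedSpeech) {
        document.getElementById("responseText").value += result.text + " ";
      }
      recognizer.close();
    });
  });
</script>
```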
Saving Responses and Navigating

Once a student provides their answer, clicking the Save & Next button saves the response and moves to the next question. Here's how it works:
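A sketch of the handler, assuming the safeAjax wrapper from the Power Pages Web API documentation is included on the page; the table and column names, currentQuestionId, and the loadNextQuestion helper are placeholders for our actual setup:

```javascript
document.getElementById("saveNextButton").addEventListener("click", () => {
  const answer = document.getElementById("responseText").value;

  // Save the response via the Power Pages Web API (entity and column
  // names below are illustrative placeholders).
  webapi.safeAjax({
    type: "POST",
    url: "/_api/cr123_surveyresponses",
    contentType: "application/json",
    data: JSON.stringify({
      cr123_answer: answer,
      "cr123_Question@odata.bind": `/cr123_surveyquestions(${currentQuestionId})`,
    }),
    success: () => {
      document.getElementById("responseText").value = "";
      loadNextQuestion(); // hypothetical helper that renders the next question
    },
  });
});
```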
Benefits
Ease of Use: Students can focus on their answers without worrying about typing.
Efficiency: Responses are saved automatically, and the survey flows smoothly.
Accessibility: Ideal for students with typing difficulties or those who prefer speaking.
By combining Azure Speech Services with Power Pages, we’re simplifying the survey process and improving the overall experience for users. Speech technology makes surveys faster, easier, and more engaging!
Part 1: Automating Azure Function App with Durable Functions and CI/CD Pipelines
In our cloud infrastructure, we have designed and implemented an Azure Function App that utilizes Azure Durable Functions to automate a checklist validation process. The function operates in a serverless environment, ensuring scalability, reliability, and efficiency.
To achieve this, we:
– Use Durable Functions for long-running workflows and parallel execution.
– Implement a timer-triggered function that regularly checks for missing documents.
– Deploy using Azure DevOps CI/CD Pipelines for automated deployments and testing.
This post covers Azure Function App architecture, Durable Functions, and our CI/CD pipeline implementation.
🔹 Azure Durable Functions: Why We Chose Them
Our workflow involves:
– Retrieving all checklists from SharePoint.
– Processing them in parallel to check for missing documents.
– Updating the checklist if documents are missing.
We use Azure Durable Functions because:

– Stateful Execution – Remembers past executions.
– Parallel Execution – Checks multiple users simultaneously.
– Resilient and Reliable – Handles failures gracefully.
– Scales Automatically – No need to manage servers.
How Our Durable Function Works
Timer-Triggered Function: Initiates the Orchestrator
This function triggers every 5 minutes, calling the orchestrator.
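Here is an illustrative .NET sketch of the timer trigger and the orchestrator it starts; the function, activity, and type names are placeholders, not our production code:

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;
using Microsoft.Extensions.Logging;

public record ChecklistItem(string StudentId, string[] RequiredDocuments);

public static class ChecklistValidation
{
    // Runs every 5 minutes and starts a new orchestration instance.
    [FunctionName("ChecklistTimerStart")]
    public static async Task TimerStart(
        [TimerTrigger("0 */5 * * * *")] TimerInfo timer,
        [DurableClient] IDurableOrchestrationClient starter,
        ILogger log)
    {
        string instanceId = await starter.StartNewAsync("ChecklistOrchestrator", null);
        log.LogInformation("Started checklist orchestration {InstanceId}", instanceId);
    }

    // Fans out over all checklists and processes them in parallel (fan-out/fan-in).
    [FunctionName("ChecklistOrchestrator")]
    public static async Task Orchestrator(
        [OrchestrationTrigger] IDurableOrchestrationContext context)
    {
        var checklists = await context.CallActivityAsync<List<ChecklistItem>>(
            "GetAllChecklists", null);

        var tasks = new List<Task>();
        foreach (var checklist in checklists)
        {
            tasks.Add(context.CallActivityAsync("ProcessChecklistItem", checklist));
        }
        await Task.WhenAll(tasks);
    }
}
```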
Each activity function is responsible for a specific task.
📌 Get All Checklists
Retrieves all checklists from SharePoint.
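A sketch of this activity, with the SharePoint call behind a hypothetical SharePointClient helper (the actual retrieval logic is not shown here):

```csharp
[FunctionName("GetAllChecklists")]
public static async Task<List<ChecklistItem>> GetAllChecklists(
    [ActivityTrigger] object input,
    ILogger log)
{
    // Placeholder: query the SharePoint checklist list (e.g., via its REST
    // API) and map the rows to ChecklistItem objects.
    List<ChecklistItem> checklists = await SharePointClient.GetChecklistsAsync();
    log.LogInformation("Retrieved {Count} checklists", checklists.Count);
    return checklists;
}
```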
📌 Process Individual Checklist Items
What It Does:
Retrieves missing documents for a user.
Updates the SharePoint checklist accordingly.
Handles errors and retries if needed.
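A sketch of this activity (ChecklistItem and SharePointClient are the same placeholders as above; add using System.Linq for Except). Retries are configured by the orchestrator, e.g. via context.CallActivityWithRetryAsync with RetryOptions, rather than inside the activity itself:

```csharp
[FunctionName("ProcessChecklistItem")]
public static async Task ProcessChecklistItem(
    [ActivityTrigger] ChecklistItem checklist,
    ILogger log)
{
    // Compare required documents against what the student has uploaded.
    var uploaded = await SharePointClient.GetUploadedDocumentsAsync(checklist.StudentId);
    var missing = checklist.RequiredDocuments.Except(uploaded).ToList();

    if (missing.Count > 0)
    {
        // Flag the missing documents on the student's SharePoint checklist.
        await SharePointClient.UpdateChecklistAsync(checklist.StudentId, missing);
        log.LogWarning("Student {StudentId} is missing {Count} documents",
            checklist.StudentId, missing.Count);
    }
}
```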
Part 2: Automating Deployments with Azure DevOps CI/CD Pipelines
To ensure seamless deployment and updates, we use Azure DevOps Pipelines.
📌 CI/CD Pipeline Breakdown
– Build Stage – Runs dotnet build and dotnet test.
– Deploy Stage – Uses Bicep templates (main.bicep) for infrastructure-as-code deployment.
🔹 Azure DevOps Pipeline (azure-pipelines.yml)
We use Azure CLI and Bicep for automated Azure Function deployment.
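A trimmed-down sketch of the pipeline; the service connection, resource group, and SDK version are placeholders:

```yaml
trigger:
  - main

pool:
  vmImage: 'ubuntu-latest'

stages:
  - stage: Build
    jobs:
      - job: BuildAndTest
        steps:
          - task: UseDotNet@2
            inputs:
              packageType: 'sdk'
              version: '8.x'   # placeholder SDK version
          - script: dotnet build --configuration Release
            displayName: 'dotnet build'
          - script: dotnet test
            displayName: 'dotnet test'

  - stage: Deploy
    dependsOn: Build
    jobs:
      - job: DeployInfra
        steps:
          - task: AzureCLI@2
            inputs:
              azureSubscription: 'my-service-connection'   # placeholder
              scriptType: 'bash'
              scriptLocation: 'inlineScript'
              inlineScript: |
                az deployment group create \
                  --resource-group my-rg \
                  --template-file main.bicep
```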
main.bicep
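A minimal sketch of what main.bicep provisions: a storage account, a consumption plan, and the Function App itself (names and API versions are illustrative):

```bicep
param location string = resourceGroup().location
param appName string = 'checklistfunc'   // placeholder name

resource storage 'Microsoft.Storage/storageAccounts@2022-09-01' = {
  name: '${appName}sa'
  location: location
  sku: {
    name: 'Standard_LRS'
  }
  kind: 'StorageV2'
}

resource plan 'Microsoft.Web/serverfarms@2022-09-01' = {
  name: '${appName}-plan'
  location: location
  sku: {
    name: 'Y1'        // consumption (serverless) plan
    tier: 'Dynamic'
  }
}

resource functionApp 'Microsoft.Web/sites@2022-09-01' = {
  name: appName
  location: location
  kind: 'functionapp'
  properties: {
    serverFarmId: plan.id
    siteConfig: {
      appSettings: [
        {
          name: 'AzureWebJobsStorage'
          value: 'DefaultEndpointsProtocol=https;AccountName=${storage.name};AccountKey=${storage.listKeys().keys[0].value}'
        }
        {
          name: 'FUNCTIONS_EXTENSION_VERSION'
          value: '~4'
        }
        {
          name: 'FUNCTIONS_WORKER_RUNTIME'
          value: 'dotnet'
        }
      ]
    }
  }
}
```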
By leveraging Azure Durable Functions, we transformed a manual checklist validation process into an automated, scalable, and highly resilient system.
With Azure DevOps CI/CD, we now have a fully automated deployment pipeline, ensuring high reliability and faster releases. 💡 Next, we will discuss the business logic, SharePoint interactions, and integrations in a dedicated post. Stay tuned!
After the student is allocated to the new Faculty, the Wayfinder Academy provides hyper care by allowing students to verbally communicate with the voice digital twin of their mentor.
On the technology side, we are using:

1. LiveKit – handles real-time audio communication. Students join rooms via the LiveKit SDK embedded in the Next.js frontend.
2. ChatGPT Voice Assistant – processes voice input through a pipeline:
– STT (speech-to-text) to process audio streams and convert them to text
– LLM (GPT-4) to generate intelligent responses
– TTS (text-to-speech) to speak the responses back
3. Next.js application – serves as the frontend:
– SSR ensures fast loading and API integration
– connects students to LiveKit rooms and the assistant, displaying responses or playing them as audio (see the connection sketch below)
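For illustration, joining a room from the Next.js side looks roughly like this with livekit-client; the token API route is a placeholder:

```typescript
import { Room, RoomEvent } from "livekit-client";

async function joinMentorRoom(studentName: string) {
  // Fetch an access token from a Next.js API route (placeholder URL).
  const res = await fetch(`/api/livekit-token?identity=${studentName}`);
  const { token, url } = await res.json();

  const room = new Room();
  room.on(RoomEvent.TrackSubscribed, (track) => {
    // Play the assistant's audio as it arrives.
    if (track.kind === "audio") {
      document.body.appendChild(track.attach());
    }
  });

  await room.connect(url, token);
  // Publish the student's microphone so the assistant can hear them.
  await room.localParticipant.setMicrophoneEnabled(true);
  return room;
}
```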
Here are more details on the backend part:
Transcription Handler:
An entrypoint function connects to the LiveKit room, handles track subscriptions, and initialises the assistant:
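Here is a condensed sketch based on the LiveKit Agents Python quickstart (0.x API); the model name, greeting, and system prompt are placeholders for our actual configuration:

```python
from livekit.agents import AutoSubscribe, JobContext, WorkerOptions, cli, llm
from livekit.agents.voice_assistant import VoiceAssistant
from livekit.plugins import openai, silero


async def entrypoint(ctx: JobContext):
    # Connect to the LiveKit room, subscribing to audio tracks only.
    await ctx.connect(auto_subscribe=AutoSubscribe.AUDIO_ONLY)

    # System prompt for the mentor's voice digital twin (placeholder text).
    initial_ctx = llm.ChatContext().append(
        role="system",
        text="You are the voice digital twin of the student's mentor.",
    )

    # STT -> LLM -> TTS pipeline with voice activity detection.
    assistant = VoiceAssistant(
        vad=silero.VAD.load(),           # voice activity detection
        stt=openai.STT(),                # speech-to-text
        llm=openai.LLM(model="gpt-4"),   # response generation
        tts=openai.TTS(),                # text-to-speech
        chat_ctx=initial_ctx,
    )
    assistant.start(ctx.room)

    await assistant.say("Hi! How can I help you today?", allow_interruptions=True)


if __name__ == "__main__":
    cli.run_app(WorkerOptions(entrypoint_fnc=entrypoint))
```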
When working on Power Pages, we created a Core JavaScript file to streamline development. This file contains reusable methods used across different pages.
For example, we use the Send function across pages to send a GET request:
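A simplified sketch of such a helper; the endpoint in the usage example is a placeholder for one of the Dataverse tables Power Pages exposes under /_api/:

```javascript
// Reusable GET helper from the Core JavaScript file (simplified sketch).
function Send(url, onSuccess, onError) {
  fetch(url, {
    method: "GET",
    headers: { Accept: "application/json" },
  })
    .then((response) => {
      if (!response.ok) throw new Error(`Request failed: ${response.status}`);
      return response.json();
    })
    .then(onSuccess)
    .catch(onError || console.error);
}

// Usage: fetch survey questions (entity set name is a placeholder).
Send("/_api/cr123_surveyquestions", (data) => console.log(data.value));
```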
“SharePoint: The Room of Requirement for files—sometimes it’s there, sometimes it’s not, and sometimes it’s full of things you didn’t ask for”
So Oleksii the Oracle, dreaming of following in the footsteps of one of our honorable judges and becoming the "World Champion of SharePoint" (we admire your work, @mikaell), went to explore, learn, and share.
In our Wayfinder Academy, we collect and manage a lot of data, and everyone knows how hard it is to remember to hand every important document to the faculty administration, or to recall where a given file lives. To fix this problem for the students of Hogwarts, we use SharePoint as a back office for faculty administrators and our academy workers.
For this purpose, we maintain a checklist that tracks whether each student has provided the documents required by faculty administrators.
We also have a document set that stores data from students. Student files are uploaded to the corresponding folders through the out-of-the-box integration between Dataverse and SharePoint.
Checklist processing is handled by a timer-triggered Azure Durable Function that crawls all the students in the checklist and searches the library for missing documents. If the app finds any missing documents, it updates the checklist, so faculty administrators can see which documents are missing and act accordingly (exclude the student... joking!).
The Azure Function is written in .NET and hosted in Azure. Its configuration variables are stored as Azure Function App settings.
We would also like to set up an Azure DevOps project for the repository and CI/CD of our Azure Functions. Stay tuned, more is coming!! Woo hooooo! This is so much fun!!!!
So, with Yurii the Wise, we are connecting our Bluetooth device to the internet.
This article describes implementing an IoT device: a Bluetooth-to-MQTT gateway. The idea is to build a device with both Bluetooth and Wi-Fi connectivity, so that all data captured over BLE can be streamed to the IoT gateway.
The controller is an M5StickC Plus2, based on the ESP32 system-on-module.
We implemented the firmware in the Arduino IDE, in C++ using the Arduino framework. The firmware can create a fallback Wi-Fi access point for entering the Wi-Fi and MQTT connection details. We also flipped the bit to turn on the Bluetooth proxy for the oximeter automatically and turned on the screen to render average SpO2 and BPM values.
How does it work?
We used the following external libraries:
– PulseOximeterLib – interacts with the Pulse Oximeter device and also provides the parsed pulse data.
– WiFiManager – facilitates onboarding: the device creates a fallback access point where the Wi-Fi and MQTT connections can be configured.
We are building the firmware around the single-responsibility principle, encapsulating the logic for each peripheral device and keeping the main.ino file as clean as possible; a rough sketch follows. Unfortunately, making everything work within a couple of days was impossible, and the result was unstable. But stay tuned; we will release the repo with working code soon.
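A simplified sketch of how the pieces fit together; the AP name, broker, and topics are placeholders, and this is not the released firmware:

```cpp
#include <WiFi.h>
#include <WiFiManager.h>    // tzapu/WiFiManager: fallback AP for onboarding
#include <PubSubClient.h>   // MQTT client

WiFiClient espClient;
PubSubClient mqtt(espClient);

char mqttHost[40] = "broker.example.com";  // placeholder default

void setup() {
  // With no saved credentials, WiFiManager starts a fallback access point
  // where the Wi-Fi and MQTT settings can be entered from a phone.
  WiFiManager wm;
  WiFiManagerParameter mqttParam("mqtt", "MQTT broker", mqttHost, 40);
  wm.addParameter(&mqttParam);
  wm.autoConnect("BLE-MQTT-Gateway");

  strncpy(mqttHost, mqttParam.getValue(), sizeof(mqttHost));
  mqtt.setServer(mqttHost, 1883);
}

void loop() {
  if (!mqtt.connected()) {
    mqtt.connect("ble-gateway");  // client id is a placeholder
  }
  mqtt.loop();

  // In the real firmware, a separate BLE module reads SpO2/BPM from the
  // pulse oximeter and hands parsed values to a publishing step like:
  // mqtt.publish("wayfinder/oximeter", payloadJson);
}
```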
In a previous article, we mentioned that our Logiquill platform aims to help students unleash their potential using insightful information about their background, current aspirations, and vision of the future. This type of data lets us build preliminary suggestions, but how can we make them more precise?
For our creative minds, the first and obvious idea was to use wearable devices like smart rings and smartwatches to capture physical metrics during the review sessions.
Which metrics are available to fetch from the wearable devices?
– Daily physical activity (sleep quality, number of steps, etc.)
– Bio: heart rate, EKG, blood pressure, oxygenation
To demonstrate this type of integration during the event, we used a portable Pulse Oximeter to capture real-time health data (come and try it out, by the way :))
This device has a built-in Bluetooth interface that exposes all available sensors, such as isFIngerExists, isCalibrated, BPM, Oxygenation, and Pleth. So we can use the HTML5 Web Bluetooth API to make it work with our portal app and capture health data during students' review sessions. The device uses the BCI protocol.
Let us add a few details regarding Bluetooth devices:
According to the Bluetooth specification, each device must implement at least one service (Audio, UART, Media Player Control, etc.). Each service can have its own characteristics (volume, speed, weight, etc.).
In our case, the Pulse Oximeter device has service ID 49535343-fe7d-4ae5-8fa9-9fafd205e455 and characteristic ID 49535343-1e4d-4bd9-ba61-23c647249616. Here is an example of JS code that parses the Bluetooth packets:
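A sketch using the Web Bluetooth API; the UUIDs are the ones above, while the byte offsets reflect our reading of the BCI packet layout and should be treated as an assumption rather than a reference:

```javascript
const SERVICE_ID = "49535343-fe7d-4ae5-8fa9-9fafd205e455";
const CHARACTERISTIC_ID = "49535343-1e4d-4bd9-ba61-23c647249616";

async function connectOximeter() {
  const device = await navigator.bluetooth.requestDevice({
    filters: [{ services: [SERVICE_ID] }],
  });
  const server = await device.gatt.connect();
  const service = await server.getPrimaryService(SERVICE_ID);
  const characteristic = await service.getCharacteristic(CHARACTERISTIC_ID);

  characteristic.addEventListener("characteristicvaluechanged", (event) => {
    const data = new Uint8Array(event.target.value.buffer);
    // BCI frames are 5 bytes; the offsets below are our decoding (assumption).
    const pleth = data[1];
    const bpm = ((data[2] & 0x40) << 1) | (data[3] & 0x7f);
    const spo2 = data[4] & 0x7f;
    console.log({ bpm, spo2, pleth });
  });

  await characteristic.startNotifications();
}
```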
The final result looks like this, and we are loving it! We hope you will too 🙂