Today we tried to solve the head 2 head Power Apps challenge. We could not come up with a solution in time, but we managed to learn and prepare a lot about ML and Power Apps. While exploring the possibilities, we found the following option:
Lobe is a very easy-to-use product that generates models based on images you provide to the application. It has a direct integration with Power Apps. Here you can see a little bit of the UI:
Once the model was created and trained, we exported it to Power Apps Studio, where it became available as a model under AI Builder models:
The next step was actually pretty simple: by just adding the model to a Power App and running the Predict function, we got an answer from the model identifying the type of pizza selected in an image picker.
Thanks to Ahmad Najjar for the tips and the challenge, new things learned 😊
Rafael is starting to get happy with our product and recommended it to his grandpa. The issue is that he doesn’t have “BankID for Mobil” to sign the deal, which makes it necessary to add functionality for him to sign on paper, scan it, and send it in.
Using Word, we created a document that Rafael’s grandpa could sign. Using AI Builder with text recognition, we could extract this data, add the operational data to Dataverse, and store the document in a safe place.
We started by creating an AI Builder model to extract the fields from the document.
After training this model on our document, we created a flow that takes a document as input. This is where we send the PDF in, to analyze it and get the values out of it. After getting the data from the document that is needed on the contact card, the PDF is sent to Azure Blob Storage.
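In the flow, the storage step is just the built-in Azure Blob Storage action. Purely as an illustration, here is a minimal Python sketch of the same upload using the azure-storage-blob SDK; the connection string, container, and blob names are placeholders, not values from our solution:

```python
from azure.storage.blob import BlobServiceClient

# Placeholders - in the real solution the flow's Azure Blob Storage action does this step.
CONNECTION_STRING = "<storage-account-connection-string>"
CONTAINER = "signed-contracts"

def store_signed_contract(pdf_bytes: bytes, blob_name: str) -> str:
    """Upload the signed PDF and return its blob URL."""
    service = BlobServiceClient.from_connection_string(CONNECTION_STRING)
    blob = service.get_blob_client(container=CONTAINER, blob=blob_name)
    blob.upload_blob(pdf_bytes, overwrite=True)
    return blob.url

# Example usage:
# with open("signed-contract.pdf", "rb") as f:
#     url = store_signed_contract(f.read(), "signed-contract.pdf")
```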
So far so good 😊. We have made some modifications to our solution and have now established a more attractive business value, some better user interfaces, and better data integrations.
First of all, the new idea is to reduce the monitoring space and range, so we decided to change the scope and customer type to zoos. Our new brand is SmartZoo.
In each zoo we find different species of animals living together in controlled habitats. They have very different needs and behaviors, which is why we needed an adaptable system that can receive information from different IoT devices measuring different types of data.
Our platform is “device ready”: we can add new devices easily by using the concept of “device groups”, which allows us to differentiate animals by grouping IoT devices based on species.
Our final goal is to implement SmartZoo in multiple zoos around the world and get insights from all of them, so we can identify anomalies, improve facilities and procedures, and, last but not least, increase sustainability by optimizing the use of resources like electricity, food, and water.
We created a web page to present to potential customers. It responds very well to different screens and devices, and it is a good entry point for getting in touch with us. The URL is: https://smartzoo.webflow.io/
We have created a dashboard in IoT Central with data from the IoT devices (smartphones + IoT DevKit). The IoT DevKit provides temperature, pressure, and humidity data.
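For reference, here is a rough sketch of what the telemetry from a device could look like, written against the azure-iot-device Python SDK. This is an illustration only: the connection string is a placeholder, the payload fields just mirror the dashboard, and devices in IoT Central normally provision through DPS rather than connect with a raw connection string.

```python
import json
import random
import time

from azure.iot.device import IoTHubDeviceClient, Message

# Placeholder; real IoT Central devices provision through DPS first.
CONNECTION_STRING = "<device-connection-string>"

def send_habitat_telemetry() -> None:
    client = IoTHubDeviceClient.create_from_connection_string(CONNECTION_STRING)
    client.connect()
    try:
        while True:
            payload = {
                "temperature": round(random.uniform(20, 30), 1),  # degrees C
                "pressure": round(random.uniform(990, 1020), 1),  # hPa
                "humidity": round(random.uniform(40, 70), 1),     # percent
            }
            message = Message(json.dumps(payload))
            message.content_type = "application/json"
            message.content_encoding = "utf-8"
            client.send_message(message)
            time.sleep(10)
    finally:
        client.shutdown()

if __name__ == "__main__":
    send_habitat_telemetry()
```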
We are tracking the turtles’ locations and showing them on a map. We want to use this information to send alerts if turtles escape their cage. We also want to gather general information about their movement, which can be used to tell if a turtle is sick or hurt.
The smartphone (turtle) devices provide gyroscope readings (x, y, z) that tell us how the smartphone (turtle) is oriented. We use this to detect whether the turtle has been turned upside down (the worst thing that can happen to a turtle…). If the turtle (the smartphone in our case) is upside down, a rule is triggered, an email is sent to the zoo keeper, and the light on the smartphone is turned on.
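The rule itself is configured in IoT Central, but the core check is simple. Here is a tiny Python sketch of the idea, assuming an accelerometer-style z reading where roughly +9.8 m/s² means the turtle is upright and a strongly negative value means it is lying on its back; the threshold is our own assumption:

```python
UPSIDE_DOWN_THRESHOLD = -7.0  # m/s^2; assumed value, leaves headroom for sensor noise

def is_turtle_upside_down(accel_z: float) -> bool:
    """Gravity pulls z toward -9.8 m/s^2 when the device (turtle) lies on its back."""
    return accel_z < UPSIDE_DOWN_THRESHOLD

if __name__ == "__main__":
    # Sample readings: screen up, on its side, flipped over.
    for z in (9.7, 0.3, -9.6):
        state = "UPSIDE DOWN - alert the zoo keeper" if is_turtle_upside_down(z) else "ok"
        print(f"z={z:+.1f} m/s^2 -> {state}")
```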
Logic app:
Email:
Light turned on the smartphone:
Machine Learning
We are currently working on implementing machine learning, such as anomaly detection. If there are any spikes or dips in the temperature values, we want to store this information in Cosmos DB and send an alert message. This can detect abnormal behavior in any of the data we are collecting and automate the job of alerting the relevant people about what is happening.
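The detection itself is handled by Stream Analytics, as described below. Just to illustrate what we mean by a spike or dip, here is a tiny rolling check in Python that flags readings far outside the recent window; the window size and threshold are arbitrary assumptions:

```python
from collections import deque
from statistics import mean, pstdev

WINDOW = 30       # number of recent readings to compare against (assumption)
THRESHOLD = 3.0   # deviations from the mean that count as a spike/dip (assumption)

history: deque = deque(maxlen=WINDOW)

def is_anomaly(temperature: float) -> bool:
    """Flag a reading that deviates strongly from the recent window."""
    anomaly = False
    if len(history) == WINDOW:
        mu, sigma = mean(history), pstdev(history)
        if sigma > 0 and abs(temperature - mu) > THRESHOLD * sigma:
            anomaly = True
    history.append(temperature)
    return anomaly
```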
We gather temperature information from the Event Hub, send it to a Stream Analytics job to detect anomalies, and send the output to Cosmos DB:
CosmosDB:
WebApplication
We have implemented geolocation with SignalR, so the markers are shown on the map in real time. When the devices are in motion, the markers move along with them.
The front end calls the hub, so it does not need to call the controllers with a regular HTTP call each time.
Excellent user experience
Here we can include all the changes to our branding and webpage. IoT Central also got the new branding, and it is much easier to identify the different habitats per species in the dashboards.
We have also improved the responsiveness of some of the visuals in IoT Central by implementing SignalR and React.
Most extreme business Value
We tried to explain it in the overview of this post, but as we mentioned before, the real business value comes from getting data from different zoos, producing insights, and making decisions based on them.
Rock Solid Geekness
Here is an overview of technology used:
Killer App
We find it difficult to explain how it is a killer app, but we hope all the previous text is enough 😊
We at Munchmuseet are very happy to launch a new feature in the Munch visitor app: GiveMeMoreInfo!
Do our paintings spark your interest, but you would like even more information about the painting? Then we have the solution for you! GiveMeMoreInfo! gives you more information about the painting you are looking at through the application. All you need to do is take a photo of the painting, and the information is fetched automatically!
We use AI Builder to recognize the different paintings. The model has been trained using roughly 15 photos of each painting. (A bit nasty??)
We use a Power Automate flow to receive the photo from the canvas app and send it on to the AI model, which returns which painting it recognizes in the photograph taken from the canvas application.
In the future we hope to be able to use the application to:
Give tailored information based on, for example, the visitor’s age
Give information about similar paintings you might like that the museum also has on display or in its virtual collection
Suggest visits to other museums based on your interests
Show how the painting actually looked when it was created (paintings are often yellowed because the coating on top has decomposed due to UV light and the like)
After a lot of fuss we have finally set up our Raspberry Pi with Python, VS Code, an SSH connection to the git repo and, of course, a working webcam!
With the help of a small bash script and Azure’s own Python modules, the images are uploaded and analyzed within a few seconds, giving us a list of all the objects in the image. After a bit of testing we are very impressed with the precision, even though Azure insists that our clementine is an apple. The result is then sent to a Power Automate flow that updates Dataverse.
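The Python part looks roughly like this. It is a condensed sketch that calls the Computer Vision REST API directly with requests; the endpoint, key, and flow URL are placeholders rather than our real values:

```python
import sys

import requests

# Placeholders for our Computer Vision resource and the HTTP trigger
# URL of the Power Automate flow that updates Dataverse.
VISION_ENDPOINT = "https://<resource>.cognitiveservices.azure.com"
VISION_KEY = "<subscription-key>"
FLOW_URL = "<power-automate-http-trigger-url>"

def analyze_image(path: str) -> list:
    """Send the webcam photo to Computer Vision and return the detected object names."""
    with open(path, "rb") as f:
        image_bytes = f.read()
    response = requests.post(
        f"{VISION_ENDPOINT}/vision/v3.2/analyze",
        params={"visualFeatures": "Objects"},
        headers={
            "Ocp-Apim-Subscription-Key": VISION_KEY,
            "Content-Type": "application/octet-stream",
        },
        data=image_bytes,
    )
    response.raise_for_status()
    return [obj["object"] for obj in response.json().get("objects", [])]

if __name__ == "__main__":
    objects = analyze_image(sys.argv[1])
    print("Detected:", objects)
    # Hand the result to the flow, which writes it to Dataverse.
    requests.post(FLOW_URL, json={"objects": objects}).raise_for_status()
```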
Work done. Pull request awaiting approval
When the pace of work is this high, it is easy to forget proper testing or linting where it is needed, so before anything is merged into the develop branch, the changes must be approved by one of the team members. Config files and keys, for example, must not end up in the source code.
Our solution, which lets the farmer submit images for analysis to identify weeds, uses Cloudmersive’s ML/AI-based image APIs.
Excerpt from their documentation for the APIs:
We use the API via a Logic App, which comes with a handy ready-made connector.
The response from Cloudmersive comes as RGB arrays with the 5 most dominant colors. We translate and simplify these into color names (Red, Purple, Blue, Green, Yellow, Cyan) via the hue in the HSL color model. This translation is done in the Function App RGBtoHSL, see the screenshot below.
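The Function App itself is in the screenshot, but the mapping is simple enough to sketch. Here is a minimal Python version using colorsys; the exact hue boundaries are our own assumption and the real RGBtoHSL function may differ in the details:

```python
import colorsys

def rgb_to_color_name(r: int, g: int, b: int) -> str:
    """Map an RGB value (0-255 per channel) to one of six names via its hue."""
    h, _lightness, _saturation = colorsys.rgb_to_hls(r / 255, g / 255, b / 255)
    hue = h * 360  # degrees on the color wheel
    # Assumed boundaries: each primary/secondary color owns a 60-degree slice.
    if hue < 30 or hue >= 330:
        return "Red"
    if hue < 90:
        return "Yellow"
    if hue < 150:
        return "Green"
    if hue < 210:
        return "Cyan"
    if hue < 270:
        return "Blue"
    return "Purple"

# Example: dominant colors as they might come back from Cloudmersive (RGB arrays).
for rgb in [(200, 40, 30), (40, 180, 60), (90, 60, 200)]:
    print(rgb, "->", rgb_to_color_name(*rgb))
```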
Ever wondered where your dog is when there’s no food around? How about asking Cortana, Siri, Alexa, or even better, Boten Anna?
We have created an integrated solution with CI/CD in Azure DevOps, Cognitive Services, Azure Functions, and Power Virtual Agents.
The solution starts with an Azure Function written in Visual Studio.
When the code is completed, committed, and pull-requested into the main branch, our CI/CD pipeline (YAML) fires and pushes the solution to Azure Functions.
The Azure Function runs against a set of pre-captured surveillance images (due to GDPR we are not using live video). Each folder contains three photos: one from the kitchen, one from the hallway, and one from the living room.
Azure Blob storage
Each of these photos is evaluated with the Cognitive Services vision functionality. Each evaluated photo is returned with a URL for the user to check, along with a textual answer to where the dog is actually located.
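As a rough sketch of the logic (not the actual function code, and with placeholder endpoint, key, and photo URLs), the room lookup can be expressed like this in Python:

```python
import requests

# Placeholders for our Computer Vision resource and the pre-captured photos in Blob storage.
VISION_ENDPOINT = "https://<resource>.cognitiveservices.azure.com"
VISION_KEY = "<subscription-key>"
ROOM_PHOTOS = {
    "kitchen": "https://<storage>.blob.core.windows.net/snapshots/kitchen.jpg",
    "hallway": "https://<storage>.blob.core.windows.net/snapshots/hallway.jpg",
    "living room": "https://<storage>.blob.core.windows.net/snapshots/livingroom.jpg",
}

def find_the_dog() -> tuple:
    """Return (answer, photo URL) for the first room where a dog is detected."""
    for room, url in ROOM_PHOTOS.items():
        response = requests.post(
            f"{VISION_ENDPOINT}/vision/v3.2/analyze",
            params={"visualFeatures": "Objects"},
            headers={"Ocp-Apim-Subscription-Key": VISION_KEY},
            json={"url": url},
        )
        response.raise_for_status()
        names = [o["object"].lower() for o in response.json().get("objects", [])]
        if "dog" in names:
            return f"The dog is in the {room}.", url
    return "The dog was not found in any of the rooms.", ""

if __name__ == "__main__":
    answer, photo_url = find_the_dog()
    print(answer, photo_url)
```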
By integrating with SharePoint, a Function App and ACS (Azure Cognitive Services)
In our technical solution we use an integration with SharePoint to detect that a new file has been added and to retrieve that file’s contents.
Next, we have an integration with the Function App: we make a POST call against the service, which gives us 64 objects back.
These 64 objects are sent to Azure Cognitive Services for analysis of their content.
The data (the chessboard) is then processed and sent to Dataverse with a create operation.
Flow:
Using Power Automate’s cloud flows, we have created a flow that is triggered when a new file is created in SharePoint, in a folder we have called “Moves”. Using a standard flow action that fetches the contents of the file, we get a string that we can send to the function app.
CRM / Dataverse
CRM gives us a visual representation of the chess moves and a digital chessboard for verifying that the AI has picked up the correct chess move.
Visual representation of the chess moves. Digital representation of the chessboard, showing the last completed chess move.
Function App
The Function App receives a string over HTTP, cuts the image into 64 objects, and adds metadata to each object to identify its position. These objects are then sent back to the flow over HTTP.
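A rough Python sketch of the slicing step, assuming the incoming string is a base64-encoded photo of the board and that the board fills the image exactly; the names and details here are illustrative, not the actual Function App code:

```python
import base64
import io

from PIL import Image

FILES = "abcdefgh"  # chess file letters, a1 in the bottom-left corner

def split_board(image_b64: str) -> list:
    """Cut the board photo into 64 squares, tagging each with its board position."""
    board = Image.open(io.BytesIO(base64.b64decode(image_b64)))
    square_w, square_h = board.width // 8, board.height // 8
    squares = []
    for row in range(8):  # row 0 is the top of the photo, i.e. rank 8
        for col in range(8):
            box = (col * square_w, row * square_h,
                   (col + 1) * square_w, (row + 1) * square_h)
            buffer = io.BytesIO()
            board.crop(box).save(buffer, format="PNG")
            squares.append({
                "position": f"{FILES[col]}{8 - row}",  # e.g. "e4"
                "image": base64.b64encode(buffer.getvalue()).decode("ascii"),
            })
    return squares  # 64 objects, returned to the flow as JSON
```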
Power BI
We have started on the Power BI report that pulls data from each chess game. In the report below you can see who is playing against whom, the chance of winning, the chess moves, and the location of Fredrik and Mathias.
The report is under development
The Lego arm
We have started building the arm that will move the Lego pieces / chess pieces. As you can see, the arm is not yet complete, but we have gotten it to move.