Okay, we do agree that the solution that won is a bit better than this one, but that is exactly why we think this counts as a dirty hack! Instead of doing what you are supposed to do with the challenge, we ended up creating a canvas app that can run offline and adding that to the Power Page. It still solves the challenge, doesn't it?
And you might ask how we got this brilliant idea. What is a more innovative use of AI than asking it the questions you don't have the answer to yourself? So we asked ChatGPT how we could solve this challenge, and this is the answer we got:
We get satellite images from the Sentinel Hub API, which lets us request images for the coordinates we want. This makes it easy to compare the images with AIS data from the same time period and coordinates.
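The post doesn't show the request itself, but as a rough sketch, the Sentinel Hub Process API can be queried with a bounding box and a time range roughly like this. The bounding box, time range, evalscript, and token handling are simplified assumptions, not the team's actual code:

```python
import requests

# Placeholder token -- obtained separately via Sentinel Hub's OAuth client-credentials flow
TOKEN = "<sentinel-hub-oauth-token>"

# Illustrative bounding box (lon/lat) and one-day time range to line up with the AIS query
payload = {
    "input": {
        "bounds": {"bbox": [10.50, 59.40, 10.70, 59.55]},
        "data": [{
            "type": "sentinel-2-l2a",
            "dataFilter": {"timeRange": {
                "from": "2023-02-01T00:00:00Z",
                "to": "2023-02-02T00:00:00Z",
            }},
        }],
    },
    "output": {
        "width": 512,
        "height": 512,
        "responses": [{"identifier": "default", "format": {"type": "image/png"}}],
    },
    # Minimal true-colour evalscript (red/green/blue bands)
    "evalscript": """
        //VERSION=3
        function setup() { return { input: ["B04", "B03", "B02"], output: { bands: 3 } }; }
        function evaluatePixel(s) { return [s.B04, s.B03, s.B02]; }
    """,
}

resp = requests.post(
    "https://services.sentinel-hub.com/api/v1/process",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=payload,
    timeout=60,
)
resp.raise_for_status()

# Save the returned image for the ship-detection step
with open("satellite_tile.png", "wb") as f:
    f.write(resp.content)
```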
Satellite image of ships
Counting ships
To identify ships in the satellite images, we need more eyes than we have available, so what's better than letting the machines do the work? We have developed our own ML code in Python to recognize ships, which is far more efficient than reviewing all the images manually. Currently, the model recognizes ships with 99% confidence. It creates a heat map showing where it thinks (with 99% probability) there are ships, then counts the number of ships and sends that on.
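The custom ML code isn't included in the post; as a minimal sketch of just the counting step, assume the detector returns bounding boxes with confidence scores and that only detections above the 99% threshold mentioned above are counted (the detector itself and the field names are illustrative):

```python
from dataclasses import dataclass

@dataclass
class Detection:
    x: float           # normalized box coordinates (illustrative)
    y: float
    width: float
    height: float
    confidence: float  # model confidence that the box contains a ship

CONFIDENCE_THRESHOLD = 0.99  # matches the 99% confidence mentioned in the post

def count_ships(detections: list[Detection]) -> int:
    """Count detections the model is at least 99% sure are ships."""
    return sum(1 for d in detections if d.confidence >= CONFIDENCE_THRESHOLD)

# Example: three candidate boxes, two confident enough to count as ships
sample = [
    Detection(0.12, 0.40, 0.05, 0.02, 0.995),
    Detection(0.55, 0.61, 0.04, 0.02, 0.991),
    Detection(0.80, 0.10, 0.06, 0.03, 0.42),   # probably a wave crest, ignored
]
print(count_ships(sample))  # -> 2
```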
Training the model to gain confidence
When we know how many ships are within a given area at a given time, it is easy to compare this with data from AIS. Here we query the API of the Norwegian Coastal Administration and count the number of ships within the same time window and area.
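The exact AIS endpoint isn't shown in the post, so the sketch below just assumes the position reports have already been fetched from the Norwegian Coastal Administration's API and counts unique vessels inside the same bounding box and time window (the field names are illustrative):

```python
from datetime import datetime

def count_ais_vessels(positions, bbox, start, end):
    """Count unique MMSIs reported inside bbox between start and end.

    positions: iterable of dicts like
        {"mmsi": 257123456, "lat": 59.45, "lon": 10.58,
         "timestamp": "2023-02-01T10:15:00Z"}
    bbox: (min_lon, min_lat, max_lon, max_lat) -- same box as the satellite image
    start, end: timezone-aware datetimes bounding the comparison window
    """
    min_lon, min_lat, max_lon, max_lat = bbox
    seen = set()
    for p in positions:
        ts = datetime.fromisoformat(p["timestamp"].replace("Z", "+00:00"))
        if not (start <= ts <= end):
            continue
        if min_lon <= p["lon"] <= max_lon and min_lat <= p["lat"] <= max_lat:
            seen.add(p["mmsi"])
    return len(seen)

# Comparing this count with the satellite-based ship count highlights vessels
# that are sailing without (or with spoofed) AIS transponders.
```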
JSON from AIS data
Using external data in this way provides great value for our customers, as it significantly lowers the threat at sea 💾🏴☠️
The reason we are calling this the jackpot app is that we believe it hits multiple score points!
The idea of this app is to enlist the pirates' fleet without typing a single character!!! Aided and infused by AI Builder, external APIs, and Power Automate flows, this Power Apps canvas app is state-of-the-art geekiness!
Admittedly, registered vehicles are the last of our worries (as pirates), but hey, we thought we'd do it anyway 🙂 The app starts by taking a picture of the vehicle's registration plate. The app then sends the picture to AI Builder through a Power Automate flow that processes the image, reads the registration plate, and shows it in the app for confirmation.
Once you are satisfied with the result, you can check the vehicle against the national registry and look up more registration information. This checks the car against the Norwegian vehicle registry (Statens vegvesen) and shows whether the vehicle is approved for European roads (EU-godkjenning). Also the least of our concerns!
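The post doesn't show the registry call itself. As a hedged sketch, a lookup against Statens vegvesen's vehicle data service could look roughly like the following; the URL, header name, and response handling are placeholders and assumptions, to be replaced with the actual endpoint and API key the flow uses:

```python
import requests

# Placeholder endpoint and key -- substitute the actual Statens vegvesen
# vehicle-data endpoint and credentials used by the flow.
REGISTRY_URL = "https://<statens-vegvesen-endpoint>/kjoretoydata"
API_KEY = "<api-key>"

def lookup_vehicle(plate: str) -> dict:
    """Look up a registration plate and return the raw registry response."""
    resp = requests.get(
        REGISTRY_URL,
        params={"kjennemerke": plate},                       # plate number parameter
        headers={"SVV-Authorization": f"Apikey {API_KEY}"},   # header name is an assumption
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

# The returned registration data can then be inspected for EU approval status.
data = lookup_vehicle("AB12345")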
Actual data from Statens Vegvesen
Crawling and data mining from external sources
Now, after confirming and inserting the car data into our system, we need to identify the vehicle type and compare it with the collected data in our backend system. Identifying the car is a simple process that doesn't need any text input from the user: one picture will do, and AI Builder takes care of identifying the type of vehicle and inserting it into our backend system.
Vehicle plate identification flow (calls another flow through HTTP)
AI Builder exposed as an endpoint (all AI Builder models and flows are in a separate environment, as AI Builder can't be included in a solution for deployment)
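Because the AI Builder models sit behind an HTTP-triggered flow in their own environment, any client, including the canvas app, just posts the photo to that endpoint. A rough sketch of such a call, with a placeholder trigger URL and an assumed JSON contract (base64 image in, plate text out):

```python
import base64
import requests

# Placeholder: the "When an HTTP request is received" URL of the AI Builder flow
FLOW_URL = "https://<power-automate-http-trigger-url>"

def read_plate(image_path: str) -> str:
    """Send a photo to the HTTP-triggered flow and return the recognized plate."""
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("ascii")

    resp = requests.post(FLOW_URL, json={"image": image_b64}, timeout=60)
    resp.raise_for_status()
    # The response shape is whatever the flow returns; "plate" is an assumed field name.
    return resp.json()["plate"]

print(read_plate("front_of_car.jpg"))
```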
We are using a mix of built-in and custom AI models to achieve efficiency and productivity in an innovative way.
Background: When recruiting potential pirates, we want to give them proper pirate names. Pirates 365 is a service-oriented platform, and the recruitment frontend posts potential new pirates through a Pirate
Our solution uses Power Automate with an HTTP trigger to accept new pirates.
We are using the OpenAI (Independent Publisher) connector in Power Automate (https://learn.microsoft.com/en-us/connectors/openaiip/), since it is not yet easy to get access to Azure OpenAI, which would probably be better.
We had to do some tuning to get slightly more varied answers. At first it just returned "Captain firstname lastname", but after lowering the temperature from 1 to 0.7 and explicitly telling it not to use the word "captain", we got the desired result.
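The flow does this through the connector, but underneath it is an ordinary OpenAI completion request. As an illustration of the same tuning (temperature 0.7, no "captain"), a raw call could look roughly like this; the model name and prompt wording are assumptions, not the team's exact settings:

```python
import requests

OPENAI_API_KEY = "<openai-api-key>"  # the connector stores this for you

def pirate_name(first_name: str, last_name: str) -> str:
    """Ask OpenAI for a pirate name, tuned the same way as the flow."""
    resp = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {OPENAI_API_KEY}"},
        json={
            "model": "gpt-3.5-turbo",   # model choice is an assumption
            "temperature": 0.7,         # lowered from 1 to get more varied names
            "messages": [{
                "role": "user",
                "content": (
                    f"Suggest a creative pirate name for {first_name} {last_name}. "
                    "Do not use the word 'captain'. Reply with the name only."
                ),
            }],
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"].strip()
```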
We are creating the new pirate recruits as leads in Dataverse so the recruitment team can contact them for possible employment.
Here is the lead created in Dataverse, shown in "Pirate 365 Manage", ready to be qualified as a proper Pirate!
We aim for the badges "Go With the Flow" for using Power Automate cloud flows and "The Existential Risk" for innovative use of AI.
As part of the project to map threats in the Oslofjord, it is also important to map the biodiversity in the fjord, to assess which species are potentially threatened and thereby map risk. In the sensor hubs, Bombshells has installed a camera that is triggered by the sonar sensor to take pictures of the wildlife in the area. Using AI, we recognize different species of turtles that would potentially be vulnerable to water pollution, and raise the severity level of an incident accordingly.
We have trained an AI model in Power Apps to identify three different types of turtles: tortoises, sea turtles, and terrapins. We have fed the AI with pictures of the three types of turtles in their respective environments.
To train the model we use images of at least 256 x 256 pixels, which is the minimum requirement for training. The image below shows how the images of the different types are tagged and categorized.
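Since AI Builder rejects training images smaller than 256 x 256 pixels, a quick pre-check of the photo folder can save a round of re-tagging. A small, purely illustrative sketch with Pillow (folder name and file pattern are assumptions):

```python
from pathlib import Path
from PIL import Image

MIN_SIZE = 256  # AI Builder's minimum width/height for training images

def check_training_images(folder: str) -> None:
    """Report images that are too small to be used as AI Builder training data."""
    for path in Path(folder).glob("*.jpg"):
        with Image.open(path) as img:
            if img.width < MIN_SIZE or img.height < MIN_SIZE:
                print(f"Too small ({img.width}x{img.height}): {path.name}")

check_training_images("turtle_photos")
```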
After tagging all the images we want to use as training data, we train the model and get a performance score. Here is the published model after training is complete.
Finally, to test the system, we built a Power App with a simple component that loads an image and sends it to the AI for identification. The result is shown by marking the recognized object and labelling it with the correct category in the interface, together with a score for how confident the AI is that the result is correct.
Source for risk analysis
With this data linked to location data, we can retrieve and combine the figures used in the risk assessment of an incident.
Who can pick up the mantle after we retire? Who is truly a ninja!? We need to recruit the future of TMNT, and the best way to do it is through AI. We will use AI Builder and a custom model to identify our future ninjas; in return we offer cheap pizza, a full 5% discount on your purchase!
Only the customers masked as a TMNT will be offered the precious discount!
Now we have built on this further and come up with a much more robust fridge. Moving boxes can be used for so many things 😎 There is still some testing to do with different fruits (you use what you have at hand). The responses are so-so, but we are well on our way. A clementine used to be an apple (see https://acdc.blog/in2022/kjoleskapet-kan-se/); now an orange is an orange and an apple is an apple 🎉
Here we have an orange indoors in a fridge with both walls and a floor. Impressive? Yes!!!
And here is what the picture was taken of. 1-0 to the AI.
The setup is as follows: a webcam mounted in the fridge door, connected to a Raspberry Pi. An Arduino with a button (read: a lot of frustration with that BUTTON!!!!) triggers a camera shot when the fridge door is opened. The Arduino also drives a stepper motor that spins a cool 3D-printed bling on top of the fridge (bling coming soon, but see the picture). Everything on the Arduino is controlled by the same Raspberry Pi using Python. When a picture is taken, it is sent to Azure Cognitive Services for analysis, and the response comes back as detected objects. We then match this against our own ingredient database in Dataverse via a Power Automate flow (run via an HTTP request trigger in the flow) that updates ingredients with the attribute "In fridge" = True/False.
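The Python on the Raspberry Pi roughly ties these pieces together: analyse the snapshot with Azure Cognitive Services object detection and forward the detected objects to the HTTP-triggered flow. A simplified sketch, where the endpoint, key, flow URL, and JSON field names are placeholders rather than the team's actual values, using the public Computer Vision v3.2 REST API:

```python
import requests

VISION_ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
VISION_KEY = "<cognitive-services-key>"
FLOW_URL = "https://<power-automate-http-trigger-url>"  # the flow that updates Dataverse

def detect_objects(image_path: str) -> list[str]:
    """Send the fridge snapshot to Azure Computer Vision and return detected object names."""
    with open(image_path, "rb") as f:
        resp = requests.post(
            f"{VISION_ENDPOINT}/vision/v3.2/detect",
            headers={
                "Ocp-Apim-Subscription-Key": VISION_KEY,
                "Content-Type": "application/octet-stream",
            },
            data=f.read(),
            timeout=30,
        )
    resp.raise_for_status()
    return [obj["object"] for obj in resp.json().get("objects", [])]

def update_fridge_contents(objects: list[str]) -> None:
    """Tell the Power Automate flow what the camera saw, so it can flag 'In fridge'."""
    requests.post(FLOW_URL, json={"detectedObjects": objects}, timeout=30).raise_for_status()

# Triggered by the Arduino button when the fridge door is opened
update_fridge_contents(detect_objects("fridge_snapshot.jpg"))
```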
The image below shows the flow that is triggered from the fridge Raspberry Pi. It then updates the status of what we have in the fridge. It's lovely not having to do that manually.
Here we have integrated the whole lot, and it works!
What we fear now is that the machines will turn against us and claim the fridge is empty at all times. Then we risk starving to death in front of the fridge door. But for now, we choose to trust our dear Fridgitoid 9000.
Today we tried to solve the head-to-head Power Apps challenge. We could not come up with a solution in time, but we managed to learn and prepare a lot about ML and Power Apps. While exploring the possibilities, we found the following option:
Lobe is a very easy-to-use product that generates models based on images you provide to the application. It has a direct integration with Power Apps. Here you can see a little bit of the UI:
Once the model was created and trained, we exported it to Power Apps Studio and made it available as a model under AI Builder models:
The next step was actually pretty simple: by just adding the model to a Power App and running the Predict function, we got an answer from the model identifying the type of pizza selected in an image picker.
Thanks to Ahmad Najjar for the tips and the challenge, new things learned 😊
Rafael is starting to be happy with our product and recommended it to his grandpa. The issue is that his grandpa doesn't have "BankID for Mobil" to sign the deal, so we need to add functionality for him to sign a paper, scan it, and send it in.
Using Word, we created a document that Rafael's grandpa could sign. Using AI Builder's text recognition, we could extract this data, add the operational data to Dataverse, and store the document in a safe place.
We started by creating an AI Builder model to extract fields from the document.
After training this model on our document, we created a flow that takes a document as input. This is where we send the PDF in, to analyze it and get the values from it. After getting the data needed for the contact card from the document, the PDF is sent to Azure Blob Storage.
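The flow handles the storage step with the Azure Blob connector; for completeness, the same upload done directly with the Azure SDK for Python would look roughly like this (the connection string, container name, and file names are placeholders):

```python
from azure.storage.blob import BlobServiceClient

CONNECTION_STRING = "<azure-storage-connection-string>"
CONTAINER = "signed-contracts"  # placeholder container for the scanned agreements

def archive_contract(pdf_path: str, blob_name: str) -> None:
    """Upload the scanned, signed PDF to blob storage after the fields are extracted."""
    service = BlobServiceClient.from_connection_string(CONNECTION_STRING)
    blob = service.get_blob_client(container=CONTAINER, blob=blob_name)
    with open(pdf_path, "rb") as f:
        blob.upload_blob(f, overwrite=True)

archive_contract("signed_agreement.pdf", "grandpa_signed_agreement.pdf")
```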