CHUVASH.eu

CHunky Universe of Vigourous Astonishing SHarepoint :)


Flashing Trådfri lights on Azure Alerts

What if you put together Work From Home and Home Automation? Well, removing the common denominator (HOME) would mean Work Automation (sic!). I want to tell you about a tiny hobby project I have had at home, still related to my work: whenever an Azure alert is triggered, my Trådfri smart light from IKEA flashes for a couple of seconds.

Summary (if you want to skip the long story below): The solution is a tiny web application. Its publicly accessible url, exposed using ngrok, is registered as a webhook in an Azure Alert. It’s on GitHub, you’re welcome to use it as you please 😎:

How I did this (the long story)

The GitHub repo (linked above) is self-explaining, but here comes the story. I used the same setup for Azure Alerts as described in my previous blog post:

When I was done setting up an alert, I thought: besides a notification in a Teams channel, what if I could show the alert visually, using some LED or similar? Then I thought about Home Automation and a Trådfri RGB bulb I’ve got. That’s the beauty of the equation above: Work From Home and Home Automation. We can pick the best parts of each and combine them into something unique.

Since I have a kit from IKEA containing a gateway, a remote, and an RGB lamp, I wanted to do something with that. Unfortunately I didn’t find any routines (Google Home), applets (IFTTT) or automations (Home app in iOS) that could do it.

Luckily, there is a way of controlling the Trådfri lights, best described in this tutorial:

As in this tutorial, I also used a Raspberry Pi Zero W, and it went very well, except for one thing: the Trådfri team introduced a change to the security code, so I needed an additional step that was missing from the tutorial; more on that later.

The flow from an Azure Alert to the flashing light.

The tutorial says: the world is your lobster. My “lobster” is a webhook that makes lights flash on an alert, so I needed a simple web server (http.server) and a tunnel into my network (ngrok). It was best to take one step at a time.

Step 1. Connect

First, I wanted to make sure I could have a simple web server that could host my webhook. I followed the advice from that tutorial and used http.server python module:

I didn’t need to install any additional modules; http.server is already included in Python on Raspberry Pi OS. Just create a simple file like this:

from http.server import BaseHTTPRequestHandler, HTTPServer

host_name = '192.168.0.193'
host_port = 8000

class MyServer(BaseHTTPRequestHandler):
    def do_HEAD(self):
        self.send_response(200)
        self.send_header('Content-Type', 'text/html')
        self.end_headers()

    def do_GET(self):
        self.do_HEAD()
        self.wfile.write("hej".encode("utf-8"))

if __name__ == '__main__':
    http_server = HTTPServer((host_name, host_port), MyServer)
    print("Server Starts – %s:%s" % (host_name, host_port))
    try:
        http_server.serve_forever()
    except KeyboardInterrupt:
        http_server.server_close()

Start it:

python3 alert-step1-server.py

I opened that page (192.168.0.193:8000) and saw “hej”; time to go further.

Step 2. Connect World

The next step was to open up this “web app” to the world, making it accessible from outside my local network. ngrok is the best solution for that. I followed a guide to install ngrok on my Raspberry Pi Zero W.

The installation process was pretty straightforward; for the record, I first tried to install ngrok as a snap, and it did not work.

cd ~
wget https://bin.equinox.io/c/4VmDzA7iaHb/ngrok-stable-linux-arm.tgz
tar -xvzf ngrok-stable-linux-arm.tgz

I also fetched the authtoken and registered it locally.
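Roughly like this (a sketch; the placeholder is the authtoken from your ngrok dashboard):

~/ngrok authtoken <your-authtoken>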

Then I started the ngrok tunnel:
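Something like this (the ip and port are the ones from step 1; the region flag and backgrounding came later, see the tips at the end):

~/ngrok http 192.168.0.193:8000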

And my web app went online:

Step 3. Harness the lights

Now to the core of this hobby solution: controlling Trådfri lights.

I installed, configured and built the libcoap client, as described in the blog post I already mentioned:

But I also installed git, because my Raspberry Pi OS installation didn’t have it.

sudo apt-get install build-essential autoconf automake libtool git -y
git clone --recursive https://github.com/obgm/libcoap.git
cd libcoap
git checkout dtls
git submodule update --init --recursive
./autogen.sh
./configure --disable-documentation --disable-shared
make
sudo make install

Next, I found the IP Address and the security code of the IKEA Trådfri Gateway, using my router:

Then I created a new preshared key (that’s the change I mentioned above). With just the security code, you get 4.01 “Unauthorized” when you try to control the lights, as described:

# -k = the Security Code that you find on the back of the gateway
# 9090: your new client identity; you decide, in my case TOLLERASP0
# coaps: the ip address is that of your gateway
coap-client -m post -u "Client_identity" -k "OHsfKxV0UaJu81" -e '{"9090":"TOLLERASP0"}' "coaps://192.168.0.120:5684/15011/9063"

I got the pre-shared key that I saved for later use:
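The response looks roughly like this (a sketch; in the Trådfri gateway answer, 9091 holds the new preshared key and 9029 the firmware version):

{"9091":"<your-pre-shared-key>","9029":"1.8.25"}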

With this information you can harness the IKEA lights:

# off
coap-client -m put -u "TOLLERASP0" -k "{presharedkey}" -e '{ "3311": [{ "5850": 0 }] }' "coaps://192.168.0.120:5684/15001/65537"
# on
coap-client -m put -u "TOLLERASP0" -k "{presharedkey}" -e '{ "3311": [{ "5850": 1 }] }' "coaps://192.168.0.120:5684/15001/65537"

5850:0 is off, 5850:1 is on. Easy-peasy, right?

If you want to know how to control the brightness, the colors, etc., just check the documentation already mentioned:

Step 4. Put everything together

Once I knew I could run a simple webhook service, both locally (step 1) and on the WWW (step 2), and that I could control the smart light I’ve got from IKEA using code running on my Raspberry Pi (step 3), connecting everything was easy. I created a repo for that, and you can see that it is a very simple one:

The main part is in server.py. When it gets invoked, it calls the flash function, which uses os.system to call the libcoap client and time.sleep for the delays needed in the flash action. The configuration is parsed using configparser, and the server is a simple http.server.
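A minimal sketch of such a flash function (not the exact code from the repo; the identity, key, gateway ip and bulb id are placeholders):

import os
import time

def flash(times=2):
    # toggle the bulb off and on, with a pause so the flashing is visible
    for _ in range(times):
        for state in (0, 1):
            os.system(
                'coap-client -m put -u "TOLLERASP0" -k "<presharedkey>" '
                '-e \'{ "3311": [{ "5850": %d }] }\' '
                '"coaps://192.168.0.120:5684/15001/65537"' % state
            )
            time.sleep(1)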

In the end I registered the ngrok endpoint in my Azure Alert Rule Action Group:

Then I triggered my test logic app that failed reliably 🙂

After 1-2 minutes my smart light started flashing:

Success 🎯🎯🎯🎯

Words of caution and Tips

Security

http.server does not provide the right level of security; it’s mostly for prototyping. For this tiny hobby project of mine it’s exactly what I need, but don’t use it as-is for production.

Treat the security code and your preshared key appropriately, you don’t want to be hacked.

Flashing lights reacting to alerts is cool, but think about the work-life balance. Don’t have it in your bedroom 😎.

Inspect ngrok from another computer

By default the ngrok web inspection interface is only available from localhost (127.0.0.1); make it available across your network by configuring ngrok:

# ip address of your raspberry pi
echo "web_addr: 192.168.0.193:4040" >> ~/.ngrok2/ngrok.yml

Reserve your local ip addresses

The router can assign new ip addresses to your devices. Reserve the ip addresses of your raspberry pi and your IKEA Trådfri Gateway. It will make your life easier.

Start ngrok closer to you and in the background

EU is closer to me, and running in the background is nice when you only have one terminal:

# -region eu
# >/dev/null & for running in the background
~/ngrok http 192.168.0.193:8000 -region eu > /dev/null &

Replay ngrok calls

This is a game changer: rather than waiting for an alert to be triggered, you can just replay a call over and over again while you mickle-muckle with your python code locally.

Keep running your server after logout

You just need to have nohup when you start your server (ngrok already has what’s needed): nohup python3 server.py. With that, the server keeps running even when you log out or your ssh connection disappears.

Next steps

I’d like to end this post the way I started: the world is your lobster. Try out the flashing lights on Azure Alerts, or why not replace Azure Alerts with Exoprise Alarms, or some triggers in Power Automate, perhaps when a new site has popped up 🙂 Or maybe you want to elaborate the flashing behaviour: why not use Morse code to send a message? Or color-code the different types of alarms/alerts. Once again, the world is your lobster 🦞 (or oyster 🦪, whatever).

Setting up a HelloWorld Azure Alert

Azure Alerts are awesome for monitoring solutions in Azure. If you are about to set up your first alert rules in Azure, then this is a guide for you. Configuring alert rules can be quite intimidating at first, with all the options, metrics, evaluation times, etc.

Here is a very very simple setup that can serve as a teaser and help you get started with the Azure Alerts.

I’ll use Teams as an easy way to set up notifications.

The core solution (alert handler) will be an Azure Function, also because it’s fast and easy to set up.

A reliably failing resource

“Reliably failing”, huh? Yes, this oxymoron is the best description of what we are looking for: a resource in Azure that can fail reliably (“fail faster”), so that we can trigger our alerts while developing.

To do that the easy way, we’ll just create a logic app and let it fail all the time.
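One hypothetical way to build it (a sketch in workflow definition language, not necessarily what the original app looked like): a Recurrence trigger followed by a Terminate action with a Failed status.

{
  "actions": {
    "Fail_on_purpose": {
      "type": "Terminate",
      "inputs": {
        "runStatus": "Failed",
        "runError": {
          "code": "IntentionalFailure",
          "message": "Failing reliably, by design"
        }
      },
      "runAfter": {}
    }
  }
}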

Run it, and you’ll see how it fails as intended.

The runs history of my failing logic app

When you’re done setting up the alerts, you can remove the failing logic app.

Communication channel

On the other end we need a reliable communication channel.

Let’s pick a channel in a team and create an incoming webhook. I call my webhook alert-hook (just to make it easier to follow this guide, it will appear here and there).

Why an incoming webhook? Because it is easy to create and send messages to, and with the right notifications on that channel plus the Teams mobile app, you get the smoothest way of setting up push notifications! Isn’t it great to get your custom alerts directly on your mobile in real time?

To take one step at a time, try the incoming webhook right away by calling it using PowerShell. Verified small steps make it easier to troubleshoot future potential issues.

$body = "{'text':'hello world'}"
$ct = 'application/json; charset=utf-8'
$uri = "https://outlook.office.com/webhook/6c15f246-4ad8-47a7-ac3c-8c3e4ff96e08@21a772a0-3ad8-483b-bef2-d9c28cfe5dff/IncomingWebhook/7d88ca6b0f7d417bb87d4f8ae8816760/1522ad47-6712-4e3b-b454-d1198e0287a8"
Invoke-RestMethod Method POST ContentType $ct Body $body Uri $uri

When you see the “hello world” from alert-hook in your Teams channel, then you’re ready to proceed with the next step.

Alert Handler

Now it’s time to set up the core of the solution: a handler that receives alerts and passes them on to the Teams channel.

Why do we need an Alert Handler? Because you can’t send alerts directly to a Teams channel (or whatever communication channel you choose); they have different schemas. An Alert Handler is also an opportunity to make an alert more readable (e.g. by formatting it as an adaptive card) and even to filter out some alerts or parts of them (e.g. in some scenarios only Fired events, not Resolved ones, are relevant for notifications).

For the sake of simplicity, let’s just create a PowerShell Azure Function in the Azure Portal. Just choose the latest of everything (in my case it was PowerShell Core 7.0, Consumption Plan, West Europe). If you are uncertain, check this post:

Create a new function alert-hook, and paste the same hello-world code snippet from the above step.
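In the function’s run.ps1 that could look roughly like this (a sketch; the webhook url is a placeholder for yours):

using namespace System.Net

param($Request, $TriggerMetadata)

# the same hello world call as before, now inside the function
$body = '{"text":"hello world"}'
$ct = 'application/json; charset=utf-8'
$uri = "<your-incoming-webhook-url>"
Invoke-RestMethod -Method POST -ContentType $ct -Body $body -Uri $uri

# respond to the caller
Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{
    StatusCode = [HttpStatusCode]::OK
})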

Test and run it once to verify this step. If you see the hello-world again, then it works.

Alert Rule

Now it’s time to set up an Alert Rule, the most intimidating part 🙂 Let’s just get this over with.

Open the Failing Resource, and navigate to its Alerts.

We’ll create an alert rule as simple as possible. For reference you can check this out:

We need those 4 steps. I created a simplified diagram of the properties that you need to have in mind:

Alert Rule Scope

When you click on “New alert rule”, the Scope will be already defined, it will point to the “Failing Resource”.

Alert Rule Condition

There are so many signals and possibilities. In this guide, just choose “Runs Failed” as a Signal.

In the Alert Logic, select “Greater than or equal to”, 1, with a Granularity Period of 5 minutes and a Frequency of Evaluation of 1 minute. No comments on them right now. Just do it.

When we’re done, re-visit this page and try other things; right now we just want an alert directly when our failing resource fails.

Action Group

An Action Group is what gets triggered. It is billed, and that’s why it is connected to a resource group (it can be another resource group; it does not need to be in the same place as our “Failing Resource”). Just create a new one:

Here is a simplified diagram of an action group:

Action Group Action – Webhook

There are a couple of options for Notifications and Actions to try out. Let’s focus on the Webhook in this guide. In the picture it is called GenericTolleAlertHook.

Copy the url from your function (“Get function url”) and paste it into the Webhook URI.

Important: enable the common alert schema. That will save much of the pain.

Common Alert Schema

The payload in an alert may vary. To make it more predictable to parse in the alert handler, we just need to enable the common schema, which will be crucial when we extract and send some data to the Teams channel.

Action Group bonus tip: it might not be obvious when you set it up in the Azure Portal, but an alert rule can have one or multiple action groups (!). And the other way around: an action group can be used in multiple alert rules.

That makes it very flexible: we could create one generic action group that notifies Teams and reuse it across alert rules.

Alert Rule Details

The last step is to give the rule a name and description. Keep the Severity as it is right now.

Alert Handler Improvements

We need one more thing to call this guide complete: rather than saying hello world, the message should say “Alert Fired” and which alert (the alert rule name), to make it useful for real.

Let’s re-visit the alert-hook function and make some improvements. Remember the common alert schema? Make sure you enable it in the Alert Rule -> Action Group -> Action. When you do that, you will get payloads like the ones I got:

When you look at them you can see some attributes that we can make use of:

  • $Request.Body.data.essentials
    • .alertRule (name)
    • .monitorCondition (“Fired” vs. “Resolved”)
    • .firedDateTime (vs. resolvedDateTime)
    • .description
    • .severity
    • .alertTargetIDs (https://portal.azure.com/#@/resource + alertTargetIDs) (the logic app)

We’ll use the alertRule and monitorCondition properties, which we’ll send in the body of the incoming webhook to Teams:
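A rough sketch of the improved run.ps1 (again, the webhook url is a placeholder):

using namespace System.Net

param($Request, $TriggerMetadata)

$essentials = $Request.Body.data.essentials

# e.g. "Fired: my-alert-rule"
$text = "$($essentials.monitorCondition): $($essentials.alertRule)"
$body = @{ text = $text } | ConvertTo-Json
$ct = 'application/json; charset=utf-8'
$uri = "<your-incoming-webhook-url>"
Invoke-RestMethod -Method POST -ContentType $ct -Body $body -Uri $uri

Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{
    StatusCode = [HttpStatusCode]::OK
})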

Let’s test and run. Copy and paste a sample alert payload (with the common alert schema). The links are above.

It should result in a new post in your Teams channel:

Further improvements

A simple alert rule is configured. Enjoy! Discover more, and if you would like new challenges, here are some tasks you can try:

Adaptive Card

Try to update the payload in the Teams incoming webhook to make an adaptive card.

Fired vs. Resolved

It might be good to have different paths for Fired and Resolved. I find it confusing when Resolved notifications appear alongside Fired events. It’s better to suppress the Resolved notifications, or at least format them differently, or maybe even post them as replies to the existing posts (the original Fired posts)?

Summary

Azure Alerts are great. Start with a simple setup, see it working and improve continuously. An action group can be reused: you can have one generic action group, which makes it easy to set up new alert rules, and you only need to update the action in one place. Of course, alert rules can have their specific actions as well; you can connect more than one action group to an alert rule. Use the common alert schema to avoid parsing errors and to achieve a generic action group.

Teams is a good notification destination, especially for your first alert rule: it’s easy to set up, means no additional costs, and (best of all) you and your colleagues can enable notifications on the destination channel (the channel with your incoming webhook). That way you will be immediately notified when something fails in your Azure applications, both on desktop and on mobile! Good DevOps, isn’t it?

Using secrets in Logic Apps in a secure way

This is a guide for how to handle secrets in a logic app in a secure way. It combines three resources:

First, enable a Managed Identity for your Logic App:

In the Key Vault, add a new access policy for the new managed identity (from the previous step). Use the least privileges; in my case GET for secrets is just enough.

Next, add an HTTP action that gets the secret from the key vault.

The values should be:
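Roughly these (a sketch; the vault name and secret name are placeholders, and the api-version may differ in your setup):

Method: GET
URI: https://<your-key-vault>.vault.azure.net/secrets/<secret-name>?api-version=7.0
Authentication: Managed Identity
Audience: https://vault.azure.net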

Next, open the Settings of the “Get Client Secret” action and tick “Secure Outputs (Preview)”.

To get the secret, we need to parse the http response; only the value is needed:

{
  "properties": {
    "value": {
      "type": "string"
    }
  },
  "type": "object"
}

Now let’s call the Graph API and authenticate using this secret:
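In the HTTP action that could look roughly like this (a sketch; tenant, client id and the Graph url are placeholders, and the secret comes from the Parse JSON action above):

Method: GET
URI: https://graph.microsoft.com/v1.0/users
Authentication: Active Directory OAuth
Tenant: <your-tenant-id>
Audience: https://graph.microsoft.com
Client ID: <your-client-id>
Credential Type: Secret
Secret: @body('Parse_JSON')?['value']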

In the run history we can now see that the password is not shown anymore.

Nor is it visible in the next http call:

Note that the run history is kept for a while; if you have used secrets in plain text, it is good practice to change them.

Filtering Azure Table Data directly in the Azure Function Binding

Instead of filtering values from an Azure Storage Table in code, you can do it directly in the bindings. It might not be a solution for everything, but in the right place it is fantastic. I was very surprised to see how little code was needed after this binding change:

module.exports = async function (context, req) {
    context.res = {
        body: context.bindings.inputTable
    };
};

For that to work, define the filter attribute in the bindings: "filter": "(PartitionKey eq '{package}')"

{
  "bindings": [
    {
      "authLevel": "anonymous",
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "methods": ["options", "get"]
    },
    {
      "type": "http",
      "direction": "out",
      "name": "res"
    },
    {
      "type": "table",
      "name": "inputTable",
      "tableName": "metadata",
      "connection": "AzureWebJobsStorage",
      "direction": "in",
      "filter": "(PartitionKey eq '{package}')"
    }
  ]
}

To try it out, add a new row in a table defined in the bindings (“metadata” in my case):

Start the function app and navigate to your function:
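Assuming the {package} token resolves from the query string of the http trigger (it can also come from a route parameter if you define a route), a local call could look like this; the function name is hypothetical:

http://localhost:7071/api/<function-name>?package=spfx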

Just a quick tip today. I hope it helps you in your work. The raw material comes from Stack Overflow:

Azure Key Vault vs. Pipeline Variables

Using Azure Key Vault in a Pipeline is cool, but it is less secure.

The Key Vault setup

Have you tried the Key Vault Step in an Azure DevOps Pipeline? If you haven’t, please follow these awesome guides:

The steps described in these guides are easy, but that effort made me think about the first pair of pros and cons.

A pipeline variable is faster to configure

A variable in a pipeline takes zero time to set up. Also, a secret variable remains a secret, since no one can read it in plain text. Configuring the Key Vault way of getting secrets requires admin time. Unless you have admin rights in your Azure Active Directory and your Azure subscription, you might need to request, and argue for, one or more of the following:

  • A service principal (an App registration) with a secret.
  • An Azure Key Vault (and maybe a resource group) with an Access Policy for the service principal
  • A service connection in your Azure DevOps Project

Of course, most of it is a one-time job. But still, in many organizations it will require good preparation. The pros of Azure Key Vault secrets in a pipeline are that:

  • Admins can manage the secrets centrally from Azure
  • It is easier to audit the Key Vault Access.
  • Set it up once and let Azure DevOps people use it and re-use it in many pipelines (though you still need to set up a new service connection in every Azure DevOps project)

The fact that it is easier to reuse leads me to my second pair of pros and cons.

A pipeline secret variable is more secure

Let’s say you need a password to a service account that will upload something important, e.g. an account that will upload a new SPFx package to the SharePoint App Catalog.

Doing it the pipeline-variable way means that the secret stays on that particular release pipeline. Only release administrators of that project can alter the pipeline steps. No one else.

A tip for productivity: use Variable Groups to share secrets within a project in Azure DevOps.

Doing it the Key Vault way means that you must watch out for every part of that chain:

  • Users who have access to the Key Vault in Azure.
  • Service Principal that can read the secrets through access policy. Who has access to the secret?
  • Service connection in an Azure DevOps project. Who can use this service connection to add and modify release pipelines? By default, all release administrators can do that. To make it more secure, you need to limit the number of release administrators. But that means less flexibility in a team and more admin effort for the allowed release administrators.

Also, the service principal used for getting the secrets and in the service connection should not be reused across projects in Azure DevOps. Dedicated service principals make it more secure, because misuse can be discovered and stopped more easily, and that on a project level, not for all service connections.

Summary

In small, flat organizations, using a Key Vault for secrets in Azure DevOps pipelines is great and saves you time. But it is less secure, and appropriate security requires additional time and effort.

Trust gulp-connect certificate from Visual Studio Online on Mac OS

I have read and followed this awesome post:

Getting SPFx working in Visual Studio Online by SPDavid.

I got my fingers on it and tried that guide out. It worked well, though I spent some time googling (binging) around to get rid of the SSL warnings for the remote “localhost” on my Mac.

I would like to share this simple instruction on how to trust a self-signed certificate from gulp-connect on Mac OS. The complication is that the certificate is on the remote Linux machine (the Visual Studio Online environment) that you are connected to through the Visual Studio Code extension.

The first step (after you have connected and set up a project) is to download the certificate. It can be found in the following directory:

/workspace/<your-spfx-project>/node_modules/gulp-connect/certs/server.crt

Choose a folder (like Desktop or wherever) to save it to. Then double-click server.crt to open it in Keychain Access.

In Keychain Access, locate the certificate; it will have the name “gulp-connect”. Open it and expand the “Trust” section. Under “When using this certificate”, select “Always Trust”.

Keychain Access – certificate – Trust – Always Trust

After that you might need to restart the browser, but then it should stop warning you.

This certificate is trusted for this account

Tips and Tricks for Azure Functions

These are my favourite tips and tricks, only those that my colleagues and I have tried out.

Architecture tips

Keep it slim

Functions should do one thing, and they should do it well. When you develop in C# and Visual Studio, it is so tempting to develop a “microservice” in a good way: you add interfaces, implement good patterns, and all of a sudden you get a monolith packaged in a microservice. If your function grows: stop, rethink. Better to see how input and output bindings can be used. Orchestration with Logic Apps or Durable Functions can also help.

Automated Deployment

It might be an obvious one, but it is super easy to set up CI/CD for Azure Functions in Azure DevOps. Set it up as early as possible. Don’t rely on publishing from Visual Studio.

Environments

Different environments like Production, Staging (Test, UAT, QAT, verification) and Dev are not straightforward anymore, when everything is reactive and micro. But it is good to have at least two setups: one for Production and one for Staging. Especially separating the storage accounts has proven to be a success story. You can have the same queue name but different connections, which makes deploying to Staging and Production easier: the functions in the different “environments” write to and read from a queue with the same name, but in different storage accounts.

I also find it convenient to have a postfix in the azure function names, like collect-shipments-staging and collect-shipments-production.

If possible, use separate resource groups for the “environments”.

Tips for performance

One instance at a time

Use host.json to prevent parallelization:

{
  "queues": {
      "batchSize": 1,
      "newBatchThreshold": 0
    }
}
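Note: the snippet above follows the Functions v1 host.json layout; on runtime v2 and later these settings sit under extensions, roughly like this:

{
  "version": "2.0",
  "extensions": {
    "queues": {
      "batchSize": 1,
      "newBatchThreshold": 0
    }
  }
}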

Add messages to a queue as output

Instead of adding queue messages in code, define the queue as an output binding. You can even add multiple messages. This saves you instantiating a CloudStorageAccount, which is a good thing for performance.
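A minimal C# sketch of such an output binding (the function and queue names are made up):

using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class CollectShipments
{
    [FunctionName("collect-shipments")]
    public static void Run(
        [TimerTrigger("0 */5 * * * *")] TimerInfo timer,
        [Queue("shipments")] ICollector<string> outputQueue,
        ILogger log)
    {
        // every Add becomes a separate queue message, no CloudStorageAccount needed
        outputQueue.Add("first message");
        outputQueue.Add("second message");
    }
}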

Take Last Run into account

Just check the timer parameter, timer.ScheduleStatus.Last, for the time when your Azure Function ran last.
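In C# that could look like this (a sketch; ScheduleStatus can be null on the very first run):

using System;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class NightlyJob
{
    [FunctionName("nightly-job")]
    public static void Run([TimerTrigger("0 0 0 * * *")] TimerInfo timer, ILogger log)
    {
        // fall back to DateTime.MinValue if the function has never run before
        var lastRun = timer.ScheduleStatus?.Last ?? DateTime.MinValue;
        log.LogInformation($"Last run: {lastRun}");
    }
}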

Reuse HttpClient

This tip is from CloudBurst in Malmö in September 2019. Even though your function runs on a consumption plan, the chance is big that your code will run on the same server, which means that you can reuse some resources, like HttpClient.
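The common pattern is a static instance that survives across invocations (a sketch; the url is a placeholder):

using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class ShipmentFetcher
{
    // static: created once per instance and reused by every invocation on it
    private static readonly HttpClient Client = new HttpClient();

    [FunctionName("fetch-shipments")]
    public static async Task Run([TimerTrigger("0 */5 * * * *")] TimerInfo timer, ILogger log)
    {
        var response = await Client.GetStringAsync("https://example.com/api/shipments");
        log.LogInformation($"Fetched {response.Length} characters");
    }
}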
