Memory lane: My C-64 source code from '87
Saturday, July 25. 2020
When I was a kid growing up with computers, there was one (1) definite medium for a Finnish nerd to read. MikroBitti.
Wikipedia describes MikroBitti as:
a Finnish computer magazine published in Helsinki, Finland
For any youngster it will be extremely difficult to comprehend an era of computing without the Internet. It did exist, and I did live (and almost survive) through it.
Among the scarce resources was code. Magazines printed on paper published source code for the platforms of that era. Regular people copied the code by typing it in (mostly incorrectly) in an attempt to obtain more working software. As we know, a single mistyped character in a 10,000-character listing will crash the entire thing, or at minimum produce unpredictable results. Then came modems and the era of BBSes. Before that happened, I made sure to learn everything I could from those magazine-published listings. I did that even for platforms I didn't own or have access to.
Cover and contents.
A trivial C-64 BASIC application by an unknown software-engineer wannabe, producing a calendar for a given year. It could even print the calendar if you owned a printer.
Background info:
The code was written on a C-128 in C-64 mode. I was aiming for the larger audience the C-64 had at the time. I don't remember the fee I received for this, but in my imagination it must have been something around 300 FIM. Using the Value of money converter @ stat.fi, 300 FIM in 1987 would equal roughly €95 in 2019. At the time, the low amount didn't matter! That was the first monetary compensation I ever received for doing something I was doing anyway. All day, every day.
The brief intro for the calendar app was cut in half by the editor. What remains is a short Finnish introduction about the purpose of the app and, for other wannabe software engineers, a description of what the variables in the code do.
Enjoy! If you find bugs, please report. 
Data Visualization - Emotet banking trojan
Monday, January 27. 2020
Emotet is a nasty piece of malware. It has been around the Net for a number of years now and, despite all the efforts, it is still stealing money from unsuspecting victims who log into their online bank with their computers and suddenly lose all of their money to criminals.
Last month, I bumped into a "historical" Emotet reference: a document containing the URLs of malicious distribution endpoints for the documents and binaries used to spread the malware. It also contains IPv4 addresses of the Command & Control servers. There are hundreds of endpoints listed, and every single one I tested had already been taken down by ISPs or the appropriate government officials. Surprisingly, only 20% of the URLs pointed to WordPress. Given its popularity and all its security flaws, I kind of expected the percentage to match its market share: 35% of all the websites in the entire world run WordPress. If you're reading this in the future, I'd assume the percentage to be higher.
As a coding exercise, I analysed the listed endpoints for all three variants (or Epochs, as this malware's generations are called) of Emotet and created a heatmap of them. It would be really fun to get a list of all the infected computers and a list of those computers money was actually stolen from but, unfortunately for my curious mind, that data wasn't available.
So, no victims, only hijacked servers in this map:

The actual Google Maps application I wrote is at https://blog.hqcodeshop.fi/Emotet-map/map.html, go investigate it there.
This is a simple project, but if anybody wants to learn data visualization with the Google Maps JavaScript API, my project is at https://github.com/HQJaTu/emotet-malware-mapper. Note: you will need an API key from Google for your own projects. My API key is publicly visible, but restricted. It won't work for you.
Analysing the hijacked distribution points and C2 servers, there is a lot of heat in the obvious places, Europe and North America. But as you can see, there are plenty of servers in use all around the globe. That should give everybody an idea of why fighting cybercrime is so difficult.
Update 30th Jan 2020:
Emotet seems to be picking up speed; there is a US CISA warning about Increased Emotet Malware Activity. Apparently it is currently the #1 malware in the world.
Running PHP from Plesk repo
Sunday, December 8. 2019
In 2013 I packaged PHP versions 5.4, 5.5 and 5.6 into RPMs suitable for installing and running in my Plesk-box. PHP is a programming language, more about that @ https://www.php.net/. Plesk is a web-hosting platform, more about that @ https://www.plesk.com/.
I chose to distribute my work freely (see the announcement at https://talk.plesk.com/threads/centos-6-4-php-versions-5-4-and-5-5.294084/), because there wasn't that much innovation there. I just enabled multiple PHP versions to co-exist on a single server, as I needed that feature myself. In 2015 Plesk decided to take my freely distributed packages and started distributing them as their own. They didn't even change the names!
A sucker move, that.
However, I said it then and I'll say it now: that's just how open source works. You take somebody else's hard work, bring in something of your own and elevate it to the next level. Nothing wrong with that. Still, a simple "Thanks!" would have done it for me. Never got one from the big greedy corporation.
In this case, the faceless corpo brought in stability, continuity and sustained support. Something I would never even dream of providing. I'm a single man, a hobbyist. What they have is teams of paid professionals. They completed the parts I never needed and fixed the wrinkles I made. Given the quality of both my work and theirs, ultimately all my boxes have been running PHP from their repo ever since.
This summer, something changed.
My /etc/yum.repos.d/plesk-php.repo had something like this for years:
baseurl=http://autoinstall.plesk.com/PHP_7.2/dist-rpm-CentOS-$releasever-$basearch/
I was stuck at PHP 7.2.19, a version released in May 2019. Six months had passed and I had no updates. On investigation, I bumped into https://docs.plesk.com/release-notes/obsidian/change-log/#php-191126. It states that on 26th November 2019, PHP 7.2.25 became available for Plesk. That's like a big WHAAAAAAT!
More investigation was needed. I actually got a fresh VM, downloaded the Plesk installer and started installing it to get the correct URL for the PHP repo. It seems to be:
baseurl=http://autoinstall.plesk.com/PHP73_17/dist-rpm-CentOS-$releasever-$basearch/
Ta-daa! Now I had PHP 7.3.12:
# rpm -q -i plesk-php73-cli
Name : plesk-php73-cli
Epoch : 1
Version : 7.3.12
Release : 1centos.8.191122.1343
Architecture: x86_64
Source RPM : plesk-php73-cli-7.3.12-1centos.8.191122.1343.src.rpm
Build Date : Fri 22 Nov 2019 01:43:45 AM EST
Build Host : bcos8x64.plesk.ru
Packager : Plesk <info@plesk.com>
Vendor : Plesk
Summary : Command-line interface for PHP
Description :
The php-cli package contains the command-line interface
executing PHP scripts, /usr/bin/php, and the CGI interface.
Actually, PHP 7.4 is also available: just replace PHP73_17 with PHP74_17 to get the desired version.
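Presumably the resulting PHP 7.4 baseurl would then read:
baseurl=http://autoinstall.plesk.com/PHP74_17/dist-rpm-CentOS-$releasever-$basearch/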
PS.
Most of you are super-happy with the Apache/PHP pair your distro vendor provides. If you're like me and ditched Apache, getting Nginx to run PHP requires some more effort. And if your requirements call for a newer version of PHP than your vendor can provide, then you're really short on options. Getting a tailored PHP from Plesk's repo and pairing that with your Nginx takes one stressor out.
Python script for creating Linux iptables port forwarding rules based on an YAML-file
Saturday, September 14. 2019
I was trying to get Wreckfest multiplayer working. Unlike many modern games, the studio/publisher of this game does not run multiplayer servers. Instead, you need to do this the old-fashioned way. I haven't done this since... the 90s(?) and my port-forwarding setup skills were rubbish. I remembered having pretty solid skills for managing stuff like this, but not needing them for the past 20 years does its damage.
Instead of taking a quick look at the manuals and at what others did, and hacking together a one-time setup, I chose to create a generic tool to generate the required commands to be run as root. Running the tool doesn't require any elevated privileges, but running the result will. For example, generating the Wreckfest rules:
./portforward-yaml-to-iptables.py Wreckfest.yaml 192.168.0.5 eth1
will result in:
iptables -t nat -F wreckfest > /dev/null || iptables -t nat -N wreckfest
iptables -t nat -A PREROUTING -i eth1 -j wreckfest
iptables -t nat -A wreckfest -p tcp -m multiport --dports 27015:27030 -j DNAT --to-destination 192.168.0.5
iptables -t nat -A wreckfest -p tcp -m multiport --dports 27036:27037 -j DNAT --to-destination 192.168.0.5
iptables -t nat -A wreckfest -p udp -m udp --dport 4380 -j DNAT --to-destination 192.168.0.5
iptables -t nat -A wreckfest -p udp -m multiport --dports 27000:27031 -j DNAT --to-destination 192.168.0.5
iptables -t nat -A wreckfest -p udp -m udp --dport 27036 -j DNAT --to-destination 192.168.0.5
iptables -t nat -A wreckfest -p udp -m udp --dport 33540 -j DNAT --to-destination 192.168.0.5
That's plenty, but it will do the trick. Also, my design choice is to create a per-game chain for the rules, so cleaning up is easy. Just run iptables -t nat -F wreckfest and be gone with the forwarding.
GitHub project is at https://github.com/HQJaTu/iptables-portforwarding-rule-generator, go see it!
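The input YAML format itself isn't shown here; the GitHub repo is the authoritative source for the schema. Purely as an illustration, and with key names that are my guesses rather than the real ones, a Wreckfest.yaml producing roughly the rules above could look something like:
# Hypothetical sketch only - check the GitHub repo for the actual file format
chain: wreckfest
rules:
  - proto: tcp
    ports: 27015-27030
  - proto: tcp
    ports: 27036-27037
  - proto: udp
    ports: 4380
  - proto: udp
    ports: 27000-27031
  - proto: udp
    ports: 27036
  - proto: udp
    ports: 33540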
Update 15.9.2019:
I got comments from people: "Why do you need this? UPnP works fine. No hassle."
Well... It would take a million years for somebody to convince me to enable UPnP in my router. The entire concept has been designed from the ground up to be insecure. The obvious goal was to create something that creates the required port forwardings in your router quickly and easily, without asking you anything.
There is lengthy discussion about UPnP security in multiple places on the Internet. My favorite one is Is UPnP still insecure?. It covers all the aspects of the issue.
The ultimate conclusion still remains: I do not want a random device or piece of software to be able to open access to anything in my LAN from the public Internet without me knowing about it. When such an opening is required, I will open it for the duration needed and then appropriately close the access. UPnP doesn't tick any of the requirement boxes for me. That's why no. It won't be allowed in my systems.
Azure Developer Associate certification
Thursday, June 27. 2019
Yup.

Passed that one yesterday. For those interested, it was the AZ-203: Developing Solutions for Microsoft Azure exam.
Previous one was 21 years ago:

Notice how they misspelled my name. Argh! 
I bet many of you haven't used Windows NT4.0 Workstation for a while (or ever).
Also notice how the certification was issued by Mr. Bill Gates himself! 
Trigger Azure CDN to update certificate to custom domain from Key Vault
Tuesday, June 18. 2019
I've said this multiple times: everything in cloud computing is at least twenty times more complex than it used to be with your old-fashioned regular servers a couple of years ago. Things are complicated with physical servers too, but when you apply a cloud and don't have a server you could touch even if you wanted to, the fog in your cloud just complicates every trivial thing. This is one of those overly complex things I ran into and needed to get solved.
So here goes:
What happens when you combine short-lived HTTPS-certificates from Let's Encrypt with Azure Content Delivery Network?
Yeah. Misery.
Given the maximum lifespan of three months for this free-of-charge certificate issued by Let's Encrypt's ACME system, the simple wish of making sure the most recent certificate is in use by Azure CDN proved to be much trickier than it needs to be. You would assume that the Azure people understand the need to eventually have a new certificate running. The need is the same for everybody, no matter where the cert came from. Yet again, assumption seems to be the mother of all f...ups.
Azure CDN uses Azure Key Vault as the source of any custom certificate used by your custom domain at a CDN endpoint. Uploading an X.509 certificate to Azure Key Vault is nearly trivial. You have the PKCS#12 file, you upload it and boom! Done!
The next step proved to be nearly impossible. You need to inform the CDN that a new certificate is available and kindly suggest it start using it. The CDN won't detect the change automatically, as I initially assumed it would.
After failing in my every attempt, as a last resort I posted the question on Stack Overflow: Update Azure CDN custom domain certificate from a script. As you might expect, the things I typically do with computers are on the more difficult side. The law of the ICT jungle says: when the questions are difficult, the answers are even more so. This is something I knew at the time of posting the question; there are not too many people who would be able to solve my issue, or even fully comprehend the question.
Eventually I lucked out and Mr. Singh replied.
His suggestion seemed valid at a glance. The complexity hit me at the same glance. It instantly became obvious that testing the suggested action would not be easy. It took me a while to process all the required information/parameters/requirements, then wait for a new certificate update cycle and craft a suitable approach to verify his suggestion. Eventually everything aligned just right. A new certificate was issued, I put some effort into investigating the suggested course of action and finally succeeded in my script-crafting, proving Mr. Singh's suggestion correct. The CDN Custom Domain API does work as he suggested. Go see it for yourself, the API docs are at https://docs.microsoft.com/en-us/rest/api/cdn/customdomains/enablecustomhttps.
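For the impatient, the core of the trick is a single ARM REST call against that enableCustomHttps operation. The following PowerShell is only a rough sketch: all the resource names are placeholders, $token is assumed to be a valid ARM bearer token for your account, and the exact body schema should be double-checked against the API docs linked above.
$apiVersion = "2019-04-15";
$uri = "https://management.azure.com/subscriptions/$subscriptionId" +
"/resourceGroups/$resourceGroupName/providers/Microsoft.Cdn" +
"/profiles/$cdnProfileName/endpoints/$cdnEndpointName" +
"/customDomains/$customDomainName/enableCustomHttps?api-version=$apiVersion";
# Tell the CDN to use my own certificate from Key Vault (the "user managed" parameter set)
$body = @{
    certificateSource = "AzureKeyVault";
    protocolType = "ServerNameIndication";
    certificateSourceParameters = @{
        "@odata.type" = "#Microsoft.Azure.Cdn.Models.KeyVaultCertificateSourceParameters";
        subscriptionId = $subscriptionId;
        resourceGroupName = $keyVaultResourceGroup;
        vaultName = $keyVaultName;
        secretName = $certificateName;
        secretVersion = $certificateVersion;
        updateRule = "NoAction";
        deleteRule = "NoAction"
    }
} | ConvertTo-Json -Depth 5;
Invoke-RestMethod -Method Post -Uri $uri -Body $body `
-ContentType "application/json" `
-Headers @{ Authorization = "Bearer $token" };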
My resulting PowerShell 6 script is at https://gist.github.com/HQJaTu/c5695626ba51c6194845fa60913e911b, go get it there.
For those trying to reason about my choice of scripting tool, it's obvious: PowerShell Core runs in Azure Cloud Shell (go to https://shell.azure.com/ if you have an Azure account). Such scripts can be run from any browser capable of connecting to Azure. And yes, I do run Bash everywhere, but I tend NOT to write scripts with it. Python and PowerShell have a much more elegant approach to the important things, like error handling.
Steam stats
Wednesday, May 29. 2019
One day I was just browsing my Steam client (in case you're not a gamer, see https://store.steampowered.com/ for more). In my profile settings, I can choose what to display there. I seem to have an "achievement showcase" selected. It looks like this:
It seems I have two perfect games, with 100% of the achievements. Nice! Well... which games are those?
That information is nowhere to be found. By googling some, I found Where can I see my perfect games in steam? on StackExchange. Somebody else had the same question.
As suggested in the answer, I checked out Steam-Zero, a Steam achievements statistics script from GitHub. Again, no joy. It seems something in the Steam API has changed, and no matter which domain I chose for my developer API key, the server wouldn't return a reasonable CORS response, making my browser refuse to continue executing the JavaScript code any further. If you really want to study what Cross-Origin Resource Sharing (CORS) is, go to https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS. Warning: it is a tricky subject for non-developers.
As you might expect, I solved the problem by creating a fork of my own. It is at: https://github.com/HQJaTu/steamzero
With my code set up on an Apache box, I can do a https://<my own web server here>/steamzero/?key=<my Steam API developer key here>
to get something like this:
Ah, the Walking Dead games have 100% achievements. Why couldn't I get that information directly out of Steam?! A lot of hassle to dig that out.
Note: Steam-Zero works only for public Steam-profiles. I had to change my privacy settings a bit to get all the stats out.
Setting up Azure AD Application from Azure DevOps pipeline PowerShell task, Part 2 of 2: The scripts
Monday, May 6. 2019
This one is about the Microsoft Azure cloud platform. Specifically about Azure DevOps Services. And specifically about accessing Azure Active Directory with appropriate permissions and changing settings there. For anybody knowing their way around Azure and Azure AD, there is nothing special about it. Tinkering with AD and what it contains is more or less business as usual. Doing the same without a mouse and keyboard is another story.
Given the complexity of this topic, this part is for technical DevOps personnel. The previous part of this blog post was mostly about getting the reader aligned with what the heck I'm talking about.
Disclaimer: If you are reading this and you're thinking this is some kind of gibberish magic, don't worry. You're not alone. This is about a detail of a detail of a detail most people will never need to know about. Those who do, might think of this as something too difficult to even attempt.
Access problem
In order to set up your own application to authenticate against Azure AD, a pre-announcement with parameters specific to this application needs to be made to AD. As a result, AD will assign a set of IDs to identify this particular application and its chosen method of authentication. The Azure AD terminology is "App registration" and "Enterprise application", both of which are effectively the same thing: your application from AD's point of view. Both entries can be found in the Azure Portal, in the Azure Active Directory menu, under those words. As mentioned in part 1, all this setup can be done with a mouse and keyboard from the Azure Portal. However, this time we choose to do it in an automated way.
To access Azure AD from a random machine via PowerShell, your user account unsurprisingly needs to authenticate first, with a suitable user having enough permissions to make the change. For auth, you can use the Connect-AzureAD cmdlet from the AzureAD module. Module information is at https://docs.microsoft.com/en-us/powershell/module/azuread/?view=azureadps-2.0, and the Connect-AzureAD cmdlet documentation is at https://docs.microsoft.com/en-us/powershell/module/azuread/connect-azuread?view=azureadps-2.0.
Investigation reveals that for a Connect-AzureAD call to succeed, it requires one of these:
- The -Credential argument, which translates to a username and password. However, service principal users used by an Azure DevOps pipeline don't have a username to use. Service principals can have a password, but these accounts are not available for regular credential-based logins.
- The -AccountId argument, with "documentation" of: Specifies the ID of an account. You must specify the UPN of the user when authenticating with a user access token.
- The -AadAccessToken argument, with "documentation" of: Specifies an Azure Active Directory Graph access token.
The documentation is very vague at best. By testing, we can learn that a logged-in user (the DevOps service principal) running Azure PowerShell does have an Azure context. The context has the required account ID, but not the required UPN, as it is not a regular user. When you log in from a PowerShell prompt on your own workstation, a context can be used. Again, you're a human, not a service principal.
Two out of three authentication options are gone. The last one, using an access token, remains. The Microsoft Graph API documents obtaining such an access token at https://docs.microsoft.com/en-us/azure/active-directory/develop/v1-protocols-openid-connect-code. A very short version of that can be found in my StackOverflow comment https://stackoverflow.com/a/54480804/1548275.
Briefly: on top of your Azure tenant ID, if you know your AD application's ID and client secret, getting the token can be done. The triplet in detail:
- Tenant ID: Azure Portal, Azure Active Directory, Properties, Directory ID.
  - For an already logged-in Azure user (Login-AzureRmAccount, in PowerShell up to 5.x)
  - PowerShell up to 5.x using the AzureRM library: Get-AzureRmSubscription
- AD application ID: the service principal used for your Azure DevOps service connection.
  - You can list all available service principals and their details. The trick is to figure out the name of your Azure DevOps service connection's service principal.
  - PowerShell up to 5.x using the AzureRM library: Get-AzureRmADServicePrincipal
  - To get the actual application ID, affix the command with: | Sort-Object -Property Displayname | Select-Object -Property Displayname,ApplicationId
  - Note: below in this article there is a separate chapter about getting your IDs; they are very critical in this operation.
- Client secret: the password of the service principal used for your Azure DevOps service connection.
Problem:
The first two can be found easily. What the client secret is, nobody knows. If you created your service connection like I did, from Azure DevOps, the wizard creates everything automatically. On the Azure side, all user credentials are hashed beyond recovery. On the Azure DevOps side, user credentials are encrypted and made available to pipeline extensions using the Azure SDK via environment variables. An Azure PowerShell task is NOT an extension and doesn't enjoy the privilege of receiving much of the required details as free handouts.
To be complete: it is entirely possible to use the Azure DevOps service connection wizard to create the service principal automatically and let it generate a random password for it. What you can do is reset the password in Azure AD to something of your own choosing. Just go reset the password for the service connection in Azure DevOps too, and you can write your scripts using the known secret from some secret place. What that secret stash would be is up to you. Warning: having the password as plain text in your pipeline wouldn't be a good choice.
Links
See what other people on The Net have been doing, and saying about their doings, to overcome the described problem:
- Create AD application with VSTS task (in an Azure PowerShell task)
- VSTS Build and PowerShell and AzureAD Authentication (in an Azure PowerShell task, or in an Azure DevOps extension)
- Use AzureAD PowerShell cmdlets on VSTS agent (in an Azure DevOps extension)
- Azure AD Application Management -extension in Azure Marketplace
What you can do with the Azure AD Application Management -extension
In a pipeline, you can create, update, delete or get the application. Whoa! That's exactly what I need to do!
To create a new AD application, you need to specify the Name and Sign-on URL. Nothing more. And there is the problem: any realistic application needs to set up a bunch of other settings in the manifest and/or authentication.
A get AD application operation will return the following pipeline variables for the given AD application:
- ObjectId
- ApplicationId
- Name
- AppIdUri
- HomePageUrl
- ServicePrincipalObjectId
You can specify the AD app by ObjectId, ApplicationId or Name. Create or update will return the same variables.
Pipelines are used to create and MAINTAIN environments. What if you need to create the AD application conditionally? What if you need to change some settings based on a condition? With this extension, you cannot. You need to abandon this idea and go scripting. That's why I chose not to use the extension and did it the hard way.
My approach
When eyeballing the App registration from Azure AD -side, it would look something like this:

The password (or secret) is hashed and gone. But what's the second option there? A certificate!
Documentation for Microsoft identity platform access tokens is at https://docs.microsoft.com/en-us/azure/active-directory/develop/access-tokens. When combined with the previous link to the Graph API documentation of OpenID Connect, and Certificate credentials for application authentication from https://docs.microsoft.com/en-us/azure/active-directory/develop/active-directory-certificate-credentials, we have all the required information on how to make this happen.
What you need for this to work
- PowerShell 5.x prompt with AzureRM and AzureAD modules installed
- Azure account
- Azure DevOps account
- Azure user with lots of access, I'm using a Global Admin role for my setups
- Azure DevOps user with lots of access, I'm using an Organization Owner role for my setups
- Azure Key Vault service up & running
- Azure DevOps pipeline almost up & almost running
Create a certificate into your Azure Key Vault
X.509 certificate
In PowerShell 5.x there is the PKIClient module with the New-SelfSignedCertificate cmdlet. In PowerShell 6.x, that particular module hasn't been ported yet. A few very useful cmdlets use Windows-specific tricks to get the job done, and since PowerShell 6.x (or PowerShell Core) is a multi-platform thing, the most complex ones have not been ported to macOS and Linux, so not even a Windows version is available. However, connecting to Azure AD is done with the AzureAD module, which doesn't work with PowerShell 6.x, sorry. As much as you and I both would love to go with the latest one, using 5.x is kind of mandatory for this operation.
Besides, even if you had a self-signed certificate, what then? How would you use it in an Azure DevOps pipeline? You wouldn't; the problem remains: you need to be able to pass the certificate to the pipeline task, same as you would a password.
There is a working solution for this: Azure Key Vault. It is encrypted storage for shared secrets, where you as an Azure admin can control, on a fine-grained level, who can access what.
Your Azure DevOps service principal
If you haven't already done so, make sure you have a logged-in user in your PowerShell 5.x prompt. Hint: the Login-AzureRmAccount cmdlet with your personal credentials for the Azure Portal will get you a long way.
Next, you need to connect your logged-in Azure administrator to a specific Azure AD for administering it. Run the following spell to achieve that:
$currentAzureContext = Get-AzureRmContext;
$tenantId = $currentAzureContext.Tenant.Id;
$accountId = $currentAzureContext.Account.Id;
Connect-AzureAD -TenantId $tenantId -AccountId $accountId;
Now your user is connected to its "home" Azure AD. This seems a bit over-complex, but you can actually connect to other Azure ADs where you might have permission to log in, so this complexity is needed.
Everything in Azure revolves around IDs. Your service principal has two very important IDs, which will be needed in various operations while granting permissions. The required IDs can be read from your Azure AD application registration pages. My preference is to work from a PowerShell session; I will be writing my pipeline tasks in PowerShell, so I choose to inhabit that realm.
As suggested earlier, running something like:
Get-AzureRmADServicePrincipal | Sort-Object -Property Displayname | Select-Object -Property Displayname,ApplicationId
will get you rolling. Carefully choose your Azure DevOps service principal from the list and capture it into a variable:
$adApp = Get-AzureADApplication -Filter "AppId eq '-Your-DevOps-application-ID-GUID-here-'"
Key Vault Access Policy
Here, I'm assuming you already have an Azure Key Vault service set up, and that running Get-AzureRMKeyVault would return something useful for you (in PowerShell 5.x).
To allow your DevOps service principal access the Key Vault, run:
$kv = Get-AzureRMKeyVault -Name "-Your-Key-Vault-name-here-"
Set-AzureRmKeyVaultAccessPolicy -VaultName $kv.VaultName `
-ServicePrincipalName $adApp.AppId `
-PermissionsToKeys Get `
-PermissionsToSecrets Get,Set `
-PermissionsToCertificates Get,Create
X.509 certificate in a Key Vault
Azure Key Vault can generate self-signed certificates for you. Unlike a certificate and private key generated by you on the command line, this one can be accessed remotely. You can read the certificate, set it as the authentication mechanism for your DevOps service principal and, here comes the kicker: in an Azure DevOps pipeline task, you don't need to know the actual value of the certificate; all you need is a method for accessing it when needed.
Create a certificate renewal policy into Azure Key Vault with:
$devOpsSpnCertificateName = "My cool DevOps auth cert";
$policy = New-AzureKeyVaultCertificatePolicy -SubjectName "CN=My DevOps SPN cert" `
-IssuerName "Self" `
-KeyType "RSA" `
-KeyUsage "DigitalSignature" `
-ValidityInMonths 12 `
-RenewAtNumberOfDaysBeforeExpiry 60 `
-KeyNotExportable:$False `
-ReuseKeyOnRenewal:$False
Add-AzureKeyVaultCertificate -VaultName $kv.VaultName `
-Name $devOpsSpnCertificateName `
-CertificatePolicy $policy
This will instruct Key Vault to create a self-signed certificate named My cool DevOps auth cert and have it expire in 12 months. Also, it will auto-renew 60 days before expiry. At that point, it is advisable to set the new certificate into the Azure AD app registration.
Now you have established a known source for certificates. You as an admin can access it, also your pipeline can access it.
Allow certificate authentication
To use this shiny new certificate for authentication, the following spell needs to be run to first get the X.509 certificate, extract the required details out of it and allow using it as a login credential:
$pfxSecret = Get-AzureKeyVaultSecret -VaultName $kv.VaultName `
-Name $devOpsSpnCertificateName;
$pfxUnprotectedBytes = [Convert]::FromBase64String($pfxSecret.SecretValueText);
$pfx = New-Object Security.Cryptography.X509Certificates.X509Certificate2 -ArgumentList `
$pfxUnprotectedBytes, $null, `
[Security.Cryptography.X509Certificates.X509KeyStorageFlags]::Exportable;
$validFrom = [datetime]::Parse($pfx.GetEffectiveDateString());
$validFrom = [System.TimeZoneInfo]::ConvertTimeBySystemTimeZoneId($validFrom, `
[System.TimeZoneInfo]::Local.Id, 'GMT Standard Time');
$validTo = [datetime]::Parse($pfx.GetExpirationDateString());
$validTo = [System.TimeZoneInfo]::ConvertTimeBySystemTimeZoneId($validTo, `
[System.TimeZoneInfo]::Local.Id, 'GMT Standard Time');
$base64Value = [System.Convert]::ToBase64String($pfx.GetRawCertData());
$base64Thumbprint = [System.Convert]::ToBase64String($pfx.GetCertHash());
$cred = New-AzureADApplicationKeyCredential -ObjectId $adApp.ObjectId `
-CustomKeyIdentifier $base64Thumbprint `
-Type AsymmetricX509Cert `
-Usage Verify `
-Value $base64Value `
-StartDate $validFrom `
-EndDate $validTo;
That's it! Now you're able to use the X.509 certificate from Azure Key Vault for logging in as Azure DevOps service principal.
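As a quick sanity check (my own optional addition, not strictly required for the setup), listing the application's key credentials should now show an entry with the new thumbprint:
Get-AzureADApplicationKeyCredential -ObjectId $adApp.ObjectId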
Grant permissions to administer AD
The obvious final step is to allow the DevOps service principal to make changes in Azure AD. By default, it has no admin rights to Azure AD at all.
About the level of access: I chose to go with Company Administrator, as I'm doing a setup for a custom AD domain. Without that requirement, an Application Administrator would do the trick. Docs for the different roles are at https://docs.microsoft.com/en-us/azure/active-directory/users-groups-roles/directory-assign-admin-roles.
So, the spell goes:
$roleName ='Company Administrator';
$role = Get-AzureADDirectoryRole | where-object {$_.DisplayName -eq $roleName};
Depending on your setup and what you've been doing in your AD before this, it is possible that getting the role fails. Enable it with this one:
$roleTemplate = Get-AzureADDirectoryRoleTemplate | ? { $_.DisplayName -eq $roleName };
Enable-AzureADDirectoryRole -RoleTemplateId $roleTemplate.ObjectId;
Get the service principal, and grant the designated role for it:
$devOpsSpn = Get-AzureRmADServicePrincipal | `
where-object {$_.ApplicationId -eq $adApp.AppId};
Add-AzureADDirectoryRoleMember -ObjectId $role.Objectid -RefObjectId $devOpsSpn.Id;
Now you're good to go.
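If you want to double-check the grant (again, an optional step of my own), the role members can be listed; the service principal should appear in the output:
Get-AzureADDirectoryRoleMember -ObjectId $role.ObjectId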
What to do in a task
A lot of setup done already; now we're ready to go for the actual business.
Here is some code. First get the certificate from Key Vault, then connect the DevOps service principal to Azure AD. (Note: the ConnectCurrentSessionToAzureAD helper is shown below for readability; in the actual task script the function must be declared before this call.)
$pfxSecret = Get-AzureKeyVaultSecret -VaultName $kv.VaultName `
-Name $devOpsSpnCertificateName;
$pfxUnprotectedBytes = [Convert]::FromBase64String($pfxSecret.SecretValueText);
$pfx = New-Object Security.Cryptography.X509Certificates.X509Certificate2 -ArgumentList `
$pfxUnprotectedBytes, $null, `
[Security.Cryptography.X509Certificates.X509KeyStorageFlags]::Exportable;
ConnectCurrentSessionToAzureAD $pfx;
function ConnectCurrentSessionToAzureAD($cert) {
$clientId = (Get-AzureRmContext).Account.Id;
$tenantId = (Get-AzureRmSubscription).TenantId;
$adTokenUrl = "https://login.microsoftonline.com/$tenantId/oauth2/token";
$resource = "https://graph.windows.net/";
$now = (Get-Date).ToUniversalTime();
$nowTimeStamp = [System.Math]::Truncate((Get-Date -Date $now -UFormat %s -Millisecond 0));
$thumbprint = [System.Convert]::ToBase64String($cert.GetCertHash());
$headerJson = @{
alg = "RS256"
typ = "JWT"
x5t = $thumbprint
} | ConvertTo-Json;
$payloadJson = @{
aud = $adTokenUrl
nbf = $nowTimeStamp
exp = ($nowTimeStamp + 3600)
iss = $clientId
jti = [System.Guid]::NewGuid().ToString()
sub = $clientId
} | ConvertTo-Json;
$jwt = New-Jwt -Cert $cert -Header $headerJson -PayloadJson $payloadJson;
$body = @{
grant_type = "client_credentials"
client_id = $clientId
client_assertion_type = "urn:ietf:params:oauth:client-assertion-type:jwt-bearer"
client_assertion = $jwt
resource = $resource
}
$response = Invoke-RestMethod -Method 'Post' -Uri $adTokenUrl `
-ContentType "application/x-www-form-urlencoded" -Body $body;
$token = $response.access_token
Connect-AzureAD -AadAccessToken $token -AccountId $clientId -TenantId $tenantId | Out-Null
}
Now you're good to go with cmdlets like New-AzureADApplication and Set-AzureADApplication, or whatever you wanted to do with your Azure AD. Suggestions can be found in the docs at https://docs.microsoft.com/en-us/powershell/module/azuread/?view=azureadps-2.0.
About PowerShell-modules in a task
This won't be a surprise to you: not all the modules you'll be needing are present on an Azure DevOps agent running your tasks. As an example, AzureAD won't be there, and neither will the required JWT module. What I do in my pipeline task is install the requirements like this:
Install-Module -Name JWT -Scope CurrentUser -Force
Now the task won't ask anything (-Force) and it won't require administrator privileges to install into a system-wide location (-Scope CurrentUser).
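The AzureAD module mentioned above goes in the same way:
Install-Module -Name AzureAD -Scope CurrentUser -Force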
Finally
Phew! That was a lot to chew on. But now my entire application is maintained via an Azure DevOps release pipeline. If something goes wrong, I can always run a deployment in my environment setup pipeline and it will guarantee all the settings are as they should be.
Setting up Azure AD Application from Azure DevOps pipeline PowerShell task, Part 1 of 2: The theory
Monday, May 6. 2019
This one is about the Microsoft Azure cloud platform. Specifically about Azure DevOps Services. And specifically about accessing Azure Active Directory with appropriate permissions and changing settings there. For anybody knowing their way around Azure and Azure AD, there is nothing special about it. Tinkering with AD and what it contains is more or less business as usual. Doing the same without a mouse and keyboard is another story.
Given the complexity of this topic, this part is mostly about getting the reader aligned with what the heck I'm talking about. The next part is for a technical person to enjoy all the dirty details and code.
Disclaimer: If you are reading this and you're thinking this is some kind of gibberish magic, don't worry. You're not alone. This is about a detail of a detail of a detail most people will never need to know about. Those who do, might think of this as something too difficult to even attempt.
DevOps primer
When talking about DevOps (see Microsoft's definition in What is DevOps?), there is an implied use of automation in any operation. In Azure DevOps (analogous to most other systems), there is the concept of a "pipeline". It's the way of working and achieving results in an automated way. There are inputs going into the pipe, then steps to actually do "the stuff", resulting in the desired output. Nothing fancy, just your basic computing. Computers are designed and built for such tasks: eating input and spitting out output.
Going even further into the dirty details: what if what you desire is to automate the setup of an entire system? In any reasonable cloud system there are: computing to do the processing, pieces of assorted storage to persist your precious stuff, entry points to allow access to your system, and what not. As an example, your cloud system might consist of a web server, an SQL database, a Redis cache and a load balancer. That's four different services to get your stuff running. Somebody, or in DevOps, someTHING, needs to set all that up. 100% of all the services and 100% of all the settings in a service need to be set up, and that needs to be done AUTOmatically. Any knowledgeable human can go to the Azure Portal and start clicking to build the required setup.
Doing the same thing automated is a couple of difficulty settings harder. The obvious difference between a human clicking buttons and a script running is that the script is DOCUMENTATION of how the system is built. "I document my work!" some annoyed people yell from the back rows. Yes, you may even document your work really well, but code is code. It doesn't forget or omit anything. Also, you can run the same script any number of times to set up new environments, or change the script to maintain existing ones.
Azure Active Directory, or AD
Most applications, with or without the web, will never use AD. Then there are software developers like me, who want to store users in such an active location. Lots of other developers abandon that as too complex to do, and build their own user management and user vaults themselves with services of their own choosing. That's totally doable. No complaints from here. However, using the ready-built authentication mechanisms by the fine Azure devs at Microsoft makes a lot of sense to me. You're very likely to simply "get it right" by reading a couple of manuals and calling the designated APIs from your application. You might even get to be GDPR-compliant without even trying (much).
So, that's my motivation. Be compliant. Be secure. Be all that by design.
Azure AD Application
Most people can grasp the concept of Active Directory easily. When expanded to AD applications, things get very abstract very fast and the typical reaction of a Joe Regular is to steer away from the topic. If you want to use AD authentication for an application you wrote yourself, or somebody else wrote, this is the way it needs to be done. Not possible to avoid any longer.
So, now my application is compliant and secure and all those fancy words. The obvious cost is that I need to understand very complex mechanisms, set up my services and carefully write code into my application to talk to those services I set up earlier. All this needs to be done correctly, and nothing here is trivial. A couple dozen parameters need to align precisely right. The good part in all that is: if you make an itsy bitsy tiny mistake, your mistake doesn't go unnoticed. Your entire house of cards collapses and your application doesn't work. No user can use the system. So, ultimately, somebody notices your mistake!
Since nobody loves making mistakes, that sure sounds like something most people would like to AUTOmate. I need to be sure that, with the click of a button, all my settings are AUTOmatically correct both in Azure AD and in my own application.
Azure DevOps pipeline
To automate any setup in the Azure cloud computing environment, the obvious choice is to use Azure DevOps. As always, there are a number of options you could go with, but the fact remains: the most natural choice is to stick with something you already have, that doesn't cost anything extra and is very similar to something you'll be using anyway. So, you choose to go with Azure DevOps, or Visual Studio Online, or Team Foundation Services online, or whatever it used to be called earlier. Today, they call it Azure DevOps Services.
In a release pipeline you can do anything. There are tons of different types of tasks available in out-of-the-box experience of Azure DevOps and you can install tons more from 3rd-party extensions.
What I have in the above image is a screenshot from my Azure DevOps release pipeline. As you can see, it is a mockup; there is no source artifact it could consume. I was lazy and didn't do a build pipeline to create something there. But for the sake of this blog post, please imagine an imaginary artifact for this release pipeline to be able to function properly.
Anyway, there is a single stage called "Deploy Test" to simulate the setup/maintenance of my testing environment. There could be any number of stages before and after this one but, again, I was lazy and didn't do a very complex setup. As the image depicts, there is 1 job in the stage, containing 2 tasks. A "task" in a pipeline is the basic building block. Defining a pipeline is just grouping tasks into jobs and grouping jobs into stages, forming up the actual process, or pipeline, of doing whatever the desired end goal requires. All this can be done in YAML, but feel free to wave your mouse and keyboard to do the same.
Azure DevOps pipeline task of a job of a stage
In the single stage this pipeline has, there is one job containing two tasks. The job looks like this:

An Azure PowerShell task (read all about them at https://docs.microsoft.com/en-us/azure/devops/pipelines/tasks/deploy/azure-powershell) is a piece of PowerShell script run on a deployment host with a suitable Azure service principal (for some reason they call them Service Connections in Azure DevOps) having a set of assigned permissions in the destination Azure tenant/subscription/resource group. This is a very typical approach to getting something deployed in a single task. Very easy to set up, very easy to control what permissions are assigned. The hard part is deciding the actual lines of PowerShell code to be run to get the deployment done.
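For reference, roughly the same task expressed as pipeline YAML could look something like this; it is only a sketch, and the service connection name and script body are placeholders for your own:
steps:
- task: AzurePowerShell@4
  displayName: 'Set up Azure AD application'
  inputs:
    azureSubscription: 'my-azure-service-connection'   # the service connection / service principal
    ScriptType: 'InlineScript'
    Inline: |
      # the actual PowerShell goes here (see part 2)
    azurePowerShellVersion: 'LatestVersion'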
Luckily, PowerShell in Azure DevOps comes with tons of really handy modules already installed. Also, it is very easy to extend the functionality by installing modules from a PowerShell repository of your own choosing on-the-fly in a task. More about that later.
Azure DevOps pipeline task doing Azure AD setup
That's the topic of part 2 in this series. Theory part concludes here and the really complex stuff begins.
This one is about the Microsoft Azure Cloud platform. Specifically about Azure DevOps Services. And specifically about accessing Azure Active Directory with appropriate permissions and changing settings there. For anybody knowing their way around Azure and Azure AD, there is nothing special about it. Tinkering with AD and what it contains is more or less business-as-usual. Doing the same without a mouse and keyboard is another story.
Given the complexity of this topic, this part is for technical DevOps personnel. The previous part of this blog post was mostly about getting a reader aligned with what the heck I'm talking about.
Disclaimer: If you are reading this and you're thinking this is some kind of gibberish magic, don't worry. You're not alone. This is about a detail of a detail of a detail most people will never need to know about. Those who do, might think of this as something too difficult to even attempt.
Access problem
In order to set up your own application to authenticate against Azure AD, a pre-announcement with parameters specific to this application needs to be made to AD. As a result, AD will assign a set of IDs to identify this particular application and its chosen method of authentication. The Azure AD terminology is "App Registration" and "Enterprise application", both of which are effectively the same thing: your application from AD's point-of-view. Both entries can also be found in the Azure Portal, under the Azure Active Directory menu, with those words. As mentioned in part 1, all this setup can be done with a mouse and keyboard from the Azure Portal. However, this time we choose to do it in an automated way.
To access Azure AD from a random machine via PowerShell, you unsurprisingly need to authenticate first, with a user having enough permissions to do the change. For auth, you can use the Connect-AzureAD cmdlet from the AzureAD-module. Module information is at https://docs.microsoft.com/en-us/powershell/module/azuread/?view=azureadps-2.0, and the Connect-AzureAD cmdlet documentation is at https://docs.microsoft.com/en-us/powershell/module/azuread/connect-azuread?view=azureadps-2.0.
Investigation reveals that for a Connect-AzureAD call to succeed, it requires one of these:
- -Credential argument, which translates to a username and password. However, the service principal users used by an Azure DevOps pipeline don't have a username to use. Service principals can have a password, but these accounts are not available for regular credential-based logins.
- -AccountId argument, having "documentation" of: Specifies the ID of an account. You must specify the UPN of the user when authenticating with a user access token.
- -AadAccessToken argument, having "documentation" of: Specifies a Azure Active Directory Graph access token.
The documentation is very vague at best. By testing, we can learn that a logged-in user (the DevOps service principal) running Azure PowerShell does have an Azure context. The context has the required Account ID, but not the required UPN, as it is not a regular user. When you log in from a PowerShell prompt on your own workstation, the context can be used, but then again, you're a human, not a service principal.
Two out of three authentication options are gone. The last one, using an access token, remains. The Microsoft Graph API documents obtaining such an access token at https://docs.microsoft.com/en-us/azure/active-directory/develop/v1-protocols-openid-connect-code. A very short version of that can be found in my StackOverflow comment: https://stackoverflow.com/a/54480804/1548275.
Briefly: on top of your Azure tenant ID, if you know your AD application's ID and client secret, getting the token can be done (a minimal sketch of the token request follows after this list). The triplet in detail:
- Tenant ID: Azure Portal, Azure Active Directory, Properties, Directory ID.
  - For an already logged in Azure user (Login-AzureRmAccount, in PowerShell up to 5.x using the AzureRM-library): Get-AzureRmSubscription
- AD application ID: the service principal used for your Azure DevOps service connection.
  - You can list all available service principals and their details. The trick is to figure out the name of your Azure DevOps service connection's service principal.
  - PowerShell up to 5.x using the AzureRM-library: Get-AzureRmADServicePrincipal
  - To get the actual Application ID, affix the command with: | Sort-Object -Property Displayname | Select-Object -Property Displayname,ApplicationId
  - Note: Below in this article there is a separate chapter about getting your IDs; they are very critical in this operation.
- Client secret: the password of the service principal used for your Azure DevOps service connection.
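For illustration, here is a minimal sketch of the token request itself, assuming you actually know all three values (the IDs and the secret below are made-up placeholders; as the next chapter explains, the secret is exactly the part you normally don't have):
# Minimal sketch: request an Azure AD Graph access token with the triplet above.
# All three values are hypothetical placeholders.
$tenantId = "-your-tenant-ID-GUID-here-";
$clientId = "-your-AD-application-ID-GUID-here-";
$clientSecret = "-your-client-secret-here-";
$body = @{
    grant_type    = "client_credentials"
    client_id     = $clientId
    client_secret = $clientSecret
    resource      = "https://graph.windows.net/"
};
$response = Invoke-RestMethod -Method Post `
    -Uri "https://login.microsoftonline.com/$tenantId/oauth2/token" `
    -ContentType "application/x-www-form-urlencoded" -Body $body;
# The resulting token could then be fed to Connect-AzureAD -AadAccessToken.
$response.access_token;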
Problem:
The first two can easily be found. What the client secret is, nobody knows. If you created your service connection like I did, from Azure DevOps, the wizard creates everything automatically. On the Azure side, all user credentials are hashed beyond recovery. On the Azure DevOps side, user credentials are encrypted and available only to pipeline extensions using the Azure SDK via environment variables. An Azure PowerShell task is NOT an extension and doesn't enjoy the privilege of receiving much of the required details as free handouts.
To be complete: it is entirely possible to use the Azure DevOps service connection wizard to create the service principal automatically and let it generate a random password for it. What you can do is reset the password on the Azure AD side to something of your own choosing. Then go reset the password for the service connection in Azure DevOps too, and you can write your scripts using the known secret from some secret place. What that secret stash would be is up to you. Warning: having the password as plain text in your pipeline wouldn't be a good choice.
Links
See, what other people in The Net have been doing and talking about their doings to overcome the described problem:
- Create AD application with VSTS task (in an Azure PowerShell task)
- VSTS Build and PowerShell and AzureAD Authentication (in an Azure PowerShell task, or in an Azure DevOps extension)
- Use AzureAD PowerShell cmdlets on VSTS agent (in an Azure DevOps extension)
- Azure AD Application Management -extension in Azure Marketplace
What you can do with the Azure AD Application Management -extension
In a pipeline, you can create, update, delete or get the application. Whoa! That's exactly what I need to do!
To create a new AD application, you need to specify the Name and Sign-on URL. Nothing more. And there lies the problem: any realistic application needs to set up a bunch of other settings in the manifest and/or authentication.
A get AD application -operation will return the following pipeline variables:
- ObjectId
- ApplicationId
- Name
- AppIdUri
- HomePageUrl
- ServicePrincipalObjectId
My approach
When eyeballing the App registration from the Azure AD side, it would look something like this:
The password (or secret) is hashed and gone. But what's the second option there? A certificate!
Documentation for Microsoft identity platform access tokens is at https://docs.microsoft.com/en-us/azure/active-directory/develop/access-tokens. Combined with the previous link to the Graph API documentation of OpenID Connect, and Certificate credentials for application authentication at https://docs.microsoft.com/en-us/azure/active-directory/develop/active-directory-certificate-credentials, we have all the required information on how to make this happen.
What you need for this to work
- PowerShell 5.x prompt with AzureRM and AzureAD modules installed
- Azure account
- Azure DevOps account
- Azure user with lots of access, I'm using a Global Admin role for my setups
- Azure DevOps user with lots of access, I'm using an Organization Owner role for my setups
- Azure Key Vault service up & running
- Azure DevOps pipeline almost up & almost running
Create a certificate into your Azure Key Vault
X.509 certificate
In PowerShell 5.x there is the PKIClient-module with the New-SelfSignedCertificate cmdlet. That particular module hasn't been ported to PowerShell 6.x yet. Several of its very useful cmdlets use Windows-specific tricks to get the thing done, and since PowerShell 6.x (or PowerShell Core) is a multi-platform thing, the most complex ones have not been ported at all, so they are not available there even on Windows. Also, connecting to Azure AD is done with the AzureAD-module, which doesn't work with PowerShell 6.x, sorry. As much as you and I both would love to go with the latest one, using 5.x is kind of mandatory for this operation.
Also, if you did have a self-signed certificate, what then? How would you use it in an Azure DevOps pipeline? You wouldn't; the problem remains: you need to be able to pass the certificate to the pipeline task, the same as you would a password.
There is a working solution for this: Azure Key Vault. It is an encrypted storage for shared secrets, where you as an Azure admin can control, on a fine-grained level, who can access what.
Your Azure DevOps service principal
If you haven't already done so, make sure you have a logged-in user in your PowerShell 5.x prompt. Hint: the Login-AzureRmAccount cmdlet with your personal credentials for the Azure Portal will get you a long way.
Next, you need to connect your logged-in Azure administrator to a specific Azure AD for administering it. Run the following spell to achieve that:
$currentAzureContext = Get-AzureRmContext;
$tenantId = $currentAzureContext.Tenant.Id;
$accountId = $currentAzureContext.Account.Id;
Connect-AzureAD -TenantId $tenantId -AccountId $accountId;
Now your user is connected to its "home" Azure AD. This seems a bit over-complex, but you can actually connect to other Azure ADs you might have permission to log into, so this complexity is needed.
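As a side note, the same cmdlet also accepts another tenant's ID, should you need to administer a directory other than your home one. A tiny sketch with a made-up tenant ID:
# Hypothetical: connect the same logged-in account to another Azure AD tenant.
Connect-AzureAD -TenantId "-some-other-tenant-ID-GUID-here-" -AccountId $accountId;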
Everything in Azure revolves around IDs. Your service principal has two very important IDs, which will be needed in various operations while granting permissions. The required IDs can be read from your Azure AD Application registration pages. My preference is to work from a PowerShell session; I will be writing my pipeline tasks in PowerShell, so I choose to inhabit that realm.
As suggested earlier, running something like:
Get-AzureRmADServicePrincipal | Sort-Object -Property Displayname | Select-Object -Property Displayname,ApplicationId
will get you rolling. Carefully choose your Azure DevOps service principal from the list and capture it into a variable:
$adApp = Get-AzureADApplication -Filter "AppId eq '-Your-DevOps-application-ID-GUID-here-'"
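To sanity-check that the filter picked up the right thing, you can dump the captured application's key properties; a quick sketch:
# Confirm this really is your Azure DevOps service connection's application.
$adApp | Select-Object -Property DisplayName, AppId, ObjectId;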
Key Vault Access Policy
Here, I'm assuming you already have an Azure Key Vault service set up, and that Get-AzureRMKeyVault would return something useful for you (in PowerShell 5.x).
To allow your DevOps service principal access the Key Vault, run:
$kv = Get-AzureRMKeyVault -VaultName "-Your-Key-Vault-name-here-"
Set-AzureRmKeyVaultAccessPolicy -VaultName $kv.VaultName `
-ServicePrincipalName $adApp.AppId `
-PermissionsToKeys Get `
-PermissionsToSecrets Get,Set `
-PermissionsToCertificates Get,Create
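If you want to verify the policy actually landed, the vault object lists its access policies; a small sketch:
# List the vault's access policies and their granted permissions.
(Get-AzureRMKeyVault -VaultName $kv.VaultName).AccessPolicies |
    Select-Object -Property ObjectId, PermissionsToSecrets, PermissionsToCertificates;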
X.509 certificate in a Key Vault
Azure Key Vault can generate self-signed certificates for you. Unlike a certificate and private key generated by you on a command line, this one can be accessed remotely. You can read the certificate, set it as the authentication mechanism for your DevOps service principal, and here comes the kicker: in an Azure DevOps pipeline task, you don't need to know the actual value of the certificate; all you need is a method for accessing it when needed.
Create a certificate renewal policy into Azure Key Vault with:
$devOpsSpnCertificateName = "My-cool-DevOps-auth-cert";
$policy = New-AzureKeyVaultCertificatePolicy -SubjectName "CN=My DevOps SPN cert" `
    -IssuerName "Self" `
    -KeyType "RSA" `
    -KeyUsage "DigitalSignature" `
    -ValidityInMonths 12 `
    -RenewAtNumberOfDaysBeforeExpiry 60 `
    -KeyNotExportable:$False `
    -ReuseKeyOnRenewal:$False;
Add-AzureKeyVaultCertificate -VaultName $kv.VaultName `
    -Name $devOpsSpnCertificateName `
    -CertificatePolicy $policy;
This will instruct Key Vault to create a self-signed certificate by the name My-cool-DevOps-auth-cert (Key Vault object names may only contain letters, digits and dashes) and have it expire in 12 months. Also, it will auto-renew 60 days before expiry. At that point, it is advisable to set the new certificate into the Azure AD App registration.
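Certificate creation is asynchronous, so it may take a moment before the certificate is actually issued. A sketch for peeking at the progress with the same AzureRM Key Vault cmdlets:
# Check the pending creation operation and, once completed, the certificate itself.
Get-AzureKeyVaultCertificateOperation -VaultName $kv.VaultName -Name $devOpsSpnCertificateName;
Get-AzureKeyVaultCertificate -VaultName $kv.VaultName -Name $devOpsSpnCertificateName;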
Now you have established a known source for certificates. You as an admin can access it, also your pipeline can access it.
Allow certificate authentication
To use this shiny new certificate for authentication, the following spell needs to be run to first get the X.509 certificate, extract the required details out of it and allow using it as a login credential:
$pfxSecret = Get-AzureKeyVaultSecret -VaultName $kv.VaultName `
-Name $devOpsSpnCertificateName;
$pfxUnprotectedBytes = [Convert]::FromBase64String($pfxSecret.SecretValueText);
$pfx = New-Object Security.Cryptography.X509Certificates.X509Certificate2 -ArgumentList `
$pfxUnprotectedBytes, $null, `
[Security.Cryptography.X509Certificates.X509KeyStorageFlags]::Exportable;
$validFrom = [datetime]::Parse($pfx.GetEffectiveDateString());
$validFrom = [System.TimeZoneInfo]::ConvertTimeBySystemTimeZoneId($validFrom, `
[System.TimeZoneInfo]::Local.Id, 'GMT Standard Time');
$validTo = [datetime]::Parse($pfx.GetExpirationDateString());
$validTo = [System.TimeZoneInfo]::ConvertTimeBySystemTimeZoneId($validTo, `
[System.TimeZoneInfo]::Local.Id, 'GMT Standard Time');
$base64Value = [System.Convert]::ToBase64String($pfx.GetRawCertData());
$base64Thumbprint = [System.Convert]::ToBase64String($pfx.GetCertHash());
$cred = New-AzureADApplicationKeyCredential -ObjectId $adApp.ObjectId `
-CustomKeyIdentifier $base64Thumbprint `
-Type AsymmetricX509Cert `
-Usage Verify `
-Value $base64Value `
-StartDate $validFrom `
-EndDate $validTo;
That's it! Now you're able to use the X.509 certificate from Azure Key Vault for logging in as Azure DevOps service principal.
Grant permissions to administer AD
The obvious final step is to allow the DevOps service principal to make changes in Azure AD. By default, it has no admin rights in Azure AD at all.
About the level of access: I chose to go with Company Administrator, as I'm doing a setup for a custom AD-domain. Without that requirement, an Application administrator would do the trick. Docs for the different roles are at https://docs.microsoft.com/en-us/azure/active-directory/users-groups-roles/directory-assign-admin-roles.
So, the spell goes:
$roleName ='Company Administrator';
$role = Get-AzureADDirectoryRole | where-object {$_.DisplayName -eq $roleName};
Depending on your setup and what you've been doing in your AD before this, it is possible that getting the role fails. Enable it with this one:
$roleTemplate = Get-AzureADDirectoryRoleTemplate | ? { $_.DisplayName -eq $roleName };
Enable-AzureADDirectoryRole -RoleTemplateId $roleTemplate.ObjectId;
Get the service principal, and grant the designated role for it:
$devOpsSpn = Get-AzureRmADServicePrincipal | `
where-object {$_.ApplicationId -eq $adApp.AppId};
Add-AzureADDirectoryRoleMember -ObjectId $role.Objectid -RefObjectId $devOpsSpn.Id;
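If you want to double-check that the role assignment stuck, listing the role members should now include your DevOps service principal; a quick sketch:
# The DevOps service principal should appear among the role members.
Get-AzureADDirectoryRoleMember -ObjectId $role.ObjectId |
    Where-Object { $_.ObjectId -eq $devOpsSpn.Id };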
Now you're good to go.
What to do in a task
A lot of setup has been done already; now we're ready to go for the actual business.
Here is some code. First get the certificate from Key Vault, then connect the DevOps service principal to Azure AD (note that in the actual script, the function defined below needs to appear before the call to it):
$pfxSecret = Get-AzureKeyVaultSecret -VaultName $kv.VaultName `
-Name $devOpsSpnCertificateName;
$pfxUnprotectedBytes = [Convert]::FromBase64String($pfxSecret.SecretValueText);
$pfx = New-Object Security.Cryptography.X509Certificates.X509Certificate2 -ArgumentList `
    $pfxUnprotectedBytes, $null, `
    [Security.Cryptography.X509Certificates.X509KeyStorageFlags]::Exportable;
ConnectCurrentSessionToAzureAD $pfx;
function ConnectCurrentSessionToAzureAD($cert) {
$clientId = (Get-AzureRmContext).Account.Id;
$tenantId = (Get-AzureRmSubscription).TenantId;
$adTokenUrl = "https://login.microsoftonline.com/$tenantId/oauth2/token";
$resource = "https://graph.windows.net/";
$now = (Get-Date).ToUniversalTime();
$nowTimeStamp = [System.Math]::Truncate((Get-Date -Date $now -UFormat %s -Millisecond 0));
$thumbprint = [System.Convert]::ToBase64String($cert.GetCertHash());
$headerJson = @{
alg = "RS256"
typ = "JWT"
x5t = $thumbprint
} | ConvertTo-Json;
$payloadJson = @{
aud = $adTokenUrl
nbf = $nowTimeStamp
exp = ($nowTimeStamp + 3600)
iss = $clientId
jti = [System.Guid]::NewGuid().ToString()
sub = $clientId
} | ConvertTo-Json;
$jwt = New-Jwt -Cert $cert -Header $headerJson -PayloadJson $payloadJson;
$body = @{
grant_type = "client_credentials"
client_id = $clientId
client_assertion_type = "urn:ietf:params:oauth:client-assertion-type:jwt-bearer"
client_assertion = $jwt
resource = $resource
}
$response = Invoke-RestMethod -Method 'Post' -Uri $adTokenUrl `
-ContentType "application/x-www-form-urlencoded" -Body $body;
$token = $response.access_token
Connect-AzureAD -AadAccessToken $token -AccountId $clientId -TenantId $tenantId | Out-Null
}
Now, you're good to go with cmdlets like New-AzureADApplication and Set-AzureADApplication, or whatever you wanted to do with your Azure AD. Suggestions can be found in the docs at https://docs.microsoft.com/en-us/powershell/module/azuread/?view=azureadps-2.0.
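As a concrete example of what such a task could then do, here is a small sketch that adds a reply URL to an existing App registration; the display name and the URL are made-up placeholders, not anything from my actual setup:
# Hypothetical example: add a reply URL to an existing App registration.
$myApp = Get-AzureADApplication -Filter "DisplayName eq 'My Web Application'";
$replyUrls = $myApp.ReplyUrls;
$replyUrls.Add("https://test.example.com/signin-oidc");
Set-AzureADApplication -ObjectId $myApp.ObjectId -ReplyUrls $replyUrls;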
About PowerShell-modules in a task
This won't be a surprise to you: not all the modules you'll be needing are there on an Azure DevOps agent running your tasks. As an example, AzureAD won't be there, and neither will the required JWT-module. What I'm doing in my pipeline task is installing the requirements like this:
Install-Module -Name JWT -Scope CurrentUser -Force
Now the task won't ask anything (-Force) and it won't require administrator privileges, because -Scope CurrentUser installs the module for the current user instead of a system-wide location.
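The same pattern covers the AzureAD-module itself. A sketch of how the beginning of such a task script could look:
# Install and import the modules missing from the hosted agent, for the current user only.
Install-Module -Name AzureAD -Scope CurrentUser -Force;
Install-Module -Name JWT -Scope CurrentUser -Force;
Import-Module -Name AzureAD;
Import-Module -Name JWT;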
Finally
Phew! That was a lot to chew on. But now my entire application is maintained via an Azure DevOps release pipeline. If something goes wrong, I can always run a deployment in my environment setup pipeline and it will guarantee all the settings are as they should be.
Breaking the paywall
Wednesday, April 3. 2019
Newspaper media and magazines have been suffering from digitalization for two decades. Quite a few of the traditional publishers have been struggling to keep their revenue stream flowing.
Back in the 90s Internet boom they simply went to the net "because they had to". No clue what to do there, though. Paper sales were going strong, no actions required. Then the obvious happened: people stopped subscribing to their paper services. Meanwhile, the same people who cancelled their subscriptions were reluctant to pay for the service on the net. The reasons were manifold: bad implementation of the site, bad implementation of subscription and payment, etc.
For the past couple of years the trend has been to simply block readers out of the site by erecting a paywall. That will keep the free leechers (like me) out, but I'm not certain it improves sales. Maybe it does, because a lot of the media is doing it. However, yet again the implementation is really bad. There must be enough complexity in the paywall that people like me cannot simply run something like this in a Javascript console:
javascript:(function(d){d.cookie='Bhb2CdOB3V=; expires=Thu, 01 Jan 1970 00:00:00 UTC; path=/; domain=.tivi.fi;'})(document)
If you have no idea what a Javascript console is (most non-developers don't), don't worry. Here is the same thing as a bookmarklet. The concept of bookmarking sites in your browser is long gone; everybody just googles everything. Since nobody maintains bookmarks anymore, managing them in a modern browser is not-so-user-friendly. However, the idea here is for you to create a Javascript bookmark, aka a bookmarklet. In Chrome, something like this:

Then it is easy to just click the bookmark when you run out of reads. Reload the page and keep reading!
Finally:
Apologies to the newspaper publisher for pissing in their coffee pot. Your product is good and I have enjoyed reading the articles; it's just the price point, you're too expensive! My motivation here is to offer you a pro-bono implementation evaluation. Hint: you failed badly.
I totally expect them to improve their implementation. I also promise not to maintain any possible hacks for their future versions. Also, I'd like to offer a piece of advice.
There are two real options you can go with:
1) implement mandatory login
or
2) keep some articles free, some behind paywall. That seems to be the only working option of keeping people like me out of your system.
Azure support for cloud-dyndns Python-script
Monday, April 1. 2019
Last year I wrote a tool to keep a server having a dynamic IP-address updated into Rackspace DNS (that's free-of-charge!). See the article DynDNS updates to your Cloud DNS for details. Since the software is running in "production", I've been maintaining the thing. Production is in quotes, because it's not a real production-production thing. It's just something I run to make my life easier.
For a while, I've been running Azure DNS -provider and now I published the code at project's Github repo: https://github.com/HQJaTu/cloud-dyndns
Since Azure doesn't have API-keys, you need to use a service principal. Grasping the concept of a service principal is tricky. For example, the official Azure documentation at https://docs.microsoft.com/en-us/azure/active-directory/develop/app-objects-and-service-principals fails to explain it. My own explanation for a service principal is: it is a machine user account. Yes, you can limit regular users' access in Azure, but with a service principal it is even easier. Also, a service principal cannot have 2-factor authentication enabled for it. That's why I consider it a machine user.
Additional:
Given the requirements, I added an --ip-address-detect-public command-line argument to skip reading the server's current IP-address and instead probe the outside world for it. The exact service used is https://ipinfo.io/. With this feature, a DNS entry can be updated with a static, detected or probed IP. Nice!
StackExchange flair received
Monday, March 11. 2019
Whoa! I finally hit the long-awaited 200-point mark on Stack Overflow. The good part about that is that Stack Exchange starts publishing a public badge they call "flair". They do this to make it possible for me to publicly boast about my prowess. To get some public boasting going on, my flair looks like this:

All of you Peeping Toms out there, I'll save you a couple keystrokes of googling: my public user profile is at https://stackexchange.com/users/1684769/hqjatu
Blog improvement: Responsive theme
Sunday, March 10. 2019
Going back down memory lane: back in 2009 Mr. Wroblewski coined the term "mobile first". At the time, pocket computers, or not-so-smart phones, were a new thing, but booming heavily. The first iPad wasn't out there yet and tablet computing was merely a curiosity. Still, Mr. Wroblewski envisioned a future where most of the web browsing would be done with a mobile device.
To state the obvious: Mr. Wroblewski was not wrong in his statement. His future vision started becoming a reality in April 2015, when Google, as the #1 authority on website content, announced they would be demoting pages that are not mobile-friendly. In November 2016 Google announced the Mobile-first Indexing initiative. In March 2018 they followed up on that and started rolling out mobile-first indexing.
Roughly for the past 4-5 years this blog of mine has been suffering from this demotion, and I really started taking the hit in 2018. I originally chose this paper-style theme for the blog when I first created it in 2013, and it has been unchanged ever since. Not doing anything about it was an obvious mistake on my part. However, it took a while for the Serendipity theme repository to even have properly implemented responsive design themes, so it was impossible for me to change the theme. Still, why would I even want to change the way my blog looks!
Finally: I chose to improve the theme by making it responsive. In practice, I bootstrapped it with Bootstrap. This is a super-cool project originally created by a few guys at Twitter. Read the Wikipedia article about it at https://en.wikipedia.org/wiki/Bootstrap_(front-end_framework). Getting to understand the 12-column grid system takes a while, but when you do the groundwork of arranging page content into rows and columns, and realize that you can get six different chunks of display real estate with 1, 2, 3, 4, 6 and 12 columns depending on the user's screen size, the results will be amazing! Most of this trickery doesn't even require any JavaScript to run. Adding floating navigation bars and such will require JS, but the majority of the goodies work fully on bare HTML/CSS.
So, this is where I started my journey:

Google Mobile-Friendly Test spits out a lot of grievance from my blog.
This is the result with this new theme applied:

Oh yeah! Now GoogleBot should be much happier with this one. For those of you who want to play around with this, just make the browser window very narrow and see the point where the layout breaks and drops the right-side menu off. To access the goodies in the right-side menu, a hamburger menu is added to the top of the page.
The GoogleBot tester isn't completely happy with my page; there are some load errors. However, I'm not sure exactly what the load errors are, as they're labeled "other". Tons of people on The Net are suffering from the same. The warning says:
Page partially loaded
Not all page resources could be loaded. This can affect how Google sees and understands your page. Fix availability problems for any resources that can affect how Google understands your page.
Based on a lot of other people's comments found on the web, the "errors" are merely warnings and they may or may not reduce the page rank. I agree with some commenters that the Googlebot smartphone test client is using super-short timeouts, forcing people to optimize their websites to do as little loading as fast as possible.
The standard disclaimer applies:
If any of you think that this theme doesn't work properly or that I did something wrong, drop me a comment or go to my Github fork at https://github.com/HQJaTu/additional_themes/tree/brownpaper-r2 and create a new pull request. I'm more than interested in keeping this new theme running optimally.
RouterPing - Gathering ICMPv4 statistics about a router
Monday, September 24. 2018
When talking about the wonderful world of computers, networking and all the crap we enjoy tinkering with, sometime, somewhere, things don't go as planned. When thinking about this a bit closer, I think that's the default. Okok. The environment is distributed, and in a distributed environment there are tons of really nice places that can be misconfigured or flaky.
My fiber connection (see the details in an old post) started acting weird. Sometimes it failed to transmit anything in or out. Sometimes it failed to get a DHCP lease. The problem wasn't that frequent, once or twice a month for 10-15 minutes. As the incident happened at random moments, I didn't pay much attention to it. Then my neighbor asked if my fiber was flaky, because his was. Anyway, he called the ISP helpdesk and a guy came to evaluate if everything was ok. The cable guy said everything was "ok", but he "cleaned up" something. A mighty obscure description of the problem and the fix, right?
Since the trust was gone, I started thinking: why isn't there a piece of software you can run that would actually gather statistics on whether your connection works and how well the ISP's router responds? The project is at https://github.com/HQJaTu/RouterPing. Go see it!
How to use the precious router_ping.py
Oh, that's easy!
First you need to establish a point in the network you want to measure. My strong suggestion is to measure your ISP's router. That will give you an idea of whether your connection works. You can also measure a point further away from you, like Google or similar. That would indicate whether your connection to a distant point works. Sometimes there has been a flaw in networking preventing access to a specific critical resource.
On a Linux prompt, you can do an ip route show. It will display your default gateway IP-address. It will say something like:
default via 62.0.0.1 dev enp1s0
That's the network gateway your ISP suggests you use to make sure your Internet traffic is routed to reach distant worlds. And that's precisely the point you want to measure. If you cannot reach the nearest point in the cold Internet-world, you cannot reach anything.
When you know what to measure, just run the tool.
Running the precious router_ping.py
Run the tool (as root):
./router_ping.py <remote IP>
To get help about available options --help is a good option:
optional arguments:
-h, --help show this help message and exit
-i INTERVAL, --interval INTERVAL
Interval of pinging
-f LOGFILE, --log-file LOGFILE
Log file to log into
-d, --daemon Fork into background as a daemon
-p PIDFILE, --pid-file PIDFILE
Pidfile of the process
-t MAILTO, --mail-to MAILTO
On midnight rotation, send the old log to this email
address.
A smart suggestion is to store the precious measurements into a logfile. Also, running the thing as a daemon would be smart; there is an option for it. The default interval for sending ICMP requests is 10 seconds. Go knock yourself out and use whatever value (in seconds) is more appropriate for your measurements.
If you're creating a logfile, it will be auto-rotated at midnight of your computer's time. That way you'll get a bunch of files instead of one huge one. The out-rotated file will be suffixed with a date extension describing the particular day the log is for. You can also auto-email the out-rotated logfile to any interested recipient; that's what the --mail-to option is for.
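As an example, a daemonized run against the gateway from the earlier example, using only the options listed above (the log file path, pidfile path and e-mail address are made-up placeholders), could look like this:
./router_ping.py -i 10 -f /var/log/router_ping.log -d -p /var/run/router_ping.pid -t me@example.com 62.0.0.1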
Analysing the logs of the precious router_ping.py
The output logfile will look like this:
2018-09-24 09:09:25,105 EEST - INFO - 62.0.0.1 is up, RTT 0.000695
2018-09-24 09:09:35,105 EEST - INFO - 62.0.0.1 is up, RTT 0.000767
2018-09-24 09:09:45,105 EEST - INFO - 62.0.0.1 is up, RTT 0.000931
The above log excerpt should be pretty self-explanatory. There is a point in time when an ICMP request to the ISP's router was made, and the response time (in seconds) was measured.
When something goes south, the log will indicate the following:
2018-09-17 15:01:42,264 EEST - ERROR - 62.0.0.1 is down
The log-entry type is ERROR instead of INFO, and the log will clearly indicate the failure. Obviously, with ICMP there is nothing you can measure in that case. The failure is defined as "we waited a while and nobody replied". In this case, my definition of "a while" is one (1) second. Given that the typical measurement is around 1 ms, a thousand times that is plenty.
Finally
Unfortunately ... or fortunately, depending on how you view the case, the cable guy fixed the problem. My system hasn't caught a single failure since. When it happens, I'll have proof of it.
Parsing multi-part sitemap.xml with Python
Wednesday, July 11. 2018
A perfectly valid sitemap.xml can be split into multiple files. For that, see the specs at https://www.sitemaps.org/protocol.html. This is likely to happen in any content management system. For example, WordPress loves to split your sitemaps into multiple dynamically generated parts to reduce the size of a single XML download.
It should be an easy task to slurp in the entire sitemap containing all the child-maps, right? I wouldn't be blogging about it if it were that easy.
The best solution I bumped into was by Viktor Petersson and his Python gist. I had to fix his sub-sitemap parsing algorithm first to get the desired result. What he didn't account for was the fact that a sub-sitemap address can have URL parameters in it. Those dynamically generated ones sometimes do.
Go get my version from https://gist.github.com/HQJaTu/cd66cf659b8ee633685b43c5e7e92f05
My thanks go to Mr. Petersson and Mr. Addyman for creating the initial versions.