Fedora Linux 39 - Upgrade
Sunday, November 12. 2023

Twenty years with Fedora Linux. Release 39 is out!
I've run Red Hat since the 90s. Version 3.0.3 was released in 1996, and ever since I've had something from them running. When they scrapped Red Hat Linux in 2003 and went for Fedora Core / RHEL, I've had something from those two running. For the record: I hate that semi-working, everything-done-only-halfway Debian/Ubuntu crap. When I want to toy around with something that doesn't quite fit, I choose Arch Linux.
Getting your Fedora in-place-upgraded is a rather simple process. Docs are in the article Performing system upgrade.
First, make sure the existing system is upgraded to all the latest stuff with: dnf --refresh upgrade
Then make sure the upgrade tooling is installed: dnf install dnf-plugin-system-upgrade
Now you're good to go to download all the new packages with: dnf system-upgrade download --releasever=39
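For convenience, here is the whole prep phase collected into one place (a sketch assuming root/sudo and 39 as the target release; adjust the version number on later rounds):
dnf --refresh upgrade
dnf install dnf-plugin-system-upgrade
dnf system-upgrade download --releasever=39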
Note: This workflow has existed for a long time and is likely to keep working in the future. Next time, all you have to do is replace the release version with the one you want to upgrade to.
Now all prep is done and you're good to go for the actual upgrade. To state the obvious: this is the dangerous part. Everything before this has been a warm-up run.
dnf system-upgrade reboot
If you have a console, you'll see the progress.

Installing the packages and rebooting into the new release shouldn't take too many minutes.
When back in, verify: cat /etc/fedora-release ; cat /proc/version
In my case, this resulted in:
Fedora release 39 (Thirty Nine)
Linux version 6.5.11-300.fc39.x86_64 (mockbuild@d23353abed4340e492bce6e111e27898) (gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4), GNU ld version 2.40-13.fc39) #1 SMP PREEMPT_DYNAMIC Wed Nov 8 22:37:57 UTC 2023
Finally, you can do optional clean-up: dnf system-upgrade clean && dnf clean packages
That's it! You're done for at least the next six months.
Bare Bones Software - BBEdit - Indent with tab
Saturday, October 28. 2023
As a software developer, I've used pretty much every text editor there is. Bold statement. Fact remains, there aren't that many of them in common use.
On Linux, I definitely use Vim, rarely Emacs, almost never Nano (any editor requiring Ctrl-s over SSH is crap!).
On Windows, mostly Notepad++, rarely Windows' own Notepad. Both use Ctrl-s, but won't work over an SSH session.
On Mac, mostly BBEdit.
Then there is the long list of IDEs, available on many platforms, which I've used or am still using to work with multiple programming languages and file formats. A short list of typical ones would be: IntelliJ and Visual Studio.
Note: VS Code is utter crap! The people who designed VS Code were either high on drugs or plain old-fashioned idiots. They have no clue what a developer needs. That vastly overrated waste of disc space is a last-resort editor for me.
BBEdit 14 homepage states:
It doesn’t suck.®
Oh, but it does! It can be made less sucky, though.
Here is an example:

In the above example, I'm editing a JSON file. It happens to be in a rather unreadable state, and sprinkling a bit of indentation on top of it should make the content easily readable.
Remember, earlier I mentioned a long list of editors. Virtually every single one of those has functionality to highlight a section of text and indent the selection by pressing Tab. Not BBEdit! It simply replaces the entire selection with a tab character. Insanity!
Remember the statement on not sucking? There is a well-hidden option:

The secret option is called Allow Tab key to indent text blocks and it was introduced in version 13.1.2. Why this isn't the default ... correction: why this wasn't the default behaviour from the get-go is a mystery.
Now the indentation works as expected:

What puzzles me is the difficulty of finding the option for indenting with Tab. I googled far & wide. To no avail. Those technical writers @ Bare Bones really should put some effort into making this option better known.
Wi-Fi Router Lifespan: A Threat to National Security?
Sunday, October 15. 2023
Wireless LAN, or Wi-Fi, is a topic I've written a lot about. Router hardware is common. Most end-user appliances people use are wireless. Wi-Fi combined with a proper Internet connection has tons of bandwidth and is responsive. From a hacking perspective, quite many of those boxes run Linux or something with hackable endpoints. Alternatively, on the electronics board there are interesting pins that a person with knowledge can use to lure the box into doing things the manufacturer didn't expect to happen. Oh, I love hardware hacks!
Routers are exploitable
Back in 2016, these routers were harnessed for a new use. From a hacker's perspective, there exists a thing which works perfectly but doesn't do the thing the hacker wishes it to do. So, after a little bit of hacking, the device "learns" a new skill. In this case the new skill was participating in criminal activity as a DDoS traffic generator. The Geekflare article How to Secure Your Router Against Mirai Botnet Attacks explains it as follows:
According to Paras Jha, one of the authors of the Mirai bot, most of the IoT devices infected and used by the Mirai Botnet were routers.
A word from national intelligence organization
Fast forward seven years to 2023. Things really haven't changed much. There exist even more Wi-Fi routers. They are manufactured in huge volumes, designed to make maximum profit for the manufacturer, and are lacking a ton of security features. Combine these hackable routers with all the geopolitical tension in the world right now, and our beloved routers have become a point of interest for the Finnish Security and Intelligence Service (Supo, its Finnish acronym).
This month, in their annual National Security Overview 2023, in Threats to national security are continually evolving, they issued a warning: "Cyber espionage exploits unprotected consumer devices". Actually, they did pretty much the same thing back in March 2021 with the following statement: "Supo has observed that the intelligence services of authoritarian states have been exploiting dozens of network devices and servers of Finnish individuals and businesses in cyber espionage operations."
How?
Having a national intelligence service warn about crappy network hardware is a big deal. They don't issue the same warning about your toaster, dishwasher or cheap Android phone. The same characteristics don't really apply to anything else. A device needs to be:
- On-line / Internet connected
- See Mr. Hyppönen's book: If It’s Smart, It’s Vulnerable
- And by the way, all routers are computers. Any computer counts as a "smart" device.
- Insecure
- Yeah. Even the expensive Wi-Fi routers have crappy manufacturer firmware in them. Cheap ones are especially vulnerable. And even the good ones expire after a couple of years, as the manufacturer loses interest and stops publishing new upgrades.
- Exist in masses
- Literally every home and business has at least one. I don't know the total number of homes and businesses in the World, but it must be a big number!
Given those three characteristics, the following things are true:
- Every single vulnerable device can be found easily.
- As for the Internet, there are 3,706,452,992 public IPv4 addresses. That seems like a big number, but in reality it isn't. (Actually, the scale is the same as the number of homes and businesses.)
- In 2013 the entire address space could be scanned in 44 minutes. (See Washington Post article Here’s what you find when you scan the entire Internet in an hour)
- Every single vulnerable device can be cracked wide open without human interaction by automated tools in a split second.
- Every single cracked device can be made to do whatever the attacker wants.
- A typical scenario is some sort of criminal activity.
Why?
100% of the lay-persons I've talked to state: "I don't care if I'm hacked. I'm not an important person and have nothing to hide." What they miss is the attacker wanting to use their paid connection, impersonating them, to commit crimes.
We have also learned that not all attackers are after money; some of them are state actors doing espionage. There are multiple types of attackers, ranging from teenagers defacing their school's website to cybercriminals stealing money to government-sponsored spies stealing state secrets.
Now we're getting to the reason why intelligence services are issuing warnings!
Scaring consumers - There is profit to be made
As these intelligence service warnings have existed for a couple of years now, in May 2023 a major ISP / telco in Finland, DNA, issued a press release (in Finnish, sorry) stating the following:
Does your home have an over four-year-old router? An expert reveals why it can be a risk.
Translated quote:
As a rule of thumb I'd say: a four-year-old router is aged for a basic user. Advanced users may replace their routers every two years.
Going open-source!
For clarity: I'm not disputing that an aged router which has never been upgraded to the latest available firmware is a security risk. It is! What I, as a hacker, am disputing is the need to purchase a new one. The general population will never be able to hack their devices into running OpenWrt or DD-WRT, that's for sure. Instead, educating people about the risks involved with cheap consumer electronics and offering advice on smart choices would be preferred.
Here is my advice:
- Router manufacturers (and ISPs) are commercial entities aiming to maximize their profit. Their intent is to sell you a new router even when the hardware of your old device is still OK.
- Part of profit maximizing is abandoning the device a couple of years after its release. There exist manufacturers which have never released a single security patch. Profit maximizing is very insecure for you as a consumer.
- Hardware doesn't expire as fast as software does. There are exceptions to this: power supplies and radio-frequency electronics take the greatest wear & tear on your 24/7-enabled device, so sometimes getting a new box is the best option.
- Your old hardware may be eligible for re-purposing with open-source options. Ask your local hacker for details.
- Open-source firmware gets upgrades for both features and security for the foreseeable future. This is possible because open-source firmware unifies all the various hardware under a single umbrella.
- Make a habit of upgrading your open-source firmware every now and then. New firmware versions will be made available on a regular basis.
Personally, for the past 19 years I've only purchased Wi-Fi routers which have OpenWrt or DD-WRT support. Typically, after unboxing the thing, the factory firmware runs only for those precious minutes it takes to get a proper Linux running on it. This is what I recommend everybody else do, too!
PS. Kudos to those manufacturers who skipped creating and abandoning firmware of their own and instead license open-source solutions. There aren't many of you. Keep it up!
More USB Ports for iMac - HyperDrive 5-in-1 USB-C Hub
Sunday, September 3. 2023
Apple computers are known for having not-so-many ports. The design philosophy is for a machine to be self-sufficient and to not need any extensions nor ports for them. Reality bites and eats any ideology for breakfast. I definitely need the extensions!
So, for my new iMac, I went shopping for more ports. Whenever I need to do this, I'll first check Hyper. Their products are known to be of the highest quality and well designed to meet the specific requirements of a Mac. To be clear: on an iMac there are four ports: two USB-C and two Thunderbolt 4. This ought to be enough for everybody, right? Nope. All of them are on the back of the computer. What if you need something with easy access?
This is what's in a HyperDrive for iMac box:


Those changeable covers are designed to match the colour of the iMac. This reminds me of 1998 when Nokia introduced the Xpress-on Covers for 5110:

Image courtesy of nokiaprojectdream.com.
This is how the USB-hub clamps into iMac:

Now I don't have to blindly fumble for the location of a port behind the computer. On my desk, the back of the iMac is definitely not visible nor reachable. It is also noteworthy that somebody might declare the setup "ugly" or complain about my choice of Xpress-on cover colour. The iMac is silver, but I have a yellow cover on the HyperDrive. That's how I roll!
New toys - Apple iMac
Monday, August 28. 2023
Summer here in Finland is over. It's windy and raining cats&dogs. Definitely beginning of autumn.
For me, summer is typically a time to do lots of other things than write blog posts. No exceptions this time. I did tinker around with computers some: a new rack server to replace the old Intel Atom, some USB probing via DLMS, some Python code, etc. etc. I may post something on those projects later.
And: I got a new iMac. Here are some pics:


Back in the day, there used to be 21.5" and 27" iMacs. My old one is actually the small one. Since Apple abandoned Intel CPUs, it's one-size-fits-all: only the 24" option is available. Also, the iMacs I have are the VESA-mounted ones. There is no room on my desk.
Apple's Magic Mouse is for somebody else. I hate the thing! Good thing I still have a perfectly working MX Anywhere 2. On my other computer I (obviously) use an MX Anywhere 3 and have plans to upgrade it into a 3S.
Cabling in an iMac is not-so-standard:

The Ethernet RJ-45 socket is in the PSU, which has your standard IEC C5 "Mickey Mouse" connector. On the other end, there is a MagSafe connector. With Ethernet in it! It has to be some sort of USB-C / Thunderbolt 4 thingie with a really weird magnetic connector.
Transferring settings and data from old OS X to modern macOS works like a charm. Nothing in Windows can touch that level of easiness. Also, now I have an OS that can do Time Machine backups to a Linux/Samba share. Nice! Maybe I should post something about that setup as well.
Next: run the thing for 10+ years. Btw., the old one is for sale; it really has no monetary value, but it works and I don't need it anymore.
Nuvoton NCT6793D lm_sensors output
Monday, July 3. 2023
LM-Sensors is a set of libraries and tools for accessing your Linux server's motherboard sensors. See more @ https://github.com/lm-sensors/lm-sensors.
Have you ever wondered why, in Windows, it is tricky to get readings of your CPU-fan rotation speed or the core temperatures of your fancy GPU without manufacturer utilities? Obviously vendors do provide all the possible readings in their own utilities, but for people who want to read, record and store the data for their own purposes, things get hairy. Nothing generic exists, and for some unknown reason such an API isn't even planned.
In Linux, The One toolkit to use is LM-Sensors. On the kernel side, there exists the Linux Hardware Monitoring kernel API. For this stack to work, you also need a kernel module specific to your motherboard, providing the requested sensor information via this beautiful API. It's also worth noting that your PC's hardware will have multiple sensor data providers. An incomplete list would include: motherboard, CPU, GPU, SSD, PSU, etc.
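If you're starting from scratch on Fedora, the initial setup goes roughly like this (a sketch assuming the stock lm_sensors package and a root shell):
dnf install lm_sensors
sensors-detect    # interactive probe of your hardware; the suggested defaults are usually safe
sensors           # print current readings from all detected chips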
Now that sensors-detect has found all your sensors, confirm that sensors outputs what you'd expect it to. In my case there was a major malfunction. On boot, the following happened when the system started sensord (in case you didn't know, kernel messages can be read via dmesg):
systemd[1]: Starting lm_sensors.service - Hardware Monitoring Sensors...
kernel: nct6775: Enabling hardware monitor logical device mappings.
kernel: nct6775: Found NCT6793D or compatible chip at 0x2e:0x290
kernel: ACPI Warning: SystemIO range 0x0000000000000295-0x0000000000000296 conflicts with OpRegion 0x0000000000000290-0x0000000000000299 (_GPE.HWM) (20221020/utaddress-204)
kernel: ACPI: OSL: Resource conflict; ACPI support missing from driver?
systemd[1]: Finished lm_sensors.service - Hardware Monitoring Sensors.
This conflict resulted in no mobo readings being available! NVMe, GPU and CPU cores were OK, but the part I was mostly looking for, fan RPMs and mobo temperatures just to verify my system's health, was missing. No such joy.
Uff.
It seems this particular Linux kernel module has issues. Or, to state it another way: mobo manufacturers have trouble integrating the Nuvoton chip into their mobos. On the Gentoo forums, there is a helpful thread: [solved] nct6775 monitoring driver conflicts with ACPI
Disclaimer: For the ROG Maximus X Code mobo, adding acpi_enforce_resources=no to the kernel parameters is the correct solution. Results will vary depending on what mobo you have.
Such an ACPI setting can be made permanent by first querying which kernel entry is in use (I run Fedora): grubby --info=$(grubby --default-index). The default kernel entry can then be updated with: grubby --args="acpi_enforce_resources=no" --update-kernel DEFAULT. After a reboot the fix is in effect, the ACPI warning is gone and mobo sensor data can be seen.
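As a recap, the whole grubby dance looks like this (a sketch; the kernel parameter is the one discussed above, verification via /proc/cmdline after the reboot):
grubby --info=$(grubby --default-index)                             # inspect the current default boot entry
grubby --args="acpi_enforce_resources=no" --update-kernel DEFAULT   # append the parameter to the default entry
reboot
# after the reboot:
cat /proc/cmdline                                                   # acpi_enforce_resources=no should now be listed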
As a next step you'll need userland tooling to interpret the raw data into human-readable information with semantics. A few years back, I wrote about Improving Nuvoton NCT6776 lm_sensors output. It's mainly about bridging the flow of zeros and ones into something having meaning to humans. This is my LM-Sensors configuration for the ROG Maximus X Code:
chip "nct6793-isa-0290"
# 1. voltages
ignore in0
ignore in1
ignore in2
ignore in3
ignore in4
ignore in5
ignore in6
ignore in7
ignore in8
ignore in9
ignore in10
ignore in11
label in12 "Cold Bug Killer"
set in12_min 0.936
set in12_max 2.613
set in12_beep 1
label in13 "DMI"
set in13_min 0.550
set in13_max 2.016
set in13_beep 1
ignore in14
# 2. fans
label fan1 "Chassis fan1"
label fan2 "CPU fan"
ignore fan3
ignore fan4
label fan5 "Ext fan?"
# 3. temperatures
label temp1 "MoBo"
label temp2 "CPU"
set temp2_max 90
set temp2_beep 1
ignore temp3
ignore temp5
ignore temp6
ignore temp9
ignore temp10
ignore temp13
ignore temp14
ignore temp15
ignore temp16
ignore temp17
ignore temp18
# 4. other
set beep_enable 1
ignore intrusion0
ignore intrusion1
I'd like to credit Mr. Peter Sulyok for his work on the ASRock Z390 Taichi. That mobo happens to use the same Nuvoton NCT6793D chip for LPC/eSPI SI/O (I have no idea what those acronyms stand for, I just copy/pasted them from the chip data sheet). The configuration is on GitHub for everybody to see: https://github.com/petersulyok/asrock_z390_taichi
Also, I'd like to state my ignorance. After reading less than 500 pages of the NCT6793D data sheet, I have no idea what these are:
- Cold Bug Killer voltage
- DMI voltage
- AUXTIN1, or exactly what temperature measurement it serves
- PECI Agent 0 temperature
- PECI Agent 0 Calibration temperature
Remember, I did mention semantics. From the sensors command output I can read a value; what it translates into, no idea! Luckily, some of the readings are easy to understand and interpret. As an example, fan RPMs are really easy to verify by removing the fan from its connector. Here is an excerpt from my mobo manual explaining the fan connectors:

Now that data quality is taken care of and the output is meaningful, the next step is to start recording the data. In LM-Sensors, there is sensord for that. It is a system service taking a snapshot (you can define the frequency) and storing it for later use. I enrich the stored data points with system load averages; this enables me to estimate the relation between high temperatures and/or fan RPMs and how hard my system is working.
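Getting sensord running should be a one-liner or two; a sketch assuming Fedora's packaging, where sensord ships as its own lm_sensors-sensord subpackage with a systemd unit (check your distro for the exact names):
dnf install lm_sensors-sensord
systemctl enable --now sensord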
Finally, all the data gathered into an RRDtool database can easily be visualized with rrdcgi into HTML plus a set of PNG images, presenting a web page like this:

Nice!
Postman Making it Mandatory to Store All Information Into Postman's Servers
Monday, June 19. 2023
Postman. Not the guy delivering letters to your mailbox, the software which makes API-development much easier.
Earlier this year I was mighty shocked to learn Postman is tightening its grip on developers.
First they made using a Postman account mandatory. Any new installation on a workstation MUST log in to a Postman account. Earlier it was a strong suggestion, but there existed a skip button. Anybody having an old installation can keep their way of working. Whenever there is a need for a new installation, a surprise is imminent.
For the record: I do have a Postman account. On some of my machines, I do log into it to collaborate with other developers. There are a number of machines where I do not.
The traditional way of working is called "Scratch Pad" in the new world order. This Scratch Pad contains any locally stored information, such as collections of requests, URL endpoints, variables and secrets. Not sharing such delicate information with anybody is my preferred approach.
Hint: For any logged-in user, getting to your Scratch Pad is surprisingly easy. Access to it is well hidden, but available:

The settings sprocket wheel contains the option for Scratch Pad.
There is a change coming.
In September 2023 Postman will force storing all passwords, API keys, secrets and customers' endpoints on their servers. There is a blog post from May 2023: Announcing the new lightweight Postman API Client and sunsetting Scratch Pad.
It remains to be seen whether this "lightweight" Postman is a piece of useless crap or not. Somehow I don't see handing the precious secrets my customers gave me for safekeeping to a 3rd party, who can do whatever they want with them, as an option. In some cases, I'm pretty sure I'm contractually not even allowed to do that!

HP Color LaserJet custom X.509 certificate - Part 2
Sunday, June 18. 2023
Earlier this year, I wrote about how it was impossible to replace the X.509 certificate on an HP printer.
Let's state the good news first: I and many other people suspected a broken firmware. Indeed, that was the case. Now there is a more recent firmware which contains a fix for the issue.
Determining current firmware version
Version can be verified in System --> Administration menu.
How Laserjet's on-line firmware check works
While tinkering with this, it became evident my HP Laserjet unit isn't especially good at figuring out whether a new firmware is available or not. After some wiresharking, it was possible to determine the logic.
The main endpoint is at http://h19005.www1.hp.com/pub/laserjet/updates/me/manifest. Note: yes, it is HTTP, not HTTPS. Also, if you click the link, a file will be downloaded to your machine; this is because most browsers refuse to display octet-streams on screen.
The response is a human-readable, proprietary application/octet-stream with information about available Laserjet firmware versions. Among the data is a mapping table:
20190819: 20211122
20200324: 20211122
20200603: 20211122
20201022: 20211122
20211122: 20230321
20211221: 20230321
20220224: 20230321
20220309: 20230321
20220411: 20230321
20220729: 20230321
My interpretation is as follows:
- Firmware versions 20190819, 20200324, 20200603 and 20201022 can be upgraded to version 20211122.
- Any firmware version from 20211122 up to the most recent one can be upgraded to 20230321.
As we know the latest firmware, 20230321, contains the fix for certificate uploading, there is an upgrade path from any firmware to the latest and brightest version.
If I wanted to, using this URL it would be possible to create a daily/weekly/monthly check telling me whether I should take a look at my printer. The printer's own check mechanism isn't especially good, and I'd rather do the upgrading manually anyway.
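If I ever change my mind, a cron-able sketch along these lines would do (manifest URL from above; the file names are just placeholders):
curl -s http://h19005.www1.hp.com/pub/laserjet/updates/me/manifest -o manifest.new
diff -q manifest.new manifest.last || echo "Laserjet firmware manifest changed, go check the printer"
mv manifest.new manifest.last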
Certificate / PKCS #12
When the bug prevented uploading any new certificates, I investigated pretty much all the options OpenSSL has for packing PKCS#12 files.
The fixed firmware does accept any reasonable PKCS#12 file. It works whether the full certificate chain is in the file or not; the Laserjet only picks and uses the certificate itself, and any intermediate CA certs are ignored.
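For reference, a typical OpenSSL invocation for packing such a file goes along these lines (file names here are placeholders, not anything HP-specific; you will be prompted for an export password):
openssl pkcs12 -export -inkey printer.key -in printer.crt -certfile ca-chain.pem -out printer.pfx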
Finally
It took HP half a year to fix this. Unacceptable!
Custom authentication with Postman Pre-request script
Friday, May 19. 2023
Postman. Not the guy delivering letters to your mailbox, the software which makes API-development much easier.

With APIs, there are a number of authentication mechanisms. All kinds of passwords and keys are used to make sure I'm allowed to make the request. Modern authentication mechanisms include OAuth 2.0, OpenID Connect and such. Postman does support OAuth 2.0 out-of-the-box. For those less informed, here is a brief description by Auth0:
OAuth 2.0, which stands for “Open Authorization”, is a standard designed to allow a website or application to access resources hosted by other web apps on behalf of a user.
For the record, OAuth 2.0 is a good protocol. Getting a JWT and making subsequent requests with it is, among other things, secure & sensible. If you didn't guess it by now, yes, there is a glitch. RFC 6749 states the following in subsection 4.3.2, Access Token Request:
The client makes a request to the token endpoint by adding the following parameters using the "application/x-www-form-urlencoded" format per Appendix B with a character encoding of UTF-8 in the HTTP request entity-body:
grant_type REQUIRED.
Value MUST be set to "password".
username REQUIRED.
The resource owner username.
password REQUIRED.
The resource owner password.
Since 1999, when RFC 2617 was written, we have kind of grown out of sending URL-encoded payloads as form submissions. Today it is the '20s, and JSON is the way to go. This is the glitch I mentioned; computer says: NO! The spec doesn't state "you may use JSON if you prefer". I do. Many others do too. As Postman goes strictly by the spec, they don't support JSON-encoded request bodies in OAuth 2.0 in their boxed authentication code. They do support all the bells and whistles OAuth 2.0 has, but only per-spec.
Again, Postman is a good product; it does support freedom in the form of a pre-request script. I chose to exercise my freedom and crafted the following plan:
- Write a couple of lines of JavaScript to be run as a Postman pre-request script to handle authentication
- The script will be run for every request.
- It is possible to override this mechanism by writing a per-request pre-request script, in which case the collection one will be ignored.
- Per-request script can be as short as "return" to do a no-op.
- The required authentication credential, in the form of a JWT, is stored in Postman collection variables and is generally available for authenticating any request in the collection
- The pre-request script needs to be smart:
- It will contain logic to determine whether a fresh authentication is required or an already existing token can be used, to save time and resources.
- We don't want to flood the authentication provider by requesting a new JWT every single time. They might get angry and kick us out if we're too greedy.
- The script is capable of reading the required username and password credentials from the Postman environment.
- NOTE! Credentials are not read from the Collection, but from the Environment.
- These credentials will be exchanged for a token. The token will be used for the actual authentication.
- The authentication protocol used is OAuth 2.0-ish, using a JSON body and expecting a JSON response.
- That's it. Done!
This is my Postman request-collection with the pre-request script:

Further, I have collection variables and the environment (the name I have for this environment is Production):


Here is one request using the pre-request script from the collection:

There isn't much setup needed. It just inherits the authorization setting from the collection. Simple!
The full script I have is as follows:
const authTokenCollectionVariableName = "AuthToken";

// Verify an OAuth access token exists
if (pm.collectionVariables.get(authTokenCollectionVariableName)) {
    const tokStr = atob(pm.collectionVariables.get(authTokenCollectionVariableName).split('.')[1]);
    const tok = JSON.parse(tokStr);
    const expiry = new Date(tok.exp * 1000);
    const now = new Date();
    console.log(`Expiry: ${expiry}`);
    if (expiry > now) {
        console.log(`Auth - DEBUG: Has existing token. It is still valid. No need to renew.`);
        return;
    }
    console.log(`Auth - DEBUG: Has existing token. Renewing expired one.`);
} else {
    console.log(`Auth - DEBUG: No existing token found. Requesting one.`);
}

// Refresh the access token and store it into the collection variable
pm.sendRequest({
    url: pm.collectionVariables.get("Account_API_BaseURL") + "/api/authenticate",
    method: 'POST',
    header: {
        'Accept': 'application/json',
        'Content-Type': 'application/json'
    },
    body: {
        mode: 'raw',
        raw: JSON.stringify({
            username: pm.environment.get("User"),
            password: pm.environment.get("Password")
        })
    }
}, (error, response) => {
    pm.expect(error).to.equal(null);
    pm.expect(response).to.have.property('code', 200);
    pm.collectionVariables.set(authTokenCollectionVariableName, response.json().accessToken);
    console.log(`Auth - INFO: Saved token ok.`);
});
To repeat: this script will only run inside Postman as an embedded script. If you just whip up a random JavaScript environment, it most certainly will not have a pm object in it, and even if it does, you cannot do all those fancy things I'm doing there.
Now the problem is solved. My requests run as smoothly and effortlessly as they would with 100% per-RFC OAuth 2.0.
Postman. Not the guy delivering letters to your mailbox, the software which makes API-development much easier.
With APIs, there are number of authentication mechanisms. All kinds of passwords and keys are used to make sure I'm allowed to make the request. Also, modern authentication mechanisms include OAuth 2.0, OpenID Connect and such. Postman does support OAuth 2.0 out-of-the-box. For those less informed, here is a brief description by Auth0:
OAuth 2.0, which stands for “Open Authorization”, is a standard designed to allow a website or application to access resources hosted by other web apps on behalf of a user.
For the record, OAuth 2.0 is a good protocol. Getting a JWT and making subsequent requests with it is among many things secure & sensible. If you didn't guess it by this far, yes, there is a glitch. RFC 6749 states following in subsection 4.3.2. Access Token Request:
The client makes a request to the token endpoint by adding the following parameters using the "application/x-www-form-urlencoded" format per Appendix B with a character encoding of UTF-8 in the HTTP request entity-body:
grant_type REQUIRED.
Value MUST be set to "password".
username REQUIRED.
The resource owner username.
password REQUIRED.
The resource owner password.
Since 1999 when RFC 2617 was written, we kinda have grown out of sending URL-encoded payloads as form responses. Today, it is 20s and JSON is the way to go. This is the glitch I mentioned, computer says: NO! Spec doesn't state "you may use JSON if you prefer to use it". I do. Many others do too. As Postman is per spec, they don't support JSON-encoded request bodies in OAuth 2.0 in their boxed authentication code. They do support all of the bells/whistles OAuth 2.0 has, but only per-spec.
Again, Postman is a good product, it does support freedom in form of a Pre-request script. I chose to exercise my freedom and crafted a plan is to following:
- Write couple lines of JavaScript to be run as Postman Pre-request script to handle authentication
- Script will be run for every request.
- It is possible to override this mechanism by writing a per-request pre-request script, in which case the collection one will be ignored.
- Per-request script can be as short as "
return
" to do a no-op.
- Required authentication credential in form of JWT is stored into Postman collection variables and is generally available for authenticating any request in the collection
- Pre-request script needs to be smart:
- It will contain logic to determine if any further steps to execute an authentication is required or already existing token can be used to save time and resources.
- We don't want to flood the authentication provider by requesting a JWT for everey single time. They might get angry and kick us out if we're too greedy.
- Script is capable of reading the required authentication credentials for username and password from Postman environment.
- NOTE! Credentials are not read from Collection, from Environment.
- These credentials will be exhcnaged into a token. The token will be used for actual authentication.
- Authentication protocol used is OAuth 2.0 ish, using JSON-body and expecting a JSON-response.
- That's it. Done!
This is my Postman request-collection with the pre-request script:
Further, I have collection variables and the environment (the name I have for this environment is Production):
Here is one request using the pre-request -script from collection:
There isn't much setup needed. It just inherits the authorization setting from collection. Simple!
Full script I have is as follows:
const authTokenCollectionVariableName = "AuthToken";

// Verify an OAuth access token exists and is still valid
if (pm.collectionVariables.get(authTokenCollectionVariableName)) {
    // Decode the JWT payload (second dot-separated part) to read its expiry
    const tokStr = atob(pm.collectionVariables.get(authTokenCollectionVariableName).split('.')[1]);
    const tok = JSON.parse(tokStr);
    const expiry = new Date(tok.exp * 1000);
    const now = new Date();
    console.log(`Expiry: ${expiry}`);
    if (expiry > now) {
        console.log(`Auth - DEBUG: Has existing token. It is still valid. No need to renew.`);
        return;
    }
    console.log(`Auth - DEBUG: Has existing token. Renewing expired one.`);
} else {
    console.log(`Auth - DEBUG: No existing token found. Requesting one.`);
}

// Refresh the access token and store it into a collection variable
pm.sendRequest({
    url: pm.collectionVariables.get("Account_API_BaseURL") + "/api/authenticate",
    method: 'POST',
    header: {
        'Accept': 'application/json',
        'Content-Type': 'application/json'
    },
    body: {
        mode: 'raw',
        raw: JSON.stringify({
            username: pm.environment.get("User"),
            password: pm.environment.get("Password")
        })
    }
}, (error, response) => {
    pm.expect(error).to.equal(null);
    pm.expect(response).to.have.property('code', 200);
    pm.collectionVariables.set(authTokenCollectionVariableName, response.json().accessToken);
    console.log(`Auth - INFO: Saved token ok.`);
});
To repeat: this script will run only inside Postman as an embedded script. If you just whip up a random JavaScript environment, it most certainly will not have a pm object in it, and even if it does, you cannot do all those fancy things I'm doing there.
Now the problem is solved. My requests run as smoothly and effortlessly as they would with 100% per-RFC OAuth 2.0.
AMD Software: Adrenalin Edition - Error 202
Saturday, April 29. 2023
GPU drivers are updated often. The surprise came from the fact that this particular update failed. What! Why! How is this possible?

Error 202 – AMD Software Installer Cannot Proceed as a Windows Update Is Currently in Progress
To make sure this wasn't a fluke, I did retry. A few times, actually. Nope. Didn't help.
I'm not alone with this. In the AMD community, there is a thread Error 202 but there's no pending Windows Update. Also, on my Windows, there was nothing pending:

As this was the first time I had a hiccup with the installer, I realized I knew nothing about it. On Windows, it always pays off to run setup.exe with the switch /? . This is what will be displayed:

Haa! Options. Going for: .\Setup.exe -install -boot -log inst.log
After the expected failure, the log reveals:
InstallMan::isDisplayDriverEligibleToInstall :6090 Display driver Eligible to Install
isWindowsServiceStopped :4102 Queried Windows Update Status: 4
pauseWindowsUpdate :5244 drvInst.exe is currently running
InstallMan::performMyState :5177 ERROR --- InstallMan -> Caught an AMD Exception. Error code to display to user: 202. Debug hint: "drvInst.exe is currently running"
No idea where the claim of drvInst.exe is currently running comes from. It isn't running! Obviously something is reading the Windows Update status wrong. Let's see what happens if I bring the Windows Update service down with a simple PowerShell command: Stop-Service -Name wuauserv
Ta-daa! It works! Now the installation will complete.
Assembly of my new PC 2023
Friday, April 28. 2023
It's funny how fast you stop paying much attention to your car or sofa. First you spend a reasonable amount of money on it. Then it just exists as an everyday item. Then, slowly but surely, there is the urge: Should I get a new one?
Exactly the same thing happens with your PC. Maybe one of those precious resources starts running out, SSD or RAM. Maybe the CPU or GPU starts feeling a bit slow. Then you remember how good everything was when you first got the thing. Yup, the urge is there. Must obtain new.
Roughly five years ago, I assembled my previous PC live on Twitch. As Larpdog doesn't stream anymore, no public appearances this time. I did assemble a new rig, though.
During those years, things have evolved. This happens all the time.
- EATX is a real thing, see Wikipedia for ATX info
- The fact that both cases and motherboards are sold in this form factor is a big deal.
- This is not a real standard!
- The CPU socket is LGA1700
- Back in the day, I thought 1,200 pins connecting my CPU to the motherboard was a lot.
- DDR-5
- A slight overclock on DDR-4 made it pretty fast.
- Now the slowest DDR-5 starts way above overclocked DDR-4.
- PCIe is 5.0
- I don't own any expansion cards supporting 5.0. Good thing 4.0 and 3.0 cards work fine.
- M.2 NVMe with 5.0 is generally available.
- Every single case has a plexiglass side
- What! Why? For what purpose!!
- If I wanted to see into my PC, I'd also want to see into my own stomach and would install a plexiglass six-pack window into myself.
- Well. I want neither. Stupid idea!
- There are almost no USB connectors in cases
- There used to be. Plenty of them.
- The motherboard box contains coded messages in the swag
I ordered the first case I found without a ridiculous see-through side. It is from the Swedish manufacturer Fractal: the Meshify 2. The entire case is steel mesh. Like, full of tiny holes. It'd be bad if you spilled Pepsi into it, but it's good for ventilation.
This is what the Republic of Gamers mobo box has in it, some swag with a message:

The first thing I had to do was put 25°07'29.5"N 121°28'15.6"E into Google Maps. It results in this link: https://goo.gl/maps/diiBipMcFfz5oBgz7

Ah! Those coordinates point to the Asus headquarters in Taiwan. It was a nice gift. Also, it was a fun "spy game" trying to figure out what those numbers translate into. 👍
I/O performance of the M.2 drive is satisfactory:
Those ATTO Benchmark results are very good! I'm happy with my investment.
This is a solid PC for my next 5-year period. I know I will upgrade the GPU in the next couple of years. At the moment, my existing ATI card does a fine job running Dead Island 2.
Secure Torx - Drive decommission trouble
Tuesday, March 28. 2023
I'm a known owner of an angle grinder:

The above pic is from my blog post about making sure my data won't be read off a decommissioned hard drive.
One day I had an atypical burst of let's-clean-the-storage-to-make-room-for-new-stuff energy. In storage, there were two rack servers which hadn't been running for many years, so it was time to send them to greener pastures. I have an absolute policy of recycling obsoleted electronics without their storage media. Drives get a "special" treatment. See the above pic.
This is what I wanted to do: get the drive removed from the hot-swap cage. To no avail!

The blocker was a screw head which looked like a T-10 Torx, but not exactly. There was an unexpected post in the middle of the head, making a T-10 fit really badly:


By reading the Wikipedia page for Torx, I realized the problem. This was the infamous Security Torx! I had a faint recollection of such a thing existing, but those are so rare I'd never seen one. Quickly browsing through my tools and bits, it seemed I didn't own anything to make the task possible.
This meant I got to go to a hardware store to get new toys:

Right tools for the job, and the problem was solved!
Now there is a stack of rack-server drives ready to bite the grinder disc.
File 'repomd.xml' from repository is unsigned
Thursday, March 23. 2023
For all those years I've been running SUSE Linux, I've never bumped into this one while running a trivial zypper update:
Warning: File 'repomd.xml' from repository 'Update repository with updates from SUSE Linux Enterprise 15' is unsigned.
Note: Signing data enables the recipient to verify that no modifications occurred
after the data were signed. Accepting data with no, wrong or unknown signature can
lead to a corrupted system and in extreme cases even to a system compromise.
Note: File 'repomd.xml' is the repositories master index file. It ensures the
integrity of the whole repo.
Warning: We can't verify that no one meddled with this file, so it might not be
trustworthy anymore! You should not continue unless you know it's safe.
continue? [yes/no] (no):

This error slash warning being weird and potentially dangerous, my obvious reaction was to hit Ctrl-C and go investigate. First, my package verification mechanism should be intact and able to verify whether the downloaded updates are unaltered or not. Second, there should not have been any breaking changes to my system; at least I didn't make any. As my system didn't seem to be breached, I assumed a malfunction and went investigating.
Quite soon, I learned this is a less-than-rare event. It has happened multiple times to other people. According to the article Signature verification failed for file ‘repomd.xml’ from repository ‘openSUSE-Leap-42.2-Update’ there exists a simple fix.
By running the two commands zypper clean --all and zypper ref, the problem should dissolve.
Yes, that is the case. After a simple wash/clean/rinse cycle, zypper update worked again.
It was just weird to bump into this for the first time; I'd have assumed it would have occurred some time earlier.
Writing a secure Systemd daemon with Python
Sunday, March 5. 2023
This is a deep dive into systems programming using Python. For those unfamiliar with programming, systems programming sits on top of hardware / electronics design, firmware programming and operating system programming. However, it is not applications programming, which targets mostly end users. Systems programming targets the running system. Mr. Yadav of Dark Bears has an article Systems Programming is Hard to Do – But Somebody’s Got to Do it, where he describes the limitations and requirements of doing so.
Typically systems programming is done with C, C++, Perl or Bash. As Python is gaining popularity, I definitely want to take a swing at systems programming with Python. In fact, there aren't many resources about the topic in the entire Internet.
Requirements
This is the list of basic requirements I have for a Python-based system daemon:
- Run as service: Must run as a Linux daemon, https://man7.org/linux/man-pages/man7/daemon.7.html
- Start running on system boot and stop running on system shutdown
- Modern: systemd-compatible, https://systemd.io/
- Not interested in ancient SysV init support anymore, https://danielmiessler.com/study/the-difference-between-system-v-and-systemd/
- Modern: D-bus -connected
- Service provided will have an interface on system D-bus, https://www.freedesktop.org/wiki/Software/dbus/
- All Linux systems are built on top of D-bus, I absolutely want to be compatible
- Monitoring: Must support systemd watchdog, https://0pointer.de/blog/projects/watchdog.html
- Surprisingly many out-of-box Linux daemons won't support this. This is most likely because they're still SysV-init based and haven't modernized their operation.
- I most definitely want to have this feature!
- Security: Must use Linux capabilities to run only with necessary permissions, https://man7.org/linux/man-pages/man7/capabilities.7.html
- Security: Must support SElinux to run only with required permissions, https://github.com/SELinuxProject/selinux-notebook/blob/main/src/selinux_overview.md
- Isolation: Must be independent from system Python
- venv
- virtualenv
- Any possible changes to system Python won't affect daemon or its dependencies at all.
- Modern: Asynchronous Python
- Event-based is the key to success.
- D-bus and systemd watchdog pretty much nail this. Absolutely must be asynchronous.
- Packaging: Installation from RPM-package
- This is the only one I'll support for any foreseeable future.
- The package will contain all necessary parts, libraries and dependencies to run a self-contained daemon.
That's a tall order. Selecting only two or three of those is enough to add tons of complexity to my project. Also, I initially expected somebody else in The Net to be doing this same thing or something similar. Looks like I was wrong. Most systems programmers love sticking to their old habits, staying at the SysV-init state with their synchronous C / Perl daemons.
Scope / Target daemon
I've previously blogged about running my own email server and fighting spam. Let's automate a lot of those tasks and, while automating, create a Maildir monitor for the junk mail folder.
This is the project I wrote for that purpose: Spammer Blocker
The toolkit will query for the AS-number of the spam-sending SMTP server. Typically I'll copy/paste the IP address from SpamCop's report and produce a CIDR table for Postfix. The table will add headers to the email to-be-stored so that a possible Procmail / Maildrop can act on it, if so needed. As the junk mail folder is constantly monitored, any manually moved mail will be processed too.
Having these features brings your own Linux box's spam handling capabilities pretty close to any of those free-but-spy-all services commonly used by everybody.
Addressing the list of requirements
Let's take a peek at what I did to meet the above requirements.
Systemd daemon
This is nearly trivial. See service definition in spammer-reporter.service.
What's in the file is your run-of-the-mill systemd service with the appropriate unit, service and install definitions. That triplet makes Linux run a systemd service as a daemon.
Python venv isolation
For any Python developer this is somewhat trivial. You create the environment box/jail, install requirements via setup.py and that's it. You're done. This same isolation mechanism will be used later for packaging and deploying the ready-made daemon into a system.
What's missing, or on the to-do list, is to start using a pyproject.toml. That is something I've yet to learn. Obviously there is always something. Nobody, nowhere is "ready". Ever.
Asynchronous code
Talking to the systemd watchdog and providing a service endpoint on the system D-bus requires a little bit of effort. Read: lots of it.
To get a D-bus service properly running, I'll first become asynchronous. For that I'll initiate an event loop with dbus.mainloop.glib. Although there are multiple options for the event loop, that is the only one that actually works here. The majority of Python code won't work with GLib; it needs asyncio. For that I'll use asyncio_glib to pair GLib's loop with asyncio. It took me a while to learn and understand how to actually achieve that. When successfully done, everything needed will run in a single asynchronous event loop. Great success!
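As a minimal sketch of that pairing (assuming the dbus-python and asyncio_glib packages; the bus name, object path and method here are purely illustrative, not the actual daemon code):

import asyncio

import asyncio_glib
import dbus
import dbus.mainloop.glib
import dbus.service

# Route dbus-python's message dispatching through the GLib main loop
dbus.mainloop.glib.DBusGMainLoop(set_as_default=True)
# Let asyncio drive that same GLib loop, so a single event loop runs everything
asyncio.set_event_loop_policy(asyncio_glib.GLibEventLoopPolicy())


class ExampleService(dbus.service.Object):
    # Hypothetical D-bus interface, for illustration only
    @dbus.service.method(dbus_interface="org.example.SpamReporter", out_signature="s")
    def Ping(self):
        return "pong"


async def main():
    bus = dbus.SystemBus()
    bus_name = dbus.service.BusName("org.example.SpamReporter", bus)
    ExampleService(bus, "/org/example/SpamReporter")
    # The Maildir monitor and watchdog tasks (below) would be added here
    await asyncio.Event().wait()  # run forever


asyncio.run(main())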
With a solid foundation in place, as the main task I'll create an asynchronous task for monitoring filesystem changes and run it in a forever-loop. See inotify(7) for the non-Python mechanism and the asyncinotify library for details of the Pythonic version. What I'll be monitoring is users' Maildirs configured to receive junk/spam. When there is a change, a check is made to see if the change is about new spam received.
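A sketch of what that forever-loop could look like with asyncinotify (the Maildir path and the spam check are placeholders, not my actual implementation):

from pathlib import Path

from asyncinotify import Inotify, Mask


def process_possible_spam(message_path: Path) -> None:
    # Placeholder: here the real daemon checks whether the change is about new spam received
    print(f"New message landed in junk folder: {message_path}")


async def monitor_junk_maildir(junk_dir: Path) -> None:
    # Watch the junk folder's new/ subdirectory; Maildir delivery drops arriving mail there
    with Inotify() as inotify:
        inotify.add_watch(junk_dir / "new", Mask.CREATE | Mask.MOVED_TO)
        async for event in inotify:
            if event.path is not None:
                process_possible_spam(event.path)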
For side tasks, there is a D-bus service provider. If the daemon is running under systemd, the required watchdog handler is also attached to the event loop as a periodic task. Out of the box, my service definition states a max. of 20 seconds between watchdog notifications (see service Type=notify and WatchdogSec=20s). In the daemon configuration file spammer-reporter.toml, I'll use 15 seconds as the interval. That 5 seconds should be plenty of headroom.
The documentation of systemd.service for WatchdogSec states the following:
If the time between two such calls is larger than the configured time, then the service is placed in a failed state and it will be terminated
For any failed service, there is the obvious Restart=on-failure.
If for ANY possible reason the process is stuck, the systemd scaffolding will take control of the failure and act instantly. That's resiliency and self-healing in my books!
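A minimal sketch of such a periodic watchdog task, talking the sd_notify datagram protocol directly with nothing but the standard library (the real daemon may well use a helper library for this):

import asyncio
import os
import socket


async def watchdog_task(interval: float = 15.0) -> None:
    # Ping the systemd watchdog every 15 seconds, as configured in spammer-reporter.toml
    notify_socket = os.environ.get("NOTIFY_SOCKET")
    if not notify_socket:
        return  # Not started by systemd with Type=notify, nothing to ping
    if notify_socket.startswith("@"):
        # A leading '@' means an abstract namespace socket
        notify_socket = "\0" + notify_socket[1:]
    with socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM) as sock:
        while True:
            sock.sendto(b"WATCHDOG=1", notify_socket)
            await asyncio.sleep(interval)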
Security: Capabilities
My obvious choice would be to not run as root. However, as my main task is to provide a D-bus service while reading all users' mailboxes, there is absolutely no way of avoiding root permissions. Trust me! I tried everything I could find to grant nobody's process enough permissions to do all that. To no avail. See the documentation about the UID / GID rationale.
As standard root has waaaaaay too much power (especially if misapplied), I'll take some of it away. There is a yet rarely used mechanism of capabilities(7) in Linux. My documentation of what CapabilityBoundingSet=CAP_AUDIT_WRITE CAP_DAC_READ_SEARCH CAP_IPC_LOCK CAP_SYS_NICE means in the system service definition is also in the source code. That set grants the process permission to monitor, read and write any users' files.
There are a couple of other super-user permissions left; most powers a regular root would have are stripped, though. If there is a security leak and my daemon is used to do something funny, quite a lot of the potential impact is already mitigated as the process isn't allowed to do much.
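Pulling the directives mentioned above together, a sketch of what such a service definition might look like (the paths and description are illustrative; the authoritative version is the spammer-reporter.service file in the repository):

[Unit]
Description=Spam reporting daemon (illustrative sketch)
After=network.target

[Service]
Type=notify
# Illustrative path; the packaged daemon lives under /usr/libexec/spammer-block/
ExecStart=/usr/libexec/spammer-block/bin/spammer-reporter
WatchdogSec=20s
Restart=on-failure
CapabilityBoundingSet=CAP_AUDIT_WRITE CAP_DAC_READ_SEARCH CAP_IPC_LOCK CAP_SYS_NICE

[Install]
WantedBy=multi-user.target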
Security: SElinux
For even more improved security, I define a policy in spammer-block_policy.te. All my systems are hardened and run SElinux in enforcing mode. If something leaks, there is a limiting box in place already. In 2014, I wrote a post about a security flaw with no impact on an SElinux-hardened system.
The policy will allow my daemon to:
- read and write FIFO-files
- create new Unix sockets
- use STDIN, STDOUT and STDERR streams
- read files in /etc/, note: read, not write
- read I18n (or internationalization) files on the system
- use capabilities
- use TCP and UDP sockets
- access D-bus sockets in /run/
- access the D-bus watchdog UDP socket
- access user passwd information on the system via SSSd
- read and search users' home directories as mail is stored in them, note: not write
- send email via SMTPd
- create, write, read and delete temporary files in /tmp/
The above list is a comprehensive set of the accesses needed in a system to meet the given task of monitoring received emails and acting on determined junk/spam. As the policy is very carefully crafted not to allow any destruction, writing, deletion or mangling outside /tmp/, in my thinking having such hardening will make the daemon very secure.
Yes, in /tmp/ there is stuff that can be altered with potential security implications. First you have to get access to the process, though. While hacking the daemon, make sure to keep the event loop running or systemd will zap the process within the next 20 seconds or less. I really did consider quite a few scenarios of what happens if, and only if, something or somebody pops the cork on my daemon.
RPM Packaging
To wrap all of this into a nice package, I'm using rpmenv. This toolkit will automatically wrap anything needed by the daemon into a nice virtualenv and deploy that to /usr/libexec/spammer-block/. See rpm.json for details.
The SElinux policy has its own spammer-block_policy_selinux.spec. Having these two in separate packages is mandatory as the mechanisms to build the boxes are completely different. Also, this is the typical approach in other pieces of software. Not everybody has strict requirements to harden their systems.
Where to place an entire virtualenv in Linux? That one is a ball-buster. The RPM Packaging Guide really doesn't say how to handle your Python-based system daemons. Remember? Up there ↑, I tried explaining how all of this is rather novel and there isn't much information in The Net regarding this. However, I found somebody asking What is the purpose of /usr/libexec? on Stack Exchange and decided that libexec/ is fine for this purpose. I do install shell wrappers into /bin/ to make everybody's life easier. Having the entire Python environment there wouldn't be sensible.
Final words
Only time will tell if I made the right design choices. I totally see Python and Rust based daemons gaining popularity in the future. The obvious difference is that Rust is a compiled language like Go, C and C++. Python isn't.
Satellite Internet — Past & Present
Saturday, March 4. 2023
About satellites
ESA has a really good information package Space Transportation - Types of orbits.
When the context is satellite Internet, there are pretty much three options: LEO, MEO or GEO. As satellites fly high, the latency from Earth's surface to the satellite and back is a factor. Low Earth Orbit is the most common choice, although the lower you fly, the more satellites are required for reasonable coverage.
Past
The inspiration for this blog post came from my 1998 LUT course presentation on satellite Internet. 25 years ago the IT-bubble was still growing and it was about two years before it burst. As I was a telecommunications major, I took a seminar course which included a presentation. From the list of topics available, apparently I chose satellite Internet. Back in those days, delivering bandwidth to everybody was a business in rapid growth. Mobile Internet was mostly non-existent, fiber-to-the-home was non-existent; a dial-up modem, ADSL or cable-TV Internet were the methods for getting to The Net.
Here is my PDF presentation about satellite networks Iridium, Globalstar and Teledesic (in Finnish):

This past can be considered as Round #1 of satellite internet.
Retrospective: Iridium
Went into production. 2nd generation began deploying in 2017. Not generally known, nor common. Originally not financially viable.
Retrospective: Globalstar
https://www.globalstar.com/, LEO
Went into production. 2nd generation in 2013, 3rd generation announced in 2018. Apple iPhone 14 emergency messages use 2nd gen Globalstar. Not generally known, nor common. Originally not financially viable.
Retrospective: Teledesic
Never saw production. The project was suspended in 2002 as the IT-bubble burst and the financial problems of Iridium and Globalstar became apparent. Completely vaporware; nobody knows this ever existed. The plan was to have hundreds of LEO satellites for really fast Internet connectivity.
I have to mention their tech. Instead of the traditional approach all the others have, relying on a ground station being able to find a passing satellite, Teledesic originally planned to have 840 satellites. As that was extremely expensive, they later reduced the number to 288. Their idea was to map Earth into regions and, with enough capacity, multiple satellites would be available over one "square" at a time to provide massive speeds. In theory, a really good idea!
Other services
My presentation contained only three companies/services, as the topic given to me dictated. At the time, other satellite services did exist.
Still today, Inmarsat Broadband is well known for its voice capabilities and mobile units made famous in popular culture. They do support broadband connections on their GEO satellites. This IsatPhone 2 is a well-known product:

Another less-known service is ORBCOMM, an IoT service provider. As their service is targeted at hardware manufacturers, not consumers, hardly anybody has ever heard of this one. Their 50+ satellites are LEO.
Present
Fast-forward to today.
Obviously Iridium, Globalstar, Inmarsat and ORBCOMM do still exist. As there have been some changes in the service offerings over the past 25 years, let's do a recap of the new players.
SpaceX Starlink
https://www.starlink.com/, very low LEO
Mr. Musk and his Starlink are getting tons of media attention, especially for his donation of terminals to Ukraine to help them in the war against Russia. This well-known service is the farthest from vaporware, is affordable and easily available. Broadband speed is reasonable, see the Ookla Speedtest results comparison.
A notable fact about Starlink is that they are the only satellite broadband provider so far to have their own launch vehicle for orbiting their own satellites. This capability will give them much-required longevity. Satellite Internet is not a sprint, after all.
HughesNet / EchoStar
https://www.hughesnet.com/, GEO
Originally Hughes Communications was owned by Hughes Aircraft, the company founded by Howard Hughes. They are a major service provider in the Americas. Here in Europe, they're not well known.
Viasat
https://www.viasat.com/space-innovation/satellite-fleet/viasat-3/, GEO
This is an upcoming broadband service. Viasat is a known satellite-TV operator expanding its services to the Internet.
Amazon / Project Kuiper
Obviously Mr. Bezos wants to compete against his nemesis, Mr. Musk, and launch his own satellite Internet service. His project is the most recent one and doesn't really even have a proper website.
Boeing
LEO
The legendary aircraft manufacturer wants their share of the satellite Internet business. This very recent project has FCC approval for 147 satellites, but not much is known about it.
Future?
As we learned back in 2002 when Teledesic folded, there isn't room for all the above players. Some of them will fold and/or merge. Especially SpaceX is gaining a lot of customers in rural areas with their modern service offering. A fierce pricing battle won't be happening: launching those satellites is v e r y expensive. Lacking financial stability, not everybody will survive this Round #2.
There is plenty of space in the sky. Still, it is already getting crowded, and the traffic jam is likely to get worse as more and more satellites are launched. Just take a look at Wikipedia's Satellites orbiting Earth article.