Secure Torx - Drive decommission trouble
Tuesday, March 28. 2023
I'm a known owner of an angle grinder:
The above pic is from my blog post about making sure my data can't be read off a decommissioned hard drive.
One day I had an atypical burst of let's-clean-the-storage-to-make-room-for-new-stuff energy. In storage, there were two rack servers which hadn't been running for many years, so it was time to send them off to greener pastures. I have an absolute policy of recycling obsoleted electronics without their storage media. Drives get a "special" treatment. See the above pic.
This is what I wanted to do: remove the drive from the hot-swap cage. To no avail!
The blocker was a screw head which looked like a T-10 Torx, but not quite. There was an unexpected post in the middle of the head, making a T-10 bit fit really badly:
By reading the Wikipedia page for Torx, I realized the problem. This was an infamous Security Torx! I had a faint recollection of such a thing existing, but those are so rare I'd never seen one. Quickly browsing through my tools and bits, it seemed I didn't own anything to make the task possible.
This meant I got to go to a hardware store to get new toys:
The right tools for the job and the problem was solved!
Now there is a stack of rack-server drives ready to bite the grinder disc.
File 'repomd.xml' from repository is unsigned
Thursday, March 23. 2023
In all the years I've been running SUSE Linux, I've never bumped into this one while running a trivial zypper update:
Warning: File 'repomd.xml' from repository 'Update repository with updates from SUSE Linux Enterprise 15' is unsigned.
Note: Signing data enables the recipient to verify that no modifications occurred
after the data were signed. Accepting data with no, wrong or unknown signature can
lead to a corrupted system and in extreme cases even to a system compromise.
Note: File 'repomd.xml' is the repositories master index file. It ensures the
integrity of the whole repo.
Warning: We can't verify that no one meddled with this file, so it might not be
trustworthy anymore! You should not continue unless you know it's safe.
continue? [yes/no] (no):
This error-slash-warning being weird and potentially dangerous, my obvious reaction was to hit ctrl-c and go investigate. First, my package verification mechanism should be intact and able to verify whether downloaded updates are unaltered or not. Second, there should not have been any breaking changes to my system; at least I didn't make any. As my system didn't seem to be breached, I assumed a system malfunction and went investigating.
Quite soon, I learned this is a less-than-rare event. It has happened multiple times to other people. According to the article Signature verification failed for file ‘repomd.xml’ from repository ‘openSUSE-Leap-42.2-Update’ there exists a simple fix.
By running two commands, zypper clean --all and zypper ref, the problem should dissolve.
Yes, that is the case. After a simple wash/clean/rinse cycle, zypper update worked again.
It was just weird to bump into this for the first time; I'd have assumed it would have occurred some time earlier.
Writing a secure Systemd daemon with Python
Sunday, March 5. 2023
This is a deep dive into systems programming using Python. For those unfamiliar with programming, systems programming sits on top of hardware / electronics design, firmware programming and operating system programming. However, it is not applications programming, which mostly targets end users. Systems programming targets the running system. Mr. Yadav of Dark Bears has an article Systems Programming is Hard to Do – But Somebody’s Got to Do it, where he describes the limitations and requirements of doing so.
Typically systems programming is done with C, C++, Perl or Bash. As Python is gaining popularity, I definitely want to take a swing at systems programming with Python. In fact, there aren't many resources about the topic on the entire Internet.
Requirements
This is the list of basic requirements I have for a Python-based system daemon:
- Run as service: Must run as a Linux daemon, https://man7.org/linux/man-pages/man7/daemon.7.html
- Start running on system boot and stop running on system shutdown
- Modern: systemd-compatible, https://systemd.io/
- Not interested in ancient SysV init support anymore, https://danielmiessler.com/study/the-difference-between-system-v-and-systemd/
- Modern: D-bus -connected
- Service provided will have an interface on system D-bus, https://www.freedesktop.org/wiki/Software/dbus/
- All Linux systems are built on top of D-bus, I absolutely want to be compatible
- Monitoring: Must support systemd watchdog, https://0pointer.de/blog/projects/watchdog.html
- Surprisingly, many out-of-the-box Linux daemons don't support this. This is most likely because they're still SysV-init based and haven't modernized their operation.
- I most definitely want to have this feature!
- Security: Must use Linux capabilities to run only with necessary permissions, https://man7.org/linux/man-pages/man7/capabilities.7.html
- Security: Must support SElinux to run only with required permissions, https://github.com/SELinuxProject/selinux-notebook/blob/main/src/selinux_overview.md
- Isolation: Must be independent from system Python
- venv
- virtualenv
- Any possible changes to system Python won't affect daemon or its dependencies at all.
- Modern: Asynchronous Python
- Event-based is the key to success.
- D-bus and systemd watchdog pretty much nail this. Absolutely must be asynchronous.
- Packaging: Installation from RPM-package
- This is the only one I'll support for any foreseeable future.
- The package will contain all necessary parts, libraries and dependencies to run a self-contained daemon.
That's a tall order. Selecting only two or three of those is enough to add tons of complexity to my project. Also, I initially expected somebody else on The Net to be doing this same thing or something similar. Looks like I was wrong. Most systems programmers love sticking to their old habits, staying at the SysV-init stage with their synchronous C / Perl daemons.
Scope / Target daemon
I've previously blogged about running my own email server and fighting spam. Let's automate a lot of those tasks and, while automating, create a Maildir monitor for the junk mail folder.
This is the project I wrote for that purpose: Spammer Blocker
The toolkit will query the AS-number of the spam-sending SMTP server. Typically I'll copy/paste the IP address from SpamCop's report and produce a CIDR table for Postfix. The table will add headers to the email to be stored so that Procmail / Maildrop can act on them, if so needed. As the junk mail folder is constantly monitored, any manually moved mail will be processed as well.
Having these features brings your own Linux box's spam-handling capabilities pretty close to any of those free-but-spy-on-everything services commonly used by everybody.
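As an illustration of the idea only (the real Spammer Blocker does its own lookups, so treat this purely as a sketch), an AS-number / CIDR lookup for a single IP address could be done in Python with the ipwhois package along these lines:

from ipwhois import IPWhois

def lookup_asn(ip_address: str) -> tuple[str, str]:
    # RDAP lookup returns, among other things, the AS number and its CIDR block.
    result = IPWhois(ip_address).lookup_rdap(depth=1)
    return result["asn"], result["asn_cidr"]

if __name__ == "__main__":
    # Placeholder address; in real use this would be the spam-sending SMTP server's IP.
    asn, cidr = lookup_asn("192.0.2.1")
    print(f"AS{asn} announces {cidr}")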
Addressing the list of requirements
Let's take a peek at what I did to meet the above requirements.
Systemd daemon
This is nearly trivial. See the service definition in spammer-reporter.service.
What's in the file is your run-of-the-mill systemd service with the appropriate unit, service and install definitions. That triplet makes Linux run a systemd service as a daemon.
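Not the actual file contents, but a minimal sketch of what such a unit could look like, using the Type=notify, WatchdogSec and Restart settings discussed further below (paths and names here are illustrative):

[Unit]
Description=Spammer reporter daemon (illustrative sketch)
After=network.target

[Service]
Type=notify
WatchdogSec=20s
Restart=on-failure
# Illustrative path; the real daemon lives in its own virtualenv under /usr/libexec/
ExecStart=/usr/libexec/spammer-block/bin/spammer-reporter

[Install]
WantedBy=multi-user.target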
Python venv isolation
For any Python developer this is somewhat trivial. You create the environment box/jail, install requirements via setup.py and that's it. You're done. This same isolation mechanism will be used later for packaging and deploying the ready-made daemon into a system.
What's missing, or still to-do, is to start using a pyproject.toml. That is something I have yet to learn. Obviously there is always something. Nobody, nowhere is "ready". Ever.
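For reference, a minimal setuptools-based pyproject.toml could look roughly like this (purely illustrative; the name and dependencies below are placeholders, not the project's actual packaging metadata):

[build-system]
requires = ["setuptools>=61"]
build-backend = "setuptools.build_meta"

[project]
name = "spammer-block"      # placeholder name
version = "0.1.0"           # placeholder version
dependencies = [
    "dbus-python",          # D-bus bindings
    "asyncinotify",         # Maildir monitoring
]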
Asynchronous code
Talking to the systemd watchdog and providing a service endpoint on the system D-bus requires a little bit of effort. Read: lots of it.
To get a D-bus service properly running, I'll first have to become asynchronous. For that I'll initiate an event loop with dbus.mainloop.glib. While there are multiple options for the event loop, that is the only one that actually works here. The majority of Python code won't work with GLib; it needs asyncio. For that I'll use asyncio_glib to pair GLib's loop with asyncio. It took me a while to learn and understand how to actually achieve that. When successfully done, everything needed will run in a single asynchronous event loop. Great success!
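A minimal sketch of that pairing, assuming the dbus-python and asyncio_glib packages are installed (the bus name and object path below are made up for illustration):

import asyncio

import asyncio_glib
import dbus
import dbus.mainloop.glib
import dbus.service

# Tell dbus-python to use the GLib main loop ...
dbus.mainloop.glib.DBusGMainLoop(set_as_default=True)
# ... and make asyncio run on top of that same GLib loop.
asyncio.set_event_loop_policy(asyncio_glib.GLibEventLoopPolicy())

class Reporter(dbus.service.Object):
    # Bus name and object path are illustrative, not the daemon's real interface.
    def __init__(self, bus):
        bus_name = dbus.service.BusName("fi.example.SpammerReporter", bus)
        super().__init__(bus_name, "/fi/example/SpammerReporter")

    @dbus.service.method("fi.example.SpammerReporter", in_signature="", out_signature="s")
    def Ping(self):
        return "pong"

def main():
    # Owning a name on the system bus requires a D-bus policy file allowing it.
    Reporter(dbus.SystemBus())
    loop = asyncio.get_event_loop()
    loop.run_forever()

if __name__ == "__main__":
    main()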
With a solid foundation in place, as the main task I'll create an asynchronous task for monitoring filesystem changes and run it in a forever-loop. See inotify(7) for the non-Python mechanism and the asyncinotify library for details of the Pythonic version. What I'll be monitoring are the users' Maildirs configured to receive junk/spam. When there is a change, a check is made to see whether the change is about newly received spam.
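A simplified sketch of such a monitor using asyncinotify (the Maildir path and the spam check are placeholders):

import asyncio
from pathlib import Path

from asyncinotify import Inotify, Mask

async def watch_junk_folder(junk_maildir: Path) -> None:
    # New messages in a Maildir show up in its "new" subdirectory.
    with Inotify() as inotify:
        inotify.add_watch(junk_maildir / "new", Mask.CREATE | Mask.MOVED_TO)
        async for event in inotify:
            # Here the real daemon would check whether the file is new spam
            # and report it; printing is just a placeholder.
            print(f"Possible new junk mail: {event.path}")

if __name__ == "__main__":
    # Path is purely illustrative.
    asyncio.run(watch_junk_folder(Path("/home/example/Maildir/.Junk")))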
For side tasks, there is a D-bus service provider. If the daemon is running under systemd, the required watchdog handler is also attached to the event loop as a periodic task. Out of the box, my service definition states a maximum of 20 seconds between watchdog notifications (see the service's Type=notify and WatchdogSec=20s). In the daemon configuration file spammer-reporter.toml, I'll use 15 seconds as the interval. That 5 seconds should be plenty of headroom.
The documentation of systemd.service for WatchdogSec states the following:
If the time between two such calls is larger than the configured time, then the service is placed in a failed state and it will be terminated
For any failed service, there is the obvious Restart=on-failure.
If, for ANY possible reason, the process is stuck, the systemd scaffolding will take control of the failure and act instantly. That's resiliency and self-healing in my books!
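Below is a dependency-free sketch of such a periodic watchdog task; the real daemon may do this differently, so treat it purely as an illustration of the sd_notify protocol:

import asyncio
import os
import socket

def sd_notify(message: bytes) -> None:
    # Send a notification datagram to systemd via $NOTIFY_SOCKET.
    addr = os.environ.get("NOTIFY_SOCKET")
    if not addr:
        return  # not started by systemd, nothing to do
    if addr.startswith("@"):
        addr = "\0" + addr[1:]  # abstract socket namespace
    with socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM) as sock:
        sock.connect(addr)
        sock.send(message)

async def watchdog_task(interval: float = 15.0) -> None:
    # Tell systemd we are up, then ping more often than WatchdogSec=20s.
    sd_notify(b"READY=1")
    while True:
        sd_notify(b"WATCHDOG=1")
        await asyncio.sleep(interval)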
Security: Capabilities
My obvious choice would be to not run as root. However, as my main task is to provide a D-bus service while reading all users' mailboxes, there is absolutely no way of avoiding root permissions. Trust me! I tried everything I could find to grant the nobody user's process enough permissions to do all that. To no avail. See the documentation about the UID / GID rationale.
As standard root has waaaaaay too much power (especially if misapplied), I'll take some of it away. There is a yet rarely used mechanism of capabilities(7) in Linux. My documentation of what CapabilityBoundingSet=CAP_AUDIT_WRITE CAP_DAC_READ_SEARCH CAP_IPC_LOCK CAP_SYS_NICE means in the system service definition is also in the source code. That set grants the process permission to monitor, read and write any user's files.
There are a couple of other super-user permissions left. Most of the unneeded powers a regular root would have are stripped, though. If there is a security leak and my daemon is used to do something funny, quite a lot of the potential impact is already mitigated as the process isn't allowed to do much.
Security: SElinux
For even more improved security, I define a policy in spammer-block_policy.te. All my systems are hardened and run SELinux in enforcing mode. If something leaks, there is a limiting box in place already. In 2014, I wrote a post about a security flaw with no impact on an SELinux-hardened system.
The policy will allow my daemon to:
- read and write FIFO-files
- create new unix-sockets
- use STDIN, STDOUT and STDERR streams
- read files in /etc/, note: read, not write!
- read I18n (or internationalization) files on the system
- use capabilities
- use TCP and UDP sockets
- access D-bus sockets in /run/
- access the D-bus watchdog UDP socket
- access user passwd information on the system via SSSd
- read and search users' home directories, as mail is stored in them, note: not write
- send email via SMTPd
- create, write, read and delete temporary files in /tmp/
The above list is a comprehensive set of the accesses needed in a system to meet the given task of monitoring received emails and acting on detected junk/spam. As the policy is very carefully crafted not to allow any destruction, writing, deletion or mangling outside /tmp/, in my thinking having such hardening in place will make the daemon very secure.
Yes, in /tmp/ there is stuff that can be altered with potential security implications. First you have to access the process, though. While hacking the daemon, make sure to keep the event loop running or systemd will zap the process within the next 20 seconds or less. I really did consider quite a few scenarios for if, and only if, something/somebody pops the cork on my daemon.
RPM Packaging
To wrap all of this into a nice package, I'm using rpmenv. This toolkit will automatically wrap everything needed by the daemon into a nice virtualenv and deploy that to /usr/libexec/spammer-block/. See rpm.json for details.
The SELinux policy has its own spammer-block_policy_selinux.spec. Having these two in separate packages is mandatory, as the mechanisms to build them are completely different. Also, this is the typical approach in other pieces of software. Not everybody has strict requirements to harden their systems.
Where to place an entire virtualenv in Linux? That one is a ball-buster. The RPM Packaging Guide really doesn't say how to handle your Python-based system daemons. Remember? Up there ↑, I tried explaining how all of this is rather novel and there isn't much information on The Net regarding this. However, I found somebody asking What is the purpose of /usr/libexec? on Stack Exchange and decided that libexec/ is fine for this purpose. I do install shell wrappers into /bin/ to make everybody's life easier. Having the entire Python environment there wouldn't be sensible.
Final words
Only time will tell if I made the right design choices. I totally see Python and Rust based daemons gaining popularity in the future. The obvious difference is that Rust is a compiled language like Go, C and C++. Python isn't.
Satellite Internet — Past & Present
Saturday, March 4. 2023
About satellites
ESA has a really good information package Space Transportation - Types of orbits.
When the context is satellite Internet, there are pretty much three options: LEO, MEO or GEO. As satellites fly high, latency from the Earth's surface to the satellite and back is a factor. Low Earth Orbit is the most common choice, although the lower you fly, the more satellites are required for reasonable coverage.
Past
Inspiration for this blog post came from my 1998 LUT course presentation on satellite Internet. 25 years ago the IT bubble was still growing and it was about two years before it burst. As I was a telecommunications major, I took a seminar course which included a presentation. From the list of topics available, apparently I chose satellite Internet. Back in those days, getting bandwidth delivered to everybody was in rapid growth. Mobile Internet was mostly non-existent, fiber-to-the-home was non-existent; a dial-up modem, ADSL or cable-TV internet were the methods for getting onto The Net.
Here is my PDF presentation about satellite networks Iridium, Globalstar and Teledesic (in Finnish):
This past can be considered Round #1 of satellite Internet.
Retrospective: Iridium
Went into production. The 2nd generation began deploying in 2017. Not generally known, nor common. Originally not financially viable.
Retrospective: Globalstar
https://www.globalstar.com/, LEO
Went into production. 2nd generation in 2013, 3rd generation announced in 2018. Apple iPhone 14 emergency messages are using the 2nd gen Globalstar. Not generally known, nor common. Originally not financially viable.
Retrospective: Teledesic
Never saw production. The project was suspended in 2002 as the IT bubble burst and the financial problems of Iridium and Globalstar became apparent. Complete vaporware; hardly anybody knows this ever existed. The plan was to have hundreds of LEO satellites for really fast Internet connectivity.
I have to mention their tech. Instead of the traditional approach all the others took, relying on a ground station being able to find a passing satellite, Teledesic originally planned on having 840 satellites. As that was extremely expensive, they later reduced the number to 288. Their idea was to map the Earth into regions and, with enough capacity, have multiple satellites available over one "square" at a time to provide massive speeds. In theory, a really good idea!
Other services
My presentation contained only three companies/services, as the topic given to me dictated. At the time, other satellite services did exist.
Still today, Inmarsat Broadband is well known for its voice capabilities and mobile units made famous in popular culture. They do support broadband connections via their GEO satellites. This Satphone 2 is a well-known product:
Another not-so-well-known service is ORBCOMM, an IoT service provider. As their service is targeted at hardware manufacturers, not consumers, almost nobody has ever heard of this one. Their 50+ satellites are LEO.
Present
Fast-forward to today.
Obviously Iridium, Globalstar, Inmarsat and ORBCOMM still exist. As there have been some changes in the service offering over the past 25 years, let's do a recap of the new players.
SpaceX Starlink
https://www.starlink.com/, very low LEO
Mr. Musk and his Starlink are getting tons of media attention, especially for his donation of terminals to Ukraine to help them in the war against Russia. This well-known service is the farthest from vaporware; it is affordable and easily available. Broadband speed is reasonable, see the Ookla Speedtest results comparison.
A notable fact about Starlink is that they are the only satellite broadband provider so far to have their own launch vehicle for orbiting their own satellites. This capability will give them much-required longevity. Satellite Internet is not a sprint, after all.
HughesNet / EchoStar
https://www.hughesnet.com/, GEO
Originally Hughes Communications was owned by Hughes Aircraft, the company founded by Howard Hughes. They are a major service provider in the Americas. Here in Europe, they're not well known.
Viasat
https://www.viasat.com/space-innovation/satellite-fleet/viasat-3/, GEO
This is an upcoming broadband service. Viasat is a known satellite-TV operator expanding its services to the Internet.
Amazon / Project Kuiper
Obviously Mr. Bezos wants to compete against his nemesis, Mr. Musk, and launch his own satellite Internet service. His project is the most recent one and doesn't really even have a proper website yet.
Boeing
LEO
The legendary aircraft manufacturer wants its share of the satellite Internet business. This very recent project has FCC approval for 147 satellites, but not much is known about it.
Future?
As we learned back in 2002 when Teledesic folded, there isn't room for all of the above players. Some of them will fold and/or merge. SpaceX especially is gaining a lot of customers in rural areas with their modern service offering. A fierce battle-of-pricing won't be happening: launching those satellites is v e r y expensive. Lacking financial stability, not everybody will survive Round #2.
While there is plenty of space in the sky, it is already crowded, and the traffic jam is likely to get worse as more and more satellites are launched. Just take a look at Wikipedia's Satellites orbiting Earth article.
Nokia 5.3 de-bricking after reset
Friday, March 3. 2023
Given the vast differences between Apple's iOS and Google's Android platforms, I own, run and operate both. For those interested: Apple I have as my daily mobile; Android, the more popular platform, I use for more experimental features which are not available on the other one. These features include access to mobile radios, NFC and Bluetooth.
Nokia (or HMD Global, to be precise) is a really good Android mobile manufacturer. Generally speaking, they don't bloat their firmware with mandatory, always forced-on Facebook or any such crap. Also, my years-old 5.3 got the Android 12 update. Obviously this was nice, as most manufacturers sell you forgetware getting no updates, but ... (there is always a but-part). What typically happens with electronics is that the hardware becomes obsolete faster than the consumer would like. This puppy doesn't pack enough oomph in its Snapdragon CPU to run Android 12 fluently. I had no problems with Android 10 or 11; with 12 everything started feeling too sluggish. To shopping I went. I came back with a Nokia G21.
Resetting an Android
Onboarding the new phone was almost painless. Most icons on my start screen were lost; apps were loaded from Play, icons not so much. Such a thing is easy to fix, so I made the call to do a full reset of the old mobile. That is the standard procedure when you're about to donate/sell/hand out your old computing hardware.
Aftermath – Reset bricked my Nokia!
Crap! The thing failed to boot after reset.
What! What? How is this possible?
Yes, I wasn't alone. The Nokia Community forum has the following post: WARNING - Do NOT factory reset Nokia 5.3 -- Bricked phone. Factory resetting is such a basic, commonly done operation that I didn't do much research on it. In hindsight (it is always 20-20) I should have done some.
De-brick
In the above thread, Mr. Adam Howard faced the same situation and presented a solution.
Prerequisites
The following is needed:
- A computer capable of running Android SDK.
- I used macOS, no drivers or such needed
- I know Linux will work fine, my understanding is no drivers are needed there either
- Windows is known to work, but will require a device driver for Android. Which one? No idea here.
- Enough permissions and skills to run Android tools on your computer.
- USB-C cable to connect Android to your computer
- Make sure the device is unconnected, it will be connected later
- Android SDK Platform Tools
- Available @ https://developer.android.com/studio/releases/platform-tools
- Install and test-run the Android Debug Bridge, adb
- Nokia 5.3 Android 12 firmware
- Available @ https://android.googleapis.com/packages/ota-api/package/d50cb0137919fd20d43cb67a7cb47a073966269d.zip
- Do NOT unzip! The package is needed by adb in zipped form.
- That's it! Time and your favorite beverages (don't spill, electronics and liquids don't mix).
Hard reset / Recovery mode
Apparently you can manually reset any Nokia 5.3 enough to force it into a mode suitable for force-installing new firmware. In this situation, that is obviously very helpful for recovery purposes. Scary as hell if you have a habit of losing your mobile to dishonest people. They can do nasty stuff to your mobile.
Instructions are here: HardReset.info: How to put NOKIA 5.3 in recovery mode?
Here is the sequence:
- Power off device
- Power on. This is your typical turn-it-on sequence. Press the power button for ~4 seconds.
- Release the power button.
- Press and hold: power button & volume down.
- Keep pressing the buttons until recovery screen appears: "START"
- Tap volume down 2 times: "Recovery mode"
- Press power button to select Recovery mode
- Device will restart.
- Wait for the Android figure with its side open to appear. Note: there are no options on this screen.
- Press and release: power button & volume up.
- Android Recovery menu will appear
- Tap volume down 3 times, "Apply update from ADB"
- Connect cable
- Press power button to select Apply update from ADB
- Leave your mobile be; the next operation will be done on your computer.
On your computer: Upload firmware
Here is the sequence:
- Requirement: Your mobile must be waiting for firmware to be uploaded
- Info: Android Platform Tools (directory platform-tools) will contain the utility adb
- Info: You will be using the sideload function of adb. Info @ Sideload ROMs and Mods Using ADB Sideload
- Run adb and point it to the downloaded firmware, adapting your filename:
adb sideload ../Nokia\ 5.3\ firmware.zip
- On your mobile, the following will happen:
- As time passes, progress will be updated:
- Firmware update done
- That's it!
Done
Observe the out-of-box experience on your mobile:
This is a major blooper by the HMD guys. The community forum is full of angry people who bricked their 5.3 with Android 12.
Python Windows update 2023: Pip requiring Build Tools for Visual Studio
Sunday, February 26. 2023
A couple of years ago, I got a new Windows computer and my Python pip failed to install required modules as no suitable C/C++ compiler was found. I obviously figured out what was wrong and how to right the wrong, and posted an article about the fix.
As always, things change, new tools are made available and old tools are obsoleted. This fact was pointed out by an aggressive reader. Without doubt, he did fumble with versions and became irritated when I pointed that out. However, also without doubt, the blog post's expiration date was in the past. What worked in 2021 didn't work anymore.
Here's the update.
Given wheels, build tools aren't required as often as they used to be. See What Are Python Wheels and Why Should You Care? for details on this modern approach. Copy/pasting from the article: "A wheel is a type of built distribution. In this case, built means that the wheel comes in a ready-to-install format and allows you to skip the build stage required with source distributions."
On my Python 3.10 and 3.11, the Visual Studio Build Tools 2017 do work. I tested the latest 2022 version and it does not. Older build tools are getting harder and harder to obtain from Microsoft. The Visual Studio 2022 - Downloads page @ Microsoft doesn't seem to carry the old stuff anymore. To not confuse/irritate other people, I won't even post the link to it.
As many people have this same exact problem, there is for example a question on Stack Overflow: Are Visual Studio 2017 Build Tools still available for download? [closed]. Mr. Chris Wright's answer has the link https://aka.ms/vs/15/release/vs_buildtools.exe. So, the download isn't lost forever. A little bit obscured, yes, but not lost.
To test the new Build Tools, I deliberately uninstalled my perfectly working Visual Studio 2017 Build Tools, Visual Studio 2022 and whatnot. A completely clean slate. Next, let's make a point of how the build will fail. See how much effort it takes not to use wheels! Running the following will emit the dreaded error:
pip.exe install pyOpenSSL --no-binary :all: --no-cache-dir
Yay, the expected error will be emitted: error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools"
This part hasn't changed; I'm using the old images from two years back. Download and install Visual Studio Build Tools 2017. Make sure to install a few gigabytes of binaries by selecting C/C++ compiler version 140:
Ta-daa! Unsurprisingly, re-running the forced build-from-source command will work:
Collecting pyOpenSSL
Downloading pyOpenSSL-23.0.0.tar.gz (182 kB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Building wheels for collected packages: pyOpenSSL
Building wheel for pyOpenSSL(pyproject.toml) ... done
Successfully built pyOpenSSL
Installing collected packages: pyOpenSSL
Successfully installed pyOpenssl-23.0.0
This works on all those Python versions I tried. While testing, I did find some packages which wouldn't compile at all. Their wheels installed just fine, though, so I wouldn't worry too much about that.
Final words:
It is weird how regular people on the Internet assume that I, as a blogger, would have massive resources and money to maintain their systems. Why would I want to be a target of their inability to ask smart questions or to ever explain what they attempted and what failed? Somehow I'm supposed to read their minds and see what's on their computer screens telepathically. Free of charge, of course!
RAID1 Disc Drive upgrade
Tuesday, January 31. 2023
On my home Linux server I ran out of disc space on my 2 TB hard drive. Or, technically speaking, there were a hundred or so unallocated megs left in my Logical Volume Manager. That translates to: I hadn't yet run out, I was about to run out of space. There were some reserves left, but it would have required some LVM tinkering to unleash what was left into actual use.
--->
To hardware shopping!
That's a pair of 8 TB Seagate Barracudas hooked up to my old-but-trustworthy LSI MegaRAID.
Yeah, you read it right. The BIOS is from the year 2011. The logical volume / virtual drive created by the 90s-looking WebBIOS looks really nice with all those terabytes:
Hint: Don't do what I did and forget to hook up one S-ATA power cable properly after finalizing the installation. The mirrored RAID-1 drive will need a rebuild. On this particular LSI MegaRAID such a rebuild takes ~20 hours to complete. Good thing the drive was fully available during the operation. It did respond a bit slowly during the rebuild, but that's what spinning platters do anyway.
The amount of data I seem to have lying around on my home server is a handful. The simple LVM-tinkering sequence of vgextend / pvmove took nearly five hours to complete. This is one of the multiple advantages of having logical volumes: there exists the capability to choose which physical drive a volume resides on.
When I made the purchase order for the new drives, I considered whether I should stop using LVM and go for btrfs. The obvious advantage is that such an upgrade would give me even more flexibility in disc space allocation. On the negative side, such a transition would require me to copy all the data from the old LVM/Ext4 drives to a new btrfs drive. LVM's simplicity of transitioning an entire file system without touching individual files did it for me and I chose not to go for The New Thing™.
Hopefully these platters keep spinning for many years to come.
HP Color LaserJet custom X.509 certificate
Monday, January 30. 2023
Update 18th June 2023: See part 2.
One of the pieces of hardware I own and operate is an HP printer. Most of the time it acts as a ... well, paperweight. Then there is an urgent need to have an A4 with information to be delivered somewhere.
As a keen enthusiast of custom TLS certificates, I always take the option to install one, especially on a LAN-connected device like a printer. This one, however, is broken:
All I can manage from it is: "The format of the file is invalid."
Not so cool. Uh!
For troubleshooting, I looked at Error message "The format of the file is invalid" when attempting to import certificate on HP printer and No more ssl certificate update possible. Both pretty much state it doesn't work. A couple of years ago, in Installing TLS certificates on HP printers automatically, the thing worked.
In an attempt to solve this, I exported the generated self-signed key as PKCS #12. The certificate has rather "interesting" crypto: pbeWithSHA1And40BitRC2-CBC, iteration 2048. That is a seriously obsolete one! The private key has pbeWithSHA1And3-KeyTripleDES-CBC, iteration 2048, which is also on the tired side. MAC verification has an iteration count of 1, which is well aligned with the other insecurity.
No matter what I do, AES, DES, 3-DES, RC2, the PKCS #12 won't import. Neither will the CA root cert as PEM.
My conclusion has to be the same: recent firmware upgrades have broken the import.
Multiple hours wasted on that! Darn.
Happy New Year 2023!
Saturday, December 31. 2022
Year 2022 wasn't especially good.
The entire globe had been suffering from COVID-19 for way too long. At that point a greedy old man started a war to gain more land for his ridiculous dictatorship. There are not many things more stupid than that. Everybody loses with such a move. Everybody.
Hopefully 2023 will be a better one!
Monitor Arm installation
Sunday, November 20. 2022
This is one of my computing setups at home office:
This one has a MacBook Pro 16" on the desk and some extra display real estate on top of it. It's a Samsung 4K screen. As there are other computers on this desk, this display has no room anywhere else and it has to go high.
The open lid of this MBP takes ~24 cm in height. On this particular Samsung, there is a couple of centimeters of frame. For the display area to start at 24 cm, the bottom of the screen has to be at ~22 cm. Like this:
Most inexpensive no-brand solutions (like my previous one) have arms that can go up to 50 cm. Then, at the end of the arm, there is the VESA mount, which will be attached to the back of the monitor. Like this:
Again, on this particular Samsung the VESA D-mount is 75x75 mm and it is located nearly at the top of the screen, making the bottom go low. Most displays have a 100x100 mount and the VESA mount is typically at the center of the thing. Looking at the picture, notice how commonly sold monitor arms have both 100x100 and 75x75 ends. This Samsung has a really beautiful round recess for all the connectors and the VESA mount (invisible to the user at the back of the thing). Nobody at Samsung thought about how a bigger 100x100 mount would fit inside this obviously-too-small ring. I couldn't use the regular M4 screws and had to use extra-long ones.
Here is a picture of the problem with this Samsung's let's-place-the-mount-at-top -design:
When I go to the max height of 50 cm on my arm, I'm missing ~3 cm at the bottom. The monitor bottom is only at 19 cm, but it needs to be at 22 cm for the MBP's lid not to obstruct anything on the 4K display. Crap! My old arm has clearly grown too small. Here is an illustration of the problem:
Should my MBP be a 13" model or Samsung have its mount in a more reasonable location, this old monitor arm would do it for me. To monitor arm shopping, then. This is the one I got:
It's a Kensington SmartFit® One-Touch Height Adjustable Single Monitor Arm. This more expensive baby can go to the required height and much more! Problem solved by throwing money at it.
Davis Vantage Vue battery life
Sunday, November 13. 2022
Over three years ago I had to get a new weather station. See my blog post about it.
Yesterday a storm hit and I wanted to see the measured wind and gust speeds. They were zero. As in no wind at all!
This measurement of nothingness vastly differed from what my eyeballs and ears could measure. The wind was strong enough to rip shingles off my roof. The storm effects were also very audible indoors. --->
To troubleshooting the issue.
My indoor console of the Davis Vantage Vue reported: Low battery in sensor 1. Darn!
The WeeWX web UI confirmed the fact. Signal quality was flaky:
Notice how the signal quality would plummet on dark bars (no sunlight) and recover on light bars (the sun started shining on the panel). The battery had gotten its share of -30° C winters and +38° C summers. Withstanding Finnish weather isn't easy for man-made objects.
I dug up pictures from my 2019 blog post:
It all came back to me vividly. Indeed, in the bottom of the outdoor unit there is a lithium CR-123 battery. The Integrated Sensor Suite (or ISS) has a plastic cover which can be easily removed by opening the plastic screw. As the unit is well designed, the battery pack is well protected from rain, sleet and snow.
Next, getting a replacement. Maybe this battery is one of those hard-to-get and possibly expensive ones? To my surprise a lithium CR-123 is easily available:
A pack of two costs 9,90 € at my local hardware store. Not even expensive!
Yet another well-designed feature of a Vantage Vue.
Unfortunately the second battery has its expiry date before 2025, which is when the next change is due. Generally speaking, I'd rather power the unit from a cable. That way I wouldn't have to climb onto my garage roof at all. Then again, doing the climb every three years in reasonably good weather isn't too bad.
Databricks CentOS 9 Stream containers
Thursday, October 20. 2022
Earlier this year I was tinkering with Databricks and got fed up with Ubuntu 18 and 20 and the pretty old Python in them. Easy fix! I just made containers with CentOS so I could have more recent versions of stuff in my nodes.
The natural next move was to bump the CentOS version from 8 to 9. While at it, I discarded the previous hierarchy. Here is the original pic:
CentOS 8 containers explained:
- Minimal: CentOS 8 Stream + stuff to make CentOS work in Apache Spark / Databricks
- SSH: Minimal + OpenSSH server for those running Databricks on top of their own VNets. If you aren't, this won't do you any good. TCP/22 won't be open from The World.
- Python: This here is the beef in Databricks! Running a Jupyter Notebook / IPython will absolutely definitely need this.
- DBFS FUSE: Linux user file system to attach the container into DatabricksFS / HadoopFS
- Standard: DBFS FUSE + OpenSSH, see disclaimer from SSH container about connectivity
The hierarchy originates from https://github.com/databricks/containers/tree/master/experimental. Initially I just went with the flow, but as always, gaining more information and experience on Databricks, it became apparent to me that this separation wasn't working.
CentOS 9 containers explained:
- Base: CentOS 9 Stream + stuff to make CentOS work in Apache Spark / Databricks + FUSE
- Rationale: You will want to have DBFS mounted to your container anyway. It won't be a security risk and FUSE is a very light piece of software in a Linux.
- Note: This will not work as an Apache Spark / Databricks node.
- Python: Running a Jupyter Notebook will absolutely definitely need this.
- Rationale: A Spark node without proper Python won't boot. This should be in minimal/base to begin with, but I just wanted to separate all the required Linux components and settings from a Python.
- Note: This is the minimal. This will boot and work.
- Python-SSH: Python + OpenSSH
- Note: You will need your own VNet to get SSH-access into your Spark Driver -node.
- Note 2: If you don't specify your own, a managed VNet will be used. You just won't have any access into it.
- R: For statistical computing needs, quite a few users prefer the R programming language. This container type will enable you to do that from a Jupyter Notebook in Databricks. It will also contain Python.
- Rationale: R is a huge chunk of container. If you won't be needing this, stick with Python which is so much lighter to load and operate.
- R-SSH: R + OpenSSH
- See disclaimer from above
Python components version table
To verify what Databricks puts into their nodes, I gathered the versions of the following Python components.
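Gathering these is nothing fancy; a few lines of Python along these lines, run on a node or in a notebook cell, will do it (a sketch, not necessarily the exact method used here):

from importlib.metadata import version, PackageNotFoundError

COMPONENTS = [
    "ipykernel", "ipython", "Jinja2", "jupyter-core", "matplotlib",
    "numpy", "pandas", "pyarrow", "six", "virtualenv",
]

for component in COMPONENTS:
    try:
        print(f"{component}: {version(component)}")
    except PackageNotFoundError:
        print(f"{component}: not installed")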
Python component | CentOS 9 | 11.2 | 11.1 | 11.0 | 10.4 LTS | 9.1 LTS |
---|---|---|---|---|---|---|
ipykernel | 6.16.0 | 6.12.1 | 6.12.1 | 6.12.1 | 5.3.4 | 5.3.4 |
ipython | 7.32.0 | 7.32.0 | 7.32.0 | 7.32.0 | 7.22.0 | 7.22.0 |
Jinja2 | 2.11.3 | 2.11.3 | 2.11.3 | 2.11.3 | 2.11.3 | 2.11.3 |
jupyter-core | 4.11.1 | 4.8.1 | 4.8.1 | 4.8.1 | 4.7.1 | 4.7.1 |
matplotlib | 3.4.3 | 3.4.3 | 3.4.3 | 3.4.3 | 3.4.2 | 3.4.2 |
numpy | 1.20.3 | 1.20.3 | 1.20.3 | 1.20.3 | 1.20.1 | 1.19.2 |
pandas | 1.3.4 | 1.3.4 | 1.3.4 | 1.3.4 | 1.2.4 | 1.2.4 |
pyarrow | 4.0.0 | 7.0.0 | 7.0.0 | 7.0.0 | 4.0.0 | 4.0.0 |
six | 1.16.0 | 1.16.0 | 1.16.0 | 1.16.0 | 1.15.0 | 1.15.0 |
virtualenv | 20.16.5 | 20.8.0 | 20.8.0 | 20.8.0 | 20.4.1 | 20.4.1 |
Comparison of other differences:
Component | CentOS 9 | 11.2 | 11.1 | 11.0 | 10.4 LTS | 9.1 LTS |
---|---|---|---|---|---|---|
Scala | 2.12.14 | 2.12.14 | 2.12.14 | 2.12 | 2.12 | 2.12 |
Spark | 3.3.0 | 3.3.0 | 3.3.0 | 3.3.0 | 3.2.1 | 3.2.1 |
Python | 3.9.14 | 3.9.5 | 3.9.5 | 3.9.5 | 3.8.10 | 3.8.10 |
R | 4.2.1 | 4.1.3 | 4.1.3 | 4.1.3 | 4.1.2 | 4.1.2 |
Linux | CentOS 9 Stream | Ubuntu 20.04.5 LTS | Ubuntu 20.04.5 LTS | Ubuntu 20.04.5 LTS | Ubuntu 20.04.5 LTS | Ubuntu 20.04.5 LTS |
These two tables explain very well my motivation for doing all this: getting full control of what goes into those containers. The second motivation is to publish the recipe so that anybody can tailor their own custom-made containers containing the versions of software they'll be needing.
Testing my container
Here is a sample notebook I used while developing this:
A modern Databricks notebook supports SQL, R and Scala with ease. I absolutely wanted to see what would be needed to get all of those running.
To repeat: Python will be needed to get the entire Databricks node booting. On top of that, Scala will be included by Apache Spark and SQL will be handled by Spark. R will be provided by Rserve, and the interface to that is via the notebook's R Python client.
Final words
Databricks: Please publish your own work also. For some unknown reason you aren't doing that.
ChromeOS Flex test drive
Monday, October 10. 2022
Would you like to run an operating system which ships as-is, with no changes allowed after installation? Can you imagine your mobile phone without apps of your own choosing? Your Windows 10 PC? Your macOS Monterey? Most of us cannot.
As a computer enthusiast, of course I had to try such a feat!
Prerequisites
To get the ball rolling, check out the Chrome OS Flex installation guide @ Google. What you'll need is supported hardware. In the installation guide, there is a certified models list and it contains a LOT of supported PC and Mac models. My own victim/subject/target was a 12-year-old Lenovo and even that is on the certified list! WARNING: The hard drive of the victim computer will be erased beyond data recovery.
The second thing you'll need is a USB stick to boot your destination ChromeOS from. Any capacity will do. I had a 32 GiB stick and it used 128 MiB of it. That's less than 1% of the capacity. So, any booting stick will do the trick for you. Also, you won't be needing the stick after the install; the requirement is just to briefly slip an installer onto it, boot and be done.
The third and final thing you'll be needing is a Google Chrome browser and the ChromeOS recovery media extension in it:
To clarify, let's repeat that:
Your initial installation onto your USB stick will be done from the Google Chrome browser using a specific extension in it.
Yes. It sounds a bit unorthodox, or at least different from how other OSes do it. Given Google's reach among web browser users, that seemed like the best idea. This extension will work on any OS.
To log into your ChromeOS, you will need a Google Account. Most people on this planet do have one, so most likely you're covered. On the other hand, if your religious beliefs are strongly anti-Google, the likelihood of you running an operating system made by Google is low. Those rare persons won't be able to log in, but everybody else will.
Creating installation media
That's it. As there won't be much data on the stick, the creation flies by fast!
Installing ChromeOS Flex
If media creation was fast, this will be even faster.
Just boot the newly created stick and that's pretty much it. The installer won't store much information on the drive, so you will be done in a jiffy.
Running ChromeOS Flex
Log into the machine with your Google Account. Remember: This OS won't work without a network connection. You really, really will need an active network connection to use this one.
All you have is a set of Google's own apps: Chrome, Gmail, YouTube and such. By looking at the list in Find apps for your Chromebook, you'd initially think all is good and you can have your favorite apps on it. To make your (mis)belief stronger, even Google Play is there for you to run and search for apps. Harsh reality sets in quite fast: you cannot install anything via Google Play. All the apps in Google Play are for Android or real ChromeOS, not for Flex. The reason is obvious: your platform is running an AMD64 CPU and all the apps are for ARM. This may change in the future, but at the time of writing it is what it is.
You lose a lot, but there is something good in this trade-off. As you literally cannot install anything, not even malware can be installed. ChromeOS Flex has to be the safest OS ever made! Most systems in the world are built from the ground up to be generic and able to run anything. This puppy isn't.
SSH
After the initial investigation, without apps, without a password manager, without anything, I was about to throw the laptop back to its original dust-gathering duty. What good is a PC which runs a Chrome browser and nothing else? Then I found the terminal. It won't let you actually enter the shell of your ChromeOS laptop, but it will let you SSH somewhere else.
On my own boxes, I always deactivate plaintext passwords, so I bumped into a problem. Where do I get the private key for an SSH session from? The obvious answer is either via Google Drive (<shivers>) or via a USB stick. You can import a key into the laptop and not worry about it anymore.
Word of caution: Think very carefully if you want to store your private keys in a system managed for you by Google.
Biggest drawbacks
For this system to be actually usable, I'd need:
- Proper Wi-Fi. This 12 year old laptop had only Wi-Fi 4 (802.11n)
- This I managed to solve by using an Asus USB-AC51 -dongle to get Wi-Fi 5.
- lsusb:
ID 0b05:17d1 ASUSTek Computer, Inc. AC51 802.11a/b/g/n/ac Wireless Adapter [Mediatek MT7610U]
- This won't solve my home network's need for Wi-Fi 6, but gets me to The Net.
- There is no list of supported USB devices. I have a bunch of 802.11ac USB sticks and this is the only one that works in ChromeOS Flex.
- My password manager and passwords in it
- No apps means: no apps, not even a password manager
- What good is a browser when you cannot log into anything? All my passwords are random and ridiculously complex. They were not designed to be remembered nor typed.
- In The world's BEST password advice, Mr. Horowitz said: "The most secure Operating System most people have access to is a Chromebook running in Guest Mode."
Nuisances
The installer won't let you change the keyboard layout. If you have a US keyboard, fine. If you don't, it sucks to be you.
Partitions
As this is a PC, the partition table has an EFI boot partition. It runs ext4 and ext2 partitions and contains an encrypted home drive. It's basically a hybrid between an Android phone and a Linux laptop.
My 240 GiB SSD was laid out as follows:
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
11 32.8kB 33.3kB 512B RWFW
6 33.3kB 33.8kB 512B KERN-C chromeos_kernel
7 33.8kB 34.3kB 512B ROOT-C
9 34.3kB 34.8kB 512B reserved
10 34.8kB 35.3kB 512B reserved
2 35.3kB 16.8MB 16.8MB KERN-A chromeos_kernel
4 16.8MB 33.6MB 16.8MB KERN-B chromeos_kernel
8 35.7MB 52.4MB 16.8MB ext4 OEM
12 52.4MB 120MB 67.1MB fat16 EFI-SYSTEM boot, legacy_boot, esp
5 120MB 4415MB 4295MB ext2 ROOT-B
3 4415MB 8709MB 4295MB ext2 ROOT-A
1 8709MB 240GB 231GB ext4 STATE
Finally
This is either for explorers who want to try stuff out, or alternatively for people whose needs are extremely limited. If all you do is surf the web or watch YouTube, then this might be for you. Anything special --> forget about it.
The best part of this is the price. I had the old laptop already, so the cost was $0.
MacBook Pro - Fedora 36 sleep wake - part 2
Friday, September 30. 2022
This topic won't go away. It just keeps bugging me. Back in '19 I wrote about GPE06 and a couple of months ago I wrote about sleep wake. As there is no real solution in existence and I've kept using my Mac with Linux, I've come to the conclusion that they are in fact the same problem.
When I boot my Mac, I log into Linux and observe what's going on. The following CPU hog can be observed in top:
RES SHR S %CPU %MEM TIME+ COMMAND
0 0 I 41.5 0.0 2:01.50 [kworker/0:1-kacpi_notify]
ACPI-notify will chomp quite a lot of CPU. As previously stated, all of this will go to zero if /sys/firmware/acpi/interrupts/gpe06 were disabled. Note how GPE06 and ACPI are intertwined; there is a cause and effect.
Also, doing what I suggested earlier and applying the acpi=strict noapic kernel arguments:
grubby --args="acpi=strict noapic" --update-kernel=$(ls -t1 /boot/vmlinuz-*.x86_64 | head -1)
... will in fact reduce the GPE06 interrupt storm quite a lot:
RES SHR S %CPU %MEM TIME+ COMMAND
0 0 I 10.0 0.0 0:22.92 [kworker/0:1-kacpi_notify]
The storm won't be removed, but it is drastically reduced. Also, the aluminium case of the MBP will be a lot cooler.
However, changes made by running grubby won't stick. Fedora User Docs, System Administrator’s Guide, Kernel, Module and Driver Configuration, Working with the GRUB 2 Boot Loader says the following:
To reflect the latest system boot options, the boot menu is rebuilt automatically when the kernel is updated or a new kernel is added.
Translation: when you install a new kernel, whatever changes you made with grubby won't stick to the new one. To make things really stick, edit the file /etc/default/grub and have the line GRUB_CMDLINE_LINUX contain these ACPI changes as before: acpi=strict noapic
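For example (an excerpt only; keep whatever arguments are already on that line in your own file):

# /etc/default/grub (excerpt)
GRUB_CMDLINE_LINUX="acpi=strict noapic"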
Many people are suffering from this same issue. Example: Bug 98501 - [i915][HSW] ACPI GPE06 storm
Even this change won't fix the problem. A lot of CPU resources are still wasted. When you close the lid for the first time and open it again, this GPE06 storm miraculously disappears. What will also happen is that your next lid-open wake takes a couple of minutes. It seems the entire Mac is stuck, but it necessarily isn't (sometimes it really is); it just takes a while for the hardware to wake up. Without noapic, it never will. Also, reading the Freedesktop bug report, there is a hint of the problem's source being the Intel i915 GPU.
Hopefully somebody will direct some development resources to this. Linux on Mac hardware runs well, apart from these sleep / sleep-wake issues.
QNAP Stopping Maintenance of TS-419P II (yet again)
Monday, September 19. 2022
Back in 2018, QNAP announced it would stop supporting my NAS in December 2020. They walked that date back multiple times and at the time of writing, the EOL date is October 2022. I hope they don't mean it this time, but I'm afraid they do:
Recent security advisory QSA-22-24 is warning about DeadBolt Ransomware.
The campaign appears to target QNAP NAS devices running Photo Station with internet exposure.
Apparently, if you published your Photo Station photos to the public Internet, your NAS box was at risk of being encrypted and a ransom would be demanded.
This is similar to what happened back in February 2019; see NAS-201902-13 for details of QNAPCrypt / eCh0raix. If you hadn't patched your QNAP and something crawled past the box, it was possible to brute-force the way into it aaaaand encrypt the files aaaand demand bitcoin to restore the files. See: New eCh0raix Ransomware Brute-Forces QNAP NAS Devices
Apparently running a NAS is becoming more and more demanding. Maybe I have the wrong brand or something?