Upgraded Internet connection - Fiber to the Home
Monday, May 24. 2021
Seven years ago I moved to a new house with FTTH. Actually, it was one of the criteria I had for a new place: it needed to have a fiber connection. I had cable-TV Internet for 11 years before that and was fed up with all the problems a shared medium has.
Today, we're here:
Last month my telco, Elisa Finland, made a 1 Gig connection available in this region and, me being me, there was no real option of not getting it. I had no issues with the previous one, the connection wasn't slow or buggy, but faster IS better. The price is actually 8€ cheaper than my previous 250 Mbit connection. To make this absolutely clear: I'm paying 42€ / month for the above connection.
To verify the result, the same test with the Python-based speedtest-cli --server 22669:
Retrieving speedtest.net configuration...
Testing from Elisa Oyj (62.248.128.0)...
Retrieving speedtest.net server list...
Retrieving information for the selected server...
Hosted by Elisa Oyj (Helsinki) [204.66 km]: 4.433 ms
Testing download speed.....................
Download: 860.23 Mbit/s
Testing upload speed.......................
Upload: 382.10 Mbit/s
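For anyone wanting to run the same test, speedtest-cli is a plain Python package. The server ID is of course specific to me; omit --server and it picks the nearest one:

# Install the tool and run the test:
pip install speedtest-cli
speedtest-cli --server 22669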
Nice. Huh!
Stop the insanity! There are TLDs longer than 4 characters - Part 2
Sunday, May 23. 2021
What happens when IT-operations are run by incompetent idiots?
For reference, I've written about State of Ubisoft and failures with top-level domain handling.
I'm an avid gamer. I play games on a daily basis. It is not possible to avoid bumping into games by giant corporations like Activision or Ubisoft. They have existed since the 80s and have the personnel, money and resources. They also keep publishing games I occasionally love playing.
How one accesses their games is via software called Ubisoft Connect:
You need to log into the software with your Ubisoft account. As one would expect, creating such an account requires you to verify your email address so the commercial company can target their marketing towards you. No surprises there.
Based on my previous blog posts, you might guess my email isn't your average gmail.com or something similar. I have multiple domains in my portfolio and use them for my email addresses. With Ubisoft, initially everything went smoothly. At some point the idiots at Ubisoft decided that I needed to re-verify my email. Sure thing. Let's do that. I kept clicking the Verify my email address -button in Ubisoft Connect for years. Nothing happened, though. I could click the button, but the promised verification email never arrived.
In 2019 I had enough of this annoyance and approached Ubisoft support regarding the failure to deliver the email.
Their response was:
I would still advise you to use a regularly known e-mail domain such as G-mail, Yahoo, Hotmail or Outlook as they have been known to cause no problems.
Ok. They didn't like my own Linux box as a mail server.
Luckily Google Apps / G Suite / Google Workspace (whatever their name is this week) does support custom domains (Set up Gmail with your business address (@your-company)). I did that. Now they couldn't complain about my server being non-standard or causing problems.
Still no joy.
As the operation was run by incompetent idiots, I could easily send and receive email back and forth with Ubisoft support, Ubisoft Store and Ubisoft marketing-spam. The ONLY kind of email I could not receive was their email address verification. Until April 23rd 2021. Some jack-ass saw the light and realized: "Whoa! There exist TLDs which are longer than 4 characters!" In reality, I guess they changed their email service provider to Amazon SES and were suddenly able to deliver the mails.
This is insane!
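My guess at the root cause: somewhere in their stack sat an email validator capping the TLD at 4 characters. A purely hypothetical sketch of that class of bug:

# Hypothetical validator with the classic bug: TLD limited to 2-4 characters.
validate_email() {
    echo "$1" | grep -qE '^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,4}$'
}
validate_email 'user@example.com' && echo 'accepted'      # 3-char TLD passes
validate_email 'user@example.photos' || echo 'rejected'   # 6-char TLD bounces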
SD-Cards - Deciphering the Hieroglyphs
Monday, April 5. 2021
Luckily xkcd #927 isn't all true. When talking about memory cards used in cameras and other appliances, SD has taken over the market and become The Standard to rule all standards.
In my junk-pile I have all kinds of CF, MMC and Memory Sticks, all of which have become completely obsolete. The last usable one was the Memory Stick in my PSP (PlayStation Portable). For some reason the stick became rotten and I'm hesitant to get a "new" one. That Sony-specific standard has been obsolete waaay too long. Not to mention the PSP itself being discontinued back in 2012.
So, SD-cards. There is an association managing the standard, the SD Association. Major patents are owned by Panasonic, SanDisk and Toshiba, but they've learned the lesson from Sony's failures (with Betamax and Memory Stick). Competitors can get the SD-license with relaxed-enough terms, making the ecosystem thrive and keeping all of us consumers happy.
SDA defines their existence as follows:
SD Association is a global ecosystem of companies setting industry-leading memory card standards that simplify the use and extend the life of consumer electronics, including mobile phones, for millions of people every day.
Well said!
That's exactly what countering xkcd #927 needs: an undisputed leader with a good enough product for us consumers to accept and use.
SD Standards
SD-cards have existed for a while now and, given the progress in accessing bits in silicon, the speeds have changed a lot. This is how the SDA defines its standards for consumers:
There are four different standards, up to the most recent SDUC. Those four can have five different classifications of speed, each with multiple speed modes. The above table is a bit confusing, but when you look at it a bit closer, you'll notice the duplicates. As an example, speed modes C4 and C6 exist in all of the 5 speed classes, spanning from the early ones to the most recent.
If you go shopping, the old SD-standard cards aren't available anymore. SDHC and SDXC are the ones being sold actively. The newcomer SDUC is still rare as of 2021.
As accessing each of the standards requires a different approach from the appliance, be really careful to go for a compatible card. Personally, I've seen some relatively new GPS devices require SDHC with a max filesystem size of 32 GiB. Obviously the design and components in those devices are from the past.
SD Speeds
Why is this all important?
Well, it isn't, unless the thing you're using your SD-card with has some requirements. And ultimately there will be requirements, depending on what you do.
Examples of requirements might be:
- Storing still images from a camera: pretty much all of the cards work, any U-class card will do the trick
- Storing video from a camera: see V-class, as U-class might choke on big data streams
- Reading and writing data with your Raspberry Pi: see A-class; U-class will work OK, but might lack the random-access performance of the A-class
Symbols indicating speed would be:
Examples
To make this practical, let's look at some real-world readers and cards and check whether any of the above symbols can be found on them.
Readers
The above pic shows a couple of reader/writer units I own. Both are USB 3.0, but the leftmost one is a very simple micro-SD reader. For "regular" size SD-cards I use the bigger box, which can access multiple cards at the same time.
Readers (writers) won't have a speed class on them. They will have the SD-standard mentioned. Please be aware of USB 2.0 speed limitations if using any of the old tech: any reasonably new SD-card will be much faster than the USB-bus. When transferring your already recorded moments, speed is not an issue. When working with large video files or tons of pics, make sure to have a fast reader.
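If you want to measure what your card and reader combo actually delivers, a rough sequential test with dd does the job. A sketch only; the mount point is an assumption, adjust it to your system:

# Sequential write test, bypassing the page cache with O_DIRECT:
dd if=/dev/zero of=/mnt/sdcard/testfile bs=4M count=256 oflag=direct status=progress
# Sequential read test of the same file:
dd if=/mnt/sdcard/testfile of=/dev/null bs=4M iflag=direct status=progress
rm /mnt/sdcard/testfile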
Card, 128 GB
Here is a micro-SD from my GoPro. The following symbols can be seen on the card:
- Manufacturer: Kingston
- Form factor: Micro SD
- Standard: SDXC, II is for UHS-II speed
- Capacity: 128 GB, ~119 GiB (see the math after this list)
- Speed classification: U3, V90 and A1
- Comment: An action camera produces a steady stream of 4K H.265 video; that's what the UHS-II V90 is for. A card with this kind of classification is on the expensive side, well over 100€.
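The GB vs. GiB difference above is just marketing math: manufacturers count in powers of ten, operating systems in powers of two. A quick check:

# 128 * 10^9 bytes expressed in 2^30-byte gibibytes:
awk 'BEGIN { printf "128 GB = %.1f GiB\n", 128e9 / 1024^3 }'
# Output: 128 GB = 119.2 GiB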
Card, 32 GB
Here is a micro-SD from my Garmin GPS. The following symbols can be seen on the card:
- Manufacturer: SanDisk
- Capacity: 32 GB, ~30 GiB
- Form factor: Micro SD
- Standard: SDHC, I is for UHS-I speed
- Speed classification: U3, V30 and A1
- Comment: I'm using this for a dual purpose: it serves as map data storage (A1) and as a dash cam video recorder (V30) for an HD H.264 video stream. UHS-I suits this purpose fine as the video stream is very reasonable.
Card, 16 GB
Here is a micro-SD from my Raspberry Pi. The following symbols can be seen on the card:
- Manufacturer: Transcend
- Capacity: 16 GB, ~15 GiB
- Form factor: Micro SD
- Standard: SDHC, I is for UHS-I speed
- Speed classification: Class 10, U1
- Comment: Running an application-heavy Raspi might benefit from having an A-class card instead of a U-class one, which is better suited for streaming data. This is an old card from a still camera, which it suited well.
Card, 8 GB
Here is a micro-SD which I'm not actively using anymore. The following symbols can be seen on the card:
- Capacity: 8 GB, ~7.4 GiB
- Form factor: Micro SD
- Standard: SDHC, I is for UHS-I speed
- Speed classification: U1
- Comment: An obviously old card lacking both A and V speed classes
Additional info
For further info, see:
- SD Association - Speed Class
- Picking the Right SD Card: What Do the Numbers Mean?
Rotting bits - Cell charge leak
Cell charge leak is a real physical phenomenon in NAND storage, causing a stored bit to "rot". This exact type of failure exists both in SD-cards and SSDs (Solid-State Drives). If the exact same storage location is written to constantly, eventually the cell charge will leak, causing data loss. As manufacturers/vendors are aware of this, there are countermeasures such as wear leveling.
Typically, you as an end-user don't need to worry about this. Older cards and SSDs would start losing your precious stored data, but given technological advances it is less and less of an issue. Even if you created a piece of software for the sole purpose of stressing an exact location of storage, modern hardware wouldn't be bothered. You may hear and read stories of data loss caused by this. I see no reason not to believe such stories, but bear in mind any new hardware is less and less prone to this kind of failure.
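If you suspect an old card is already rotting, a read-only surface scan will reveal unreadable spots. A sketch; the device name is an assumption, double-check it before running:

# Read-only scan; without -n or -w, badblocks writes nothing.
badblocks -sv /dev/sdb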
Finally
While shopping for storage capacity, I always go big (unless there is a clear reason not to). Bigger ones tend to have a modern design, handle faster access and have really good resistance to data loss.
My suggestion for anybody would be to do the same.
Google Drive spam
Friday, April 2. 2021
A completely new type of spam has been flooding my mailbox. Ok, not flooding, but during the past week I've got 7 different ones. The general idea of this spam delivery method is for the spam to originate from Google. In detail, the operation works by either exploiting some innocent person's Google Account or creating a ton of brand new Google Accounts to be used briefly and then thrown away. What the scammers do with the account is create a presentation on Google Drive. There is no content in the presentation, it is completely empty, and then they share the document with me. Ingenious!
The shared presentation looks like this (hint: it's completely blank):
The trick is in the comment of the share. When you add a new user to work on the same shared file, you can add your own input. These guys put some spam into it.
When the mail arrives, it contains something like this:
This approach will very likely pass a lot of different types of spam-filtering. People work with shared Google Drive documents all the time as their daily business, and those share notifications are not spam; it's just day-to-day business for most.
Highlights from the mail headers:
Return-Path: <3FDxcYBAPAAcjvttlu0z-uvylws5kvjz.nvvnsl.jvt@docos.bounces.google.com>
Received-SPF: Pass (mailfrom) identity=mailfrom;
client-ip=209.85.166.198; helo=mail-il1-f198.google.com;
envelope-from=3fdxcybapaacjvttlu0z-uvylws5kvjz.nvvnsl.jvt@docos.bounces.google.com;
receiver=<UNKNOWN>
DKIM-Filter: OpenDKIM Filter v2.11.0 my-linux-box.example.com DF19A80A6D5
Authentication-Results: my-linux-box.example.com;
dkim=pass (2048-bit key) header.d=docs.google.com header.i=@docs.google.com header.b="JIWiIIIU"
Received: from mail-il1-f198.google.com (mail-il1-f198.google.com [209.85.166.198])
(using TLSv1.3 with cipher TLS_AES_128_GCM_SHA256 (128/128 bits)
key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
(No client certificate requested)
by my-linux-box.example.com (Postfix) with ESMTPS id DF19A80A6D5
for <me@example.com>; Thu, 25 Mar 2021 09:30:30 +0200 (EET)
Received: by mail-il1-f198.google.com with SMTP id o7so3481129ilt.5
for <me@example.com>; Thu, 25 Mar 2021 00:30:30 -0700 (PDT)
Reply-to: No Reply <p+noreply@docs.google.com>
Briefly, for those not fluent in RFC 821:
Nothing in the mail headers would indicate scam, fraud or even a whiff of spam. It's a fully legit, digitally signed (DKIM) email arriving via encrypted transport (TLS) from a Google-designated SMTP-server (SPF).
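Don't take my word for the SPF part; Google publishes its sending ranges in DNS, so anyone can verify that the client IP 209.85.166.198 belongs to them:

# The SPF record points at Google's netblock lists:
dig +short TXT _spf.google.com
# One of which contains ip4:209.85.128.0/17, covering 209.85.166.198:
dig +short TXT _netblocks.google.com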
Given the trusted source of the mail, the only feasible way to detect this type of spam is content analysis. Note: as an example of detecting and blocking unsolicited email, I've previously written my thoughts on how easy it is to block spam.
Well, until now it was. Darn!
Behind the scenes: Reality of running a blog - Story of a failure
Monday, March 22. 2021
... or any (un)social media activity.
IMHO the mentioned "social" media isn't. There are statistics and research to establish the un-social aspect of it. The dopamine loop in your brain keeps feeding regular doses, making a person's behaviour addicted to the activity and keeping the person leeching for more material. This very effectively disconnects people from the real world and makes them dive deeper into the rabbit hole of (un)social media.
What most of the dopamine-dosed viewers of any published material keep ignoring is the tip-of-an-iceberg phenomenon. What I mean is: a random visitor gets to see something amazingly cool, a video or picture depicting something very impressive, and assumes that person's life consists of a series of such events. Also, humans tend to compare. What that random visitor does next is compare the amazing thing to his/her own "dull" personal life, which does not consist of such an imaginary sequence of wonderful events. Imaginary, because reality is always harsh. As most of the time we don't know the real story, it is possible for 15 seconds of video footage to take months of preparation, numerous failures, reasonable amounts of money and a lot of effort to happen.
As an example of harsh reality, here is the story of me trying to get a wonderful piece of tech-blogging published.
I started tinkering with a Raspberry Pi 4B. That's something I've planned for a while; I ordered some parts and will most probably publish the actual story of the success later. The current status of the project is: well planned, underway, but nowhere near finished.
What happened was that the console output of the Linux looked like this:
That's "interesting" at best. Broken to say the least.
For debugging this, I rebooted the Raspi into the previous Linux kernel, 5.8, and ta-daa! Everything was working again. Most of you are running Raspbian, which has Linux 5.4. As I have the energy to burn into hating all of those crappy Debians and Ubuntus, my obvious choice is a Fedora Linux Workstation AArch64-build.
To clarify the naming: the ARM build of Fedora Linux is a community-driven effort; it is run neither by Red Hat, Inc. nor The Fedora Project.
Ok, enough name/org -talk, back to Raspi.
When graphics go that wrong in Linux, I always disable the graphical boot in the Plymouth splash-screen. Running plymouth-set-default-theme details --rebuild-initrd will do the trick of displaying all-text at boot. However, it did not fix the problem on my display. Next I had a string of attempts doing all kinds of kernel parameter tinkering: deactivating the frame buffer, learning all I could about KMS or Kernel Mode Setting, attempting to build Raspberry Pi's userland utilities to gain insight into EDID-information just to realize they'll never build on a 64-bit Linux, and failing with nomodeset and vga=0 as kernel parameters. No matter what I told the kernel, the display would fail. Every. Single. Time.
It hit me quite late in troubleshooting. While observing the boot-process, during the early stages of boot everything worked and the display was un-garbled. Then, later, when Fedora was starting system services, everything fell apart. Obviously something funny happened in that particular Linux-build when the GPU-driver of the Broadcom BCM2711-chip's VideoCore 4, aka. vc4, was loaded. Creating the file /etc/modprobe.d/vc4-blacklist.conf with the contents blacklist vc4, to prevent the VideoCore 4 driver from ever loading, did solve the issue! Yay! Finally found the problem.
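For reference, the same as shell commands. The initramfs rebuild is my assumption; it is only needed if the module gets loaded that early in boot:

# Blacklist the vc4 module:
echo 'blacklist vc4' > /etc/modprobe.d/vc4-blacklist.conf
# Rebuild the initramfs in case vc4 is loaded from there:
dracut --force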
All of this took several hours, I'd say 4-5 hours of straight work. What happened next was surprising. Now that I had the problem isolated to the GPU-driver, people on IRC's #fedora-arm channel said the vc4 HDMI-output was a known problem, already fixed in Linux 5.11. Dumbfounded by this answer, I insisted that 5.10 was the latest and that 5.11 wasn't available. They insisted back. A couple of hours before I asked, 5.11 had been deployed to the mirror sites for everybody to receive. This happened while I was investigating, failing and investigating some more.
A dnf update, a reboot and pooof. The problem was gone!
There is no real story here. In pursuit of getting the thing fixed, it fixed itself with time. All I had to do was wait (which obviously I did not do). Failure after failure, but no juicy story on how to fix the HDMI-output. In a typical scenario, this type of story would not get published. No sane person would shine any light on a failure and time wasted.
However, this is what most of us do with computers. Fail, retry and attempt to get results. No glory, just hard work.
Book club: Formula 1 All The Races - The First 1000
Sunday, March 21. 2021
This one is a simple directory, or reference manual, of the first thousand F1 races. If you want your own copy, go for Formula 1 All The Races - The First 1000 @ Veloce Publishing.
For a non-fan, the book is as dry as a phone book. For a true fan like me, there are short descriptions of the seasons and every single grand prix driven. Personally, I love reading about the early days. TV had barely been invented and wasn't such a huge part of F1 as it is now. What I do is fix a reference point from the book and fill in the gaps by googling for additional information. This works well as not all of the races are that interesting.
In short: Definitely not for everyone, but only for fans (pun intended).
Windows 10 Aero: shaker minimize all windows - disable
Friday, February 26. 2021
I'm not sure, but Windows 10 minimizing all currently open windows when you drag a window left and right has to be the worst feature.
Sometimes I love to arrange my windows properly to make work better in a multi-display environment. As the nerds at Microsoft read my adjusting of a window's position as "shaking", they decided to minimize all of my open ones. To this day, I've been looking hard both left and right, but have found no actual use for this "feature" (bug). The gesture guessing is inaccurate at best. Also, IF (emphasis on if) I want to minimize all of my currently open windows, I'd love to clearly indicate the wish to do so. I hate these artificial stupidity systems which try to out-smart me. They never do.
If you're like me and want to hear nothing of that, there is no actual option for getting rid of the madness. The ONLY option seems to be editing the registry, so let's do that.
The registry-file is very trivial:
Windows Registry Editor Version 5.00
[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced]
"DisallowShaking"=dword:00000001
Import that and you're done. For further info, read article How to Enable or Disable Aero Shake in Windows 10 @ TenForums.
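If you'd rather skip the .reg-file, the same can be done from an elevated Command Prompt with a one-liner (a sketch; you may need to restart Explorer or re-log for it to take effect):

reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced" /v DisallowShaking /t REG_DWORD /d 1 /f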
Camera tripod-connector thread
Sunday, January 31. 2021
I'm sure everybody who has ever held a camera knows there is a thread for a tripod somewhere on the bottom side of the camera. This is what my Canon EOS looks like from the belly-side:
All cameras have that specific threading because manufacturers want to be ISO 1222:2010 -compliant. Reading the Wikipedia article Tripod (photography), Screw thread, reveals the spec having a 1/4-20 UNC or 3/8-16 UNC thread. There is also the phrase "Most consumer cameras are fitted with 1/4-20 UNC threads."
As the mandatory Pulp Fiction reference goes, in Europe we have the metric system. Personally, I had no idea what a 1/4 inch Whitworth UNC thread, mentioned in the Stackexchange article Why aren't tripod mounts metric?, is. Following up on the history, to my surprise, that particular camera thread can be traced back at least to the year 1901. There are suggestions of circa 1870 and the name T. A. Edison being mentioned, but none of that can be corroborated.
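For us metric people, the 1/4-20 UNC designation decodes as a quarter-inch diameter with 20 threads per inch. In millimetres:

# 25.4 mm to an inch:
awk 'BEGIN { printf "diameter %.2f mm, thread pitch %.3f mm\n", 25.4/4, 25.4/20 }'
# Output: diameter 6.35 mm, thread pitch 1.270 mm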
Time warping back to today.
My work has been remote for many, many years. Given the global pandemic, everybody else is now doing the same. As every meeting is done over the net, I've chosen to run with some serious hardware. If you're interested, there is a blog post about a microphone upgrade from 2019.
The camera is a Logitech StreamCam. I rigged it onto the cheapest microphone table stand I could order online. The one I have is a Millenium MA-2040 from Thomann. The price is an extremely cheap 20 €. Cheap it may be, but it does the job well.
It doesn't require much thinking to realize that the ISO-standard thread in the StreamCam is 1/4", but a microphone stand will have a 3/8" or 5/8" thread, making the fitting require an adapter. The Thomann page states "With 5/8" threaded connector for recording studios and multimedia workstations". Logitech provides options in their package for camera setup. The typical use would be the monitor/laptop-bracket which makes the camera sit on top of your display. The second option is the 1/4" setup. To state the obvious: for that you'll need some sort of tripod/stand/thingie.
Here are pics from my solution:
When you go shopping for such an adapter, don't do it like I did:
The above pic is proof that I seriously suck at non-metric threads. From left to right:
- female 1/2" to male 1/4" adapter (bronze), not working for my setup
- female 3/8" to male 1/4" adapter (black, short), not working for my setup
- female 5/8" to male 1/4" adapter (black, long), yes! the one I needed
- female 5/8" to male 1/4" adapter /w non-UNC thread (silver), not working for my setup
For those wondering:
Yes. I ordered 4 different adapters from four different stores until I managed to find the correct one.
Also, there is nothing wrong with my laptop's camera. I simply want to position the camera a bit higher than a regular laptop camera would be.
Python Windows: Pip requiring Build Tools for Visual Studio
Wednesday, January 13. 2021
Update 26th Feb 2023:
This information has been obsoleted during 2022. Information in this article regarding Python and Visual Studio Build Tools download location is inaccurate. For latest information, see my 2023 update.
Python: one of the most popular programming languages today. Every single even remotely useful language depends on extensions, libraries and stuff already written by somebody else, which you'll be needing to get your code to do its thing.
In Python these external dependencies are installed with the command pip. Some of them are installed as eggs, some as wheels. About the latter, read What are wheels? for more information.
Then there is the third kind: the kind having CPython C-extensions in them, needing a C-compiler on your machine to build, with the install happening once the binaries are done. What if your machine doesn't have a C-compiler installed? Yup. Your pip install will fail. This story is about that.
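One way to sidestep the whole compiler business is to refuse source builds and demand prebuilt wheels; then pip fails fast instead of mid-build. The package name below is a placeholder:

# Install only if a prebuilt wheel exists for this platform:
pip install --only-binary :all: package-name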
Duh, it failed
I was tinkering with some Python-code and with some googling found a suitable library I wanted to take for a spin. As I had a newly re-installed Windows 10, the pip install failed on a dependency of the library I wanted:
building 'package.name.here' extension
error: Microsoft Visual C++ 14.0 is required. Get it with "Build Tools for Visual Studio": https://visualstudio.microsoft.com/downloads/
Yes, a nasty error, that. I recall seeing this before, but how did I solve it the last time? No recollection, nothing.
The link mentioned in the error message is obsolete. There is absolutely nothing useful available by visiting it. I'm guessing back in the day there used to be. Today, not so much.
What others have done to navigate around this
Jacky Tsang @ Medium: Day060 — fix “error: Microsoft Visual C++ 14.0 is required.” Nope. Fail.
Stackoverflow:
- How to install Visual C++ Build tools? Nope. Fail.
- Microsoft Visual C++ 14.0 is required (Unable to find vcvarsall.bat) Yesh!
This problem is spread far and wide! Lots of people suffer from the same, and lots of misleading answers have been spreading for multiple years back.
Visual Studio 2019 Build Tools
Page is at https://visualstudio.microsoft.com/visual-cpp-build-tools/
(The link is in the vast river of comments in the last Stackoverflow-question)
Click Download Build Tools, get your 2019 installer and ...
BANG! Nope, it won't work. The failure will indicate a missing include-file:
c:\program files\python38\include\pyconfig.h(205): fatal error C1083: Cannot open include file: 'basetsd.h': No such file or directory
My solution with 2017 tools
The download link given to you by Microsoft's website is https://visualstudio.microsoft.com/thank-you-downloading-visual-studio/?sku=BuildTools&rel=16 (2019). As the 2019 installer happens to contain the 2015 build tools, we can assume the 2017 one does the same.
If you hand-edit the URL to contain release 15 (2017): https://visualstudio.microsoft.com/thank-you-downloading-visual-studio/?sku=BuildTools&rel=15
Yaaash! It works: pip will end with a Successfully installed -message.
Finally
A tricky subject, that. It looks like the years have brought so many changes that nobody is able to keep good track of them all. What a mess! Uh.
New SSD for gaming PC - Samsung 970 EVO Plus
Tuesday, January 12. 2021
The Windows 10 on my gaming PC started acting out; it wouldn't successfully run any updates. I tried a couple of tricks, but even a data-preserving re-install wouldn't fix the problem. It was time to sort the problem out, for good. The obvious sequence would be to re-install everything and get the system up and working properly again.
Doing such a radical thing wouldn't make any sense if I lost all of my precious data while doing it. So, I chose to reach for my wallet and go shopping for a new SSD. That way I could copy the files from the old drive without losing anything.
A Samsung 970 EVO Plus SSD. It was supposed to be much faster than my old one, which was pretty fast already.
Installation into Asus motherboard:
The M.2-slot is behind a block of aluminium acting as a heat sink. All cards, including the GPU, needed to be yanked off first, then three screws undone, before getting a glimpse of the old 1 TiB M.2 SSD.
Note: my MoBo has a second, non-heatsink slot, suited for what I was about to do: transfer data from the old drive to the new one. I think it is meant as a temporary thing, as the drive sticks out and isn't properly fastened.
Putting it all together, installing Windows 10 20H2 and running a benchmark:
Twice the performance! Any time you can boost your PC to 2X of anything, you'll be sure to notice it. And yup! Booting, starting applications or games. Oh, the speed.
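For a quick-and-dirty number without a separate benchmark app, Windows ships winsat. A rough sanity check only, not a real benchmark; run it from an elevated prompt:

winsat disk -drive c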
Note: In a few years, the above benchmark numbers will seem very slow and obsolete. Before that happens, I'll be sure to enjoy the doubled speed.
Let's Encrypt Transitioned to ISRG's Root
Sunday, January 10. 2021
My previous post @ Let's Encrypt Transitioning to ISRG's Root.
Let's Encrypt's announcement: Standing on Our Own Two Feet. Also note their support plan for old Android phones, Extending Android Device Compatibility for Let's Encrypt Certificates, maintaining their IdenTrust root as an alternative source.
To put it shortly, plans made back in 2019 were refined and went into execution. The certificates issued by Let's E now have their own root CA certificate and no longer depend on a partner organization's root.
I'm getting certs from Let's E, why should I care?
If you're lucky, no need to.
If you're like me, stuff stops working.
An example: my OpenLDAP slapd is configured to serve LDAPS (that's TLS-wrapped LDAP) on TCP/636 with a Let's E cert I'm getting from them every 60 days. Last week I ran the update and my fresh cert was issued by their new R3 intermediate, as X3 has been phased out.
To state the obvious, a lot of stuff in Linux depends on the ability to access users. Now that it was gone, the previously mentioned "lot of stuff" ceased to function.
The exact message I managed to dig out with slapd -d 3 was:
TLS trace: SSL_accept:TLSv1.3 early data
TLS trace: SSL_accept:error in TLSv1.3 early data
5ff9a0ec connection_get(23): got connid=1008
5ff9a0ec connection_read(23): checking for input on id=1008
TLS trace: SSL3 alert read:fatal:unknown CA
TLS trace: SSL_accept:error in error
TLS: can't accept: error:14094418:SSL routines:ssl3_read_bytes:tlsv1 alert unknown ca.
5ff9a0ec connection_read(23): TLS accept failure error=-1 id=1008, closing
Note: fatal:unknown CA
Ok, my stuff got broken, what now?
Luckily the fix is easy: go get the new R3 cert from https://letsencrypt.org/certs/lets-encrypt-r3.pem. To see all of the certs in Let's E's chain-of-trust, go to https://letsencrypt.org/certificates/.
Targeting specifically OpenLDAP and slapd, I went to /etc/openldap/certs/ and symlinked lets-encrypt-r3.pem from /etc/pki/tls/certs/, which is the standard Fedora/CentOS/RedHat location for certificate PEM-files.
After downloading the cert, the next thing was to get its hash:
# openssl x509 -hash -noout -in /etc/pki/tls/certs/lets-encrypt-r3.pem
which output the following (your result must be identical to this):
8d33f237
This hash needs to point to the actual PEM-file as a symlink:
# ln -s /etc/pki/tls/certs/lets-encrypt-r3.pem 8d33f237.0
That should be the fix. Next, systemctl restart slapd and observe a functioning OpenLDAP-server.
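Before touching any LDAP-clients, the TLS chain can be verified directly with openssl s_client; the hostname below is a placeholder for your own server. Expect "Verify return code: 0 (ok)":

openssl s_client -connect my.ldap.server.example.com:636 \
    -CApath /etc/openldap/certs < /dev/null | grep 'Verify return code'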
After any changes to my LDAP-configuration, I verify the result with a query similar to what my Linux system would do, a direct LDAP-search:
$ ldapsearch -H "ldaps://my.ldap.server.example.com/" -x \
-b ou=People,dc=example,dc=com \
"(cn=Jari Turkia)"
and the system wrapper for the above:
$ getent passwd jatu
Confirmed as working!
Done.
Final thoughts
I and everybody else had over a year to prepare for this. Did I? Nope. Had other more "important" things to do instead. Fail!
Advent of Code 2020
Saturday, December 26. 2020
As I don't have too many projects on my hands during this COVID-19 ridden year, I decided to go for the ultimate time-sink, AoC 2020.
For the curious, here are my stats:
----Part 1----- ----Part 2-----
Day Time Rank Time Rank
23 03:35:13 5086 - -
19 09:53:13 8934 09:53:26 5961
18 03:08:25 6521 04:04:25 6063
17 16:12:13 16057 16:12:23 15108
16 03:01:21 9251 03:52:53 6641
15 02:14:07 8224 02:16:33 6855
14 02:54:23 8940 03:52:58 7359
13 04:20:46 13423 06:15:57 7818
12 04:26:10 12452 04:55:22 10616
11 02:34:45 9354 03:22:14 8110
10 02:46:44 15237 04:17:26 10408
9 01:52:12 11970 02:13:22 11396
8 01:49:09 12056 03:06:07 12907
7 04:12:28 14520 04:12:38 11238
6 03:30:29 17152 03:46:03 16033
5 04:28:02 18252 05:15:07 19367
4 02:17:40 14478 02:38:02 10416
3 02:41:11 16008 02:53:35 15164
2 04:30:05 23597 04:37:14 21925
1 >24h 77025 >24h 72031
My weapon-of-choice was Python. I'm a fan of IntelliJ, so I wrote my code with that.
As you can see, I didn't complete all of them. It's mostly about the time required to complete the latter ones. As an example, day 19 took way too many hours of a Saturday; I chose to opt out at that point. I did have time to complete the first part of day 23.
Days 1-9 were really trivial. The task in 7 was really badly worded, but manageable after a couple of failures. 10 was very tricky due to the optimization requirement: it is possible to populate an entire tree, but that is so heavy on resources and time-consuming that going for the math was the better way. 11 and anything after it was anything but trivial. 13 was a huge math problem and took a while to solve. 17 was a 3D game-of-life (a 2D GoL was done in 11 already) and required really careful work. 18 involved solving reverse Polish notation calculations, which I considered rather easy. Then came 19, which involves parsing a set of rules, but given the references to other rules the approach becomes tricky and tangled very soon. I completed it and decided it would take too much of my daily hours to complete any subsequent tasks. However, for 23 I did spend a couple of minutes just to realize my approach was badly optimized for any large set of data. At that point I churned.
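For those unfamiliar with the reverse Polish notation of day 18: the classic Unix dc calculator speaks it natively, operands first, operator after:

# (2 + 3) * 4 in RPN; p prints the top of the stack:
echo '2 3 + 4 * p' | dc
# Output: 20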
Initially I did enjoy the tasks, but when the complexity ramped up, I was torn. I didn't want to skip them just because of the complexity, but on the other hand, spending hours writing code destined to be discarded wasn't the best use of my time while Christmas was nearing. At that point I didn't enjoy the tasks anymore; they were more like chores I "had" to do.
Next year, the AoC will probably be arranged as it has been every year since 2015. I may not participate in that one.
Merry Christmas 2020!
Friday, December 25. 2020
Merry Christmas!
Happy Holidays!
Hyvää Joulua!
Btw. as the maps by Jakub Marian are so cool, here is another one:
Full attribution to his work. Go see the originals at https://jakubmarian.com/merry-christmas-in-european-languages-map/ and https://jakubmarian.com/christmas-gift-bringers-of-europe/. Mr. Marian fully deserves all possible credit, both for the permission to use his material with attribution and for the really cool stuff he has made. Check it out yourself!
podman - Running containers in Fedora 31+
Tuesday, November 10. 2020
To clarify, I'll put the word here: Docker
Naming confusion
Next, I'll go and fail at explaining why Docker isn't Docker anymore. There is an article from 2017, OK, I give up. Is Docker now Moby? And what is LinuxKit?, trying to do the explaining, nearly with success. In that article, the word "docker" is presented a number of times in different contexts. The word "docker" might mean the company, Docker Inc., the commercial technology with the open source packaging Docker CE, or the paid version Docker EE. I'll add my own twist: there might be a command docker in your Linux which may or may not have something to do with Docker Inc.'s product.
In short: What you and I both call Docker isn't anymore. It's Moby.
Example, in Fedora 33:
# rpm -q -f /usr/bin/docker
moby-engine-19.03.13-1.ce.git4484c46.fc33.x86_64
Translation: the command docker, located in /usr/bin/, is provided by an RPM-package called moby-engine.
Further, running dnf info moby-engine in Fedora 33:
Name : moby-engine
Version : 19.03.13
Release : 1.ce.git4484c46.fc33
Architecture : x86_64
Size : 158 M
Source : moby-engine-19.03.13-1.ce.git4484c46.fc33.src.rpm
Repository : @System
From repo : fedora
Summary : The open-source application container engine
URL : https://www.docker.com
License : ASL 2.0
Description : Docker is an open source project to build, ship and run any
: application as a lightweight container.
This moby-thingie is good old docker after all!
Fedora confusion
Install Docker into Fedora 33 with dnf install docker, make sure the daemon runs with systemctl start docker, pull an image and, in an attempt to debug what the container image about to be debugged has eaten, go with a classic:
docker run -it verycoolimagenamehere /bin/bash
... will blow up on your face! What!?
The error message you'll see states the following:
docker: Error response from daemon: OCI runtime create failed: this version of runc doesn't work on cgroups v2: unknown.
Uh. Ok?
- Docker-daemon returned an error.
- OCI runtime create failed (btw. What's an OCI runtime?)
- runc failed (btw. What's a runc?)
- doesn't work on cgroups v2 (btw. What's cgroups and what other versions exist than v2?)
Lot of questions. No answers.
Why is there Fedora confusion?
A google-search will reveal the following information: cgroups is the mechanism which makes Docker tick. There exist versions 1 and 2 of it.
The real nugget is the article Fedora 31 and Control Group v2 by RedHat. I'm not going to copy/paste the contents here, but to put it briefly: in Fedora 31 a decision was made to fall forward into cgroups v2. However, there is a price for doing this, and part of it is broken backwards-compatibility: cgroups v1 and v2 cannot co-exist at the same time. Running v2 has lots of benefits, but the major drawback is that the specific software by Docker Inc. will not work with this newer tech and apparently will not start working in the near future.
Part of the confusion is that nobody else besides Fedora has the balls to do this. All other major distros are still running cgroups v1. This will probably change sometime, but not soon. Once the most popular distros go for v2, all others will follow suit. We've seen this happen with systemd and other similar advances.
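By the way, checking which cgroups version a system runs is a one-liner; cgroup2fs means v2, tmpfs means the old v1 hierarchy:

stat -fc %T /sys/fs/cgroup/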
Mitigating Fedora confusion
When the Fedora-people chose to fall forward, they had some backing for it. They didn't simply throw us users out of the proverbial airplane without a parachute. For Fedora 31 (and 32 and 33 and ...) there exists a software package that is a replacement for docker. It is called podman. The website is at https://podman.io/ and it contains more details. Source code is at https://github.com/containers/podman and it has the explanation: "Podman (the POD MANager): A tool for managing OCI containers and pods". Shortly: it's docker by RedHat.
Installing podman and running it feels like running Docker. Even the commands and their arguments match!
Something from earlier:
podman run -it verycoolimagenamehere /bin/bash
... will work! No errors! Expected Bash-prompt! Nice.
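For tools that insist on calling the command docker, Fedora also ships a compatibility package, podman-docker, which (to my understanding) provides a docker shim pointing at podman:

dnf install podman-docker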
Mitigating differences
There exists a lot of stuff in this world fully expecting the command docker and its configuration ~/.docker/config.json.
A good example is the Google Cloud Platform SDK accessing GCP Container Registry. (Somebody from the back row is yelling: AWS ECR! ... which I'll be skipping today. You'll have to figure out how aws ecr get-login-password works by yourself.)
Having installed the GCP SDK, running the command gcloud auth configure-docker (note! in Fedora 33: CLOUDSDK_PYTHON=python2 gcloud auth configure-docker, to confirm Python 2.x is used) will modify the Docker config-file with the appropriate settings. Podman won't read any of that! Uff. Doing a podman pull or podman login against GCR will politely ask for credentials. And nope, don't enter them. That's not a very secure way of going forward.
Throwing a little bit of GCP-magic here:
- (skip this, if you are already logged in) Log into GCP: gcloud auth login
- Display logged-in GCP-users with: gcloud auth list
- Display the (rather long) OAuth2 credential: gcloud auth print-access-token '<account-id-here!>'
- Glue this into a podman-command:
podman login \
  -u oauth2accesstoken \
  -p "$(gcloud auth print-access-token '<account-id-here!>')" \
  https://gcr.io
- Success: Login Succeeded!
Now you have successfully authenticated and a podman pull will work from your private container repo.
Finally
Lot of confusion.
Lot of questions.
Hopefully you'll find some answers to yours.
Getting rid of Flash from Windows - For good
Wednesday, October 28. 2020
Today, Microsoft released KB4577586, Update for the removal of Adobe Flash Player. This is wonderful news! I've been waiting for this moment for many, many years. Many organizations will cease to support Adobe Flash at the end of this year and this release is an anticipated step on that path. Goodbye, Flash!
The updates need to be manually downloaded from the Microsoft Update Catalog, link: https://www.catalog.update.microsoft.com/search.aspx?q=kb4577586. The reason for this is how early we are. If you're not as impatient as I am, the update will follow its natural flow and eventually be offered automatically by your Windows.
As you can see from the listing, you need to be super-careful when picking the correct file to download. Also, to make things worse, there are some discrepancies in the descriptions:
To get the exact version of your Windows, run winver. It will output something like this:
In that example, the package required is for Windows 10 release 1909. To download the correct package, you also need to know the processor architecture. Whether it's AMD-64, Intel-32 or ARM-64 cannot be determined from the above information. If you have no idea, just go with x64 for the processor architecture; that's a very likely match. Those not running x64 will know the answer without querying.
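If only the architecture is missing, a regular Command Prompt can tell it; note that a 32-bit console on a 64-bit Windows would report x86:

rem Prints AMD64, x86 or ARM64:
echo %PROCESSOR_ARCHITECTURE%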
As a software engineer, I obviously wanted to extract all the required information programmatically. I automated the process of finding out the exact version your particular Windows is running by writing and publishing a helper-script for PowerShell Core. If you are able to run PowerShell Core, the script is available at: https://gist.github.com/HQJaTu/640d0bb0b96215a9c5ce9807eccf3c19. The result will look something like this:
The result will differ on your machine, but that's the output on my Windows 10 Pro release 1909 (x64) build 18363. A couple of other examples are:
Windows 10 Pro release 2004 (x64) build 20241
on my Windows Insider preview
Windows 8.1 Pro with Media Center (x64) build 9600
on my Windows 8.1
If you managed to pick the correct .msu-file, running it will ask whether you want to proceed with the installation:
In case of a mismatch, the error will say something about that particular update not being applicable to your computer:
After installation, go verify that Flash was removed. Running your regular Chrome (or Firefox) won't do the trick; they don't support Flash anyway. What you just did was remove Flash from Windows, aka. Internet Explorer. Go to https://www.whatismybrowser.com/detect/is-flash-installed and observe the results:
On a Windows where KB4577586 hasn't been successfully applied, the message will indicate the existence of Adobe Flash player:
That's it. Flash-b-gone!