Custom X.509 certificate in Windows 10 Remote Desktop Service
Wednesday, October 21. 2020
In a non-AD environment, this is what your average Windows 10 Remote Desktop client will emit on a connection:
AD-domain admins may have seen this only rarely. If an AD has a Certification Authority installed, it is easy and typical to deploy certs from it to workstations and never see the above message. The Net is full of instructions like Replace RDP Default Self Sign Certificate.
Problem that needs to be solved
For anybody like me, not running an AD-domain and simply having a couple of Windows 10 boxes with an occasional need to RDP into them, that popup becomes familiar.
Note: I'll NEVER EVER click on Don't ask me again -prompts. I need to know. I want to know. Suppressing such information is harmful. Getting to know and working on the problem is The WayⓇ.
Gathering information about solution
If this were easy, somebody would have created simple instructions for updating RDP-certificates years ago. Decades even. But no. No proper, reasonably easy solution exists. Searching The Net far & wide yields only bits and pieces, but no real, tangible, turn-key(ish) solution.
While on a quest for information, given the existence of The Net, I found other people asking the same question. A good example is: How to provide a verified server certificate for Remote Desktop (RDP) connections to Windows 10.
As presented in the above StackExchange answer, the solution is a simple one (I think not!!). These five steps need to be done to complete the update:
- Purchase a genuine verified SSL-certificate
- Note: It's a TLS-certificate! The mentioned protocol has been deprecated for many years. Even TLS 1 and TLS 1.1 have been deprecated. So it's an X.509 TLS-certificate.
- Note 2: Ever heard of Let's Encrypt? ZeroSSL? Buypass? (and many others) They're giving away perfectly valid and trusted TLS-certificates for anybody who shows up and can prove they have control over a domain. No need to spend money on that.
- Wrap the precious certificate you obtained in step 1) into a PKCS#12-file, a .pfx, as Windows wants it for importing.
- Note: Oh, that's easy! I think PKCS#12 is the favorite file format of every Regular Joe computer user. Not!
- Install the PKCS#12 from step 2) into the Windows computer account and make sure the user NETWORK SERVICE has access to it.
- Note: Aw, come on! Steps 1) and 2) were tedious and complex, but this is wayyyyyy too hard to even begin to comprehend! Actually doing it is beyond most users.
- Open a Registry Editor and add the SHA-1 fingerprint of the certificate into HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp\ as a binary value called SSLCertificateSHA1Hash.
- Note: Oh really! Nobody knows what a SHA-1 fingerprint is, nor how to extract it from a certificate in a format suitable for a registry binary value!
- Reboot the Windows!
- Note: As with all Windows operations, this requires a reboot.
Mission accomplished! Now the annoying message is gone. Do you want to guess what'll happen after 90 days pass? That's the allotted lifespan of a Let's Encrypt certificate. Yup. You'll be doing all of the above again. Every. Single. Painstaking. Step.
Problems needing to be solved
Let's break this down. As the phrase goes, an elephant is a mighty big creature and eating one is a big task. It needs to be done one bite at a time.
0: Which cert is being used currently?
If you simply want to get the SHA-1 hash of the currently installed RDP-certificate, a simple (or maybe not?) command of:
wmic /namespace:"\\root\cimv2\TerminalServices" PATH "Win32_TSGeneralSetting" get "SSLCertificateSHA1Hash"
... will do the trick. No admin-permissions needed or anything fancy.
To state the obvious problem: you'll be presented with a hex-string, but you have zero idea what it points to or what to do with this information.
Hint:
You can browse the Windows Certificate Machine Store. System certificates are not stored in your personal Certificate Store, so carefully point to the correct container. By default certificates are listed by subject, not SHA-1 hash. Luckily the self-signed RDP-cert is located in a folder called "Remote Desktop", narrowing down the set.
To state the second obvious problem: WMI is a tricky beast. Poking around it from the CLI isn't easy.
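If PowerShell is more your thing, here is a hedged sketch of the same lookups done with the standard Cert: drive and CIM cmdlets (the "Remote Desktop" store name is the folder mentioned above):
# List the self-signed RDP certificate(s) by thumbprint and subject
Get-ChildItem 'Cert:\LocalMachine\Remote Desktop' | Format-List Thumbprint, Subject, NotAfter
# Same SSLCertificateSHA1Hash query as above, without wmic
(Get-CimInstance -Namespace 'root/cimv2/TerminalServices' -ClassName 'Win32_TSGeneralSetting').SSLCertificateSHA1Hash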
1: The certificate
Ok. With this one you're on your own. If you cannot figure out how Let's Encrypt works, doing this may not be your thing.
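That said, for the acme.sh crowd, a hedged sketch of issuing a wildcard cert via DNS validation (assuming Cloudflare-hosted DNS with the CF_Key/CF_Email credentials already exported; swap in your own DNS provider plugin):
# Issue a wildcard certificate with the DNS-01 challenge (Cloudflare plugin used as an example)
acme.sh --issue -d 'example.com' -d '*.example.com' --dns dns_cf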
2: PKCS#12 wrapping
In most scenarios a certificate is delivered as a PEM-formatted file, or a set of two files (one for the public part, a second for the private key). PEM is the native format in a *nix environment and all the tooling there handles it with ease. Converting the PEM data into an interim (Microsoft-created) format for the Microsoft world can be done, but is a bit tricky on *nix. The PKCS#12-formatted data is ephemeral: the certificate stays in that file only for a short while before being imported into Windows, after which it is no longer needed. A cert can be re-packaged if needed, as long as the original PEM files exist. Also, the certificate data can be exported from Windows back to a file, if needed.
While PEM is the native certificate format on *nix, it is completely unsupported in Windows. A simple operation of "here is the PEM-file, import it to Windows" is literally impossible to accomplish! There is a reason why instructions have a rather complex spell with the openssl command to get the job done.
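The spell itself is short once you know it; a sketch using the file names from the example run further down (the output name and the export password prompt are up to you):
# Bundle the PEM certificate and private key into a password-protected PKCS#12 (.pfx) file
openssl pkcs12 -export \
    -in wildcard.example.com.cer \
    -inkey wildcard.example.com.key \
    -out wildcard.example.com.pfx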
2.1: What others have done
When talking about PowerShell scripting and the PEM format, I'll definitely have to credit the people of PKISolutions. They publish a very nice library, PSPKI (source code is at https://github.com/PKISolutions/PSPKI), with PEM-import capability. As the Windows Certificate Store is a complex beast, that code doesn't directly work the way I need it to for importing into the Machine Store. Given the existence of the source code, the logic they wrote can be lifted, modified and re-used to do what the RDP-cert installation process needs.
Among PKISolutions' excellent work is a blog post by Vadims Podāns, Accessing and using certificate private keys in .NET Framework/.NET Core. There he explains in detail the dark/bright/weird ages of how Microsoft's libraries have approached PKI and how things have evolved from undefined to well-behaving to the current situation where everything is... well... weird.
Why I mention this is imperative for the practical approach: the PSPKI library works perfectly in PowerShell 5.x, which is built on Microsoft .NET Framework 4.5. That particular framework version is a bit old and, given its age, falls into the bright bracket of doing things.
However, not living in the past, the relevant version of PowerShell is PowerShell Core. At the time of writing the LTS (Long-Term-Support) version is 7.0, version 7.1 is in preview and version 6 is still actively used. Those versions obviously run on modern .NET Core, an open-source version of .NET running on Windows, Linux and macOS. In the transition from Microsoft .NET to open-source .NET Core, most operating-system-dependent details changed on the way from the closed-source Windows-only thingie. This most definitely includes the implementation and interface details of Public Key Infrastructure.
In short: PSPKI doesn't work anymore! It did in "bright ages" but not anymore in current "weird ages".
2.2: What I need to get done
Sticking with an obsoleted PowerShell-version is a no-go. I need to get stuff working with something I actually want to run.
In detail, this involves figuring out how .NET Core's System.Security.Cryptography.CngKey can import an existing key into the Machine Store as an exportable one. That was a bit tricky even with .NET Framework's System.Security.Cryptography.RSACryptoServiceProvider. Also note that when I talk about PKI, I most definitely mean RSA as well as other practical algorithms like ECDSA, which is fully supported in Cryptography Next Generation (CNG). The biggest problem with CNG is the lack of usable documentation and practical examples.
Having elliptic curve support is important. RSA is absolutely not obsolete and I don't see it becoming so in the near future. It's the classic: having options. This is something I already addressed in a blog post this spring.
Most people use only RSA and can get their job done using the old libraries. I ran out of luck, as I needed to do something more complex. Using the new libraries was the only way forward. That meant lots of trial and error. That's what R&D is about.
3: Import the PKCS#12-packaged certificate into a proper certificate store of a Windows-machine
When the very difficult part is done and a PKCS#12 file exists containing a valid certificate and its private key, importing the information is surprisingly easy to accomplish with code.
On the other hand, this one is surprisingly complex to accomplish manually. Good thing I wasn't aiming for that.
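For the curious, a rough PowerShell sketch of one way to do that import with the plain .NET X509Store API (the CNG gymnastics discussed above go deeper than this; $pfxPassword and the file name are placeholders):
# Import the .pfx into the LocalMachine\My store with a persisted, exportable private key
$flags = [System.Security.Cryptography.X509Certificates.X509KeyStorageFlags] 'MachineKeySet, PersistKeySet, Exportable'
$cert = [System.Security.Cryptography.X509Certificates.X509Certificate2]::new('wildcard.example.com.pfx', $pfxPassword, $flags)
$store = [System.Security.Cryptography.X509Certificates.X509Store]::new('My', [System.Security.Cryptography.X509Certificates.StoreLocation]::LocalMachine)
$store.Open([System.Security.Cryptography.X509Certificates.OpenFlags]::ReadWrite)
$store.Add($cert)
$store.Close()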
4: Inform RDP-services which certificate to use
Getting the SHA-1 hash of an X.509 certificate is trivial. Stamping a well-known value into the registry is easy. Getting the correct permissions set was nearly impossible, but ultimately doable.
5: Make sure RDP-services will use the certificate
For this, there are a number of ways. Many of them involve a reboot, or restarting the service with a PowerShell spell of:
Restart-Service -DisplayName "Remote Desktop Services" -Force
Surprisingly, on a closer look there is a way to accomplish this step without rebooting anything. It's just not well known nor well documented, but Windows Management Instrumentation (or wmic) can do that too! Assuming the SHA-1 thumbprint of the certificate is in the variable $certThumbprint, running this single-line command will do the trick:
wmic /namespace:"\\root\cimv2\TerminalServices" PATH "Win32_TSGeneralSetting" Set "SSLCertificateSHA1Hash=$certThumbprint"
It will update the registry with the appropriate SHA-1 hash, confirm the access permissions and inform the RDP service about the change. None of which requires a reboot, nor an actual restart of the service. Imagine doing the cert update over an RDP session and restarting the service. Yup. You will get disconnected. Running this WMI command won't kick you out. Nice!
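To get that thumbprint into the variable in the first place, one hedged option is to read it straight from the Machine Store entry installed in step 3 (the subject filter is a placeholder):
# Pick the freshly installed certificate and grab its SHA-1 thumbprint
$certThumbprint = (Get-ChildItem Cert:\LocalMachine\My |
    Where-Object { $_.Subject -like 'CN=*.example.com' } |
    Select-Object -First 1).Thumbprint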
Solution
Set of tools I wrote is published in GitHub: https://github.com/HQJaTu/RDP-cert-tools
As usual, this is something I'm using for my own systems, so I'll maintain the code and make sure it keeps working in this rapidly evolving world of ours.
Example run
On the target Windows 10 machine, this is how updating my cert would work (as a user with Administrator permissions):
PS C:\> .\update-RDP-cert.ps1 `
-certPath 'wildcard.example.com.cer' `
-keyPath 'wildcard.example.com.key'
Output will be:
Loaded certificate with thumbprint 1234567890833251DCCF992ACBD4E63929ABCDEF
Installing certificate 'CN=*.example.com' to Windows Certificate Store
All ok. RDP setup done.
That's it. You're done!
Example run with SSH
As I mentioned earlier, I'm using Let's Encrypt. There is a blog post about how I approach getting the certificates on my Linux with acme.sh. There is an obvious gap between getting certs from LE on a Linux and using the cert in Windows 10: the files in question need to be transferred first, and only then can they be used.
A realistic example of the command I'd run to transfer a newly issued LE cert from my Linux box and take it into use as the RDP cert would be:
PS C:\> .\get-RDP-cert.ps1 `
-serverHost server.example.com `
-serverUser joetheuser `
-serverAuthPrivateKeyPath id_ecdsa-sha2-nistp521 `
-remotePrivateKeyPath 'certs/*.example.com/*.example.com.key' `
-remoteCertificatePath 'certs/*.example.com/*.example.com.cer'
The obvious benefit is a simple single command to get and install an RDP-certificate from Linux to Windows. All of the complexity is taken out. My script will even clean up the temporary files so no private key files are left floating around.
Finally
Enjoy!
Admins/users: If you enjoy this tool, let me know. Drop me a comment.
Developers: If you love my CNG-import code, feel free to use it in your application. The more people know how it works, the better.
openSUSE Leap 15.2 in-place upgrade
Sunday, July 12. 2020
Most operating systems have a mechanism to upgrade the existing version into a newer one. In most cases, even thinking about upgrading instead of doing a fresh install makes me puke. The upgrade process is always complex, and missing something along the way is more than likely. These misses typically aren't too fatal, but they may make your system emit weird messages while running or leave weird files in weird subdirectories. I run my systems clean and neat, so no leftovers for me, thanks.
There are two operating systems that are exceptions to this rule of mine:
Windows 10 is capable of upgrading itself to a newer build successfully (upgrading a Windows 7 or 8 into a 10 is crap; do that to get your license transferred, then do a fresh install), and openSUSE. Upgrading macOS kinda works. It does leave weird files and weird subdirectories behind, but the resulting upgraded OS is stable. Other Linuxes than openSUSE are simply incapable of doing a good enough job of upgrading and I wouldn't recommend it. They'll leave turds, residue and junk behind from the previous install, and the only reasonable way is taking backups and doing a fresh install. openSUSE engineers seem to have mastered the skill of upgrading to my satisfaction, so that gets my thumbs up.
As openSUSE Leap 15.2 saw daylight on 2nd July, I felt the urge to upgrade my MacBook Pro to it. Some stories about my install of 15.1 are available here and here.
The system upgrade of an openSUSE is well documented. See SDB:System upgrade for details.
To assess what needs to change, run zypper repos --uri and see the list of your current RPM-repositories. Pretty much everything you see in the list will have a URL with a version number in it. If it doesn't, good luck! It may or may not work, but you won't know beforehand. My repo list has something like this in it:
# | Alias | Name
---+---------------------------+-----------------------------------
2 | google-chrome | google-chrome
3 | home_Sauerland | Sauerland's Home Project (openSUSE
4 | openSUSE_Leap_15.1 | Mozilla Firefox
5 | packman | packman
6 | repo-debug | Debug Repository
7 | repo-debug-non-oss | Debug Repository (Non-OSS)
8 | repo-debug-update | Update Repository (Debug)
9 | repo-debug-update-non-oss | Update Repository (Debug, Non-OSS)
10 | repo-non-oss | Non-OSS Repository
11 | repo-oss | Main Repository
12 | repo-source | Source Repository
13 | repo-source-non-oss | Source Repository (Non-OSS)
14 | repo-update | Main Update Repository
15 | repo-update-non-oss | Update Repository (Non-Oss)
What I always do is back up the repo configurations. Commands like this, run as root, will do the trick and create a file repos-15.1-backup.tar in /etc/zypp/repos.d/:
# cd /etc/zypp/repos.d/
# tar cf repos-15.1-backup.tar *.repo
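Should the upgrade go sideways, rolling the repo files back is the same trick in reverse (again as root), followed by a refresh so zypper re-reads them:
# cd /etc/zypp/repos.d/
# tar xf repos-15.1-backup.tar
# zypper refresh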
Next, upgrade the versions in the static URLs with a carefully crafted sed run:
# sed -i 's/15.1/15.2/g' /etc/zypp/repos.d/*.repo
A non-static .repo file (example: /etc/zypp/repos.d/repo-oss.repo) will contain something like this:
[repo-oss]
name=Main Repository
enabled=1
autorefresh=1
baseurl=https://download.opensuse.org/distribution/leap/$releasever/repo/oss/
path=/
type=rpm-md
keeppackages=0
Notice the variable $releasever. No amount of editing or running sed will change that. Luckily there is an easier way: run zypper with the argument --releasever 15.2 to override the value of the variable. More about repository variables like $releasever is in the documentation at https://doc.opensuse.org/projects/libzypp/HEAD/zypp-repovars.html. zypper arguments are in the man page at https://en.opensuse.org/SDB:Zypper_manual_(plain).
Additional:
As my system depends on stuff found in the Sauerland repo, I did this to upgrade the entries:
# zypper removerepo home_Sauerland
# zypper addrepo https://download.opensuse.org/repositories/home:Sauerland/openSUSE_Leap_15.2/home:Sauerland.repo
Now all the repo URLs are set. As documented, do some preparations:
# zypper --gpg-auto-import-keys ref
# zypper --releasever 15.2 refresh
Finally going for the actual distro update:
# zypper --releasever 15.2 dist-upgrade --download-in-advance
This will resolve all conflicts between old and new packages. If necessary, you'll need to decide a suitable course of action. When all is set, a lengthy download will start. When all the required packages are on your computer, the following prompt will be presented to you:
The following product is going to be upgraded:
openSUSE Leap 15.1 15.1-1 -> 15.2-1
The following 7 packages require a system reboot:
dbus-1 glibc kernel-default-5.3.18-lp152.20.7.1 kernel-firmware libopenssl1_1 systemd udev
2210 packages to upgrade, 14 to downgrade, 169 new, 54 to remove, 2 to change arch.
Overall download size: 1.40 GiB. Already cached: 0 B. After the operation, additional 475.5 MiB will be used.
Note: System reboot required.
Continue? [y/n/v/...? shows all options] (y): y
Going for a Yes will start the actual process:
Loading repository data...
Reading installed packages...
Warning: You are about to do a distribution upgrade with all enabled repositories. Make sure these repositories are compatible before you continue. See 'man zypper' for more information about this command.
Computing distribution upgrade...
When everything is done, the following message will be displayed:
Core libraries or services have been updated.
Reboot is required to ensure that your system benefits from these updates.
This is your cue. Reboot the system.
If your upgrade went ok, you'll end up with a successfully upgraded system. To confirm the version of openSUSE, you can, as an example, query which package owns /etc/motd:
# rpm -q -f /etc/motd
The expected answer should be something like: openSUSE-release-15.2-lp152.575.1.x86_64
Also, the second thing to verify is the version of the Linux kernel your system is running, with a:
# cat /proc/version
In openSUSE Leap 15.2 you'll get something like: Linux version 5.3.18-lp152.20.7-default (geeko@buildhost). 15.2 runs a 5.3-series kernel; if your kernel version isn't in the 5.3 series, something went wrong. If you see that version, you're golden.
Congratulations! You did it!
Quite fast and painless, wasn't it?
HyperDrive PRO 8-in-2 firmware upgrade
Tuesday, May 26. 2020
As mentioned in my USB-C article, here is how I managed to upgrade my 8-in-2 firmware WITHOUT installing Boot Camp on my macOS.
Hyper has a support article Screen flickering with Pro? Please check this out. What they ultimately suggest, and what I instantly rejected is:
- Install Boot Camp
- Boot the Mac into Windows 10
- Run the app to do the Hyper 8-in-2 firmware upgrade
- Enjoy flicker-free life!
My mind was set on jumping directly to steps 3 and 4. I attempted a number of things and kept failing. So, I dug out my humility hat and went for step 2. That one was a great success! Step 1 I would never have gone for. That's for sure. I'm not going to taint this precious Mac with a dual-boot.
Constraints
This is the list of restrictions I painfully figured out:
- HyperDrive 8-in-2 has a dual USB-C connector, making it impossible to physically attach to anything other than a MacBook Pro.
- USB-C extension cords do exist. Purchase one (you don't need two) and lift the physical connectivity limitation.
- The provided application, VmmUpdater.exe, is a 32-bit Windows PE executable; more information can be found @ Hybrid Analysis
- The firmware upgrade won't work on a random PC having a USB-C port
- VmmUpdater.exe won't detect the Synaptics chip without the Apple AMD drivers
- The driver won't install if your hardware doesn't have a suitable GPU
- A Mac won't boot from a USB, unless allowed to do so
- A Mac will boot only to a 64-bit operating system, a 32-bit Windows 10 won't work
- A 64-bit Windows 10 installer doesn't have WoW64 in it to emulate a 32-bit Windows
- To actually upgrade the Synaptics chip's firmware, it needs to be in use
- Windows needs to understand the existence of the video-chip
- Windows needs to actually use the video-chip via HDMI to produce output. Any output will do. No picture --> no upgrade.
- Apple keyboard and touchpad will not work in a default Windows 10 installation
- Some of the Windows drivers used by an MBP can be downloaded from the Internet automatically, IF you manage to get an internet connection for a Windows 10 running on a Mac.
- Some of the Windows drivers are not available
- macOS cannot write to an NTFS partition; it can read the data ok
Requirements
- Hardware:
- A HyperDrive 8-in-2 (to state the obvious)
- A MacBook Pro (to state the nearly obvious), I used 2019 model
- A Windows 10 running in a PC (to state the not-so-obvious)
- USB-stick with at least 8 GiB of storage, capable of booting (I think all of them can)
- USB-keyboard, during tinkering your Mac's keyboard won't work at all times. Any USB-keyboard will do.
- HDMI-cable connected to an external monitor.
- Software:
- Synaptics tool and EEPROM-file provided by Hyper.
- Rufus
- Ability to:
- Download files from The Internet
- Execute Rufus on a platform of your choice to write into the USB-stick. I did this on a Windows 10 PC.
- Run Boot Camp Assistant on a macOS, I used macOS 10.15 Catalina.
- Write files into NTFS-formatted USB-stick. Any Windows 10 can do this.
Steps
1. Save a Windows 10 ISO-image into USB-stick as Windows to Go
Option: You can do this as a second thing.
Windows To Go is something not many have used or even heard of. It's already obsoleted by Microsoft, but seems to work ok. The general idea is to create a USB-bootable Windows you can run on any computer.
That's definitely something I'd love to use for this kind of upgrade!
The easiest way of injecting a Windows 10 ISO-image into USB in a suitable format is Rufus. Go to https://rufus.ie/ and get it. It's free (as speech)!
The GPT partition table is critical, make sure to choose it. These are the options I had:
Warning: The process is slow. Writing image with Rufus will take a long time. Much longer than simply copying the bytes to USB.
2. Download Boot Camp support files
Option: You can do this first.
Recent macOS versions have limited Boot Camp Assistant features. Older ones could do much more, but modern versions can only Download Windows Support Software (it's in the Action menu). Wait for the 1+ GiB download to complete.
3. Transfer files to USB-stick
Warning: The stick is formatted as NTFS. A Mac won't write to it. You need to first transfer the files to a Windows machine, and use Windows to write the files onto the USB.
Transfer the WindowsSupport\ folder downloaded by Boot Camp Assistant to the USB-stick prepared earlier. Subdirectory or not doesn't make a difference; I simply copied the directory into the root of the USB-drive.
Also transfer the files from Hyper support article https://hypershop.zendesk.com/hc/en-us/articles/360038378871-Screen-flickering-with-Pro-Please-check-this-out-.
Note: Only the .exe and .eeprom files are needed; I skipped the .docx as it's not needed during the upgrade.
After successful transfer, you won't need a Windows PC anymore. All the next steps are on a Mac.
4. Enable Mac USB-booting
Study Apple support article HT208198 About Startup Security Utility.
You need to shut down your Mac. Then plant your fingers on the Command and R keys, power it on and wait. The Apple logo will appear, then a white progress bar. You can release Command-R at that point. If you're using an encrypted hard drive like I am, you'll need to know a valid user on that Mac and enter that user's password to be allowed into the Startup Security Utility.
The choices you need to have are:
- Secure boot: No security
- External boot: Allow booting from external or removable media is enabled
This is how Parallels would depict the settings in their KB-article 124579 How to allow Mac to boot from external media:
(Thanks Parallels for the pic!)
Note:
After you're done upgrading the Synaptics firmware, you can go back into recovery and the Startup Security Utility, and put the settings back to higher security. For the one-shot upgrade the settings need to be at their most insecure.
5. Boot Windows 10 from USB
When you reboot a Mac, plant your finger on the Option key (some keyboards say Alt) and wait. Pretty soon a boot menu will appear.
If you had the USB-stick already inserted, it will be displayed. If you didn't, this is your cue to insert your Hyper 8-in-2. The Windows 10 USB can be inserted into the HyperDrive; your Mac will boot from there just fine.
Your choice is to go for EFI Boot and wait for Windows 10 logo to appear.
Congrats! Now you're heading towards a Windows that won't respond to your keyboard or touchpad.
6. Establish Windows to Go functionality
Make sure you have a USB keyboard available. This entire process can be done without the Mac's own keyboard; it's just your own preference how you want to approach this. Getting the built-in keyboard to work requires a keyboard, and success is measured only by results. Make smart choices there!
If you can get the Windows 10 to connect to the internet, that will solve some of the missing-driver problems, as drivers can be downloaded from the Microsoft driver repository. For Internet access, I used a USB dongle to establish a Wi-Fi connection. Doing that requires selecting the correct access point and entering its password. On a machine without a keyboard or mouse that WILL be difficult! Ultimately both the Apple keyboard and touchpad should start working, and the external keyboard won't be needed.
Note: The drivers for both are in the WindowsSupport folder downloaded by Boot Camp Assistant. You can point Windows Device Manager to load driver updates from there.
Warning!
Your Windows To Go will create a massive hiberfil.sys to enable hibernation. This can be a problem, as the hibernation file is typically sized at 75% of your RAM. This particular Mac has lots of RAM and the USB-stick would never be able to store such a file. So, eventually you're likely to run out of storage space. That is harmful, but can be easily remediated.
To fix, run command:
powercfg /hibernate off
Now the massive file is gone. Next, establish proper video. Your Windows Device Manager will look something like this:
There are a bunch of device drivers missing. The most critical one is for the GPU. I tried running WindowsSupport\setup.exe, but it never progressed, so I dropped that approach entirely. Instead, I executed WindowsSupport\BootCamp\Drivers\AMD\AMDGraphics\setup.exe, which enabled proper video to be displayed.
7. Go upgrade Synaptics firmware
All the hard work only for this! Make sure your Hyper 8-in-2 has the HDMI cable connected and the monitor displays Windows 10 on it. If you don't have that, executing VmmUpdater.exe will display an error:
VmmTool: Initialize GPU library failed.
Other VmmTool error variants can occur too. If your Windows detects the monitor and displays video, then you're set!
When VmmTool will display current firmware information, you're golden!
Initially my 8-in-2 had firmware version 3.12.105.
Select the .eeprom file and a progress bar will indicate progress:
Running the upgrade won't take long. Something like 30 seconds or so. When the upgrade is done, it will be indicated:
At this point, yank the 8-in-2 out of the Mac and make sure you don't have USB-C power connected to it. The general idea is to power-cycle the recently updated Synaptics chip. When done, put everything back and run VmmUpdater.exe again.
This time it will indicate the new firmware version:
The version 3.12.105 got bumped to 3.13.107. Nice! Hopefully it will do the trick.
8. Boot into macOS and test
This is it. Will it work?
In my case it did. When my Mac wakes up, a single blink will happen in the external display, but no flickering or other types of annoyances.
Done! Mission accomplished!
(phew. that was a lot of work for a simple upgrade)
USB-C Video adapters for MacBook Pro
Monday, May 25. 2020
In my professional life, I stopped being a Windows guy and went back to being a Mac man. My tool-of-trade is a MacBook Pro 2019. Those new Macs are notorious for having not-so-many ports. My Mac has two (2). Both are USB-C. In my previous MBP (that was a 2014 model), there were ports all over the left side and a bonus USB on the right side.
The problem remains: how to hook up my external monitor to increase my productivity by the magical 30%? Actually, I believe any developer will benefit even more from having a lot of display real estate on his/her desk.
So, new Mac, new toys needed for it. I had a USB-C to DVI converter before, but for this Mac I went online shopping to get the good stuff:
From left to right:
- HyperDrive PRO 8-in-2: A pricey alternative offering a lot of connectivity, disappointing initial quality.
- HyperDrive USB-C Pro Video: An already obsoleted product. Small and does the job. This is my recommendation!
- No-name StarTech.Com CDP2DVI: Cheap thing working perfectly on a Lenovo PC. Unusable with a Mac!
Obsoleted HyperDrive shop screenshot:
The pricey alternative looks like this (on top of a complimentary leather pouch):
The cheap no-name one won't even blink in an MBP. The simple HyperDrive works like a charm! The Mac sees it and the monitor auto-detects the signal. The expensive one blinks when connected to a Mac. It blinks a lot. All the other ports work perfectly, but HDMI and mini-DP not so much. Hyper has a support article about 8-in-2 flickering. There is lots of discussion on StackExchange about Apple being picky about cables and converters, for example: Dual monitor flickering: the secondary monitor does the flicker and External monitor flicker / signal loss. As for the HyperDrive fix, I'll write another post about the suggested firmware upgrade; it's complicated enough to deserve its own topic.
For me, the expensive HyperDrive works as a tiny docking station. I have the USB-C charger connected to it along with the monitor cable. When I want to roam free of cables, all I have to do is detach the dual-USB-C dongle and that's it! Done. Mobility solved.
Final words:
With a Mac, do a lot of research beforehand and choose carefully, or alternatively make sure you'll get a full refund for your doesn't-work-in-my-Mac adapter.
Adobe Flash - Really?
Tuesday, March 31. 2020
I was updating a Windows Server 2019
Adobe Flash!! In 2020!! Really?
Hey Microsoft: Get rid of flash player already. Make it optional. I won't be needing it anytime soon.
Blog server upgrade to CentOS 8
Saturday, December 7. 2019
Since the inception of this blog back in January 2013, my weapon-of-choice has been CentOS Linux. Looking at the release chart @ https://en.wikipedia.org/wiki/CentOS#Latest_version_information, it becomes obvious this is the 3rd major version of CentOS I'm running my blog on. In 2013 only version 6 was available, I must have upgraded to version 7 during 2014, and now in 2019 I'm running version 8. Given how Red Hat and their organization(s) operate, the base Linux for my system is Fedora 28. See Fedora Project releases at https://fedoraproject.org/wiki/Releases.
The only motivation for me to upgrade is technology. RHEL/CentOS almost never upgrade their component versions. They do back-port security patches even when the authors of the original software have given up on their obsoleted stuff; Red Hat does not give up. For people who love things the way they are, that's a good thing. For people like me, it's not that good.
Absolutely necessary things I had earlier, then lost, and now have again:
- HTTP/2
- For how and why this differs from the HTTP/1.1 everybody else is still using, dive into Mr. Curl's book http2 explained. It's freely available @ https://http2-explained.haxx.se/content/en/
- TLS 1.3
- TLS versions 1 and 1.1 have been obsoleted. That leaves TLS 1.2 as the almost-only viable secure protocol.
- Obvious disclaimer for TLS 1.3: As of writing, it is still experimental. In reality not so much. Chrome and Firefox (among other platforms) support TLS 1.3 fully.
- Cloudflare's Head of Research Nick Sullivan is a known 1.3 enthusiast. Read his thoughts @ https://blog.cloudflare.com/rfc-8446-aka-tls-1-3/.
Other highlights:
- PHP 7.3
- My blog software runs on PHP. I upgraded from 7.2, but am too scared to go for 7.4 yet.
- Native dual-stack IPv6/IPv4 networking. This is courtesy of my service provider.
- TLS 1.2 configured to not support any CBC-ciphers, for details see Why did TLS 1.3 drop AES-CBC? as an example
- Inspiration for this taken from Cipherli.st and Security/Server Side TLS on Mozilla wiki.
- Apologies to anybody using IE 11 on Windows Phone 8.1, or Safari versions 6-8 on iOS 6-9/OS X 10.9 or 10.10. You won't see this text, as your devices/operating systems don't support my reasonably secure settings.
- For everybody else: Congratulations on having a decently secure device to do your Internet browsing with.
- tmux
- Terminal multiplexer, https://github.com/tmux/tmux/wiki
- Most of you just SSH into a server and are happy with it. I almost always run my sessions through something that will keep my work safe if a disconnection occurs. To my surprise I keep bumping into sysadmins who either don't know about this or don't see it as a necessary approach.
- I've run GNU Screen for over 25 years now. Not anymore. Uff!
- nftables (https://wiki.nftables.org/), courtesy of RHEL 8 / CentOS 8
- the new packet classification framework that replaces the existing {ip,ip6,arp,eb}_tables infrastructure
- I've run IPchains / IPtables for 21 years now. Not anymore. Arf!
Qualys report on my blog now:
Nice!
Next up: CentOS Stream.
A new attempt to allow changing software versions. This will effectively detach CentOS from RHEL and gear it towards Fedora. It enables CentOS to get newer software as a rolling-release Linux distro, while keeping the changes not-so-aggressive.
I won't run this on my blog server yet. It is still very new at this point, but I'll have it running on a devel-box.
New Weather Station - Davis Vantage Vue - Part 2 of 2: Linux installation
Tuesday, November 5. 2019
This is part two of two of my Davis Vantage Vue weather station installation story. The previous part was about the hardware installation.
Datalogger expansion
The indoors console has a slot for expansion devices. I went for the RS-232 WeatherLink datalogger expansion:
RS232 Cabling
The datalogger has a very short cable with an RJ-11 connector on the other end. The obvious good thing is the common availability of telephone extension cords to get past the 5 cm cable length of the datalogger. A regular landline telephone typically has such RJ-11 connectors in it, so what I did was get an inexpensive extension cord of suitable length.
For computer connectivity, the datalogger box comes with a blue RJ-11 to RS-232 converter. The four connected pins of the converter are as follows:
RS232 and Linux
Most computers today don't have an RS-232 port in them. To overcome this, years ago I bought a quad-RS-232-port USB-thingie:
If you look closely, Port 1 of the unit has a DIY RS-232 connector attached to it. That's my Vantage Vue cable connected to the indoors console. Also, note the lack of the blue RJ-11 to RS-232 converter unit. I reverse-engineered the pins and soldered my cable directly to a D-9 connector to get the same result.
Now the hardware part is done. All the connectors are connected and attached to a PC.
Software
Half of the work is done. Next some software is needed to access the data in the data logger.
For Windows and macOS
For those not running Linux, there is WeatherLink software freely available at https://www.davisinstruments.com/product/weatherlink-computer-software/. It goes without saying that the software is useless without the appropriate hardware it gets its inputs from. I never even installed the software, as using a non-Linux was never an option for me. So, I have no idea if the software is good or not.
For Linux
As you might expect, when going with Linux there are no commercial software options available. However, a number of open-source ones are. My personal choice is WeeWX; it's available at http://www.weewx.com/ with source code at https://github.com/weewx/weewx.
Install WeeWX:
- Download software:
git clone https://github.com/weewx/weewx.git
- In git-directory, create RPM-package:
make -f makefile rpm-package SIGN=0
- As root, install the newly created RPM-package:
rpm --install -h dist/weewx-3.9.2-1.rhel.noarch.rpm
- That's it!
Configure WeeWX:
- (dependency) Python 2 is a requirement. Given Python version 2 deprecation (see https://pythonclock.org/ for details), at the time of writing there is less than two months of Python 2 lifetime left, so this project really should get an upgrade to 3. No such joy yet.
- (dependency) pyserial-package:
pip install pyserial
- Run autoconfig:
wee_config --install --dist-config /etc/weewx/weewx.conf.dist --output /etc/weewx/weewx.conf
- In the array of questions, when being asked to choose a driver, go for Vantage (weewx.drivers.vantage)
- Inspect the resulting /etc/weewx/weewx.conf and edit if necessary:
- Section [Station]: week_start = 0
- Section [StdReport]: HTML_ROOT = /var/www/html/weewx
- (optional for Weather Underground users) Section [StdRESTful], subsection [[Wunderground]]: enter station name and password, enable = true
- Configuring done!
Now you're ready (as root) to start the thing with a systemctl start weewx.
On a working system, you should get LOOP-records out of weewxd instantly. After a little while of gathering data, you should start getting a set of HTML/CSS/PNG files in /var/www/html/weewx. It's a good idea to set up a web server to publish those files for your own enjoyment. This is something you don't have to do, but I strongly advise enabling an HTTP endpoint for your results. It will immensely help in determining whether your system works or not.
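As an example of such an endpoint, here's a hedged sketch for Apache httpd on a RHEL-flavoured box (the directory is the HTML_ROOT from weewx.conf above; the conf.d file name is my own pick):
cat > /etc/httpd/conf.d/weewx.conf <<'EOF'
Alias /weewx /var/www/html/weewx
<Directory "/var/www/html/weewx">
    Require all granted
</Directory>
EOF
systemctl reload httpd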
One reason I love having a weather station around is publishing my data to The Net. I've been tagging along with Weather Underground for years. There have been good years and really bad years, when Wunderground's servers have been misbehaving a lot. Now that IBM owns the thing, there have been some improvements. The most important thing is that there is somebody actually maintaining the system and making it run. Obviously, there have been some improvements in the service too.
Aggregation problem
When I got my system stable and running, I realized my wind data was a flatline. In nature it is almost impossible for it not to be windy for multiple days. I visually inspected the wind speed and direction gauges; they were working unobstructed. However, my console did not indicate any wind at all.
After multiple days of running, just as I was about to give up and RMA the thing back to Davis for replacement, the console started working! That was totally unexpected. Why: 1) the system did not work, 2) the system started working without any action from me. As the problem has not occurred since, it must have been some kind of newness issue.
What next
Now everything is up, running, and stays that way. Still, there's always something to improve. What I'm planning to do is pump the LOOP-records to Google BigQuery.
As a side note: when new information is available, it will be emitted by weewxd as LOOP: <the data here> into your system logs. However, that data isn't used as such. After a period of time, your Vantage Vue will aggregate all those records into a time-slot record, and that is what gets used as your measurement. Since those LOOP-records are otherwise simply discarded, I thought it might be a good idea to base some analytics on them. I happen to know Google BigQuery well from past projects; all I need is to write a suitable subsystem into WeeWX to pump the data into the right place. Then it would be possible to do some analytics of my own on those records.
macOS Catalina upgrade from USB-stick
Saturday, November 2. 2019
macOS 10.15 Catalina is a controversial one. It caused a lot of commotion already before launch.
Changes in 10.15 from previous versions
64-bit Only
For those using legacy 32-bit apps, Catalina brought bad news: no more 32-bit. Me, personally, I didn't find any problems with that. I like my software fresh and updated. Some noisy persons on The Net found this problematic.
Breaking hardware
For some, the installation simply bricked their hardware; see Limited reports of Catalina installation bricking some Macs via EFI firmware for more details. This is bad. When a UEFI upgrade does this kind of damage, that's obviously bad. The broken Macs can be salvaged, but it's very tricky as they won't boot.
Installer size
For the past many years, I've had my OS X / macOS USB-stick. I own a couple of Macs, so you never know when an OS installer will be needed. For Catalina:
8 GiB wasn't enough! Whaaaaat? How big has the installer grown? So, I had to get a new one. Maybe it was about time. At the USB-stick shop I realized that the minimum size of a stick is 32 GiB. You can go to 128 or 256, but 8, for example, isn't an option anymore.
Stupid alerts
The worst part in Catalina is the increased level of nagging:
Every single app is asking me for permission to do something. As usual, the questions are obscure at best. The same phenomenon is happening in Windows and Android: you're presented with a question as a response to your action, and the question has almost no relevance to anything you're trying to do. Yet the people designing these operating systems think it will vastly improve security to present the user with a Yes/No question without proper basis.
An example: for the above Spotify question I chose to respond No. I have no idea why or for what reason Spotify needs to access any of the files in my Downloads folder. That's so weird. Playing music doesn't mean you get to access my stuff.
Get macOS Catalina installer
The process has not changed. Go to the App Store on your Mac and choose macOS Catalina. It will take a while to download all 8 GiB of it.
When the installer has downloaded and automatically starts, you need to quit the installer.
If you continued, the installer would upgrade your Mac and, at the final phase of the upgrade, delete the precious installer files. You need to create the USB-stick before upgrading. This is how the couple of past macOS versions save your disk space.
Create your USB-stick from command line
For a reason I don't fully understand, DiskMaker X version 9 failed to create the stick for me. It did process the files and seemed to do something, but ultimately my USB-stick was empty, so I chose to do this the old-fashioned way. Maybe there is a bug in DiskMaker X and a newer version has that one fixed. Check it yourself @ https://diskmakerx.com.
Insert the USB-stick into the mac and from a command prompt, see what the physical USB-drive is:
diskutil list
My mac has the USB-stick as:
/dev/disk3 (external, physical)
The other drives are flagged either 'internal, physical', 'synthesized' or 'disk image'. None of them are suitable targets for creating the Catalina installer stick.
Note:
Most of the commands below need root access. So, either su - to a root shell or prefix the commands with sudo.
Format the USB-stick. The obvious word of caution is: this will erase any data on the stick. Formatting doesn't wipe the sectors, but given a completely empty filesystem, your bits are quite lost in there. The command is:
# diskutil partitionDisk /dev/disk3 1 GPT jhfs+ "macOS Catalina" 0b
The output is as follows:
Started partitioning on disk3
Unmounting disk
Creating the partition map
Waiting for partitions to activate
Formatting disk3s2 as Mac OS Extended (Journaled) with name macOS Catalina
Initialized /dev/rdisk3s2 as a 28 GB case-insensitive HFS Plus volume with a 8192k journal
Mounting disk
Finished partitioning on disk3
/dev/disk3 (external, physical):
#: TYPE NAME SIZE IDENTIFIER
0: GUID_partition_scheme *30.8 GB disk3
1: EFI EFI 209.7 MB disk3s1
2: Apple_HFS macOS Catalina 30.4 GB disk3s2
Now your stick is ready; go transfer the macOS installer files onto it. The newly formatted stick is already mounted (run the command mount to check):
/dev/disk3s2 on /Volumes/macOS Catalina (hfs, local, nodev, nosuid, journaled, noowners)
Installer files are located at /Applications/Install\ macOS\ Catalina.app/Contents/Resources/:
cd /Applications/Install\ macOS\ Catalina.app/Contents/Resources/
./createinstallmedia \
--volume /Volumes/macOS\ Catalina/ \
--nointeraction
Output is as follows:
Erasing disk: 0%... 10%... 20%... 30%... 100%
Copying to disk: 0%... 10%... 20%... 30%... 40%... 50%... 60%... 70%... 80%... 90%... 100%
Making disk bootable...
Copying boot files...
Install media now available at "/Volumes/Install macOS Catalina"
Now your bootable USB-media is ready!
Upgrade
Boot your Mac, press and hold the Option key to access the boot menu:
A menu will appear, select Install macOS to upgrade:
Only after those yes-I-agree clicks does your upgrade kick in. This is the part where you can go grab a large cup of coffee; no user interaction is required for 30 to 60 minutes:
During the upgrade there will be a couple of reboots. My thinking is that first the upgrade does any firmware upgrades, then the actual macOS upgrade.
When you see the login-screen, your upgrade is done:
That's it! You're done. Enjoy your upgraded mac operating system.
Windows 10 May 2019 update (1903) breaking sleep
Wednesday, October 2. 2019
This happens pretty much on every Windows 10 update. Last time I wrote about it was for Windows 10 Fall Creators Update (1709). The blog post is here.
Yes, again this absolutely ridiculous waking up began. None of my previous fixes/hacks/duct tape/bubble gum patches worked. For weeks I either shut down my PC or just suffered through random power-ups.
When your Windows pops up from sleep, there is one very simple command to figure out why it happened. This is something that has worked for years across multiple Windows versions: powercfg.exe /waketimers
(btw. you need to run that with Administrator permissions, a regular user won't cut it.)
The result in each case was something I didn't expect to see:
Timer set by [PROCESS] \Device\HarddiskVolume6\Windows\SystemApps\Microsoft.Windows.StartMenuExperienceHost_cw5n1h2txyewy\StartMenuExperienceHost.exe expires at on .
What the hell is StartMenuExperienceHost.exe and why would I need it for anything? I have no clue. Also, why would I need that unknown piece of crap so badly that it can wake my computer up? Again, no clue. Honestly, I would delete the darned file if it wasn't part of the Windows system. Given the protected status of system files, I really cannot rip the junk off. Unfortunately.
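While hunting, two related powercfg switches are handy for this kind of detective work (run them elevated, same as above). The first tells you what woke the machine up the last time, the second lists devices that are allowed to wake it:
powercfg.exe /lastwake
powercfg.exe /devicequery wake_armed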
This harassment kept going on for weeks as I was researching a solution. Most of the suggested fixes are about the Troubleshooter, Power settings, disabling Wake Timers, or Automatic Maintenance's option Allow the scheduled maintenance to wake up the device at the set time. None of those really helps.
Finally I cracked it! While looking for things to try, I stumbled upon the Microsoft community discussion System keeps waking. ShellExperienceHost.exe. Build 17074. There, one of the suggestions was to check the Local Group Policy. There is Computer Configuration > Administrative Templates > Windows Components > Windows Updates > Enabling Windows Update Power Management to automatically wake up the system to install scheduled updates. Setting it to Disabled makes the difference!
I have to agree with a comment in Reddit:
Microsoft needs to provide a simple, global switch to NEVER EVER under any circumstances wake the PC up from sleep/hibernation as if the PC was shutdown and unplugged from the wall.
Its really puzzling to try and understand why such an option doesn't exist.
Python script for creating Linux iptables port forwarding rules based on an YAML-file
Saturday, September 14. 2019
I was trying to get Wreckfest multiplayer working. Unlike many modern games, the studio/publisher of this game does not run multiplayer servers. Instead, you need to do this the old-fashioned way. I haven't done this since... the 90s(?) and my port-forwarding setup skills were rubbish. I remember having pretty solid skills for managing stuff like this, but not needing them for the past 20 years does its damage.
Instead of taking a quick look at the manuals and at what others did and hacking together a one-time setup, I chose to create a generic tool to generate the required commands to be run as root. Running the tool doesn't require any elevated privileges, but running the result will. For example, generating Wreckfest rules:
./portforward-yaml-to-iptables.py Wreckfest.yaml 192.168.0.5 eth1
will result:
iptables -t nat -F wreckfest > /dev/null || iptables -t nat -N wreckfest
iptables -t nat -A PREROUTING -i eth1 -j wreckfest
iptables -t nat -A wreckfest -p tcp -m multiport --dports 27015:27030 -j DNAT --to-destination 192.168.0.5
iptables -t nat -A wreckfest -p tcp -m multiport --dports 27036:27037 -j DNAT --to-destination 192.168.0.5
iptables -t nat -A wreckfest -p udp -m udp --dport 4380 -j DNAT --to-destination 192.168.0.5
iptables -t nat -A wreckfest -p udp -m multiport --dports 27000:27031 -j DNAT --to-destination 192.168.0.5
iptables -t nat -A wreckfest -p udp -m udp --dport 27036 -j DNAT --to-destination 192.168.0.5
iptables -t nat -A wreckfest -p udp -m udp --dport 33540 -j DNAT --to-destination 192.168.0.5
That's plenty, but it will do the trick. Also, my design choice is to create a per-game chain for the rules, so cleaning up is easy. Just run iptables -t nat -F wreckfest and be gone with the forwarding.
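And if you want to tear the whole thing down rather than just empty it, a short sketch of the complete cleanup (interface and chain name as in the example above):
# flush the chain, detach it from PREROUTING, then delete it
iptables -t nat -F wreckfest
iptables -t nat -D PREROUTING -i eth1 -j wreckfest
iptables -t nat -X wreckfest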
GitHub project is at https://github.com/HQJaTu/iptables-portforwarding-rule-generator, go see it!
Update 15.9.2019:
I got comments from people: "Why do you need this? UPnP works fine. No hassle."
Well... It would take a million years for somebody to convince me to enable UPnP in my router. The entire concept has been designed from the ground up to be insecure. The obvious goal was to create something that sets up the required port-forwardings in your router quickly and easily without asking you anything.
There is a lengthy discussion about UPnP security in multiple places on the Internet. My favorite one is Is UPnP still insecure?. It covers all aspects of the issue.
The ultimate conclusion still remains: I do not want a random device or piece of software to be able to open access to anything in my LAN from the public Internet without me knowing about it. When such an opening is required, I will open it for the duration needed and then appropriately close the access. UPnP doesn't tick any of the requirement boxes for me. That's why it's a no. It won't be allowed in my systems.
Notes on running openSUSE on a MacBook Pro
Sunday, August 18. 2019
This one is related to a previous post of mine Installing openSUSE Leap 15.1 into a MacBook Pro with encrypted root drive.
kworker CPU-hog
Symptoms:
CPU load is high. The top consumer is kworker:
top - 11:16:47 up 6 min, 4 users, load average: 0.93, 0.70, 0.36
Tasks: 248 total, 2 running, 246 sleeping, 0 stopped, 0 zombie
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
58 root 20 0 0 0 0 R 73.33 0.000 4:29.74 kworker/0:
A sensible explanation of what a kernel worker (or kworker) is can be found in Kworker, what is it and why is it hogging so much CPU?. In this case, high CPU on a kworker is a symptom, not the fault.
This same issue is faced by multiple users on different hardware, for example Kworker is at 100% - I think I've tried everything!. Also Fedora Linux users on MBP are suffering from the same Bug 1192856 - ACPI Interrupt storm causes high kworker CPU usage.
Checking some kernel performance counters (for more about those, see perf: Linux profiling with performance counters), a perf report shows:
Children Self Command Shared Object Symbol
+ 50.51% 0.01% swapper [kernel.kallsyms] [k] cpu_startup_entry
+ 41.67% 0.31% swapper [kernel.kallsyms] [k] acpi_hw_read_port
+ 39.99% 39.99% swapper [kernel.kallsyms] [k] acpi_os_read_port
+ 37.62% 0.00% kworker/0:2 [kernel.kallsyms] [k] ret_from_fork
Something really fishy is going on with acpi_os_read_port. To get the number of interrupts generated, most people run a simple grep . /sys/firmware/acpi/interrupts/*, but not me. My solution is a simple(?) Perl one-liner, wrapped here onto multiple lines for readability:
perl -ne 'next if (!/^\s*(\d+)\s+/);
next if (!$1);
$intrs{$ARGV}=$1;
END {foreach (sort {$intrs{$b} <=> $intrs{$a}} keys(%intrs)) {
printf("%s: %d\n", $_, $intrs{$_});
};
}' /sys/firmware/acpi/interrupts/*
It will output interrupt counts. On my system, the counters are something like this:
/sys/firmware/acpi/interrupts/gpe_all: 4695534
/sys/firmware/acpi/interrupts/sci: 4694806
/sys/firmware/acpi/interrupts/gpe06: 4694582
/sys/firmware/acpi/interrupts/gpe17: 940
If you're really interested in what's happening inside ACPI and how General Purpose Events (GPE) work, read the document ACPI in Linux: Architecture, Advances, and Challenges.
Anyway, it looks like ACPI GPE 06 is firing a lot. Way too much, keeping kworker busy handling the interrupts. This interrupt handling, in turn, takes a lot of CPU power, making your system run hotter than expected.
Fix:
As root, a simple echo disable > /sys/firmware/acpi/interrupts/gpe06 will do the trick, resulting in the CPU cooling down:
The effect is instantaneous and easy to notice. To persist this setting over reboots, things get a bit trickier. A lot of people suggest putting the above disable into a crontab to be run on @reboot. My opinion is clear: this is a system problem with a system fix, and it needs to be addressed by the system, not the user. The @reboot magic doesn't work in /etc/crontab, only in a user's crontab file.
A very good alternative is to go with the acpi_mask_gpe kernel parameter. It is documented in The kernel's command-line parameters, and the docs state "This facility can be used to prevent such uncontrolled GPE floodings". Nice! Exactly what I need. The tricky part is to edit /etc/default/grub and add acpi_mask_gpe=0x06 to the GRUB_CMDLINE_LINUX setting.
That's not all. Simply having the setting in a file won't change anything yet. To make the new setting stick, on most Linuxes you would run update-grub. Some smart brain chose not to implement that helper in openSUSE, so that's not an option. To achieve the same, go for: grub2-mkconfig -o /boot/grub2/grub.cfg
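Put together, the persistence part boils down to this (GRUB_CMDLINE_LINUX keeps whatever your setup already has in it; the mask is simply appended, and the grep is a quick post-reboot sanity check):
# In /etc/default/grub:
#   GRUB_CMDLINE_LINUX="... acpi_mask_gpe=0x06"
grub2-mkconfig -o /boot/grub2/grub.cfg
# After the reboot, verify the parameter made it to the running kernel:
grep -o 'acpi_mask_gpe[^ ]*' /proc/cmdline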
Now you're all set. Reboot and confirm. Of course you didn't fix the actual problem with ACPI; you simply made the kernel ignore the interrupt flood. You may want to settle for that at this point. I did file a bug report about this (Bug 1146023), but it seems the problem is limited to a specific set of MBPs and the Fedora kernel guys are having a hard time reproducing the issue. So, I'm not expecting the SuSE guys to fix this anytime soon either.
Touchpad scroll direction
In 2011, when OS X 10.7 Lion was released, Apple made a drastic change to how touchpad (or as Apple calls it: Trackpad) scrolling works. Personally I totally understand the new logic: they wanted the scrolling user experience to be exactly the same as on your phone's touch screen. You place your finger on the screen and pull it in the direction you want the screen to scroll. Before OS X 10.7 (note: the name change from OS X to macOS happened in 10.12 Sierra), touchpad scrolling was targeted at the scrollbar, not the actual content like on a touchscreen, effectively reversing the direction.
So, as a macOS / iPad user, I want my scrolling to happen correctly, not the Windows way. To change, navigate to Touchpad settings and reverse the vertical scrolling:
Reversing the reverse makes your head spin, but there is a logic there.
Touchpad gestures
In any typical use case, I would use a mouse. As in a real external device, not the skin-on-my-fingertips-damaging trackpad. For reasons unknown to me, it literally hurts to use those trackpads for too many hours. So, I normally steer away from them. It looks like I'm pretty alone with this one, but all I can do is complain and use a mouse.
Typical things you may want to do:
- Right-click: On macOS, holding Ctrl and tapping is a right-click. This won't work on Linux; you need to do a two-finger tap for right-clicking.
- Scrolling: You can go with keyboard or keep dragging the scrollbars, that's ok. Doing a two-finger scroll with the touchpad, however, is much faster and less error prone.
- Zoom: Pinch won't work on Linux; you need to use the actual zoom control of whatever application you want to zoom.
All of Apple's gestures are documented in support article HT204895, Use Multi-Touch gestures on your Mac, but as already mentioned, not all gestures are supported on Linux.
Something else?
Nothing more pops into my mind. If you have anything to comment, please do so.
Installing openSUSE Leap 15.1 into a MacBook Pro with encrypted root drive
Thursday, August 15. 2019
I happen to have a spare 3rd generation 15" MBP. It is late 2013 model, used for years but recently just gathering dust. Back in the days, when the laptop was shiny and new, it was a serious piece of hardware and cost north of 2000 €. Even today, a fully working second hand 15" late 2013 would be priced around 25% of the original value. Couple years later Apple started manufacturing MBPs with faulty keyboards, faulty touch bar (the really weird top row) and faulty batteries. I'm in luck, as my unit pre-dates all of those. Yes, there are also some fatures missing which I'd love to have, fingerprint reader, USB-C -port and improved display to mention the most important ones.
With it just gathering dust, I decided to see how a fresh Linux installation would go. Back in the days, I had a plastic MacBook running Linux. Given its 32 bits and not-so-powerful CPU, I recycled it at the point I got tired of waiting too long for pages to load in Firefox. The amount of JavaScript modern web pages run is staggering. All that requires huge amounts of CPU to parse, process and run. Old low-end models simply won't cope, making web browsing nearly impossible. So, I didn't have a Linux-Mac for a couple of years and now I wanted one.
Known facts
This I could figure out before doing any real installing:
- Installation can be done. openSUSE docs describe the process @ https://en.opensuse.org/SDB:Installation_on_a_Mac
- The built-in Broadcom WiFi isn't properly supported by most Linuxes, that includes openSUSE
- Debian Linux wiki @ https://wiki.debian.org/MacBook/Wireless states: "MacBooks before the late 2008 models use Atheros chipsets, while late 2008 are now using Broadcom 4322 chipset."
- StackExchange answer @ https://unix.stackexchange.com/a/175813/241396 states: "Broadcom 4360 actually comes with either of two distinct chips, 14E4:4360 and 14E4:43A0. There is no driver in Linux for the first one, while wl is an appropriate driver for the second one."
- Arch Linux wiki @ https://wiki.archlinux.org/index.php/broadcom_wireless#broadcom-wl only states the fact that the wl-driver is not open-source.
- It is unclear which exact chipset is in the Mac, but it seems not to be properly supported in Linux.
- My MBP doesn't have Ethernet RJ-45 port
- The existing USB-ports or Thunderbolt-ports could be used to host a network adapter, either wired or wireless, but I chose not to go down that path. Having a USB-dongle sticking out of the Mac isn't something I'm willing to do.
- There is a good chance that a Linux-driver for the built-in Broadcom WiFi exists. The driver can be downloaded from the net, but requires some trickery to install during the openSUSE installer.
- In macOS, drives are encrypted. In Windows using BitLocker is a good option to keep your data secure. Linux supports drive encryption as well.
Prerequisites
- A MacBook Pro with an SSD full of data you don't care about. An already-emptied SSD will do too.
- openSUSE Leap 15.1 ISO-image from https://www.opensuse.org/
- I personally will go for the full DVD, but it is not absolutely necessary. Small network installer will work as well.
- Skills to create a bootable USB-stick from the install image
- Your wireless network credentials and skills to connect a new device into it
- Two USB-sticks:
- One to boot the openSUSE installer from. This stick contains the installation image, cannot easily be written to, and must stay attached to the Mac during the installation.
- Second to transfer the WiFi-driver files
- Broadcom Linux wl-driver from http://download.opensuse.org/repositories/home:/Sauerland:/hardware/openSUSE_Leap_15.1/x86_64/
- The installer will need the file broadcom-wl-kmp-default-6.30.223.271_k4.12.14_lp151.27-lp151.121.1.x86_64.rpm.
- Store the file onto the second USB-stick.
- Since the kernel-default -package has an update, the above RPM needed for the installer won't work in your installed openSUSE. To fix this, you will also need the Broadcom Linux wl-driver from http://download.opensuse.org/repositories/home:/Sauerland:/hardware/openSUSE_Leap_15.1_Update/x86_64/
- Your installed openSUSE will need two files: broadcom-wl-6.30.223.271-lp151.121.4.x86_64.rpm and broadcom-wl-kmp-default-6.30.223.271_k4.12.14_lp151.28.10-lp151.121.4.x86_64.rpm.
- Store these files onto the second USB-stick.
- To make your life easier, having access to another computer while doing the install will help the process greatly. Especially, if you need to get a newer version of the Broadcom-drivers.
- That's it!
Disclaimer for RPM-files:
Those versions of files are valid at the time of writing. Both the install ISO and installed & updated openSUSE Linux are likely to eventually get updated and the versions of those files WILL change. Keep reading! There are instructions for you on how to compensate for updated versions. However, download locations will not change for openSUSE 15.1. The locations won't be valid for 15.2, but I'm sure you got that already.
Step 1: Prepare openSUSE 15.1 install
As instructed in https://en.opensuse.org/SDB:Installation_on_a_Mac, copy the installer ISO-file to the USB-stick. On macOS and Linux, figuring out the exact output drive can be a bit tricky, so be careful with that. On Windows, the best utility for USB-drives is Rufus. You can get it from https://rufus.ie/.
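On Linux or macOS you can also write the image with plain dd. This is only a sketch: the ISO filename is whatever you downloaded, and /dev/sdX is a placeholder for your USB-stick; double-check it, because pointing dd at the wrong device will destroy that device's contents.
# Write the installer image to the USB-stick (Linux example; replace /dev/sdX!)
dd if=openSUSE-Leap-15.1-DVD-x86_64.iso of=/dev/sdX bs=4M status=progress
sync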
While at it, download the required driver from above URLs pointing to Sauerland repository. Both versions are eventually needed, so just download them.
Note: for the installer, only the broadcom-wl-kmp-default -package is needed. We can safely ignore any RPM dependencies, as it is not possible to actually install the package there. For the installed Linux both files are needed; they have a dependency between them. More about that when you're actually tinkering with the files.
Step 2: Boot the Mac into openSUSE installer
Plug in the installer USB (the other USB isn't needed yet). Power up the Mac. When you hear the boot-sound, press and hold the Alt/option-key. When you see the boot-menu, you can release the key. Displayed on your screen, there is a boot drive selection menu:
Go for the USB UEFI, that will start openSUSE installer.
Old geezers like me will remember the times when Macs required special bootloaders and tons of tinkering just to get the USB-boot to work. For past years, thanks to UEFI, USB-booting a Mac isn't any different than booting a PC.
Step 3: Establish facts
When the graphical installer launches, you'll see the license agreement screen:
On the top-right corner of that screen there is also a keyboard map choice. This is your chance to make sure your keys will work correctly in the GUI. This won't affect the text-console.
Switch to text-mode console
There are two critical pieces of information that need to be confirmed to get the WiFi-driver working. Since the openSUSE installer is running in graphical mode, you need to switch to a text-mode console. In a Linux, the console can be chosen with an Alt-<function key> -combo. When a graphical interface is running, the key combo is Ctrl-Alt-<function key>. To make things even trickier, on a Mac the <function key>s require the actual Fn-key to be pressed, so ultimately you'll need to press and hold Fn-Ctrl-Alt and then tap the F2-key to get a text-console.
Confirm kernel version
On a bash-prompt, check the exact Linux kernel version with a:
cat /proc/version
As a result, my install image will display:
Linux version 4.12.14-lp151.27-default
In Linux, all device modules are stamped with the specific kernel version they'll work with, so it is imperative to get the Broadcom driver for that exact kernel version.
Confirm Broadcom WiFi hardware
On a bash-prompt, get a listing of wireless network devices. List PCI-devices with class 280, the class for all network controllers (the listing is at https://pci-ids.ucw.cz/read/PD/); as already established, there shouldn't be too many of those:
lspci -d '::280' -vv -n
On my Mac, the response is:
02:00.0 0280: 14e4:43a0 (rev 03)
Remember from above: "Broadcom 4360 actually comes with either of two distinct chips, 14E4:4360 and 14E4:43A0. There is no driver in Linux for the first one, while wl is an appropriate driver for the second one." So, this is the confirmation we needed. wl is the driver needed and we can move forward with installing the driver.
Step 4: Install the Broadcom driver
This screenshot depicts my process:
Now that you know the kernel version (4.12.14_lp151.27 in my installer) and have copied the downloaded RPMs onto the other USB-stick, plug it in. Remember: the installer USB is still needed and cannot be removed.
Determine the correct device and mount it
In the above screenshot, the USB-stick having the drivers is /dev/sdb. First I checked which storage devices Linux found. There are a number of those: the Apple SSD, the installer USB and the driver USB. Since I knew my driver USB is a Kingston DataTraveler, displaying the block-device partition table by running parted /dev/sdb print helped to identify the correct drive.
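If lsblk happens to be available in the installer environment, it gives a quick overview as well; just a sketch, and your device letters will very likely differ:
# List block devices with size and model to spot the driver USB-stick
lsblk -o NAME,SIZE,MODEL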
I mounted the USB-stick with a:
mkdir /tmp/usb
mount /dev/sdb1 /tmp/usb
Locate the files from the USB-stick.
Unpack the driver RPM
At this point, the Broadcom RPM versioned for the openSUSE installer kernel needs to be popped open. The ones for your running Linux will install as-is, so no need to unpack them. It would be easy & smart to just install the RPM, but in the Linux installer environment it is not possible. Most of the mounted filesystems in the installer are SquashFS filesystems (more info at https://en.wikipedia.org/wiki/SquashFS) and are read-only. Well... you CAN alter a SquashFS, but it is more complicated than just running a couple of commands to extract the driver from the RPM and injecting it into the running system.
To unpack an RPM, run the following in a temporary directory (this is a single-liner, I just wrapped it into two lines for readability):
rpm2cpio broadcom-wl-kmp-default-6.30.223.271_k4.12.14_lp151.27-lp151.121.1.x86_64.rpm \
| cpio -ivd
This will create any subdirectories and files in the current directory where the cpio -command is run. That's why a temporary location is suggested.
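If you only want to peek inside the RPM first, the same rpm2cpio trick works with cpio's list mode:
# List the files inside the RPM without extracting anything
rpm2cpio broadcom-wl-kmp-default-6.30.223.271_k4.12.14_lp151.27-lp151.121.1.x86_64.rpm | cpio -t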
Install wl-driver
As already established, Broadcom on Linux is a messy business. And this Broadcom chip in your MBP is especially messed up. So beware! Trickery ahead.
The other file, which you won't need during installation but will need for your running openSUSE, contains the file etc/modprobe.d/50-broadcom-wl-blacklist.conf. This blacklisting mechanism exists in Linux to prevent listed device-modules from being loaded when a particular device is in use. Now, at the point you're running the commands in your Linux-installer, it's too late: your installer already loaded a bunch of device drivers you won't be needing. So, they must go. The contents of the file suggest what to remove:
# modules blacklisted for broadcom-wl
blacklist bcm43xx
blacklist ssb
blacklist b43
blacklist ndiswrapper
blacklist brcm80211
blacklist bcma
blacklist brcmsmac
You can run the rmmod command
for each of those, but I chose to go for a for-loop:
for module in bcm43xx ssb b43 ndiswrapper brcm80211 bcma brcmsmac; do
rmmod $module
done
In my Mac, at least modules b43, ssb and bcma were loaded.
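If you want to check which of the conflicting modules your installer actually loaded before removing anything, a quick look could be:
# Show which of the blacklisted modules are currently loaded
lsmod | grep -E 'bcm43xx|ssb|b43|ndiswrapper|brcm80211|bcma|brcmsmac'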
Warning: If you DON'T rmmod
conflicting modules, your wl-module will NOT work.
Finally, it's time to go for the good driver. When you unpacked the driver RPM, it contains a subdirectory. My driver is at lib/modules/4.12.14-lp151.27-default/updates
. Go there and run:
insmod wl.ko
There is zero feedback from that command. To get an indication of success, run iwconfig. It will display something like:
wlan0 IEEE 802.11 ESSID:off/any
Mode:Managed Access Point: Not-Associated Tx-Power=200 dBm
Retry short limit:7 RTS thr:off Fragment thr:off
Encryption key:off
Power Management:off
When you see an existing wlan0, you're golden! Now you're good to go forward with graphical installer.
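Another way to get feedback is the kernel log; right after the insmod you can peek at its tail and see whatever the module printed, if anything:
# Check the most recent kernel messages after inserting wl.ko
dmesg | tail -n 20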
Step 5: Install openSUSE Leap 15.1
First, go back to graphical installer. The key combination for that is:
Press and hold Fn-Alt and then tap the F7-key
Add your Mac's Broadcom to Linux
In the graphical installer, go Next from the license screen and witness a blank Network Settings screen. The installer wasn't able to detect any operational network interfaces in your system. Not to worry: now that your Broadcom is on-line, you can add it. Click Add, and add a Wireless device using Kernel module wl:
Moving forward with Wireless Network Card Configuration, a screen to connect to your WiFi will appear:
Scanning for any available networks will provide you with a drop-list of found networks. Your own network needs to be on that list. Just add your authentication details to the dialog and you're good to go.
Instruct openSUSE installer to encrypt your root drive
(This is completely optional, you don't NEED to have an encrypted hard-drive. I just love doing stuff like this.) When the installer wizard reaches Suggested Partitioning, that's your cue. Going for an encrypted root partition is actually surprisingly easy on openSUSE:
Go to Expert Partitioner using the suggested partition as a baseline for your custom partitioning. The only thing you need to change is to enable encryption on your root partition. Find mount point / and Edit it. Just click Encrypt Device and choose a wise password for your encryption. That's it. Then you can Accept the partition plan and move forward.
Add a user to the new Linux
For security reasons, I'll always separate my administrators and regular users. What level of security you're targeting is completely up to you. I personally won't use Automatic Login and won't use the same password for the super-user and regular users. The Local Users dialog will look like this:
Hang tight, you're almost there.
Install!
When you reach the Installation Settings, you're about to start destroying any previous data on your Mac and start installing the Linux:
Enabling SSH and drilling a hole for it into firewall is up to you. Also, the choice of tooling for network management is up to you. I do dislike NetworkManager and obviously choose to drop it and go with wicked. If you want to learn more, go read wicked FAQ at https://github.com/openSUSE/wicked/wiki/FAQ.
When you click Install in the Confirm Installation -dialog, the ball starts rolling. If you missed the previous warning, before this point your data is still intact in the drive, after this point it won't be anymore. This is the part where you can go grab a cup of coffee:
At the successful end, your system will automatically reboot. When the installer reboots the computer, you can remove the installer USB. It won't be needed anymore.
Step 6: Reboot into your newly installed Linux
If you went with encrypted partition, GRUB will require the password to decrypt the drive to load Linux kernel:
Hint: If you didn't already, just yank out the installer USB.
The "funny" thing about an encrypted root partition is: when your kernel is loaded by GRUB, the kernel will need the same password again!
Yes, punch in the same password again to reach login-screen.
Add your Mac's Broadcom to Linux - Again!
Once logged in, all the hard work you did with the Broadcom-driver is lost. Your newly installed openSUSE will NOT have the Broadcom-driver installed. What! What! What?
But wait! This time installing is much easier. You actually CAN simply install the RPMs from the other USB-stick. On an installed Linux, the system can be written into.
Confirm the kernel version (again) and notice how it is different. My system has:
Linux version 4.12.14-lp151.28.10-default
Just like before, figure out which device the USB-stick is (with great likelihood, it is not the same device name anymore) and mount it. Once mounted, as root you can do something like:
rpm --install -h *.rpm
Installing the RPMs won't make your network operational yet. In a Linux, there are always ways to avoid rebooting, but in this instance I found it simply easier to go the Windows-way and reboot. You can do that at this point. Alternatively, if you don't need the network just yet, you can also remove the second query for the encryption password first and then reboot, fixing two problems with a single reboot.
Remove the 2nd password prompt
This is something I wouldn't do on a shared machine. Since I'm almost certain nobody else will use my laptop, in this instance I'm willing to trade a bit of security for ease of use. If a logged-in user can access the encryption key-file, my encryption security is gone! Before a second encryption key is added, the plain-text password is not stored anywhere on the system. To keep my system still reasonably secure, I need to make sure the permissions of the key-file are correctly set and also keep any malware or unwanted software from stealing my precious key-file.
The process for setting up an encrypted root in openSUSE is described well in https://en.opensuse.org/SDB:Encrypted_root_file_system. As instructed there, create a keyfile, add the keyfile as a valid key into LUKS and secure the setup:
To relax my system's security this way, the general idea is to read some random bytes from /dev/urandom and create a random-content key-file. The docs use 1024 bytes; I went double that for a 2048-byte keyfile. In my sequence, I first create an empty file, then lock all other users out of it and then populate the file with random bytes:
touch /.partition.35d5293d-6fe0-4eaf-987f-4f2eca8ca6d7.key
chmod 600 /.partition.35d5293d-6fe0-4eaf-987f-4f2eca8ca6d7.key
dd if=/dev/urandom \
   of=/.partition.35d5293d-6fe0-4eaf-987f-4f2eca8ca6d7.key \
   bs=1024 count=2
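Adding the keyfile as a valid key into LUKS (the step mentioned above) is done with cryptsetup. A sketch, assuming the LUKS container is the partition referenced in /etc/crypttab; the UUID below is from my system, yours will differ, and you'll be prompted for the existing encryption password:
# Add the key-file as an additional key to the LUKS container
cryptsetup luksAddKey /dev/disk/by-uuid/7e49a147-b626-41cc-bc89-49e8a13b9cb0 \
    /.partition.35d5293d-6fe0-4eaf-987f-4f2eca8ca6d7.key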
Then this newly created filename needs to be added in two places. First, /etc/crypttab. After setup, there are only two columns there. As instructed in the documentation, a third column needs to be added. Note: this single line with three columns is split into three lines here for readability. If you fail to keep it as a single line, your system will fail to boot:
cr_ata-APPLE_SSD_SM0256F_S1K4NYBF401257-part3
UUID=7e49a147-b626-41cc-bc89-49e8a13b9cb0
/.partition.35d5293d-6fe0-4eaf-987f-4f2eca8ca6d7.key
Second, into the initRD, by creating a new Dracut configuration segment /etc/dracut.conf.d/99-root-key.conf:
echo -e 'install_items+=" /.partition.35d5293d-6fe0-4eaf-987f-4f2eca8ca6d7.key "' \
| sudo tee --append /etc/dracut.conf.d/99-root-key.conf
Yet again, to keep your system secure, make sure nobody else can access your /boot
-stuff:
chmod -R go= /boot/
To have this Dracut segment have any effect, you need to run mkinitrd
as root. If you want more information about initRDs or Dracut, go to https://dracut.wiki.kernel.org/.
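For completeness, that regeneration step is just the following, run as root; on a dracut-based system, dracut --force should achieve the same:
# Rebuild the initrd so the new Dracut config (and the key-file) gets included
mkinitrd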
Now, you're ready to reboot the Linux. Both WiFi and password setups are done.
Step 7: Reboot and verify setup
You will need to enter the encryption password for GRUB, no change there. However, your Linux should go directly to login-screen. If it doesn't, you failed somewhere.
Also, after a login, your WiFi should work. To confirm, go to a prompt:
Since installer saved your wireless connection details, now you have network connectivity also in your installed Linux!
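If you'd rather confirm from the command line, a couple of quick checks could be (just a sketch; the interface name wlan0 is an assumption and the target host is arbitrary):
# Confirm the wireless interface is up and has an IP address
ip addr show wlan0
# Quick connectivity test
ping -c 3 download.opensuse.org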
Step 8: Finalize
Remember the part where your installer used a different version of the Linux kernel than your installed system? Kind of hard not to remember it, right. Now some additional work is needed to keep your system working:
As root, run this in a bash-prompt (again, a single-liner):
zypper addrepo \
https://download.opensuse.org/repositories/home:Sauerland/openSUSE_Leap_15.1_Update/home:Sauerland.repo
The setup information is available at https://software.opensuse.org/download.html?project=home%3ASauerland&package=broadcom-wl.
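After adding the repo, refreshing it and pulling the driver packages from it would look roughly like this (a sketch; run as root, and expect the exact package versions to have moved on by the time you read this):
# Refresh repository metadata and (re)install the Broadcom packages from Sauerland's repo
zypper refresh
zypper install broadcom-wl broadcom-wl-kmp-default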
Finally: Go to YaST2 and use the GUI to inspect all the software repositories your system has. Sauerland's repo must be enabled, so that the Broadcom driver gets updated whenever openSUSE's kernel-default -package does.
Step last: All done
That's it. Now your Mac is successfully running openSUSE Leap 15.1 Linux.
Enjoy!
Zebex Z-3151HS bar code reader - Windows 10 - Unknown USB, USB device not recognized
Thursday, July 11. 2019
Among the number of toys I have are a couple of Zebex bar code readers. Since forever (or 2013), I've had issues with Windows not detecting the newly attached USB-device correctly. See my previous article Windows 7 unable to detect a HID barcode scanner for details. That was on a Zebex Z-3000, to be exact.
During all these years, the problem has not solved itself and all my attempts to fix it have been not-so-successful. The reader I've most recently used is an improved one using a laser to read the data, a Z-3151HS (the HS stands for High Speed). The version query will return a single-line string containing the following:
Model Z3151HS S01.01 Ver 01.27 Date 01-18-2019
On a plugin, the USB human-interface-device (or HID, or keyboard) does this:
Nobody loves seeing a USB device not recognized -popup, or Device Descriptor Request Failed in Device Manager. That's a sign of a major malfunction.
Luckily, in this case, without spending too much time, a little bit of troubleshooting bore fruit and both the reason for the problem and a fix were found. The bar code reader loves being in the wrong mode. This is a USB-device, but no matter what I try, on plugin the device will be in IBM PC/AT/PS/2 Keyboard emulation mode. That incorrect mode makes the newly plugged-in USB-device not respond to a Windows-initiated device query. As you might guess, your Windows will be pissed about the failure and bark about it. Obviously, the device should be in USB-mode to handshake properly with Windows, that's a no-brainer.
Again, the mode can be changed easily by scanning some bar codes (these are actually in the manual too):
Enter configuration mode, switch to USB-mode and end configuration mode. Poof! Now Windows 10 sees the HID USB-thingie ok and it works correctly. The obvious problem is that you really do need to do this EVERY SINGLE TIME you plug the device into your computer. Not cool! Numerous attempts to make the mode-selection stick have failed. If you have a suggestion, I'd be happy to try it.
Bonus: Firmware upgrade
My unit had a measly 1.03 firmware when I bought it. Obviously, I've been upgrading firmwares while hoping the problem would be solved somehow. No such joy yet.
New firmwares can be downloaded from Zebex website @ https://www.zebex.com/en/product/index/20/Z-3151HS with appropriate tooling to do the update. Since this is on a computer, nothing will ever work as smoothly as you'd like. On an attempt to run the firmware upgrade tool, this will happen:
In front of your eyes, a failure will occur and a message will state: Make sure the device work under USB-Boot mode. I'm not sure whose English grammar is better, mine or that of the person who wrote that sentence, but the problem stands: you need to get your device into USB-Boot -mode and you have zero clue on how to do that.
Tons of googling, reading forum posts and manuals will reveal that such a sequence does exist. You need to scan the following bar codes:
Yet again, go for configuration mode and place the device into USB update -mode. After that, your firmware update will proceed to the actual update.
I hope these tips will help some poor user trying to get their reader working.
Enjoy scanning!
systemd setting overrides
Monday, June 17. 2019
In a couple of past posts of mine, I've instructed people to alter systemd service descriptions. Examples: Randomness in computers and Handling /run with systemd. As we all know, systemd is quite a beast. It just keeps piling on complexity as we stumble along trying to grasp all of it.
For example:
To add more command arguments to rngd.service, you can do what I instructed earlier: copy /usr/lib/systemd/system/rngd.service to /etc/systemd/system/, edit the file and do a systemctl daemon-reload to apply the changes.
If you want to go the more systemd'ic way (Python developers always say "pythonic"), you can do a:
systemctl edit rngd
This command will open a text-editor. The editor is your default one from EDITOR
environment variable, or if you haven't picked one, a default will be chosen for you by your favorite Linux distro vendor.
What you should make the text file contain is the following three (3) lines:
[Service]
ExecStart=
ExecStart=/sbin/rngd -f --rng-device=/dev/TrueRNG --fill-watermark=4000
Those lines will re-define the service description: first by clearing out the previous value(s) and then by defining a new one. On save, the setting takes effect automatically and you don't have to do anything else. The setting is persisted in a file called /etc/systemd/system/rngd.service.d/override.conf. Of course, you can create that particular file yourself, but then the familiar systemctl daemon-reload is needed to make your changes stick.
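To double-check what systemd actually ended up with, you can print the unit together with its drop-ins:
# Show rngd.service plus any override.conf drop-ins currently in effect
systemctl cat rngd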
Nice and easy!
Adding a source of randomness to a Linux
Monday, June 3. 2019
Randomness in computers
You don't need to know much about computers to understand that computers cannot do random things. Yes, all programming languages and libraries do offer you a rand() -function to emulate randomness. However, the resulting output will follow the carefully crafted programming implementing this "randomness". The most trivial pseudo-random functions merely provide a sequence of numbers that appears random, but this sequence can be reset to start from the beginning, making the "randomness" predictable. That's not really very random, huh!
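You can see that resettability even in plain bash: assigning to RANDOM seeds its built-in pseudo-random generator, and the same seed replays the same sequence (the exact numbers depend on your bash version, so yours will differ from mine):
# Seed bash's PRNG and print three "random" numbers
RANDOM=42; echo $RANDOM $RANDOM $RANDOM
# Re-seeding with the same value repeats the exact same sequence
RANDOM=42; echo $RANDOM $RANDOM $RANDOM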
Improved randomness in computers
To be fair, there do exist improved pseudo-random algorithms which take their initial seed-values from something volatile (time is one such volatile parameter), making the quality of randomness better. Still, even a high-quality pseudo-random algorithm is just a complex sequence of operations, which will produce duplicate results on the same input values. Sometimes it's just very tricky to craft a situation where all of the input values would match.
If somebody is capable of doing that, your randomness changes into predictability. Read the story of Dual_EC_DRBG on Wikipedia https://en.wikipedia.org/wiki/Dual_EC_DRBG. When you're generating your precious private keys, you don't want anybody (including NSA) to be able to guess what you have there.
Random source in Linux
Since a proper random source is something every single user, developer and sysadmin would love to have, the problem has been addressed in your Linux by the authors of the operating system. An excellent description can be found in the Wikipedia article https://en.wikipedia.org/wiki//dev/random#Linux. Briefly put, your Linux will collect environmental entropy from a number of sources (including human interaction with keyboard and mouse) into a pool, which can then be used to produce naturally random numbers. It actually works very well, the quality of randomness is top-notch.
The obvious problem with this approach is that you cannot pull too many random numbers out of this source without exhausting it. The fix is to keep typing something while moving your mouse (not a joke!) to generate entropy for the random source. This will eventually help fill the entropy pool and /dev/random will spit out a couple more bytes.
Those users who have exhausted their /dev/random on an idling rack server without a console keyboard, mouse and video know that it takes painfully long for the entropy pool to fill. A busy server doing something will fill the pool much faster.
A real random source
If you need a real, proper random source which works without human intervention and can provide really good randomness as a stream, there are hardware options. I know of two good ones: the Simtec Electronics Entropy Key and the ubld.it TrueRNG Hardware Random Number Generator.
Note: if you consider getting one, get the TrueRNG version 3 (http://ubld.it/truerng_v3). It's just that I have the 1st gen version at hand and haven't found a reason to upgrade.
It is essentially a USB-stick.
Linux lsusb
info essentially identifies it as a Microchip (vendor ID 0x04d8) manufactured USB-device (with ID 0xf5fe) providing RS-232 communications:
Bus 002 Device 009: ID 04d8:f5fe Microchip Technology, Inc.
Device Descriptor:
bLength 18
bDescriptorType 1
bcdUSB 2.00
bDeviceClass 2 Communications
bDeviceSubClass 0
bDeviceProtocol 0
bMaxPacketSize0 8
idVendor 0x04d8 Microchip Technology, Inc.
idProduct 0xf5fe
bcdDevice 1.00
iManufacturer 1 ubld.it
iProduct 2 TrueRNG
iSerial 0
bNumConfigurations 1
Configuration Descriptor:
bLength 9
bDescriptorType 2
wTotalLength 0x0043
bNumInterfaces 2
bConfigurationValue 1
iConfiguration 0
bmAttributes 0x80
(Bus Powered)
MaxPower 100mA
Interface Descriptor:
bLength 9
bDescriptorType 4
bInterfaceNumber 0
bAlternateSetting 0
bNumEndpoints 1
bInterfaceClass 2 Communications
bInterfaceSubClass 2 Abstract (modem)
bInterfaceProtocol 1 AT-commands (v.25ter)
iInterface 0
CDC Header:
bcdCDC 1.10
CDC ACM:
bmCapabilities 0x02
line coding and serial state
CDC Union:
bMasterInterface 0
bSlaveInterface 1
CDC Call Management:
bmCapabilities 0x00
bDataInterface 1
Endpoint Descriptor:
bLength 7
bDescriptorType 5
bEndpointAddress 0x81 EP 1 IN
bmAttributes 3
Transfer Type Interrupt
Synch Type None
Usage Type Data
wMaxPacketSize 0x000a 1x 10 bytes
bInterval 1
Interface Descriptor:
bLength 9
bDescriptorType 4
bInterfaceNumber 1
bAlternateSetting 0
bNumEndpoints 2
bInterfaceClass 10 CDC Data
bInterfaceSubClass 0
bInterfaceProtocol 0
iInterface 0
Endpoint Descriptor:
bLength 7
bDescriptorType 5
bEndpointAddress 0x02 EP 2 OUT
bmAttributes 2
Transfer Type Bulk
Synch Type None
Usage Type Data
wMaxPacketSize 0x0040 1x 64 bytes
bInterval 0
Endpoint Descriptor:
bLength 7
bDescriptorType 5
bEndpointAddress 0x82 EP 2 IN
bmAttributes 2
Transfer Type Bulk
Synch Type None
Usage Type Data
wMaxPacketSize 0x0040 1x 64 bytes
bInterval 0
Device Status: 0x0001
Self Powered
And by looking at /dev/
, there is a /dev/ttyACM0
. That's how udevd will populate a CDC-device when it sees one.
How is this a "true" random source?
Oh, that's easy. The device will produce a random 0 or 1 bit constantly when it's on. Or to be precise, there is an internal algorithm producing those based on a constant flow of electrons across a transistor PN-junction. The exact phenomenon is called the avalanche effect, or avalanche breakdown. For those who can do electronics, there is a good explanation of this in Difference Between Avalanche Breakdown and Zener Breakdown (I borrowed the visualisation pic from the above link).
To (over)simplify: in a carefully constructed electronic circuit, inside a transistor an electron may or may not be emitted on the other side of a semiconducting junction. The occurrence is as random as it gets in nature. Other circuitry will detect this random flow of electrons (or lack of flow) to produce ones and zeros.
What makes this really good for randomness is that it is well established that this avalanche of electrons will happen, and it will happen often enough to produce a stream of events. It's just that we don't know exactly WHEN the avalanche will happen. If you time-slice this into slots, a slot can be empty (no avalanche) or full (electrons avalanching).
Linux tweaking:
udev
Anybody having multiple devices in their Linuxes knows that you really cannot control which device name a specific device will get on reboot. To overcome that, udevd can be instructed to do things when it sees a device. My rules for the TrueRNG include setting it to the highest possible speed and creating a symlink-device so that I can point to a known source of randomness. Also, I'm loosening access to that source of randomness to any users belonging to the dialout-group. If I didn't do that, only root would have access to this fine random-source.
My /etc/udev/rules.d/99-TrueRNG.rules
contains:
SUBSYSTEM=="tty", ATTRS{product}=="TrueRNG", SYMLINK+="TrueRNG", RUN+="/bin/stty raw -echo -ixoff -F /dev/%k speed 3000000"
ATTRS{idVendor}=="04d8", ATTRS{idProduct}=="f5fe", ENV{ID_MM_DEVICE_IGNORE}="1", GROUP="dialout", MODE="0664"
If you want to take your random-device for a spin, you can do something like:
dd if=/dev/TrueRNG of=random.bytes bs=64 count=1024
That would create a file of 64 KiB containing very, very random bytes. In theory you can just cp data out of the character device, but since it has an infinite flow, you'll need to cut it off at some point.
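An equivalent way to cut the infinite stream is head with a byte count; a small sketch:
# Read exactly 64 KiB from the device and then stop
head -c 65536 /dev/TrueRNG > random.bytes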
rngd
Remember the part I said earlier about Linux using your keypresses and mouse movements as entropy source for randomness. Even with the USB-stick popped into a PC, that still remains the case. What needs to be done next is to offer a helping hand to the Linux kernel and make sure the entropy pool is always full.
My Fedora has a package called rng-tools. It is packaged from Mr. Horman's https://github.com/nhorman/rng-tools. In there are the tools for pumping those precious truly random bits out of the USB-source into the Linux kernel's entropy pool. By default, rngd will use /dev/hwrng as the source of randomness. Some Linuxes don't have that device at all, some Linuxes point it at the CPU's random source. What's guaranteed is that it will not point to your USB-stick! We need to change that.
Btw. you might be horrified by the fact that something is fiddling with your randomness. The exact bits transferred from the USB to the entropy pool won't be the actual bits coming out of /dev/random. Your keypresses and many other events are still a factor. It's still a good idea not to run randomness-monitoring malware or spyware on your Linux.
With systemd, I created a copy of /usr/lib/systemd/system/rngd.service into /etc/systemd/system/rngd.service. The contents of the copy in /etc/systemd/system/ can be freely modified and it has priority over the /usr/lib/systemd/system/ one. The only change I made was to have the ExecStart -line say:
ExecStart=/sbin/rngd -f --rng-device=/dev/TrueRNG --fill-watermark=4000
When the rngd-service is started, it will use the USB-stick as its source and make sure there are at least 4000 bits of entropy in the pool.
Making sure rngd setup works
At any given point, you can query how many bits are available in the Linux entropy-pool:
cat /proc/sys/kernel/random/entropy_avail
Since my setup is working correctly, it will display a number greater than 4000 and smaller than 4096. The upper limit comes from /proc/sys/kernel/random/poolsize, which is a hard-coded number in the Linux kernel source.
Hint: If you do the stupid thing like I did and set /proc/sys/kernel/random/write_wakeup_threshold (using --fill-watermark) to 4096 (or above), your rngd will keep hogging CPU like there is no tomorrow. It is impossible for the pool to contain the maximum number of bits at any given time. Give your system a break and set the threshold a bit lower than the max.
Finally
It's always nice to know for a fact that your random numbers really are random. This fact can be verified, and has been verified, by a number of other people.
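If you want to run a quick sanity check yourself, the rng-tools package also ships rngtest, which runs the FIPS 140-2 test battery on a stream of bits. A sketch, reading straight from the TrueRNG device; a small number of failed blocks out of 1000 is statistically expected even for a good source:
# Run 1000 blocks of FIPS 140-2 tests against the hardware random source
rngtest -c 1000 < /dev/TrueRNG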
Enjoy!