Tracking your location via Mobile network
Sunday, October 25. 2020
Privacy, especially yours/mine/everybody's, has been a hot topic ever since the Internet began its commercial expansion in the early 90s. Being an important topic, it is generally thought that everybody should have the right to privacy. In the EU, the latest regulation is called the Regulation on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, or the General Data Protection Regulation. In California, USA, they'd call it the California Consumer Privacy Act.
Privacy protection gone wrong
Both of the above-mentioned regulations have good intentions. Still, forcing websites to bombard all of their users with stupid questions is a complete waste of bandwidth! Everybody is tired of seeing popups like this while surfing:
Somehow the good intention turned on itself. Literally nobody wants to make those choices before entering a site. There should be a setting in my browser, and all those sites should read that setting and act on it without bothering me.
Mobile phone tracking
Meanwhile, your cell service provider is using your data and you won't be offered a set of checkboxes to tick.
As an example, Telia's anonymized location data helps the Finnish Government fight the coronavirus (April 2020). This corporation has a product called Crowd Insights. Anybody with a reasonable amount of money (the actual price of the service is not known) can purchase location data of actual persons moving around a city. There is a brief investigation by the Finnish Chancellor of Justice stating that the service is legal and won't divulge any protected data. The decision (unfortunately, only in Finnish), Paikannustietojen hyödyntäminen COVID-19 –epidemian hillinnässä (roughly: Utilizing location data in containing the COVID-19 epidemic), states that the service's data is aggregated daily or hourly, and while a reasonably accurate location of a mobile device can be obtained, if that location data doesn't identify a person, obviously, it's not protected by any laws.
On the topic of COVID-19, Future of Privacy Forum has published an article A Closer Look at Location Data: Privacy and Pandemics, where they raise points from ethics and privacy perspective of such tracking. A good read, that!
Application of mobile movement tracking
Here is one. A newspaper article titled "Puhelinten sijaintitiedot paljastavat, kuinka ihmismassojen liikkuminen yöllisessä Helsingissä muuttui" (roughly: Phone location data reveals how the movement of crowds in nighttime Helsinki changed) (https://www.hs.fi/kaupunki/art-2000006698336.html):
For non-Finnish-speaking readers: the article is about the movement of people in the center of Helsinki. As a reference point, there is Crowd Insights data from September 2019, a time before COVID-19. Movement data from the pandemic is from September 5th, October 10th and October 17th. To state the obvious: in 2019, between Saturday 9pm and Sunday 9am, people moved a lot. What's also visible is how this global pandemic changed that behaviour. In September 2020 there were no strict regulations for night clubs and bars yet, which is clearly visible in the data.
Anyway, this is the kind of data that can easily be gathered of you walking around your hometown streets with your mobile in your pocket. Doing the same walk without a cell-network-connected mobile device wouldn't show in that data set.
What! Are they tracking my movements via my cell phone?
Short: Yes.
Long: Yes. Your cell network provider knows, every second, to which cell tower each device in their network is connected. They also know the exact location of that identified cell tower. Whether somebody puts this coarse location information to use is another matter.
Everybody knows the Hollywood movie cliché where a phone call is being traced and somebody throws the phrase "Keep them talking longer, we haven't got the trace yet!". In reality they'll know your location whenever your phone is turned on. It doesn't need an ongoing call or a message being received. This is how all cell networks are designed and that's how they have always operated. They know your coarse location at all times. How exact a "coarse" location is, depends. Who has access to the location information is protected by multiple laws. The point is: they have the information. At all times.
Example illustration from 3GPP spec TR 23.912 Technical report on Super-Charger:
I'm skipping most of the TLAs (Three-Letter Acronyms) in the pic, but the main concept is the car (with a phone in it) moving around the network. A HLR (or Home Location Register) will always keep track of which BSS (note: I think it's called Radio Network Subsystem, RNS, in UMTS and LTE) the mobile device talks to. This BSS (or RNS) will send updates on any jumping between the serving cells.
To simplify this further: just replace the car with a phone in your pocket, and this fully applies to people bar-hopping in the center of Helsinki.
Database of cell towers
As the cell tower locations are the key component when pinpointing somebody's location, we need to know which cell towers exist and their exact locations. Unfortunately, telcos think that's a trade secret and won't release such information to the general public. At the same time, our phones can show the identifier of the cell tower the phone is connected to, plus some hints about neighbouring cells. I wrote about iPhone field test mode a couple of years back. A phone also has a GPS pinpointing the exact location where a particular cell tower and a couple of its friends are seen. Combined with the knowledge that a phone typically connects to the tower with the best signal, it is possible to apply some logic. After gathering a couple more data points, a coarse location of the cell tower your phone connects to can be calculated.
Being an iPhone user, I'm sorry to say an iPhone is not technically suitable for such information gathering. Fortunately, an Android, being much more open (to malware), is. The necessary interfaces exist in the Android system to query for cell tower information with an app like Tower Collector. With this kind of software it is possible to create records of cell tower information and send them to OpenCelliD to be further processed and distributed to other interested parties.
OpenCelliD website contains an interactive map:
The above example is from my home town of Lappeenranta, Finland. What it depicts is an approximation of an LTE cell tower location with the following attributes:
- MCC: 244
- MNC: 91
- LAC: 37322
- CID: 7984918
MCC and MNC indicate the telco. CellID Finder has the following information for Finland at https://cellidfinder.com/mcc-mnc#F:
MCC | MNC | Network | Operator or brand name | Status |
244 | 3 | DNA Oy | DNA | Operational |
244 | 5 | Elisa Oyj | Elisa | Operational |
244 | 9 | Finnet Group | Finnet | Operational |
244 | 10 | TDC Oy | TDC | Operational |
244 | 12 | DNA Oy | DNA | Operational |
244 | 14 | Alands Mobiltelefon AB | AMT | Operational |
244 | 15 | Samk student network | Samk | Operational |
244 | 21 | Saunalahti | Saunalahti | Operational |
244 | 29 | Scnl Truphone | | Operational |
244 | 91 | TeliaSonera Finland Oyj | Sonera | Operational |
What the LAC (Location Area Code) and CID indicate cannot be decoded without a database like OpenCelliD. The Wikipedia article GSM Cell ID gives some hints about LAC and CID. The page also lists other databases you may want to take a look at.
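If you'd rather resolve a single cell programmatically than eyeball the interactive map, OpenCelliD also offers an HTTP API. Below is a rough sketch using the tower attributes above; the endpoint and parameter names are my assumption based on their documentation (check the current docs before relying on it), and YOUR_API_TOKEN is a placeholder for a free API key:
$ curl "https://opencellid.org/cell/get?key=YOUR_API_TOKEN&mcc=244&mnc=91&lac=37322&cellid=7984918&format=json"
If the cell is in their database, the JSON response should contain an approximated latitude/longitude for it.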
COVID-19 tracking
Apple and Google, being the makers of the major operating systems for mobile devices, combined forces and created Exposure Notifications. This technology does NOT utilize cell towers nor GPS at all. It works only on Bluetooth LE.
As mentioned in the appropriate Wikipedia article, the protocol is called Decentralized Privacy-Preserving Proximity Tracing (or DP-3T for short).
Finally
The key takeaway from all this is:
Location of your mobile device is always known.
Your location, your movements between locations and the timestamps of when you did the moving (or not moving) are actively being used to track every one of us. That's because the technology in mobile networks requires that information.
Whether this information is shared with somebody else, that's a completely different story.
Mountain biking in Lappeenranta /w GoPro
Friday, October 23. 2020
To test my new GoPro, I published a track of some bicycling to Jälki.fi.
GPS-track is at https://jalki.fi/routes/4070-tyyskan-rantareitti-2020-09-24.
4K video is at https://youtu.be/TUIbstiFisE.
Advance-fee scam - 2.0 upgrade /w Bitcoin
Thursday, October 22. 2020
From Wikipedia https://en.wikipedia.org/wiki/Advance-fee_scam:
An advance-fee scam is a form of fraud and one of the most common types of confidence tricks. The scam typically involves promising the victim a significant share of a large sum of money, in return for a small up-front payment, which the fraudster requires in order to obtain the large sum.
Any Internet user knows this beloved scam is very common, actively used all the time, and goes by a number of aliases, including the Nigerian scam and the 419 scam.
I was on my computer minding my own business when a bot approached me in Discord (that chat-thing gamers use):
Basically what they're saying is that I'd get ~7000 € worth of Bitcoin by going to their scam-site, registering as a new user and applying the given promo code.
Ok. For those whose bullshit detector wouldn't start dinging already, think about this for a second:
Why would a complete stranger offer you 7k€ in Discord!!
no
they
wouldn't.
Being interested in their scam, I went for it. I clicked the link to their website, registered a new account, followed the instructions and applied the promo code. Hey presto! I was rich!
I was a proud owner of 0.711 BTC. Serious money that!
Further following the instructions:
Obviously I wanted to access my newly found riches. Those precious Bitcoins were calling my name and I wanted to transfer them out to a wallet I controlled and could use them for buying new and shiny things.
Not minding the 0.005 BTC transaction fee, this is what happens when you try accessing your Bitcoin giveaway prize:
Now they're claiming my new account has been frozen, because they think I'm a bot. Thawing my funds would be easy: simply transfer ~300€ worth of my own money to them! As I wanted to keep my own hard-earned money, I did not send them the requested 0.03 BTC. I'm 100% sure they'd keep inventing "surprising" costs and other things requiring subsequent transfers of funds. I would never, ever be able to access the fake prize they awarded me.
Nice scam!
Custom X.509 certificate in Windows 10 Remote Desktop Service
Wednesday, October 21. 2020
In a non-AD environment, this is what your average Windows 10 Remote Desktop client will emit on a connection:
AD-domain admins may have seen this only rarely. If an AD has a Certification Authority installed, it is easy and typical to deploy certs from it to workstations and never see the above message. The Net is full of instructions like Replace RDP Default Self Sign Certificate.
Problem that needs to be solved
For anybody like me, not running an AD-domain, simply having a couple of Windows 10 boxes with an occasional need to RDP into them, that popup becomes familiar.
Note: I'll NEVER EVER click on Don't ask me again -prompts. I need to know. I want to know. Suppressing such information is harmful. Getting to know and working on the problem is The WayⓇ.
Gathering information about solution
If this was easy, somebody would have created simple instructions for updating RDP-certificates years ago. Decades even. But no. No proper and reasonably easy solution exists. Searching The Net far & wide results only in bits and pieces, but no real tangible turn-key(ish) solution.
While on my quest for information, given the existence of The Net, I found other people asking the same question. A good example is: How to provide a verified server certificate for Remote Desktop (RDP) connections to Windows 10.
As presented in the above StackExchange answer, the solution is a simple one (I think not!!). These five steps need to be done to complete the update:
- Purchase a genuine verified SSL-certificate
- Note: It's a TLS-certificate! The mentioned SSL protocol has been deprecated for many years. Even TLS 1.0 and TLS 1.1 have been deprecated. So it's an X.509 TLS-certificate.
- Note 2: Ever heard of Let's Encrypt? ZeroSSL? Buypass? (and many others) They're giving away perfectly valid and trusted TLS-certificates for anybody who shows up and can prove they have control over a domain. No need to spend money on that.
- Wrap the precious certificate you obtained in step 1) into a PKCS#12-file, a .pfx as Windows imports it.
- Note: Oh, that's easy! I think PKCS#12 is the favorite file format of every Regular Joe computer user. Not!
- Install the PKCS#12 from step 2) into Windows computer account and make sure user NETWORK SERVICE has access to it.
- Note: Aow come on! Steps 1) and 2) were tedious and complex, but this is wayyyyyy too hard to even begin to comprehend! Actually doing it is beyond most users.
- Open a Registry Editor and add the SHA-1 fingerprint of the certificate into HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp\ as a binary value called SSLCertificateSHA1Hash.
- Note: Oh really! Nobody knows what a SHA-1 fingerprint is, nor how to extract one from a certificate in a format suitable for a registry binary value!
- Reboot the Windows!
- Note: As with all Windows operations, this requires a reboot.
Mission accomplished! Now the annoying message is gone. Do you want to guess what'll happen after 90 days pass? That's the allotted lifespan of a Let's Encrypt -certificate. Yup. You'll be doing all of the above again. Every. Single. Painstaking. Step.
Problems needing to be solved
Let's break this down. As the phrase goes, an elephant is a mighty big creature and eating one is a big task. It needs to be done one bit at a time.
0: Which cert is being used currently?
If you simply want to get the SHA-1 hash of the currently installed RDP-certificate, a simple (or maybe not?) command of:
wmic /namespace:"\\root\cimv2\TerminalServices" PATH "Win32_TSGeneralSetting" get "SSLCertificateSHA1Hash"
... will do the trick. No admin-permissions needed or anything fancy.
To state the obvious problem: you'll be presented with a hex-string, but you have zero idea what it points to or what to do with this information.
Hint:
You can browse the Windows Certificate Machine Store. System certificates are not stored in your personal Certificate Store, so carefully point to the correct container. By default certificates are listed by subject, not SHA-1 hash. Luckily the self-signed RDP-cert is located in a folder called "Remote Desktop", narrowing down the set.
To state the second obvious problem: WMI is a tricky beast. Poking around it from CLI isn't easy.
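A hint on top of the hint: PowerShell's Cert: drive can map that hex-string into an actual certificate without any dialog-clicking. A minimal sketch, assuming the hash from the wmic query above gets pasted in as the thumbprint:
PS C:\> Get-ChildItem Cert:\LocalMachine -Recurse | Where-Object { $_.Thumbprint -eq '1234567890833251DCCF992ACBD4E63929ABCDEF' }
Any matching certificate will be listed with its subject, which tells you what the hex-string actually points to.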
1: The certificate
Ok. With this one you're on your own. If you cannot figure out how Let's Encrypt works, doing this may not be your thing.
2: PKCS#12 wrapping
In most scenarios, a certificate is typically delivered as a PEM-formatted file or a set of two files (one for the public and a second for the private key). PEM-format is native in a *nix environment and all of the tooling there can handle the data with ease. Converting the PEM-data into an interim (Microsoft-created) format for the Microsoft-world can be done, but is a bit tricky in *nix. Usage of this PKCS#12-formatted data is ephemeral: the certificate will stay in the file only for a short while before being imported into Windows, after which there is no need for it anymore. A cert can be re-packaged if needed, as long as the original PEM-formatted files exist. Also, the certificate data can be exported from Windows back into a file, if needed.
As much as PEM-format is native for certs in *nix, it is completely unsupported in Windows. A simple operation of "here is the PEM-file, import it to Windows" is literally impossible to accomplish! There is a reason why instructions contain a rather complex spell with an openssl command to get the job done.
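For reference, that spell typically looks something like the following. A sketch only, assuming your CA delivered the PEM-files named as below (the filenames are placeholders) and a reasonably recent OpenSSL is available:
$ openssl pkcs12 -export -in wildcard.example.com.cer -inkey wildcard.example.com.key -out wildcard.example.com.pfx
The command will prompt for an export password, which Windows will ask for again at import time.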
2.1: What others have done
When talking about PowerShell-scripting and PEM-format, I'll definitely have to credit the people of PKISolutions. They publish a very nice library, PSPKI (source code is at https://github.com/PKISolutions/PSPKI), with PEM-import capability. As the Windows Certificate Store is a complex beast, that code doesn't directly work the way I'd need it to for importing into the Machine Store. Given the existence of the source code, the logic they wrote can be lifted, modified and re-used to do what is needed for the RDP-cert installation process.
Among PKISolutions' excellent work is a blog post by Vadims Podāns, Accessing and using certificate private keys in .NET Framework/.NET Core. There he explains in detail the dark/bright/weird ages of how Microsoft's libraries have approached the subject of PKI and how things have evolved from undefined to well-behaving to the current situation where everything is... well... weird.
Why I mention this is imperative for the practical approach. The PSPKI-library works perfectly in PowerShell 5.x, which is built on Microsoft .NET Framework 4.5. That particular framework version is a bit old and, given its age, falls into the bright bracket of doing things.
However, not living in the past, the relevant version of PowerShell is PowerShell Core. At the time of writing, the LTS (or Long-Term-Support) version is 7.0. Version 7.1 is in preview and version 6 is still actively used. Those versions obviously run on modern .Net Core, an open-source version of DotNet running on Windows, Linux and macOS. In the transition of Microsoft .Net into open-source .Net Core, most operating system -dependent details changed while things were bumped away from a closed-source-Windows-only thingie. This most definitely includes the implementation and interface details of Public Key Infrastructure.
In short: PSPKI doesn't work anymore! It did in "bright ages" but not anymore in current "weird ages".
2.2: What I need to get done
Sticking with an obsoleted PowerShell-version is a no-go. I need to get stuff working with something I actually want to run.
In detail, this involves figuring out how .Net Core's System.Security.Cryptography.CngKey can import an existing key into the Machine Store as an exportable one. That was a bit tricky even with .Net Framework's System.Security.Cryptography.RSACryptoServiceProvider. Also note that when I talk about PKI, I most definitely mean RSA, but also other practical algorithms like ECDSA, which is fully supported in Cryptography Next Generation (CNG). The biggest problem with CNG is the lack of usable documentation and practical examples.
Having elliptic curve support is important. RSA is absolutely not obsoleted and I don't see it becoming so in the near future. It's the classic: having options. This is something I already addressed in a blog post this spring.
Most people are using only RSA and can get their job done using the old libraries. I ran out of luck as I needed to do something more complex. Using the new libraries was the only way forward. That meant lots of trial and error. That's what R&D is about.
3: Import the PKCS#12-packaged certificate into a proper certificate store of a Windows-machine
When the very difficult part is done and a PKCS#12-file exists, containing a valid certificate and its private key, importing the information is surprisingly easy to accomplish with code.
On the other hand, this one is surprisingly complex to accomplish manually. Good thing I wasn't aiming for that.
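As a point of comparison, in stock PowerShell the import alone can be sketched with the PKI-module cmdlet below. This is not what my script does internally, just an illustration; the filename is a placeholder, and the -Exportable flag plus the Machine Store location are the important bits:
PS C:\> Import-PfxCertificate -FilePath wildcard.example.com.pfx -CertStoreLocation Cert:\LocalMachine\My -Exportable -Password (Read-Host -Prompt 'PFX password' -AsSecureString)
Note that this alone doesn't grant NETWORK SERVICE access to the private key; that permission still needs to be handled separately.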
4: Inform RDP-services which certificate to use
Getting the SHA-1 hash of an X.509 certificate is trivial. Stamping a well-known value into the registry is easy. Getting the access permissions set correctly was nearly impossible, but ultimately it was doable.
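To illustrate the easy part, the stamping itself can be sketched in PowerShell as below. This assumes $certThumbprint holds the 40-character hex string and that the session is elevated; the permission battle is not shown here:
PS C:\> $bytes = for ($i = 0; $i -lt $certThumbprint.Length; $i += 2) { [Convert]::ToByte($certThumbprint.Substring($i, 2), 16) }
PS C:\> Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp' -Name 'SSLCertificateSHA1Hash' -Value ([byte[]]$bytes) -Type Binary
The loop converts the hex string into the raw bytes the REG_BINARY value expects.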
5: Make sure RDP-services will use the certificate
For this, there are a number of ways. Many of them involve a reboot or restarting the service with a PowerShell-spell of:
Restart-Service -DisplayName "Remote Desktop Services" -Force
Surprisingly, on a closer look there is a way to accomplish this step without rebooting anything. It's just not well known nor well documented, but Windows Management Instrumentation (or wmic) can do that too! Assuming the SHA-1 thumbprint of the certificate is in the variable $certThumbprint, running this single-line command will do the trick:
wmic /namespace:"\\root\cimv2\TerminalServices" PATH "Win32_TSGeneralSetting" Set "SSLCertificateSHA1Hash=$certThumbprint"
It will update the registry to contain the appropriate SHA-1 hash, confirm the access permissions and inform the RDP-service about the change. None of that requires a reboot, nor an actual restart of the service. Imagine doing the cert update via an RDP-session and restarting the service. Yup. You would get disconnected. Running this WMI-command won't kick you out. Nice!
Solution
The set of tools I wrote is published on GitHub: https://github.com/HQJaTu/RDP-cert-tools
As usual, this is something I'm using for my own systems, so I'll maintain the code and make sure it keeps working in this rapidly evolving world of ours.
Example run
On the target Windows 10 machine, this is how updating my cert would work (as a user with Administrator permissions):
PS C:\> .\update-RDP-cert.ps1 `
-certPath 'wildcard.example.com.cer' `
-keyPath 'wildcard.example.com.key'
Output will be:
Loaded certificate with thumbprint 1234567890833251DCCF992ACBD4E63929ABCDEF
Installing certificate 'CN=*.example.com' to Windows Certificate Store
All ok. RDP setup done.
That's it. You're done!
Example run with SSH
As I mentioned earlier, I'm using Let's Encrypt. There is a blog post about how I approach getting the certificates on my Linux with Acme.sh. There is an obvious gap between getting certs from LE on a Linux box and using the cert in Windows 10. The files in question need to be transferred first; then they can be used.
A realistic example command I'd run to transfer a newly issued LE-cert from my Linux box and use it as the RDP-cert would be:
PS C:\> .\get-RDP-cert.ps1 `
-serverHost server.example.com `
-serverUser joetheuser `
-serverAuthPrivateKeyPath id_ecdsa-sha2-nistp521 `
-remotePrivateKeyPath 'certs/*.example.com/*.example.com.key' `
-remoteCertificatePath 'certs/*.example.com/*.example.com.cer'
The obvious benefit is a simple, single command to get and install an RDP-certificate from Linux to Windows. All of the complexity is taken out. My script will even clean up the temporary files so no private key files are left floating around.
Finally
Enjoy!
Admins/users: If you enjoy this tool, let me know. Drop me a comment.
Developers: If you love my CNG-import code, feel free to use it in your application. The more people know how it works, the better.
OpenSSH 8.3 client fails with: load pubkey invalid format - Part 2
Sunday, September 13. 2020
load pubkey: invalid format
The original blog post is here.
Now Mr. Stott approached me with a comment. He suggested checking the new OpenSSH file format.
I was like "What?! New what? What new format!".
The obvious next move was to go googling the topic. And yes, indeed there exist two common formats for stored OpenSSH keys. The two articles I found most helpful were The OpenSSH Private Key Format and Openssh Private Key to RSA Private Key. Reading the ssh-keygen man-page states:
-m key_format
Specify a key format for key generation, the -i (import), -e (export) conversion options, and the -p change passphrase operation.
The latter may be used to convert between OpenSSH private key and PEM private key formats.
The supported key formats are: “RFC4716” (RFC 4716/SSH2 public or private key), “PKCS8” (PKCS8 public or private key) or “PEM” (PEM public key).
The two commonly used formats can be identified by the first line of the private key. The old format has the header -----BEGIN EC PRIVATE KEY-----. Obviously, for those who are using RSA or Ed25519 keys, the word "EC" would be different. I've been using ECDSA for a while and am considering moving forward with Ed25519 when all of my clients and servers have proper support for it.
I've always "loved" (to hate) SSH's (non-)intuitive user experience. As suggested by all sources, to convert my existing key into the new format, all I need to do is whip up a key generator and use it to change the passphrase. Yeah.
As my OpenSSH-client is OpenSSH_8.3p1 31 Mar 2020, its ssh-keygen will default to outputting keys in the new format. Depending on your version, the defaults might vary. Anyway, if you're on a really old version, you won't be having the mentioned problem in the first place.
Warning: Changing the private key passphrase will execute an in-place replacement of the file. If you'll be needing the files in the old format, best back them up first. Also, you can convert the format back if you want. It won't produce an exact copy of the original file, but it will be in the old format.
For those not using passphrases in their private key files: you can always enter the same passphrase (nothing) to re-format the files. The operation doesn't require the keys to have one.
Example conversion:
$ ssh-keygen -p -f my-precious-SSH-key
Enter old passphrase:
Enter new passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved with the new passphrase.
Now the first line of the private key stands at:
-----BEGIN OPENSSH PRIVATE KEY-----
Notice how the key type has changed into "OPENSSH". The key is still an ECDSA-key, but it has been stored in a different format.
Testing with the new key indicates a success. No more warnings, and connectivity is still there. After the conversion, my curves are still elliptic in the right places for the server to grant access! Nice.
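Should you ever want the old format back, the same passphrase-change operation with an explicit key format will do it. A sketch, assuming the same key file as above:
$ ssh-keygen -p -m PEM -f my-precious-SSH-key
As warned above, the result won't be byte-for-byte identical to the original file, but the header will be back in the old style.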
Summer pastime - construction
Tuesday, September 8. 2020
Every summer I tend to do some construction work. By construction, I don't mean writing software or fiddling around with computers. I actually mean the act of building something out of timber and bricks, attaching stuff together with screws and nails to form something new. Any person who owns property knows there is always something needing a fix, a facelift or demolition. Also, anybody who has taken on such a venture knows how much time and money you can sink into it. In case you didn't get the hint: what I'm trying to do here is explain my absence from blogging.
This year, I tore down the back terrace and re-built it. While at it (btw. it's not completed yet), I found a number of analogies with software engineering. Initially I had a perfectly good back terrace which (almost) served its purpose. It wasn't perfectly architected nor implemented; it was kinda thrown together like your basic PHP-website. It kinda worked, but there were a few kinks here and there. And to be absolutely clear: I didn't architect nor implement the original one. I just happened to be there, use it and eventually alter the original spec.
On moving in, I ordered really nice glassing for the terrace. Everything worked fine for many years and I was happy. The same thing happens with your really cheap hosting provider: years pass by and eventually it will hit the fan. When it happens, you're left alone without any kind of support, wondering what happened and how you're going to fix the site. I found out that by adding the terrace glassing, I had altered the requirements. Now there existed an implicit requirement for the terrace to stay level, as in not move. At all. Any minuscule movement would be ... well ... not good for your glassing, making the glasses not slide in their assigned rails as well as originally intended. Exactly like with the cheap website, I had no idea how the entire thing was architected. And any new requirements would de-rail the implementation (in this case: literally), making reality hit me in the forehead (in this case: literally). During those years of successful living the terrace had moved and sunk a bit into the soft sand. Not much, but enough for the glassings to mis-fit.
Upon realizing this, there was no real alternative. The old design had to go and a new one needed to be made. Like in a software project, I began by investigating what was implemented. In construction you would read this as: removing already constructed materials enough to be able to determine how the terrace was founded and how it was put together. In software engineering, investigation is always easier and less intrusive, leaving no gaping holes in the structure. In this project I simply took a crowbar and let it rip. Also, during the re-thinking period I came up with completely new requirements. Obviously I didn't want the thing to sink or move, and I also wanted to have the bottom rails of the glassing on top of something hard instead of wood. Any organic material, like wood, has the tendency to twist, warp, shrink, expand and rot. When talking about the millimeter accuracy of a glassing, that's not an optimal attribute in a construction material. Experience has shown that when wood does all of the mentioned things, it does them in the wrong way, making your life miserable. So, no more wood. More bricks.
This is what it nearly looked like in the beginning and what it looks like now (I'm skipping the in-between pics simply because they're boring):
Now everything is back and my new spec has been implemented. During the process of demolition, I yanked out a couple of kilos of rusty nails:
Personally, I don't use many nails, not even with a nail gun. My preference, when it makes sense, is always to attach everything with screws, and I think an equal number of screws have been put into appropriate places to hold the thing tightly together.
Moving forward, I obviously want to complete the new terrace extension. Also, I'd love to get back to the computer. Blogging, Snowrunner and such.
Memory lane: My C-64 source code from -87
Saturday, July 25. 2020
When I was a kid growing up with computers, there was one (1) definitive medium for a Finnish nerd to read: MikroBitti.
Wikipedia describes MikroBitti as:
a Finnish computer magazine published in Helsinki, Finland
For any youngster it will be extremely difficult to comprehend an era of computing without the Internet. It did exist; I did live (and almost survive) through it.
Among the scarce resources was code. Magazines printed on paper published source code for the platforms of that era. Regular people copied the code by typing it in (mostly incorrectly) in an attempt to obtain more working software. As we know, a single mis-typed character in a 10,000-character listing will crash the entire thing, or at minimum produce unpredictable results. Then came modems and the era of BBSes. Before that happened, I was sure to learn everything I could from those magazine-published codes. I did that even for platforms I didn't own nor have access to.
Cover and contents.
A trivial C-64 BASIC application by an unknown software engineer wanna-be, producing a calendar for a given year. It could even print the calendar if you owned a printer.
Background info:
The code was written on a C-128 in C-64 mode. I was aiming for the larger audience the C-64 had at the time. I don't remember the fee I received for this, but in my imagination it must have been something around 300 FIM. Using the Value of money converter @ stat.fi, 300 FIM in 1987 would equal ~95 € in 2019. At the time, the low amount didn't matter! That was the first monetary compensation I ever received for doing something I was doing anyway. All day, every day.
The brief intro for the calendar app was cut in half by the editor. What remains is a brief Finnish introduction to the purpose of the app and, for other wanna-be software engineers, a description of what the variables in the code do.
Enjoy! If you find bugs, please report.
Arpwatch - Upgraded and explained
Friday, July 24. 2020
For many years I've run my own systems at home. It's a given that most of you do much less system running than I do. There are servers, network switches, wireless routers, battery-backed power supplies and so on. Most of that I've written about in this blog earlier.
There are many security aspects your regular Jane lay-person won't spend a second thinking about. One of them is: what hardware runs on my home network? In my thinking, that question is in the top 3 list.
The answer to that one is very trivial and can be found easily from your own network. Ask the network! It knows.
ARP - Address Resolution Protocol
This is among the basics of IPv4 networking. A really good explanation can be found on a CCNA (Cisco Certified Network Associate) study site, https://study-ccna.com/arp/: a network protocol used to find out the hardware (MAC) address of a device from an IP address. Well, to elaborate on that: every single piece of hardware has a unique identifier in it. You may have heard of the IMEI in your 3G/4G/5G phone, but as your phone also supports Wi-Fi, it needs to have an identifier for Wi-Fi too. A MAC-address.
Since the Internet works with IP-addresses, not MAC-addresses, a mapping between the two is needed. Hence, ARP.
Why would you want to watch ARPs?
Simple: security.
If you know every single MAC-address in your own network, you'll know which devices are connected to it. If you think about it, there exists a limited set of devices you WANT to have in your network. Most of them are most probably your own, but what if one isn't? Wouldn't it be cool to quickly get an alert every time your network sees a device it has never seen before? In my thinking, yes! That would be really cool.
OUIs
Like in shopping-TV, there is more! A 48-bit MAC-address uniquely identifies the hardware connected to an Ethernet network, but it also identifies the manufacturer. Since IEEE is the standards body for both wired and wireless Ethernet (aka. Wi-Fi), they maintain a database of Organizationally Unique Identifiers.
An organizationally unique identifier (OUI) is a 24-bit number that uniquely identifies a vendor, manufacturer, or other organization.
OUIs are purchased from the Institute of Electrical and Electronics Engineers (IEEE) Registration Authority by the assignee (IEEE term for the vendor, manufacturer, or other organization).
The list is freely available at http://standards-oui.ieee.org/oui/oui.csv in CSV-format. Running a couple of sample queries for hardware seen in my own network:
$ fgrep "MA-L,544249," oui.csv
MA-L,544249,Sony Corporation,Gotenyama Tec 5-1-2 Tokyo Shinagawa-ku JP 141-0001
$ fgrep "MA-L,3C15C2," oui.csv
MA-L,3C15C2,"Apple, Inc.",1 Infinite Loop Cupertino CA US 95014
As we all know, CSV is handy but ugly. My favorite tool, Wireshark, does pre-process the ugly CSV into something it can chew without gagging. In the Wireshark source code there is a tool, make-manuf.py, producing an output file called manuf which contains the information in a more user-friendly way.
The same queries against the Wireshark-processed database:
$ egrep "(54:42:49|3C:15:C2)" manuf
3C:15:C2 Apple Apple, Inc.
54:42:49 Sony Sony Corporation
However, arpwatch doesn't read that file, so a minor tweak is required. I'm running the following:
perl -ne 'next if (!/^([0-9A-F:]+)\s+(\S+)\s+(.+)$/); print "$1\t$3\n"' manuf
... and it will produce a new database usable by arpwatch.
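To take the tweaked database into use, write it out and drop it in place of the one shipped with the package. A sketch only; the target path is my assumption of the Fedora / Red Hat default data directory, so check where your distro keeps ethercodes.dat:
# perl -ne 'next if (!/^([0-9A-F:]+)\s+(\S+)\s+(.+)$/); print "$1\t$3\n"' manuf > ethercodes.dat
# cp ethercodes.dat /usr/share/arpwatch/ethercodes.dat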
Trivial piece of information: Apple, Inc. has 789 OUI-blocks in the manuf-file. Given 24-bit addressing, they have 789 times 16M addresses available for their devices. That's over 13 billion device MAC-addresses reserved. Nokia has only 248 blocks.
Practical ARP with a Blu-ray -player
Let's take a snapshot of traffic.
This is a typical boot sequence of a Sony Blu-ray player BDP-S370. What happens is:
- (Frames 1 & 2) Device will obtain an IPv4-address with DHCP; the Discover / Offer / Request sequence is missing the middle piece. Hm. Weird.
- (Frame 3) Instantly after knowing its own IPv4-address, the device will ARP-request the router's (192.168.1.1) MAC-address, as the device wants to talk to the Internet.
- (Frames 5 & 6) Device will ping (ICMP echo request) the router to verify its existence and availability.
- (Frames 7-9) Device won't use the DHCP-assigned DNS, but will do some querying of its own (discouraged!) and check if new firmware is available at blu-ray.update.sony.net.
- (Frame 12) Device starts populating its own ARP-cache and will query for a device it saw in the network. The response is not displayed.
- (Frames 13 & 14) Router at 192.168.1.1 needs to populate its ARP-cache and will query for the Blu-ray player's IPv4-address. Device will respond to request.
- Other parts of the capture will contain ARP-requests going back and forth.
Practical ARP with a Linux 5.3
The Internet & computers do evolve. What we saw there in a 10-year-old device is simply the old way of doing things. This is how ARP works in a modern operating system:
This is a typical boot sequence. I omitted all the weird and unrelated stuff, which makes the first frame #8. What happens in the sequence is:
- (Frames 8-11) Device will obtain an IPv4-address with DHCP; the Discover / Offer / Request / Ack -sequence is captured in full.
- (Frames 12-14) Instantly after knowing its own IPv4-address, the device will ARP-request the IPv4 address assigned to it. This is a collision-check to confirm nobody else in the same LAN is using the same address.
- (Frame 15) Go for a Gratuitous ARP to make everybody else's life easier in the network.
- Merriam-Webster defines "gratuitous" as: not called for by the circumstances : not necessary, appropriate, or justified : unwarranted
- No matter what, Gratuitous ARP is a good thing!
- (Frame 16) Join IGMPv3 group to enable multicast. This has nothing to do with ARP, though.
The obvious difference is the existence of Gratuitous ARP "request" the device did instantly after joining the network.
- A gratuitous ARP request is an Address Resolution Protocol request packet where the source and destination IP are both set to the IP of the machine issuing the packet and the destination MAC is the broadcast address ff:ff:ff:ff:ff:ff. A new device is literally asking itself questions about the network it just joined! However, the asking is done in a very public manner, so everybody in the network is able to participate.
- Ordinarily, no reply packet will occur. There is no need to respond to one's own question to the network.
- In other words: A gratuitous ARP reply is a reply to which no request has been made.
- Doing this seems not-so-smart, but gratuitous ARPs are useful for four reasons:
- They can help detect IP conflicts. Note how Linux does aggressive collision checking on its own, too.
- They assist in the updating of other machines' ARP tables. Given the Gratuitous ARP, nobody in the network capture is doing traditional ARPing for the new device. They already have the information. The crazy public talking did the trick.
- They inform switches of the MAC address of the machine on a given switch port. My LAN-topology is trivial enough for my switches to know which port is hosting which MAC-addresses, but when eyeballing the network capture, sometimes switches need to ARP for a host to update their MAC-cache.
- Every time an IP interface or link goes up, the driver for that interface will typically send a gratuitous ARP to preload the ARP tables of all other local hosts. This sums up reasons 1-3.
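You can also emit a gratuitous ARP yourself and watch it on the wire. A sketch, assuming a Linux box with iputils arping, interface eth0 and address 192.168.1.42 (both placeholders); run the tcpdump on another host in the same LAN:
# arping -c 3 -U -I eth0 192.168.1.42
# tcpdump -i eth0 -e -n arp
The -U flag puts arping into unsolicited mode, sending exactly the kind of self-addressed broadcast described above.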
How can you watch ARPs in a network?
Simple: run arpwatch in your Linux-router.
Nice people at Lawrence Berkeley National Laboratory (LBNL) in Berkeley, California have written a piece of software and are publishing it (among others) at https://ee.lbl.gov/. This ancient, but maintained, daemon has been packaged into many Linux-distros since the dawn of time (or of Linux, pick the one which suits you).
As already established, all devices will ARP on boot. They will also ARP later during normal operations, but that's beside the point. All a device needs to do is ARP once and its existence is revealed. When the daemon sees a previously unknown device in your network, it will emit a notification in the form of an email. Example:
Here, my router running arpwatch saw a Sony Blu-ray player BDP-S370. The ethernet address contains the 24-bit OUI-part of 54:42:49, and the remaining 24 bits of the 48-bit MAC identify the device. Any new device is recorded into a time-stamped database and no further notifications will be made for that device.
Having the information logged into the system log and receiving the notification enables me to ignore or investigate the device. Devices I know can be ignored, but anything suspicious I'll always track down.
IPv6 and ARP
Waitaminute! IPv6 doesn't do ARP, it does Neighbor Discovery Protocol (NDP).
True. Any practical implementation uses dual-stack IPv4 and IPv6, making ARP still a viable option for tracking MAC-addresses. In case you run a pure-IPv6 network, go for addrwatch, https://github.com/fln/addrwatch. It supports both ARP and NDP in the same tool. There are some shortcomings on the reporting side; maybe I should take some time to tinker with it and create a patch and a pull request for the author.
Avoiding ARP completely?
Entirely possible. All a stealth device needs to do is piggy-back on an existing device's MAC-address on the same wire (or wireless) and impersonate that device to remain hidden-in-plain-sight. ARP-watching is not foolproof.
Fedora updated arpwatch 3.1 RPM
All these years passed and nobody at Fedora / Red Hat did anything to arpwatch.
Three big problems:
- No proper support for /etc/sysconfig/ in the systemd-service.
- Completely outdated list of Organizationally Unique Identifiers (OUIs) used as the Ethernet manufacturers list, displaying unknown for any hardware newer than 10 years.
- Packaged version was 2.1 from year 2006. Latest is 3.1 from April 2020.
Here you go. Now there is an updated version available. Bug 1857980 - Update arpwatch into latest upstream contains all the new changes, fixes and the latest upstream version.
Given systemd, my accumulated command-line for running arpwatch seems to be:
/usr/sbin/arpwatch -F -w 'root (Arpwatch)' -Z -i eth0
That will target only my own LAN, both wired and wireless.
Finally
Happy ARPing!
openSUSE Leap 15.2 in-place upgrade
Sunday, July 12. 2020
Most operating systems have a mechanism to upgrade an existing version into a newer one. In most cases, even thinking about upgrading without a fresh install-as-new makes me puke. The upgrade process is always complex, and missing something while at it is more than likely. These misses typically aren't fatal, but they may make your system emit weird messages while running or leave weird files in weird subdirectories. I run my systems clean and neat, so no leftovers for me, thanks.
There are two operating systems, which are exceptions to this rule of mine:
Windows 10 is capable of upgrading itself into a newer build successfully (upgrading a Windows 7 or 8 into a 10 is crap; do that to get your license transferred, then do a fresh install), and so is openSUSE. Upgrading a macOS kinda works. It does leave weird files in weird subdirectories, but the resulting upgraded OS is stable. Linuxes other than openSUSE are simply incapable of doing a good enough job of upgrading, and I wouldn't recommend it. They'll leave turd, residue and junk behind from the previous install, and the only reasonable way is taking backups and doing a fresh install. openSUSE engineers seem to have mastered the skill of upgrading to my satisfaction, so that gets my thumbs up.
As openSUSE Leap 15.2 saw daylight on July 2nd, I felt the urge to upgrade my MacBook Pro to it. Some stories about my install of 15.1 are available here and here.
The system upgrade of an openSUSE is well documented. See SDB:System upgrade for details.
To assess what needs to change, run zypper repos --uri and see the list of your current RPM-repositories. Pretty much everything you see in the list will have a URL with a version number in it. If it doesn't, good luck! It may or may not work, but you won't know beforehand. My repo list has something like this in it:
# | Alias | Name
---+---------------------------+-----------------------------------
2 | google-chrome | google-chrome
3 | home_Sauerland | Sauerland's Home Project (openSUSE
4 | openSUSE_Leap_15.1 | Mozilla Firefox
5 | packman | packman
6 | repo-debug | Debug Repository
7 | repo-debug-non-oss | Debug Repository (Non-OSS)
8 | repo-debug-update | Update Repository (Debug)
9 | repo-debug-update-non-oss | Update Repository (Debug, Non-OSS)
10 | repo-non-oss | Non-OSS Repository
11 | repo-oss | Main Repository
12 | repo-source | Source Repository
13 | repo-source-non-oss | Source Repository (Non-OSS)
14 | repo-update | Main Update Repository
15 | repo-update-non-oss | Update Repository (Non-Oss)
What I always do is take a backup of the repo-configurations. Commands like this, run as root, will do the trick and create a file repos-15.1-backup.tar in /etc/zypp/repos.d/:
# cd /etc/zypp/repos.d/
# tar cf repos-15.1-backup.tar *.repo
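Should the upgrade go sideways and a rollback to the 15.1 repos be needed, the backup can be restored in place:
# cd /etc/zypp/repos.d/
# tar xf repos-15.1-backup.tar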
Next, upgrade the versions in the static URLs with a carefully crafted sed-run:
# sed -i 's/15.1/15.2/g' /etc/zypp/repos.d/*.repo
A non-static .repo-file (example: /etc/zypp/repos.d/repo-oss.repo) will contain something like this:
[repo-oss]
name=Main Repository
enabled=1
autorefresh=1
baseurl=https://download.opensuse.org/distribution/leap/$releasever/repo/oss/
path=/
type=rpm-md
keeppackages=0
Notice the variable $releasever. No amount of editing or running sed will change that. Luckily there is an easier way. Run zypper with an argument of --releasever 15.2 to override the value of the variable. More about repository variables like $releasever can be found in the documentation at https://doc.opensuse.org/projects/libzypp/HEAD/zypp-repovars.html. zypper arguments are in the man page at https://en.opensuse.org/SDB:Zypper_manual_(plain).
Additional:
As my system depends on stuff found in the Sauerland-repo, I did this to upgrade the entries:
# zypper removerepo home_Sauerland
# zypper addrepo https://download.opensuse.org/repositories/home:Sauerland/openSUSE_Leap_15.2/home:Sauerland.repo
Now all the repo URLs are set. As documented, do some preparations:
# zypper --gpg-auto-import-keys ref
# zypper --releasever 15.2 refresh
Finally going for the actual distro update:
# zypper --releasever 15.2 dist-upgrade --download-in-advance
This will resolve all conflicts between old and new packages. If necessary, you'll need to decide a suitable course of action. When all is set, a lengthy download will start. When all the required packages are on your computer, the following prompt will be presented to you:
The following product is going to be upgraded:
openSUSE Leap 15.1 15.1-1 -> 15.2-1
The following 7 packages require a system reboot:
dbus-1 glibc kernel-default-5.3.18-lp152.20.7.1 kernel-firmware libopenssl1_1 systemd udev
2210 packages to upgrade, 14 to downgrade, 169 new, 54 to remove, 2 to change arch.
Overall download size: 1.40 GiB. Already cached: 0 B. After the operation, additional 475.5 MiB will be used.
Note: System reboot required.
Continue? [y/n/v/...? shows all options] (y): y
Going for a Yes will start the actual process:
Loading repository data...
Reading installed packages...
Warning: You are about to do a distribution upgrade with all enabled repositories. Make sure these repositories are compatible before you continue. See 'man zypper' for more information about this command.
Computing distribution upgrade...
When everything is done, the following message will be displayed:
Core libraries or services have been updated.
Reboot is required to ensure that your system benefits from these updates.
This is your cue. Reboot the system.
If your upgrade went ok, you'll end up with a successfully upgraded system. To confirm the version of openSUSE, you can, as an example, query which package owns /etc/motd:
# rpm -q -f /etc/motd
The expected answer should be something like: openSUSE-release-15.2-lp152.575.1.x86_64
Also, the second thing you need to verify is the version of the Linux kernel your system is running:
# cat /proc/version
In openSUSE Leap 15.2 you'll get something like: Linux version 5.3.18-lp152.20.7-default (geeko@buildhost). If your kernel version isn't in the 5.3-series, something went wrong; 15.2 uses that series. If you see such a version, you're golden.
Congratulations! You did it!
Quite fast and painless, wasn't it?
OpenSSH 8.3 client fails with: load pubkey invalid format
Saturday, July 11. 2020
Update 13th Sep 2020:
There is a follow-up article with key format conversion information.
Ever since updating into OpenSSH 8.3, I started getting this on a connection:
$ ssh my-great-linux-server
load pubkey "/home/me/.ssh/id_ecdsa-my-great-linux-server": invalid format
Whaaaat!
Double what on the fact that the connection works. There is no change in the connection besides the warning.
The 8.3 release notes don't mention anything about that (OpenSSH 8.3 released (and ssh-rsa deprecation notice)). My key-pairs have been elliptic for years and this hasn't bothered me before. What's going on!?
Adding verbosity to the output with -vvv reveals absolutely nothing:
debug1: Connecting to my-great-linux-server [192.168.244.1] port 22.
debug1: Connection established.
load pubkey "/home/me/.ssh/id_ecdsa-ecdsa-my-great-linux-server": invalid format
debug1: identity file /home/me/.ssh/id_ecdsa-ecdsa-my-great-linux-server type -1
debug1: identity file /home/me/.ssh/id_ecdsa-ecdsa-my-great-linux-server-cert type -1
debug1: Local version string SSH-2.0-OpenSSH_8.3
Poking around, I found this article from Arch Linux forums: [SOLVED] openssh load pubkey "mykeyfilepath": invalid format
Apparently the OpenSSH-client now requires both the private AND the public key to be available for connecting. Mathematically the public key isn't a factor. Why would it be needed? I cannot understand the decision to throw a warning about an assumed-missing key. I do have the key, but as I won't need it in my client, I don't keep it available.
Simply touching an empty file with the correct name won't clear the warning. The actual public key of the pair needs to be available to make the ridiculous message go away.
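Luckily a public key can always be re-derived from its private counterpart, so placating the client is a one-liner. A sketch, assuming the key file from the error message above (ssh-keygen will prompt for the passphrase if the key has one):
$ ssh-keygen -y -f ~/.ssh/id_ecdsa-my-great-linux-server > ~/.ssh/id_ecdsa-my-great-linux-server.pub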
A little bit of debugging points to the problem being in ssh.c:
check_load(sshkey_load_public(cp, &public, NULL),
filename, "pubkey");
Link: https://github.com/openssh/openssh-portable/blob/V_8_3_P1/ssh.c#L2207
Tracking the change:
$ git checkout V_8_3_P1
$ git log -L 2207,2207:ssh.c
.. points to a commit 5467fbcb09528ecdcb914f4f2452216c24796790 (Github link), which was made exactly two years ago, on July 11th 2018, to introduce this checking of the loaded public key, emitting a hugely misleading error message on failure.
To repeat:
Connecting to a server requires only the private key. The public key is used only at the server end and is not mathematically required for establishing an encrypted connection from a client.
So, this change is nothing new. Still, the actual reason for introducing the check_load()-call with a most-likely non-existing public key is a mystery. None of the changes made in the mentioned commit or before it explains this addition, nor are there any significant changes in the actual public key loading. A check is added, nothing more.
Fast forward two years to the present day. Now that 8.3 is actually used by a LOT of people, the problem was fixed less than a month ago. Commit c514f3c0522855b4d548286eaa113e209051a6d2 (Github link) fixes the problem by simulating a POSIX ENOENT when the public key is not found in the expected locations. More details about that error are in the errno(7) man page.
Problem solved. All we need to do is wait for this change to propagate into new client releases. Nobody knows how long that will take, given I only just updated to 8.3.
SIM cards
Friday, July 10. 2020
When I got a new Android-phone, it struck me that you can actually place an SD-card into the SIM-card -tray.
The upper tray is from a Huawei Honor phone, lower is from my new Nokia 5.3.
The Huawei approach is to place the SD-card into the SIM2-slot, making it either/or, but taking much less space from the guts of the phone. The Nokia approach is to allow having all three cards in place at the same time. Funnily, Nokia also has both IMEI-codes printed on the tray. Not really sure why, but there they are.
I'm a known iPhone man, but Android has its benefits on the developer side. The platform has much more open APIs, for example to the telecomms side. Also, when doing any web development, running the new stuff from a development workstation on an Android is easy via USB-cable. With a Mac, you can do the same with an iPhone. For proper testing, both need to be done.
While at it, I cleaned out my collection of various SIM-cards:
I'm not really needing an expired and obsolete set of SIM-cards from the USA, Australia, Finland, etc. To the trash they go.
Apparently this blog post was on the lighter side. No real message to convey, just a couple of pics of SIM-cards.
Agree to disagree
Monday, June 29. 2020
Summer. Always lots of things to do. Building a new deck in the backyard. Finally got the motorcycle back on the road. Some Snowrunner. Some coding. Lots of reading.
While reading, I bumped into something.
I'll second Mr. Hawkes' opinion. This is my message to all of us during these times of polarization. World leaders all over the globe are boosting their own popularity by dividing nations into "us" and "them". Not cool. There is no us nor them; there is only we. Everybody needs to burst their own bubble and LISTEN to what others are saying. It's ok to disagree.
22nd June 2020 letters to Editor courtesy of The Times.
Credits to Annamari Sipilä for bringing this up in her column.
HyperDrive PRO 8-in-2 firmware upgrade
Tuesday, May 26. 2020
As mentioned in my USB-C article, here is how I managed to upgrade my 8-in-2 firmware WITHOUT installing Boot Camp on my macOS.
Hyper has a support article, Screen flickering with Pro? Please check this out. What they ultimately suggest, and what I instantly rejected, is:
- Install Boot Camp
- Boot the Mac into Windows 10
- Run the app to do the Hyper 8-in-2 firmware upgrade
- Enjoy flicker-free life!
My mind was set on jumping directly to steps 3 and 4. I attempted a number of things and kept failing. So, I dug out my humility-hat and went to step 2. That one was a great success! To step 1 I would never have gone. That's for sure. I'm not going to taint this precious Mac with a dual-boot.
Constraints
This is the list of restrictions I painfully figured out:
- HyperDrive 8-in-2 has dual USB-C -connector, making it impossible to physically attach to anything else than a MacBook Pro.
- USB-C extension cords do exist. Purchase one (you don't need two) and lift the physical connectivity limitation.
- The provided application, VmmUpdater.exe, is a 32-bit Windows PE executable; more information can be found @ Hybrid Analysis
- The firmware upgrade won't work on a random PC having a USB-C -port.
- VmmUpdater.exe won't detect the Synaptics chip without Apple AMD-drivers.
- The driver won't install if your hardware doesn't have a suitable GPU.
- A Mac won't boot from an USB, unless allowed to do so
- A Mac will boot only to a 64-bit operating system, a 32-bit Windows 10 won't work
- A 64-bit Windows 10 installer doesn't have WoW64 in it to emulate a 32-bit Windows
- To actually upgrade the Synaptics chip's firmware, it needs to be in use
- Windows needs to understand the existence of the video-chip
- Windows needs to actually use the video-chip via HDMI to produce output. Any output will do. No picture --> no upgrade.
- Apple keyboard and touchpad will not work in a default Windows 10 installation
- Some of the Windows drivers used by a MBP can be downloaded from Internet automatically, IF you manage to get an internet connection into a Windows 10 running on a Mac.
- Some of the Windows drivers are not available
- macOS cannot write to a NTFS-partition, it can read the data ok
Requirements
- Hardware:
- A HyperDrive 8-in-2 (to state the obvious)
- A MacBook Pro (to state the nearly obvious), I used 2019 model
- A Windows 10 running in a PC (to state the not-so-obvious)
- USB-stick with at least 8 GiB of storage, capable of booting (I think all of them can)
- USB-keyboard, during tinkering your Mac's keyboard won't work at all times. Any USB-keyboard will do.
- HDMI-cable connected to an external monitor.
- Software:
- Synaptics tool and EEPROM-file provided by Hyper.
- Rufus
- Ability to:
- Download files from The Internet
- Execute Rufus on a platform of your choice to write into the USB-stick. I did this on a Windows 10 PC.
- Run Boot Camp Assistant on a macOS, I used macOS 10.15 Catalina.
- Write files into NTFS-formatted USB-stick. Any Windows 10 can do this.
Steps
1. Save a Windows 10 ISO-image into USB-stick as Windows to Go
Option: You can do this as a second thing.
Windows to Go, that's something not many have used or even heard of. It's already obsoleted by Microsoft, but seems to work ok. The general idea is to create a USB-bootable Windows you can run on any computer.
That's definitely something I'd love to use for this kind of upgrade!
The easiest way of injecting a Windows 10 ISO-image into USB in a suitable format is Rufus. Go to https://rufus.ie/ and get it. It's free (as speech)!
GPT-partition table is a critical one make sure to choose it. These are the options I had:
Warning: The process is slow. Writing the image with Rufus will take a long time, much longer than simply copying the bytes to the USB.
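If you want to double-check that the stick really ended up with a GPT partition table, PowerShell on the Windows PC can show it. A quick sketch, the disk numbering will vary on your machine:
PS> Get-Disk | Select-Object Number, FriendlyName, PartitionStyle
The USB-stick should report GPT in the PartitionStyle column.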
2. Download Boot Camp support files
Option: You can do this first.
Recent macOS versions have limited Boot Camp Assistant features. Older ones could do much more, but modern versions can only Download Windows Support Software (it's in the Action-menu). Wait for the 1+ GiB download to complete.
3. Transfer files to USB-stick
Warning: The stick is formatted as NTFS. A Mac won't write to it. You need to first transfer the files to a Windows machine and use that Windows to write the files onto the USB-stick.
Transfer the WindowsSupport\ folder downloaded by Boot Camp Assistant to the USB-stick prepared earlier. Subdirectory or not makes no difference; I simply copied the directory into the root of the USB-drive.
Also transfer the files from Hyper support article https://hypershop.zendesk.com/hc/en-us/articles/360038378871-Screen-flickering-with-Pro-Please-check-this-out-.
Note: Only the .exe and .eeprom files are needed; I skipped the .docx as it's not needed during the upgrade.
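For reference, the copying on the Windows PC boils down to something like this. The paths here are assumptions, adjust them to your actual download location and USB drive letter (E: in this sketch):
robocopy C:\Users\you\Downloads\WindowsSupport E:\WindowsSupport /E
copy C:\Users\you\Downloads\VmmUpdater.exe E:\
copy C:\Users\you\Downloads\*.eeprom E:\
The /E switch makes robocopy copy all subdirectories, including empty ones.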
After successful transfer, you won't need a Windows PC anymore. All the next steps are on a Mac.
4. Enable Mac USB-booting
Study Apple support article HT208198 About Startup Security Utility.
You need to shut down your Mac. Then plant your fingers on the Command and R keys, power the Mac on and wait. The Apple logo will appear, then a white progress bar; you can release Command-R at that point. If you're using an encrypted hard drive like I am, you'll need to know a valid user on that Mac and enter that user's password to be allowed into Startup Security Utility.
The choices you need to have are:
- Secure boot: No security
- External boot: Allow booting from external or removable media is enabled
This is how Parallels would depict the settings in their KB-article 124579 How to allow Mac to boot from external media:
(Thanks Parallels for the pic!)
Note: After you're done upgrading the Synaptics firmware, you can go back to Recovery and the Startup Security Utility and restore the higher-security settings. For the one-shot upgrade, the settings need to be at their most insecure.
5. Boot Windows 10 from USB
When you reboot a Mac, plant your finger on the Option key (some keyboards say Alt) and wait. Pretty soon a boot menu will appear.
If you had the USB-stick already inserted, it will be displayed. If you didn't, this is your cue to insert your Hyper 8-in-2. The Windows 10 USB can be inserted into the HyperDrive; your Mac will boot from there just fine.
Your choice is EFI Boot; pick it and wait for the Windows 10 logo to appear.
Congrats! Now you're heading towards a Windows that won't respond to your keyboard nor touchpad.
6. Establish Windows to Go functionality
Make sure you have a USB keyboard available. This entire process can be done without the Mac's own keyboard; it's just a matter of preference how you want to approach this. Getting the Mac's keyboard to work requires a working keyboard, and success is measured only by results. Make smart choices there!
If you can get the Windows 10 to connect to the internet, that will solve some of the missing-driver problems, as drivers can be downloaded from Microsoft's driver repository automatically. For internet access, I used a USB-dongle to establish a Wi-Fi connection. Doing that requires selecting the correct access point and entering its password; on a machine without a keyboard or mouse that WILL be difficult! Ultimately both the Apple keyboard and touchpad should start working, and the external keyboard won't be needed anymore.
Note: The drivers for both are in the WindowsSupport folder downloaded by Boot Camp Assistant. You can point Windows Device Manager to load the driver updates from there.
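If you'd rather not click through Device Manager one device at a time, Windows 10's built-in pnputil can bulk-install every driver it finds under a directory. The drive letter is an assumption, point it to wherever WindowsSupport ended up, and run it in an elevated Command Prompt:
pnputil /add-driver D:\WindowsSupport\BootCamp\Drivers\*.inf /subdirs /install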
Warning! Your Windows to Go will create a massive hiberfil.sys to enable hibernation. This can be a problem, as the hibernation file is typically sized at 75% of your RAM. This particular Mac has lots of RAM and the USB-stick would never be able to store such a file, so eventually you're likely to run out of storage space. That is harmful, but can be easily remediated.
To fix, run command:
powercfg /hibernate off
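To verify the change took effect, list the sleep states Windows still considers available; hibernate should now be reported as unavailable:
powercfg /a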
Now the massive file is gone. Next, establish proper video. Your Windows Device Manager will look something like this:
There are a bunch of device drivers missing. The most critical one is for the GPU. I tried running WindowsSupport\setup.exe, but it never progressed, so I abandoned that approach. Instead, I executed WindowsSupport\BootCamp\Drivers\AMD\AMDGraphics\setup.exe, which enabled proper video to be displayed.
7. Go upgrade Synaptics firmware
All the hard work only for this! Make sure your Hyper 8-in-2 has the HDMI-cable connected and the monitor displays Windows 10 through it. Without that, executing VmmUpdater.exe will display an error:
VmmTool: Initialize GPU library failed.
Other VmmTool error variants can occur, too. If your Windows detects the monitor and displays video, then you're set!
When VmmTool displays the current firmware information, you're golden!
Initially my 8-in-2 had firmware version 3.12.105.
Select the .eeprom file and a progress bar will indicate progress:
Running the upgrade won't take long, something like 30 seconds or so. When the upgrade is done, it will be indicated:
At this point, yank the 8-in-2 out of the Mac and make sure you don't have USB-C power connected to it. The general idea is to power-cycle the recently updated Synaptics chip. When done, put everything back and run VmmUpdater.exe again.
This time it will indicate the new firmware version:
The version 3.12.105 got bumped to 3.13.107. Nice! Hopefully it will do the trick.
8. Boot into macOS and test
This is it. Will it work?
In my case it did. When my Mac wakes up, a single blink happens on the external display, but there's no flickering or other annoyances.
Done! Mission accomplished!
(phew. that was a lot of work for a simple upgrade)
USB-C Video adapters for MacBook Pro
Monday, May 25. 2020
In professional life, I stopped being a Windows-guy and went back to being a Mac-man. The tool-of-trade provided to me is a MacBook Pro 2019. Those new Macs are notorious for having not-so-many ports. My Mac has two (2). Both are USB-C. In my previous MBP (a 2014 model), there were ports all over the left side and a bonus USB on the right side.
The problem remains: how to hook up my external monitor to increase my productivity by the magical 30%? Actually, I believe any developer will benefit even more from having a lot of display real estate on their desk.
So, new Mac, new toys needed for it. I had one USB-C-to-DVI converter before, but for this Mac, I went online shopping to get the good stuff:
From left to right:
- HyperDrive PRO 8-in-2: A pricey alternative offering a lot of connectivity, disappointing initial quality.
- HyperDrive USB-C Pro Video: An already obsoleted product. Small and does the job. This is my recommendation!
- No-name StarTech.Com CDP2DVI: Cheap thing working perfectly on a Lenovo PC. Unusable with a Mac!
Obsoleted HyperDrive shop screenshot:
The pricey alternative looks like this (on top of a complimentary leather pouch):
The cheap no-name one won't even blink in a MBP. The simple HyperDrive works like a charm: the Mac sees it and the monitor auto-detects the signal. The expensive one blinks when connected to a Mac. It blinks a lot. All the other ports work perfectly, but HDMI and mini-DP not-so-much. Hyper has a support article about the 8-in-2 flickering. There is lots of discussion on StackExchange about Apple being picky about cables and converters, examples: Dual monitor flickering: the secondary monitor does the flicker and External monitor flicker / signal loss. As the HyperDrive fix is complicated enough to deserve its own topic, I'll write another post about the suggested firmware upgrade.
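By the way, a quick way to check what macOS actually sees behind an adapter is system_profiler. It lists the GPU and every display it has detected; if your adapter's output doesn't show up there, no amount of cable-wiggling will help:
$ system_profiler SPDisplaysDataType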
For me, the expensive HyperDrive works as a tiny docking-station. I have the USB-C charger connected to it along with the monitor cable. When I want to roam free of cables, all I have to do is detach the dual-USB-C dongle and that's it! Done. Mobility solved.
Final words:
With a Mac, do a lot of research beforehand and choose carefully, or alternatively make sure you'll get a full refund for your doesn't-work-in-my-Mac adapter.
Going Elliptic on TLS-certificates
Monday, April 27. 2020
The TLS-certificate for this blog was up for renewal. As I'm always eager to investigate TLS and test things, I did an experiment of my own and went for more modern stuff. Also, I've been wanting to obsolete TLS versions 1.0 and 1.1, so I went for an Elliptic-Curve private key. If you can read this, you probably run reasonably modern hardware, operating system and browser. If you for some reason stop seeing my writings, then you're out of luck: I don't support your obsoleted stuff anymore!
Here are the old and new certs side-by-side:
Obvious differences are:
- Expiry: 2+ years of lifetime left. Note: they sold the cert as a 5-year one, but I know about Apple's recent decision to shorten the lifetime of a TLS-cert. Read all about it in About upcoming limits on trusted certificates.
- Issuer: Sectigo ECC Domain Validation Secure Server CA, RapidSSL/DigiCert won't issue ECDSA on cheap certs
- Certificate chain: ECDSA and SHA-2 256-bits on certificate, intermediate-CA and root-CA
- Key-pair type: Now there is a 384-bit secp384r1 curve instead of plain-old-RSA
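For the curious: producing a secp384r1 key-pair and a certificate signing request for a paid CA takes two openssl commands. A minimal sketch, with the hypothetical example.com standing in for your own domain:
$ openssl ecparam -name secp384r1 -genkey -noout -out example.com.key
$ openssl req -new -key example.com.key -out example.com.csr -subj "/CN=example.com"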
If you want, you can do almost the same with Let's Encrypt. Getting an ECDSA-cert out of Let's E used to be a tedious manual task, but I'm using acme.sh for my LEing. Its readme says:
Let's Encrypt can now issue ECDSA certificates.
And we support them too!
Just set the keylength parameter with a prefix ec-.
Example command to get a 384-bit ECDSA certificate from Let's Encrypt with acme.sh:
$ ./acme.sh --issue --dns dns_rackspace --keylength ec-384 -d example.com
That's no different from getting an RSA-certificate. The obvious difference between a Let's Encrypt cert and my paid cert is in the certificate chain: in Let's E, the chain's CA certificates remain RSA, but your own cert will have the elliptic-curve math in it.
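You can verify what any live site serves with openssl. This sketch, again with the hypothetical example.com, dumps the leaf certificate's public key algorithm; an ECDSA cert shows id-ecPublicKey where RSA would show rsaEncryption:
$ echo | openssl s_client -connect example.com:443 -servername example.com 2>/dev/null | openssl x509 -noout -text | grep -A1 "Public Key Algorithm"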
Note: Some people say ECDSA is more secure (example: ECC is faster and more secure than RSA. Here's where you (still) can't use it). ECDSA is modern and faster than RSA, that's for sure, but its security is a matter of controversy. The Wikipedia article Elliptic-curve cryptography - Quantum computing attacks notes: "... suggesting that ECC is an easier target for quantum computers than RSA". There exists a theory that a quantum computer might be able to crack your curve-math more easily than your prime math. We don't know yet if that's true.