HP Color LaserJet custom X.509 certificate - Part 2
Sunday, June 18. 2023
Earlier this year, I wrote about how it was impossible to replace the X.509 certificate on an HP printer.
Let's state the good news first: I, and many other people, suspected broken firmware. Indeed, that was the case. There is now a more recent firmware which contains a fix for the issue.
Determining current firmware version
The version can be verified in the System --> Administration menu.
How the LaserJet's on-line firmware check works
While tinkering with this, it became evident my HP LaserJet isn't especially good at figuring out whether new firmware is available. After some wiresharking, it was possible to determine the logic.
The main endpoint is at http://h19005.www1.hp.com/pub/laserjet/updates/me/manifest. Note: yes, it is HTTP, not HTTPS. Also, if you click the link, a file will be downloaded to your machine; this is because most browsers refuse to display octet-streams on screen.
The response contains a human-readable proprietary application/octet-stream with information about available LaserJet firmware versions. Among the data is a mapping table:
20190819: 20211122
20200324: 20211122
20200603: 20211122
20201022: 20211122
20211122: 20230321
20211221: 20230321
20220224: 20230321
20220309: 20230321
20220411: 20230321
20220729: 20230321
My interpretation is as follows:
- Firmware versions 20190819, 20200324, 20200603 and 20201022 can be upgraded to version 20211122.
- Any firmware version from 20211122 up to the most recent one can be upgraded to 20230321.
As we know the latest firmware, 20230321, contains the fix for certificate uploading, there is an upgrade path from any firmware to the latest and brightest version.
If I wanted to, I could use this URL to create a daily/weekly/monthly check to see if I should take a look at my printer, as sketched below. The printer's own update check isn't especially good, and I'd rather do the upgrading manually anyway.
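A minimal sketch of such a check could look like the following. This is my own guesswork built on the mapping format shown above, not anything documented by HP:

import urllib.request

MANIFEST_URL = "http://h19005.www1.hp.com/pub/laserjet/updates/me/manifest"
MY_FIRMWARE = "20211122"  # version shown in the printer's System --> Administration menu

with urllib.request.urlopen(MANIFEST_URL) as response:
    manifest = response.read().decode("utf-8", errors="replace")

# Collect "old-version: new-version" pairs from lines resembling the mapping table.
upgrades = {}
for line in manifest.splitlines():
    old, _, new = line.strip().partition(":")
    if old.strip().isdigit() and new.strip().isdigit():
        upgrades[old.strip()] = new.strip()

new_version = upgrades.get(MY_FIRMWARE)
if new_version and new_version != MY_FIRMWARE:
    print(f"Upgrade available: {MY_FIRMWARE} -> {new_version}")
else:
    print("No upgrade listed for this firmware version.")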
Certificate / PKCS #12
While the bug prevented uploading any new certificates, I investigated pretty much every option OpenSSL has for packing PKCS#12 files.
The fixed firmware does accept any reasonable PKCS#12 file, with or without the full certificate chain in it. The LaserJet will pick and use only the end-entity certificate; any intermediate CA certs will be ignored.
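For reference, packing a key and certificate into a PKCS#12 file can also be done with Python's cryptography package instead of OpenSSL. A minimal sketch, with placeholder file names and password:

from cryptography import x509
from cryptography.hazmat.primitives.serialization import (
    BestAvailableEncryption,
    load_pem_private_key,
    pkcs12,
)

# Placeholder file names; use your own key/cert pair.
key = load_pem_private_key(open("printer.key", "rb").read(), password=None)
cert = x509.load_pem_x509_certificate(open("printer.crt", "rb").read())

# Intermediate CA certs could be passed as the fourth argument,
# but the LaserJet ignores them anyway.
p12_bytes = pkcs12.serialize_key_and_certificates(
    b"printer", key, cert, None, BestAvailableEncryption(b"export-password")
)
open("printer.p12", "wb").write(p12_bytes)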
Finally
It took HP half a year to fix this. Unacceptable!
Custom authentication with Postman Pre-request script
Friday, May 19. 2023
Postman. Not the guy delivering letters to your mailbox, but the software which makes API development much easier.
With APIs, there are a number of authentication mechanisms. All kinds of passwords and keys are used to make sure I'm allowed to make the request. Modern authentication mechanisms include OAuth 2.0, OpenID Connect and such. Postman supports OAuth 2.0 out-of-the-box. For those less informed, here is a brief description by Auth0:
OAuth 2.0, which stands for “Open Authorization”, is a standard designed to allow a website or application to access resources hosted by other web apps on behalf of a user.
For the record, OAuth 2.0 is a good protocol. Getting a JWT and making subsequent requests with it is, among many things, secure and sensible. If you didn't guess it by now, yes, there is a glitch. RFC 6749 states the following in subsection 4.3.2. Access Token Request:
The client makes a request to the token endpoint by adding the following parameters using the "application/x-www-form-urlencoded" format per Appendix B with a character encoding of UTF-8 in the HTTP request entity-body:
grant_type REQUIRED.
Value MUST be set to "password".
username REQUIRED.
The resource owner username.
password REQUIRED.
The resource owner password.
Since 1999, when RFC 2617 was written, we have kind of grown out of sending URL-encoded payloads as form responses. Today, it is the '20s and JSON is the way to go. This is the glitch I mentioned; computer says: NO! The spec doesn't state "you may use JSON if you prefer it". I do. Many others do too. As Postman is per-spec, they don't support JSON-encoded request bodies for OAuth 2.0 in their boxed authentication code. They do support all the bells and whistles OAuth 2.0 has, but only per-spec.
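To illustrate the difference against a hypothetical token endpoint (URL and credentials are placeholders, not any real API):

import requests

TOKEN_URL = "https://auth.example.com/token"  # placeholder endpoint

# Per RFC 6749: credentials go into an application/x-www-form-urlencoded body.
requests.post(TOKEN_URL, data={
    "grant_type": "password",
    "username": "alice",
    "password": "hunter2",
})

# What many modern APIs expect instead: the same credentials as a JSON body.
requests.post(TOKEN_URL, json={
    "username": "alice",
    "password": "hunter2",
})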
Again, Postman is a good product; it does support freedom in the form of a pre-request script. I chose to exercise my freedom and crafted the following plan:
- Write a couple of lines of JavaScript to be run as a Postman pre-request script to handle authentication.
  - The script will be run for every request.
  - It is possible to override this mechanism by writing a per-request pre-request script, in which case the collection-level one is ignored.
  - A per-request script can be as short as "return" to do a no-op.
- The required authentication credential, in the form of a JWT, is stored in Postman collection variables and is generally available for authenticating any request in the collection.
- The pre-request script needs to be smart:
  - It contains logic to determine whether further authentication steps are required, or whether an already existing token can be used to save time and resources.
  - We don't want to flood the authentication provider by requesting a new JWT every single time. They might get angry and kick us out if we're too greedy.
- The script reads the required username and password credentials from the Postman environment.
  - NOTE! Credentials are read from the Environment, not from the Collection.
- These credentials will be exchanged for a token. The token will be used for the actual authentication.
- The authentication protocol used is OAuth 2.0-ish, sending a JSON body and expecting a JSON response.
- That's it. Done!
This is my Postman request-collection with the pre-request script:
Further, I have collection variables and the environment (the name I have for this environment is Production):
Here is one request using the pre-request script from the collection:
There isn't much setup needed. It just inherits the authorization setting from the collection. Simple!
The full script I have is as follows:
const authTokenCollectionVariableName = "AuthToken";

// Verify an OAuth access token exists
if (pm.collectionVariables.get(authTokenCollectionVariableName)) {
    const tokStr = atob(pm.collectionVariables.get(authTokenCollectionVariableName).split('.')[1]);
    const tok = JSON.parse(tokStr);
    const expiry = new Date(tok.exp * 1000);
    const now = new Date();
    console.log(`Expiry: ${expiry}`);
    if (expiry > now) {
        console.log(`Auth - DEBUG: Has existing token. It is still valid. No need to renew.`);
        return;
    }
    console.log(`Auth - DEBUG: Has existing token. Renewing expired one.`);
} else {
    console.log(`Auth - DEBUG: No existing token found. Requesting one.`);
}

// Refresh the access token and store it into a collection variable
pm.sendRequest({
    url: pm.collectionVariables.get("Account_API_BaseURL") + "/api/authenticate",
    method: 'POST',
    header: {
        'Accept': 'application/json',
        'Content-Type': 'application/json'
    },
    body: {
        mode: 'raw',
        raw: JSON.stringify({
            username: pm.environment.get("User"),
            password: pm.environment.get("Password")
        })
    }
}, (error, response) => {
    pm.expect(error).to.equal(null);
    pm.expect(response).to.have.property('code', 200);
    pm.collectionVariables.set(authTokenCollectionVariableName, response.json().accessToken);
    console.log(`Auth - INFO: Saved token ok.`);
});
To repeat: this script will run only inside Postman as an embedded script. If you just whip up a random JavaScript environment, it most certainly will not have a pm object in it, and even if it does, you cannot do all those fancy things I'm doing there.
Now the problem is solved. My requests run as smoothly and effortlessly as they would with 100% per-RFC OAuth 2.0.
AMD Software: Adrenalin Edition - Error 202
Saturday, April 29. 2023
GPU drivers are updated often. The surprise came from the fact that the update failed. What! Why! How is this possible?
Error 202 – AMD Software Installer Cannot Proceed as a Windows Update Is Currently in Progress
To make sure this wasn't a fluke, I did retry. A few times, actually. Nope. Didn't help.
I'm not alone with this. In the AMD community, there is a thread Error 202 but there's no pending Windows Update. Also, on my Windows, there was nothing pending:
As this was the first time I had a hiccup, I realized I knew nothing about the installer. On Windows, it always pays off to run setup.exe with the switch /?. This is what will be displayed:
Haa! Options. Going for: .\Setup.exe -install -boot -log inst.log
After the expected failure, the log reveals:
InstallMan::isDisplayDriverEligibleToInstall :6090 Display driver Eligible to Install
isWindowsServiceStopped :4102 Queried Windows Update Status: 4
pauseWindowsUpdate :5244 drvInst.exe is currently running
InstallMan::performMyState :5177 ERROR --- InstallMan -> Caught an AMD Exception. Error code to display to user: 202. Debug hint: "drvInst.exe is currently running"
No idea where the claim of drvInst.exe is currently running comes from. It isn't running! Obviously something is reading the Windows Update status wrong. Let's see what happens if I bring the Windows Update service down with a simple PowerShell command: Stop-Service -Name wuauserv
Ta-daa! It works! Now installation will complete.
Assembly of my new PC 2023
Friday, April 28. 2023
It's funny how fast you stop paying much attention to your car or sofa. First you spend a reasonable amount of money on it. Then they just exist as everyday items. Then, slowly but surely, the urge appears: should I get a new one?
Exactly the same thing happens with your PC. Maybe one of those precious resources, SSD or RAM, starts running out. Maybe the CPU or GPU starts feeling a bit slow. Then you remember how good everything was when you first got the thing. Yup, the urge is there. Must obtain new.
Roughly five years ago, I was assembling my PC on Twitch. As Larpdog doesn't stream anymore, there were no public appearances this time. I did assemble a new rig, though.
During those years, things have evolved. This happens all the time.
- EATX is a real thing; see Wikipedia for ATX info
  - The fact that both cases and motherboards are sold in this form factor is a big deal.
  - This is not a real standard!
- The CPU socket is LGA1700
  - Back in the day, I thought 1200 pins connecting my CPU to the motherboard was a lot.
- DDR-5
  - A slight overclock on DDR-4 made it pretty fast.
  - Now the slowest DDR-5 begins way above overclocked DDR-4.
- PCIe is 5.0
  - I don't own any extension cards supporting 5.0. Good thing 4.0 and 3.0 cards work ok.
  - M.2 NVMe with 5.0 is generally available.
- Every single case has a plexiglass side
  - What! Why? For what purpose!!
  - If I wanted to see into my PC, I'd also want to see into my own stomach and would install a plexiglass six-pack into myself.
  - Well. I don't want either. Stupid idea!
- There are almost no USB connectors in cases
  - There used to be plenty of those connectors.
- The motherboard box contains coded messages in its swag
I ordered the first case I found without a ridiculous see-through side. It is a Meshify 2 from the Swedish manufacturer Fractal. The entire case is steel mesh, as in full of tiny holes. It would be bad if you spilled Pepsi into it, but it's good for ventilation.
This is what the Republic of Gamers mobo box has in it, some swag with a message:
The first thing I had to do was put 25°07'29.5"N 121°28'15.6"E into Google Maps. It results in this link: https://goo.gl/maps/diiBipMcFfz5oBgz7
Ah! Those coordinates point to the Asus headquarters in Taiwan. It was a nice gift. Also, it was a fun "spy game" trying to figure out what those numbers translate into. 👍
I/O performance of M.2 is satisfactory:
Those ATTO Benchmark results are very good! I'm happy with my investment.
This is a solid PC for my next 5-year period. I know I will upgrade the GPU in the next couple of years. At the moment, my existing ATI does a fine job running Dead Island 2.
Secure Torx - Drive decommission trouble
Tuesday, March 28. 2023
I'm a known owner of an angle grinder:
The above pic is from my blog post about making sure my data won't be read off a decommissioned hard drive.
One day I had an atypical burst of let's-clean-the-storage-to-make-room-for-new-stuff energy. In storage, there were two rack servers which hadn't been running for many years, so it was time to let them go to greener pastures. I have an absolute policy of recycling obsolete electronics without their storage media. Drives will get a "special" treatment. See the above pic.
This is what I wanted to do: remove the drive from the hot-swap cage. To no avail!
The blocker was a screw head which looked like a T-10 Torx, but not exactly: there was an unexpected post in the middle of the head, making a T-10 bit fit really badly:
By reading the Wikipedia page for Torx, I realized the problem. This was an infamous Security Torx! I had a faint recollection of such a thing existing, but those are so rare I'd never seen one. Quickly browsing through my tools and bits, it seemed I didn't own anything to make the task possible.
This meant I got to go to a hardware store to get new toys:
The right tools for the job, and the problem was solved!
Now there is a stack of rack-server drives ready to bite the grinder disc.
File 'repomd.xml' from repository is unsigned
Thursday, March 23. 2023
In all those years I've been running SUSE Linux, I've never bumped into this one while running a trivial zypper update:
Warning: File 'repomd.xml' from repository 'Update repository with updates from SUSE Linux Enterprise 15' is unsigned.
Note: Signing data enables the recipient to verify that no modifications occurred
after the data were signed. Accepting data with no, wrong or unknown signature can
lead to a corrupted system and in extreme cases even to a system compromise.
Note: File 'repomd.xml' is the repositories master index file. It ensures the
integrity of the whole repo.
Warning: We can't verify that no one meddled with this file, so it might not be
trustworthy anymore! You should not continue unless you know it's safe.
continue? [yes/no] (no):
This error slash warning being weird and potentially dangerous, my obvious reaction was to hit Ctrl-C and go investigate. First, my package verification mechanism should be intact and able to verify whether downloaded updates are unaltered. Second, there should not have been any breaking changes to my system; at least I didn't make any. As my system didn't seem to be breached, I assumed a system malfunction and went investigating.
Quite soon, I learned this is a less-than-rare event. It has happened multiple times to other people. According to the article Signature verification failed for file ‘repomd.xml’ from repository ‘openSUSE-Leap-42.2-Update’ there exists a simple fix.
By running the two commands zypper clean --all and zypper ref, the problem should dissolve.
Yes, that is the case. After a simple wash/clean/rinse cycle, zypper update worked again.
It was just weird to bump into this for the first time now; I'd have assumed it would have occurred some time earlier.
Writing a secure Systemd daemon with Python
Sunday, March 5. 2023
This is a deep dive into systems programming using Python. For those unfamiliar with programming, systems programming sits on top of hardware / electronics design, firmware programming and operating system programming. However, it is not applications programming, which mostly targets end users. Systems programming targets the running system. Mr. Yadav of Dark Bears has an article Systems Programming is Hard to Do – But Somebody’s Got to Do it, where he describes the limitations and requirements of doing so.
Typically, systems programming is done with C, C++, Perl or Bash. As Python is gaining popularity, I definitely want to take a swing at systems programming with Python. In fact, there aren't many resources about the topic on the entire Internet.
Requirements
This is the list of basic requirements I have for a Python-based system daemon:
- Run as a service: must run as a Linux daemon, https://man7.org/linux/man-pages/man7/daemon.7.html
  - Start running on system boot and stop running on system shutdown
- Modern: systemd-compatible, https://systemd.io/
  - Not interested in ancient SysV init support anymore, https://danielmiessler.com/study/the-difference-between-system-v-and-systemd/
- Modern: D-bus-connected
  - The service provided will have an interface on the system D-bus, https://www.freedesktop.org/wiki/Software/dbus/
  - All Linux systems are built on top of D-bus; I absolutely want to be compatible
- Monitoring: must support the systemd watchdog, https://0pointer.de/blog/projects/watchdog.html
  - Surprisingly many out-of-box Linux daemons don't support this. This is most likely because they're still SysV-init based and haven't modernized their operation.
  - I most definitely want to have this feature!
- Security: must use Linux capabilities to run only with the necessary permissions, https://man7.org/linux/man-pages/man7/capabilities.7.html
- Security: must support SElinux to run only with the required permissions, https://github.com/SELinuxProject/selinux-notebook/blob/main/src/selinux_overview.md
- Isolation: must be independent of system Python
  - venv / virtualenv
  - Any possible changes to system Python won't affect the daemon or its dependencies at all.
- Modern: asynchronous Python
  - Event-based is the key to success.
  - D-bus and the systemd watchdog pretty much nail this. Absolutely must be asynchronous.
- Packaging: installation from an RPM package
  - This is the only one I'll support for any foreseeable future.
  - The package will contain all necessary parts, libraries and dependencies to run a self-contained daemon.
That's a tall order. Selecting only two or three of those is enough to add tons of complexity to my project. Also, I initially expected somebody else on The Net to be doing the same or something similar. Looks like I was wrong. Most systems programmers love sticking to their old habits, staying at the SysV-init stage with their synchronous C / Perl daemons.
Scope / Target daemon
I've previously blogged about running my own email server and fighting spam. Let's automate a lot of those tasks and, while automating, create a Maildir monitor for the junk mail folder.
This is the project I wrote for that purpose: Spammer Blocker
The toolkit will query the AS number of a spam-sending SMTP server. Typically I'll copy/paste the IP address from SpamCop's report and produce a CIDR table for Postfix. The table will add headers to the email to be stored, so that Procmail / Maildrop can act on them, if so needed. As the junk mail folder is constantly monitored, any manually moved mail will be processed too.
Having these features brings your own Linux box's spam-handling capabilities pretty close to any of those free-but-spy-on-everything services commonly used by everybody.
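To give an idea of the AS-number part, here is a rough sketch using the ipwhois package. This is my illustration of the concept, not the actual code in Spammer Blocker:

from ipwhois import IPWhois

def asn_and_cidr_for(ip: str) -> tuple[str, str]:
    # An RDAP lookup returns, among other things, the announcing AS number
    # and the CIDR block the address is announced from.
    result = IPWhois(ip).lookup_rdap(depth=1)
    return result["asn"], result["asn_cidr"]

asn, cidr = asn_and_cidr_for("192.0.2.25")  # IP copied from a SpamCop report
print(f"AS{asn}: {cidr}")  # raw material for building the Postfix CIDR table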
Addressing the list of requirements
Let's take a peek at what I did to meet the above requirements.
Systemd daemon
This is nearly trivial. See the service definition in spammer-reporter.service.
What's in the file is your run-of-the-mill systemd service with the appropriate unit, service and install definitions. That triplet makes Linux run a systemd service as a daemon.
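As a sketch only (the authoritative version is spammer-reporter.service in the repository), such a triplet, combined with the watchdog and capability settings discussed later in this post, takes roughly this shape; the ExecStart path is illustrative:

[Unit]
Description=Spammer Blocker junk mail reporter

[Service]
Type=notify
# Illustrative path; the RPM deploys a virtualenv under /usr/libexec/spammer-block/
ExecStart=/usr/libexec/spammer-block/bin/spammer-reporter
WatchdogSec=20s
Restart=on-failure
CapabilityBoundingSet=CAP_AUDIT_WRITE CAP_DAC_READ_SEARCH CAP_IPC_LOCK CAP_SYS_NICE

[Install]
WantedBy=multi-user.target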
Python venv isolation
For any Python developer, this is somewhat trivial. You create the environment box/jail, install requirements via setup.py, and that's it. You're done. This same isolation mechanism will be used later for packaging and deploying the ready-made daemon into a system.
What's missing, or on the to-do list, is to start using a pyproject.toml. That is something I'm yet to learn. Obviously there is always something. Nobody, nowhere is "ready". Ever.
Asynchronous code
Talking to the systemd watchdog and providing a service endpoint on the system D-bus requires a little bit of effort. Read: lots of it.
To get a D-bus service properly running, I first become asynchronous. For that, I initiate an event loop with dbus.mainloop.glib. While there are multiple options for the event loop, that is the only one actually working. The majority of Python code won't work with GLib; it needs asyncio. For that I use asyncio_glib to pair GLib's loop with asyncio. It took me a while to learn and understand how to actually achieve that. When successfully done, everything needed runs in a single asynchronous event loop. Great success!
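The pairing itself boils down to a few lines. A minimal sketch of the idea:

import asyncio
import asyncio_glib
from dbus.mainloop.glib import DBusGMainLoop

# Route D-bus signals and method calls through the GLib main loop ...
DBusGMainLoop(set_as_default=True)
# ... and make asyncio run on top of a GLib-based loop,
# so D-bus callbacks and asyncio tasks share one event loop.
asyncio.set_event_loop_policy(asyncio_glib.GLibEventLoopPolicy())

loop = asyncio.get_event_loop()
# Tasks (Maildir monitor, watchdog notifier) would be created here.
loop.run_forever()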
With a solid foundation in place, as the main task I create an asynchronous task for monitoring filesystem changes and run it in a forever-loop. See inotify(7) for the non-Python mechanism and the asyncinotify library for details of the Pythonic version. What I'll be monitoring is users' Maildirs configured to receive junk/spam. When there is a change, a check is made to see whether the change is about newly received spam. A sketch of the idea is below.
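Assuming a hypothetical Maildir path (the real daemon reads its monitored directories from configuration), the core of such a monitor with asyncinotify could look like this:

import asyncio
from pathlib import Path
from asyncinotify import Inotify, Mask

async def monitor_junk_folder(junk_dir: Path) -> None:
    # New deliveries and manually moved mail show up in new/
    # as CREATE or MOVED_TO events.
    with Inotify() as inotify:
        inotify.add_watch(junk_dir / "new", Mask.CREATE | Mask.MOVED_TO)
        async for event in inotify:
            print(f"Candidate junk mail to process: {event.path}")

# Hypothetical path, for illustration only.
asyncio.run(monitor_junk_folder(Path("/home/joe/Maildir/.Junk")))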
For side tasks, there is a D-bus service provider. If the daemon is running under systemd, the required watchdog handler is also attached to the event loop as a periodic task. Out of the box, my service definition states a maximum of 20 seconds between watchdog notifications (see service Type=notify and WatchdogSec=20s). In the daemon configuration file spammer-reporter.toml, I use 15 seconds as the interval. That 5 seconds should be plenty of headroom.
The documentation of systemd.service for WatchdogSec states the following:
If the time between two such calls is larger than the configured time, then the service is placed in a failed state and it will be terminated
For any failed service, there is the obvious Restart=on-failure.
If, for ANY possible reason, the process is stuck, the systemd scaffolding will take control of the failure and act instantly. That's resiliency and self-healing in my books!
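The actual watchdog code lives in the repository; as an illustration, a periodic notifier task using the sdnotify package (my choice for this sketch, not necessarily what the daemon uses) can be as simple as:

import asyncio
from sdnotify import SystemdNotifier

async def watchdog_task(interval: float = 15.0) -> None:
    notifier = SystemdNotifier()
    notifier.notify("READY=1")  # Type=notify: tell systemd that startup is complete
    while True:
        notifier.notify("WATCHDOG=1")  # pet the watchdog well within WatchdogSec=20s
        await asyncio.sleep(interval)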
Security: Capabilities
My obvious choice would be to not run as root. But as my main task is to provide a D-bus service while reading all users' mailboxes, there is absolutely no way of avoiding root permissions. Trust me! I tried everything I could find to grant a nobody process enough permissions to do all that. To no avail. See the documentation about the UID / GID rationale.
As standard root has waaaaaay too much power (especially if misapplied), I take some of it away. There is a yet rarely used mechanism of capabilities(7) in Linux. My documentation of what CapabilityBoundingSet=CAP_AUDIT_WRITE CAP_DAC_READ_SEARCH CAP_IPC_LOCK CAP_SYS_NICE means in the system service definition is also in the source code. That set grants the process permission to monitor, read and write any user's files.
There are a couple of other superuser permissions left. Most unneeded powers a regular root would have are stripped, though. If there is a security leak and my daemon is used to do something funny, quite a lot of the potential impact is already mitigated, as the process isn't allowed to do much.
Security: SElinux
For even more improved security, I define a policy in spammer-block_policy.te. All my systems are hardened and run SElinux in enforcing mode. If something leaks, there is a limiting box in place already. In 2014, I wrote a post about a security flaw with no impact on an SElinux-hardened system.
The policy will allow my daemon to:
- read and write FIFO files
- create new Unix sockets
- use STDIN, STDOUT and STDERR streams
- read files in /etc/, note: read! not write
- read i18n (internationalization) files on the system
- use capabilities
- use TCP and UDP sockets
- access D-bus sockets in /run/
- access the D-bus watchdog UDP socket
- access user passwd information on the system via SSSd
- read and search users' home directories, as mail is stored in them; note: not write
- send email via SMTPd
- create, write, read and delete temporary files in /tmp/
The above list is a comprehensive requirement of accesses in a system to meet the given task of monitoring received emails and acting on determined junk/spam. As the policy is very carefully crafted not to allow any destruction, writing, deletion or mangling outside /tmp/, in my thinking such hardening will make the daemon very secure.
Yes, in /tmp/ there is stuff that can be altered with potential security implications. But first you have to access the process. While hacking the daemon, make sure to keep the event loop running, or systemd will zap the process within the next 20 seconds or less. I really did consider quite a few scenarios for if, and only if, something/somebody pops the cork on my daemon.
RPM Packaging
To wrap all of this into a nice package, I'm using rpmenv. This toolkit will automatically wrap everything needed by the daemon into a virtualenv and deploy that to /usr/libexec/spammer-block/. See rpm.json for details.
The SElinux policy has its own spammer-block_policy_selinux.spec. Having these two in separate packages is mandatory, as the mechanisms to build them are completely different. Also, this is the typical approach for other pieces of software. Not everybody has strict requirements to harden their systems.
Where to place an entire virtualenv in Linux? That one is a ball-buster. The RPM Packaging Guide really doesn't say how to handle Python-based system daemons. Remember? Up there ↑, I tried explaining how all of this is rather novel and there isn't much information on The Net regarding it. However, I found somebody asking What is the purpose of /usr/libexec? on Stack Exchange and decided that libexec/ is fine for this purpose. I do install shell wrappers into /bin/ to make everybody's life easier; having the entire Python environment there wouldn't be sensible.
Final words
Only time will tell if I made the right design choices. I totally see Python- and Rust-based daemons gaining popularity in the future. The obvious difference is that Rust is a compiled language like Go, C and C++. Python isn't.
Satellite Internet — Past & Present
Saturday, March 4. 2023
About satellites
ESA has a really good information package Space Transportation - Types of orbits.
When the context is satellite Internet, there are pretty much three options: LEO, MEO or GEO. As satellites fly high, latency from Earth's surface to the satellite and back is a factor. Low Earth Orbit is the most common choice, although the lower you fly, the more satellites are required for reasonable coverage.
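A back-of-the-envelope calculation shows why the altitude matters. Counting only the speed of light over typical example altitudes (and ignoring processing and routing delays):

C_KM_PER_S = 299_792.458  # speed of light

# One hop: surface -> satellite -> surface, i.e. twice the altitude.
for orbit, altitude_km in (("LEO", 550), ("MEO", 8_000), ("GEO", 35_786)):
    rtt_ms = 2 * altitude_km / C_KM_PER_S * 1000
    print(f"{orbit}: ~{rtt_ms:.0f} ms")  # LEO: ~4 ms, MEO: ~53 ms, GEO: ~239 ms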
Past
Inspiration for this blog post came from my 1998 LUT course presentation on satellite Internet. 25 years ago, the IT bubble was still growing; it was about two years before it burst. As I was a telecommunications major, I took a seminar course which included a presentation. From the list of topics available, apparently I chose satellite Internet. Back in those days, getting bandwidth delivered to everybody was in rapid growth. Mobile Internet was mostly non-existent, fiber-to-the-home was non-existent; dial-up modem, ADSL or cable-TV Internet were the methods of getting onto The Net.
Here is my PDF presentation about satellite networks Iridium, Globalstar and Teledesic (in Finnish):
This past can be considered Round #1 of satellite Internet.
Retrospective: Iridium
Went into production. The 2nd generation began deploying in 2017. Not generally known, nor common. Originally not financially viable.
Retrospective: Globalstar
https://www.globalstar.com/, LEO
Went into production. The 2nd generation arrived in 2013, and a 3rd generation was announced in 2018. Apple iPhone 14 emergency messages use 2nd-gen Globalstar. Not generally known, nor common. Originally not financially viable.
Retrospective: Teledesic
Never saw production. The project was suspended in 2002 as the IT bubble burst and the financial problems of Iridium and Globalstar became apparent. Complete vaporware; almost nobody knows this ever existed. The plan was to have hundreds of LEO satellites for really fast Internet connectivity.
I have to mention their tech. Instead of the traditional approach all the others have, relying on a ground station being able to find a passing satellite, Teledesic originally planned to have 840 satellites. As that was extremely expensive, they later reduced the number to 288. Their idea was to map Earth into regions and, having enough capacity, multiple satellites would be available over one "square" at a time to provide massive speeds. In theory, a really good idea!
Other services
My presentation covered only three companies/services, as the topic given to me dictated. At the time, other satellite services did exist.
Still today, Inmarsat is well known for its voice capabilities and the mobile units made famous in popular culture. They do support broadband connections on their GEO satellites. This Satphone 2 is a well-known product:
Another not-so-well-known service is ORBCOMM, an IoT service provider. As their service is targeted at hardware manufacturers, not consumers, almost nobody has ever heard of this one. Their 50+ satellites are LEO.
Present
Fast-forward to today.
Obviously Iridium, Globalstar, Inmarsat and ORBCOMM still exist. As there have been some changes in service offerings over the past 25 years, let's do a recap of the new players.
SpaceX Starlink
https://www.starlink.com/, very low LEO
Mr. Musk and his Starlink are getting tons of media attention, especially for his donation of terminals to Ukraine to help them in the war against Russia. This well-known service is the farthest from vaporware, is affordable and easily available. Broadband speed is reasonable; see the Ookla Speedtest results comparison.
A notable fact about Starlink is that they are the only satellite broadband provider so far to have their own vehicle for launching their own satellites into orbit. This capability will give them much-required longevity. Satellite Internet is not a sprint, after all.
HughesNet / EchoStar
https://www.hughesnet.com/, GEO
Originally, Hughes Communications was owned by Hughes Aircraft, the company founded by Howard Hughes. They are a major service provider in the Americas. Here in Europe, they're not well known.
Viasat
https://www.viasat.com/space-innovation/satellite-fleet/viasat-3/, GEO
This is an upcoming broadband service. Viasat is a known satellite-TV operator expanding its services to the Internet.
Amazon / Project Kuiper
Obviously Mr. Bezos wants to compete against his nemesis, Mr. Musk, and launch his own satellite Internet service. His project is the most recent one and really doesn't even have a proper website yet.
Boeing
LEO
The legendary aircraft manufacturer wants its share of the satellite Internet business. This very recent project has FCC approval for 147 satellites, but not much is known about it.
Future?
As we learned back in 2002 when Teledesic folded, there isn't room for all of the above players. Some of them will fold and/or merge. SpaceX especially is gaining a lot of customers in rural areas with their modern service offering. A fierce battle of pricing won't be happening: launching those satellites is v e r y expensive. Lacking financial stability, not everybody will survive this Round #2.
While there is plenty of space in the sky, it is already crowded, and the traffic jam is likely to get worse as more and more satellites are launched. Just take a look at Wikipedia's Satellites orbiting Earth article.
Nokia 5.3 de-bricking after reset
Friday, March 3. 2023
Given the vast differences between Apple's iOS and Google's Android platforms, I own, run and operate both. For those interested: Apple I have as my daily mobile; Android, the more popular platform, I use for more experimental features which are not available on the other one. These features include access to mobile radios, NFC and Bluetooth.
Nokia (or HMD Global, to be precise) is a really good Android mobile manufacturer. Generally speaking, they don't bloat their firmware with mandatory, always forced-on Facebook or any such crap. Also, my years-old 5.3 got the Android 12 update. Obviously this was nice, as most manufacturers sell you forgetware getting no updates, but ... (there is always a but-part). What typically happens with electronics is that the hardware becomes obsolete faster than the consumer would like. This puppy doesn't pack enough oomph in its Snapdragon CPU to run Android 12 fluently. I had no problems with Android 10 or 11; with 12, everything started feeling too sluggish. To shopping I went. I came back with a Nokia G21.
Resetting an Android
Onboarding the new phone was almost painless. Most icons on my start screen were lost: apps were loaded from Play, icons not so much. Such a thing is easy to fix, so I made the call to do a full reset of the old mobile. That is the standard procedure when you're about to donate/sell/hand out your old computing hardware.
Aftermath – Reset bricked my Nokia!
Crap! The thing failed to boot after reset.
What! What? How is this possible?
Yes, I wasn't alone. The Nokia Community forum has the following post: WARNING - Do NOT factory reset Nokia 5.3 -- Bricked phone. Factory resetting is such a basic, commonly done operation that I didn't do much research for it. In hindsight (it is always 20-20), I should have done some.
De-brick
In the above thread, Mr. Adam Howard faced the same situation and presented a solution.
Prerequisites
The following is needed:
- A computer capable of running the Android SDK.
  - I used macOS; no drivers or such needed.
  - I know Linux will work fine; my understanding is no drivers are needed there either.
  - Windows is known to work, but will require a device driver for Android. Which one? No idea here.
- Enough permissions and skills to run Android tools on your computer.
- A USB-C cable to connect the Android to your computer.
  - Make sure the device is unconnected; it will be connected later.
- Android SDK Platform Tools
  - Available @ https://developer.android.com/studio/releases/platform-tools
  - Install and test-run the Android remote debugger, adb.
- Nokia 5.3 Android 12 firmware
  - Available @ https://android.googleapis.com/packages/ota-api/package/d50cb0137919fd20d43cb67a7cb47a073966269d.zip
  - Do NOT unzip! The package is needed by adb in zipped form.
- That's it! Time and your favorite beverages (don't spill; electronics and liquids don't match).
Hard reset / Recovery mode
Apparently you can manually reset any Nokia 5.3 enough to force it into a mode suitable for force-installing new firmware. In this situation, that is obviously very helpful for recovery purposes. It is scary as hell if you have a habit of losing your mobile to dishonest people; they can do nasty stuff to your phone.
Instructions are here: HardReset.info: How to put NOKIA 5.3 in recovery mode?
Here is the sequence:
- Power off the device.
- Power on: press the power button for ~4 seconds. This is your typical turn-it-on sequence.
- Release the power button.
- Press and hold: power button & volume down.
- Keep pressing the buttons until the recovery screen appears: "START"
- Tap volume down 2 times: "Recovery mode"
- Press the power button to select Recovery mode.
- The device will restart.
- Wait for the Android with its side open to appear. Note: there are no options on this screen.
- Press and release: power button & volume up.
- The Android Recovery menu will appear.
- Tap volume down 3 times: "Apply update from ADB"
- Connect the cable.
- Press the power button to select Apply update from ADB.
- Leave your mobile be; the next operation will be done on your computer.
On your computer: Upload firmware
Here is the sequence:
- Requirement: your mobile must be waiting for firmware to be uploaded.
- Info: Android Platform Tools (directory platform-tools) contain the utility adb.
- Info: you will be using the sideload function of adb. Info @ Sideload ROMs and Mods Using ADB Sideload
- Run adb and point it to the downloaded firmware, adapting your filename:
  adb sideload ../Nokia\ 5.3\ firmware.zip
- On your mobile, the following will happen:
- As time passes, the progress will be updated:
- Firmware update done
- That's it!
Done
Observe the out-of-box experience on your mobile:
This is a major blooper by the HMD guys. The community forum is full of angry people who bricked their 5.3 with Android 12.
Python Windows update 2023: Pip requiring Build Tools for Visual Studio
Sunday, February 26. 2023
A couple of years ago, I got a new Windows computer and my Python pip failed to install required modules, as no suitable C/C++ compiler was found. I obviously figured out what was wrong, righted the wrong, and posted an article about the fix.
As always, things change, new tools are made available and old tools are obsoleted. This fact was pointed out by an aggressive reader. Without doubt, he did fumble with versions and became irritated when I pointed that out. However, also without doubt, the blog post's expiration date was in the past. What worked in 2021 didn't work anymore.
Here's the update.
Given wheels, build tools aren't required as often as they used to be. See What Are Python Wheels and Why Should You Care? for details on this modern approach. Copy/pasting from the article: "A wheel is a type of built distribution. In this case, built means that the wheel comes in a ready-to-install format and allows you to skip the build stage required with source distributions."
On my Python 3.10 and 3.11, Visual Studio Build Tools 2017 do work. I tested the latest 2022 version and it does not. Older build tools are getting harder and harder to obtain from Microsoft. The Visual Studio 2022 - Downloads page @ Microsoft doesn't seem to carry the old stuff anymore. To not confuse/irritate other people, I won't even post the link there.
As many people have this same exact problem, there is, for example, a question on StackOverflow: Are Visual Studio 2017 Build Tools still available for download? [closed]. Mr. Chris Wright's answer has the link https://aka.ms/vs/15/release/vs_buildtools.exe. So, the download isn't lost forever. A little bit obscured, yes, but not lost.
To test the new Build Tools, I deliberately uninstalled my perfectly working Visual Studio 2017 Build Tools, Visual Studio 2022 and whatnot. A completely clean slate. Next, to make a point of how the build will fail (see how much effort it takes to not use wheels!), running the following will emit the dreaded error:
pip.exe install pyOpenSSL --no-binary :all: --no-cache-dir
Yay, the expected error will be emitted: error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools"
This part hasn't changed; I'm using the old images from two years back. Download and install Visual Studio Build Tools 2017. Make sure to install a few gigabytes of binaries by selecting C/C++ compiler version 140:
Ta-daa! Unsurprisingly, re-running the forced build-from-source command will work:
Collecting pyOpenSSL
Downloading pyOpenSSL-23.0.0.tar.gz (182 kB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Building wheels for collected packages: pyOpenSSL
Building wheel for pyOpenSSL(pyproject.toml) ... done
Successfully built pyOpenSSL
Installing collected packages: pyOpenSSL
Successfully installed pyOpenssl-23.0.0
This works on all the Python versions I tried. While testing, I did find some packages which wouldn't compile at all. Their wheels installed just fine, so I wouldn't worry too much about that.
Final words:
It is weird how regular people on the Internet assume that I, as a blogger, have massive resources and money to maintain their systems. Why would I want to be a target of their inability to ask smart questions or to ever explain what they attempted and what failed? Somehow I'm supposed to read their minds and see what's on their computer screens telepathically. Free of charge, of course!
RAID1 Disc Drive upgrade
Tuesday, January 31. 2023
On my home Linux server, I ran out of disc space on my 2 TB hard drive. Or, technically speaking, there were a hundred or so unallocated megs left in my Logical Volume Manager. That translates to: I hadn't yet run out, I was about to run out of space. There were some reserves left, but it would have required some LVM tinkering to unleash what was left into actual use.
--->
To hardware shopping!
That's a pair of 8 TB Seagate Barracudas hooked up to my old-but-trustworthy LSI MegaRAID.
Yeah, you read that right. The BIOS is from year 2011. The logical volume / virtual drive created by the 90s-looking WebBIOS looks really nice with all those terabytes:
Hint: Don't do what I did and forget to hook up one S-ATA power cable properly after finalizing the installation. The mirrored RAID-1 drive will need a rebuild. On this particular LSI MegaRAID, such a rebuild takes ~20 hours to complete. Good thing the drive was fully available during the operation. It did respond a bit slowly during the rebuild, but that's what spinning platters do anyway.
The amount of data I seem to have lying around on my home server is quite a handful. The simple LVM sequence of vgextend / pvmove took nearly five hours to complete. This is one of the multiple advantages of having logical volumes: there exists the capability to choose which physical drive a volume resides on.
When I made the purchase order for the new drives, I considered whether I should stop using LVM and go for btrfs. The obvious advantage: such an upgrade would give me even more flexibility in disc space allocation. On the negative side, such a transition would require me to copy all data from the old LVM/Ext4 drives to the new btrfs drive. LVM's simplicity of transitioning an entire file system without touching individual files did it for me, and I chose not to go for The New Thing™.
Hopefully these platters keep spinning for many years to come.
HP Color LaserJet custom X.509 certificate
Monday, January 30. 2023
Update 18th June 2023: See part 2.
One of the pieces of hardware I own and operate is an HP printer. Most of the time it acts as a ... well, paperweight. Then occasionally there is an urgent need to have an A4 with information delivered somewhere.
As a keen enthusiast of custom TLS certificates, I always take the option to install one, especially on a LAN-connected device like a printer. This one, however, is broken:
All I can manage from it is: "The format of the file is invalid."
Not so cool. Uh!
For troubleshooting, I looked at Error message "The format of the file is invalid" when attempting to import certificate on HP printer and No more ssl certificate update possible. Both pretty much state that it doesn't work. A couple of years ago, in Installing TLS certificates on HP printers automatically, the thing worked.
In an attempt to solve this, I exported the generated self-signed certificate and key as PKCS #12. The certificate has rather "interesting" crypto: pbeWithSHA1And40BitRC2-CBC, iteration count 2048. That is a seriously obsolete one! The private key has pbeWithSHA1And3-KeyTripleDES-CBC, iteration count 2048, which is also on the tired side. MAC verification has an iteration count of 1, which is well aligned with the other insecurity.
No matter what I did, AES, DES, 3-DES or RC2, the PKCS #12 wouldn't import. Neither would the CA root cert as PEM.
My conclusion has to be the same: recent firmware upgrades have broken the import.
Multiple hours wasted on that! Darn.
Happy New Year 2023!
Saturday, December 31. 2022
Year 2022 wasn't especially good.
The entire globe had been suffering from COVID-19 for way too long. At that point, a greedy old man started a war to gain more land for his ridiculous dictatorship. There are not many things more stupid than that. Everybody loses on such a move. Everybody.
Hopefully 2023 will be a better one!
Monitor Arm installation
Sunday, November 20. 2022
This is one of my computing setups at home office:
This one has a MacBook Pro 16" on the desk and some extra display real estate on top of it: a Samsung 4K screen. As there are other computers on this desk, the display has no room anywhere else and has to go high.
An open lid of this MBP takes ~24 cm in height. On this particular Samsung, there are a couple of centimeters of frame. For the display area to start at 24 cm, the bottom of the screen has to be at ~22 cm. Like this:
Most inexpensive no-brand solutions (like my previous one) have an arm that can go up to 50 cm. At the end of the arm, there is the VESA mount, which will be attached to the back of the monitor. Like this:
Again, on this particular Samsung the VESA mount is 75x75 mm and it is located nearly at the top of the screen, making the bottom go low. Most displays have a 100x100 mount, typically at the center of the thing. Looking at the picture, notice how commonly sold monitor arms have both 100x100 and 75x75 ends. This Samsung has a really beautiful round space for all connectors and the VESA mount (invisible to the user at the back of the thing). Nobody at Samsung thought about how a bigger 100x100 mount would fit inside this obviously-too-small ring. I couldn't use the regular M4 screws and had to use extra-long ones.
Here is a picture of the problem with this Samsung's let's-place-the-mount-at-top design:
When I go to the max height of 50 cm on my arm, I'm missing ~3 cm at the bottom. The monitor bottom is only at ~19 cm, but needs to be at ~22 cm for the MBP's lid not to obstruct anything on the 4K display. Crap! My old arm has clearly grown too small. Here is an illustration of the problem:
Should my MBP be a 13" model or Samsung have its mount in a more reasonable location, this old monitor arm would do it for me. To monitor arm shopping, then. This is the one I got:
It's a Kensington SmartFit® One-Touch Height Adjustable Single Monitor Arm. This more expensive baby can go to the required height and much more! Problem solved by throwing money at it.
Davis Vantage Vue battery life
Sunday, November 13. 2022
Over three years ago I had to get a new weather station. See my blog post about it.
Yesterday a storm hit and I wanted to see the measured wind and gust speeds. They were zero. As in no wind at all!
This measurement of nothingness vastly differed from what my eyeballs and ears could measure. The wind was strong enough to rip shingles off my roof. The storm's effects were also very audible indoors. --->
To troubleshooting the issue.
My indoor Davis Vantage Vue console reported: Low battery in sensor 1. Darn!
The WeeWX web UI confirmed the fact. Signal quality was flaky:
Notice how signal quality would plummet on dark bars (no sunlight) and recover on light bars (sun shining into the panel). The battery had gotten its share of -30 °C winters and +38 °C summers. Sustaining Finnish weather isn't easy for man-made objects.
I dug up pictures from my 2019 blog post:
It all came back to me vividly. Indeed, at the bottom of the outdoor unit there is a lithium CR-123 battery. The Integrated Sensor Suite (or ISS) has a plastic cover which can be easily removed by opening the plastic screw. As the unit is well designed, the battery is well protected from rain, sleet and snow.
Next, getting a replacement. Maybe this battery is one of those hard-to-get and possibly expensive ones? To my surprise, a lithium CR-123 is easily available:
A pack of two costs 9.90 € at my local hardware store. Not even expensive!
Yet another well-designed feature of a Vantage Vue.
Unfortunately, the second battery has its expiry date before 2025, which is the due time of the next change. Generally speaking, I'd rather power the unit from a cable; that way I wouldn't have to climb onto my garage roof at all. Doing the climbing every three years in reasonably good weather isn't too bad, though.