On API design
Wednesday, April 17. 2024
API design is hard.
If your software makes the behavior hard to change, your software thwarts the very reason that software exists.
- Robert C. Martin
ref. Sandia LabNews on February 14, 2019
What happens when, in a design meeting, you're absolutely sure a thing will NEVER happen? Until it does!
What if you were adamant that a value-added tax percentage will NEVER be anything other than an integer? Until it is!
From news on 16th Apr 2024: Reports: Finnish government to raise general value-added tax rate to 25.5%
Meanwhile...
At Paytrail API (https://docs.paytrail.com/#/):
At Visma Netvisor API (https://support.netvisor.fi/en/support/solutions/articles/77000554152-import-sales-invoice-or-order-salesinvoice-nv):
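To illustrate the trap with a minimal sketch (this is not either vendor's actual schema; the class, field and function names are made up), imagine an invoice-row model where somebody baked the integer assumption in:
from dataclasses import dataclass
from decimal import Decimal

@dataclass
class InvoiceRow:
    description: str
    net_amount: Decimal
    vat_percentage: int  # the "it will NEVER be anything but an integer" assumption

def vat_amount(row: InvoiceRow) -> Decimal:
    """VAT owed for one row, in the same currency as net_amount."""
    return (row.net_amount * Decimal(row.vat_percentage) / 100).quantize(Decimal("0.01"))

# Works fine for years with 10, 14, 24...
print(vat_amount(InvoiceRow("Consulting", Decimal("100.00"), 24)))           # 24.00

# Then the general rate becomes 25.5%. Squeezing it into the integer field
# silently computes the wrong tax; widening the field to a decimal breaks
# every client that validates against "integer only".
print(vat_amount(InvoiceRow("Consulting", Decimal("100.00"), round(25.5))))  # 26.00
Either way somebody is eating that porridge: round and remit the wrong amount, or change the contract and every integration built on top of it.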
I don't think there is much to do. You put on your humility hat and start eating the bitter oat porridge to get the thing fixed.
Btw. thanks to Afterdawn for bringing this to our attention.
Bottom line: It shouldn't be too hard to figure out what politicians might do. They don't think in rational (thoughts or numbers).
Examples from France and Monaco (VAT rates 20%, 10%, 5.5%, 2.1%), Ireland (VAT rates 23%, 13.5%, 9%, 4.8%), Liechtenstein (VAT rates 8.1%, 2.6%, 3.8%), Slovenia (VAT rates 22%, 9.5%, 5%) or Switzerland (VAT 8.1%, 2.6%, 3.8%).
Programming principles: SOLID vs. FLUID
Sunday, March 3. 2024
As Mr. Kralj puts it:
Everybody talks about S.O.L.I.D.
But no one mentions the opposite principles
Well said!
The classic principles are defined as:
- Single Responsibility
- Open-Closed (this one is hard to grasp!)
- Liskov Substitution (this one is even harder!)
- Interface Segregation
- Dependency Inversion
This new(er) set of principles defines the opposite, for crappy code:
- Fragile
- Lax
- Untested
- Insecure
- Disorganized
An alternate F.L.U.I.D., which clarifies the original S.O.L.I.D., dates back to 2011:
- Functional
- Loose
- Unit Testable
- Introspective
- (i)Dempotent
However, although I found Mr. Henney's concept of re-doing S.O.L.I.D. with clarifications, I went with Mr. Kralj's derivative of defining the opposite instead.
Credits to Mr. Harjot Singh for his original artwork at https://harjotse.medium.com/solid-principles-df26a1b131cb and Kristian Kralj for his idea!
System update of 2024
Thursday, February 29. 2024
I've been way too busy with my day job to do any blogging or system maintenance.
Ever since S9y 2.4.0 update, my blog has been in a disarray. This has been a tough one for me as I'd love to run my system with much better quality.
Ultimately I had to find the required time and do tons of maintenance to get out of the sad state.
Mending activities completed:
- Changed hosting provider to Hetzner
- Rented a much beefier VM for the system
- Changed host CPU:
- manufacturer from Intel to ?
- architecture from AMD64 into ARMv8
- generation from Pentium 4 into something very new
- Upgraded OS into Oracle Linux 9
- Upgraded DB into PostgreSQL 16
- Allocated more RAM for web server
- Tightened up security even more
- Made Google Search happier
- ... fixed a ton of bugs in the Brownpaper skin
Qualys SSL Report still ranks this site as A+ with HTTP/2. The Netcraft Site Report still ranks this site in the better half of the top one million sites in the world.
Now everything should be so much better. Now this AI-generated image portrays a fixed computer system:
Happy New Year 2024!
Sunday, December 31. 2023
I've been really busy working on a number of things other than my daily job. This, unfortunately, translates into not much time for blogging.
One of the things I've been tinkering with is generative AI. The buzzword you keep bumping into everywhere. A really good example of what AI can do for you is to improve my non-existing artistic talent. The above image is generated with Nightcafe. Go like it there!
Fedora Linux 39 - Upgrade
Sunday, November 12. 2023
Twenty years with Fedora Linux. Release 39 is out!
I've run Red Hat since the 90s. Version 3.0.3 was released in 1996 and ever since I've had something from them running. When they scrapped Red Hat Linux in 2003 and went for Fedora Core / RHEL, I've had something from those two running. For the record: I hate that semi-working, everything-done-only-halfway Debian/Ubuntu crap. When I want to toy around with something that doesn't quite fit, I choose Arch Linux.
To get your Fedora in-place-upgraded, the process is rather simple. Docs are in the article Performing system upgrade.
First, make sure the existing system is updated to all the latest stuff with a dnf --refresh upgrade. Then make sure the upgrade tooling is installed: dnf install dnf-plugin-system-upgrade. Now you're good to go to download all the new packages with: dnf system-upgrade download --releasever=39
Note: This workflow has existed for a long time and is likely to keep working in the future. Next time, all you have to do is replace the release version with the one you want to upgrade to.
Now all prep is done and you're good to go for the actual upgrade. To state the obvious: this is the dangerous part. Everything before this has been a warm-up run.
dnf system-upgrade reboot
If you have a console, you'll see the progress.
Installing the packages and rebooting into the new release shouldn't take too many minutes.
When back in, verify: cat /etc/fedora-release ; cat /proc/version
This resulted in my case:
Fedora release 39 (Thirty Nine)
Linux version 6.5.11-300.fc39.x86_64 (mockbuild@d23353abed4340e492bce6e111e27898) (gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4), GNU ld version 2.40-13.fc39) #1 SMP PREEMPT_DYNAMIC Wed Nov 8 22:37:57 UTC 2023
Finally, you can do optional clean-up: dnf system-upgrade clean && dnf clean packages
That's it! You're done for at least the next 6 months.
Bare Bones Software - BBEdit - Indent with tab
Saturday, October 28. 2023
As a software developer I've pretty much used all the text editors there are. Bold statement. The fact remains there aren't that many of them in common use.
On Linux, I definitely use Vim, rarely Emacs, almost never Nano (any editor requiring Ctrl-s over SSH is crap!).
On Windows, mostly Notepad++ rarely Windows' own Notepad. Both use Ctrl-s, but won't work over SSH-session.
On Mac, mostly BBEdit.
Then there is the long list of IDEs available on many platforms I've used or am still using to work with multiple programming languages and file formats. Short list with typical ones would be: IntelliJ and Visual Studio.
Note: VScode is utter crap! People who designed VScode were high on drugs or plain old-fashioned idiots. They have no clue what a developer needs. That vastly overrated waste-of-disc-space is a last-resort editor for me.
BBEdit 14 homepage states:
It doesn’t suck.®
Oh, but it does! It can be made less sucky, though.
Here is an example:
In the above example, I'm editing a JSON file. It happens to be in a rather unreadable state, and sprinkling a bit of indentation on top of it should make the content easily readable.
Remember, earlier I mentioned a long list of editors. Virtually every single one of them has functionality to highlight a section of text and indent the selection by pressing Tab. Not BBEdit! It simply replaces the entire selection with a tab character. Insanity!
Remember the statement on not sucking? There is a well-hidden option:
The secret option is called Allow Tab key to indent text blocks and it was introduced in version 13.1.2. Why this isn't the default ... correction: why this wasn't the default behaviour from the get-go is a mystery.
Now the indentation works as expected:
What puzzles me is the difficulty of finding the option for indenting with a Tab. I googled wide & far. To no avail. Those technical writers @ Barebones really should put some effort into making this option better known.
Wi-Fi Router Lifespan: A Threat to National Security?
Sunday, October 15. 2023
Wireless LAN, or Wi-Fi, is a topic I've written a lot about. Router hardware is common. Most end-user appliances people use are wireless. Wi-Fi combined with a proper Internet connection has tons of bandwidth and is responsive. From a hacking perspective, quite many of those boxes run Linux or something with hackable endpoints. Or alternatively, on the electronics board there are interesting pins that a person with knowledge can use to lure the box into doing things the manufacturer didn't expect. Oh, I love hardware hacks!
Routers are exploitable
Back in 2016 these routers were harnessed for a new use. From a hacker's perspective, there exists a thing which works perfectly, but doesn't do the thing the hacker wishes it to do. So, after a little bit of hacking, the device "learns" a new skill. This new skill was to participate in criminal activity as a DDoS traffic generator. The Geekflare article How to Secure Your Router Against Mirai Botnet Attacks explains as follows:
According to Paras Jha, one of the authors of the Mirai bot, most of the IoT devices infected and used by the Mirai Botnet were routers.
A word from national intelligence organization
Fast forward seven years to 2023. Things really haven't changed much. There exist even more Wi-Fi routers. The routers are manufactured in huge volumes, designed to make maximum profit for the manufacturer, and are lacking a ton of security features. Combine these hackable routers with all the geopolitical tension in the world right now, and our beloved routers have become of interest to the Finnish Security and Intelligence Service (or Supo, its acronym in Finnish).
This month, in their annual National Security Overview 2023, under Threats to national security are continually evolving, they issued a warning: "Cyber espionage exploits unprotected consumer devices". Actually, they did pretty much the same thing back in March 2021 with the following statement: "Supo has observed that the intelligence services of authoritarian states have been exploiting dozens of network devices and servers of Finnish individuals and businesses in cyber espionage operations."
How?
Having a national intelligence service warn about crappy network hardware is a big deal. They don't issue the same warning about your toaster, dishwasher or cheap Android phone. The same characteristics don't really apply to anything else. A device needs to be:
- On-line / Internet connected
- See Mr. Hyppönen's book: If It’s Smart, It’s Vulnerable
- And btw., all routers are computers. Any computer is considered a "smart" device.
- Insecure
- Yeah. Even the expensive Wi-Fi routers have crappy manufacturer firmware in them. Cheap ones are especially vulnerable. And even the good ones expire after a couple of years as the manufacturer loses interest and ceases publishing new upgrades.
- Exist in masses
- Literally every home and business has at least one. I don't know the total number of homes and businesses in the World, but it must be a big number!
Given those three characteristics, the following things are true:
- Every single vulnerable device can be found easily.
- On the internet, there are 3,706,452,992 public IPv4 addresses. That seems like a big number, but in reality it isn't. (Actually, the scale of the number is the same as the number of homes + businesses.)
- In 2013 the entire address space could be scanned in 44 minutes. (See Washington Post article Here’s what you find when you scan the entire Internet in an hour)
- Every single vulnerable device can be cracked wide open without human interaction by automated tools in a split second.
- Every single cracked device can be made to do whatever the attacker wants.
- A typical scenario is some sort of criminal activity.
Why?
100% of all lay-persons I've talked to state: "I don't care if I'm hacked. I'm not an important person and have nothing to hide." What they miss is the attacker wanting to use their paid connection while impersonating them to commit crimes.
We have also learned that not all attackers are after money; some of them are state actors doing espionage. There are multiple types of attackers, ranging from teenagers smearing their school's website to cybercriminals stealing money to government-sponsored spies stealing state secrets.
Now we're getting to the reason why intelligence services are issuing warnings!
Scaring consumers - There is profit to be made
Since these intelligence service warnings have existed for a couple of years, in May 2023 a major ISP / telco in Finland, DNA, issued a press release (in Finnish, sorry) stating the following:
Does your home have an over-four-year-old router? An expert reveals why it can be a risk.
Translated quote:
As a rule of thumb I'd say: a four-year-old router is aged for a basic user. Advanced users may replace their routers every two years.
Going open-source!
For clarity: I'm not disputing the fact that an aged router which has never been upgraded to the latest available firmware is a security risk. It is! What I, as a hacker, am disputing is the need to purchase a new one. The general population will never be able to hack their devices into running OpenWrt or DD-WRT, that's for sure. Instead, educating people on the risks involved with cheap consumer electronics and offering advice on smart choices would be preferable.
Here is my advice:
- Router manufacturers (and ISPs) are commercial entities aiming to maximize their profit. Their intent is to sell you a new router even when the hardware of your old device is still ok.
- Part of profit maximizing is to abandon the device a couple of years after its release. There exist manufacturers which have never released a single security patch. Profit maximizing is very insecure for you as a consumer.
- Hardware doesn't expire as fast as software does. There are exceptions to this: power supplies and radio-frequency electronics take the greatest wear & tear on your 24/7-enabled device, so sometimes getting a new box is the best option.
- Your old hardware may be eligible for re-purposing with open-source options. Ask your local hacker for details.
- Open-source firmware gets upgrades for both features and security for any foreseeable future. This can happen because open-source firmware unifies all the various hardware under a single umbrella.
- Make a habit of upgrading your open-source firmware every now and then. New firmwares will be made available on a regular basis.
Personally, for the past 19 years I've only purchased Wi-Fi routers which have OpenWrt or DD-WRT support. Typically, after unboxing the thing, the factory firmware runs only for those precious minutes it takes to get a proper Linux running on it. This is what I recommend everybody else do too!
PS. Kudos to those manufacturers who skipped the part of creating and abandoning firmware of their own and instead license open-source solutions. There aren't many of you. Keep it up!
More USB Ports for iMac - HyperDrive 5-in-1 USB-C Hub
Sunday, September 3. 2023
Apple computers are known to have not-so-many ports. The design philosophy is for a machine to be self-sufficient and not need any extensions nor ports for extensions. Reality bites and eats any ideology for breakfast. I definitely need the extensions!
So, for my new iMac, I went shopping for more ports. Whenever I need to do this, I'll first check Hyper. Their products are known to be of the highest quality and well designed to meet the specific requirements of a Mac. To be clear: on an iMac there are four ports: two USB-C and two Thunderbolt 4. This ought to be enough for everybody, right? Nope. All of them are on the back of the computer. What if you need something with easy access?
This is what's in a HyperDrive for iMac box:
Those changeable covers are designed to match the colour of the iMac. This reminds me of 1998 when Nokia introduced the Xpress-on Covers for 5110:
Image courtesy of nokiaprojectdream.com.
This is how the USB-hub clamps into iMac:
Now I don't have to blindly feel around for the location of a port behind the computer. On my desk, the back of the iMac is definitely not visible nor accessible. Also, it is noteworthy that somebody might declare this setup "ugly" or complain about my choice of colour for the Xpress-on cover. The iMac is silver, but I have a yellow cover on the HyperDrive. That's how I roll!
New toys - Apple iMac
Monday, August 28. 2023
Summer here in Finland is over. It's windy and raining cats&dogs. Definitely beginning of autumn.
For me, summer is typically a time to do lots of other things than write blog posts. No exceptions this time. I did tinker around with computers some: a new rack server to replace the old Intel Atom, some USB probing via DLMS, some Python code, etc. etc. I may post something on those projects later.
And: I got a new iMac. Here are some pics:
Back in the day, there used to be 21.5" and 27" iMacs. Actually, my old one is the small one. Since Apple abandoned Intel CPUs, it's one-size-fits-all: only the 24" option is available. Also, the iMacs I have are VESA-mounted ones. There is no room on my desk.
Apple's Magic Mouse is for somebody else. I hate the thing! Good thing I still have a perfectly working MX Anywhere 2. On my other computer I (obviously) use an MX Anywhere 3 and have plans to upgrade it to a 3S.
Cabling in an iMac is not-so-standard:
The Ethernet RJ-45 socket is in the PSU, which has your standard IEC C5 "Mickey Mouse" connector. On the other end, there is a MagSafe. With Ethernet in it! It has to be some sort of USB-C / Thunderbolt 4 thingie with a really weird magnetic connector.
Transferring settings and data from an OS X to a modern macOS works like a charm. Nothing in Windows can touch that level of easiness. Also, now I have an OS that can do Time Machine backups to a Linux/Samba share. Nice! Maybe I should post something about that setup as well.
Next: Run the thing for 10+ years. Btw. the old one is for sale, it really has no monetary value, but it works and I don't need it anymore.
Nuvoton NCT6793D lm_sensors output
Monday, July 3. 2023
LM-Sensors is a set of libraries and tools for accessing your Linux server's motherboard sensors. See more @ https://github.com/lm-sensors/lm-sensors.
Have you ever wondered why, in Windows, it is tricky to get readings of your CPU fan's rotation speed or core temperatures of your fancy GPU without manufacturer utilities? Obviously vendors do provide all the possible readings in their own utilities, but for people who want to read, record and store the data for their own purposes, things get hairy. Nothing generic exists and, for some unknown reason, such an API isn't even planned.
In Linux, The One toolkit to use is LM-Sensors. On the kernel side, there exists The Linux Hardware Monitoring kernel API. For this stack to work, you also need a kernel module specific to your motherboard, providing the requested sensor information via this beautiful API. It's also worth noting that your PC's hardware will have multiple sensor data providers. An incomplete list would include: motherboard, CPU, GPU, SSD, PSU, etc.
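If you want to see what the kernel side exposes before any userland tooling gets involved, you can peek at the hwmon sysfs interface directly. A minimal sketch (which hwmon nodes and sensor files show up depends entirely on your hardware and loaded drivers):
from pathlib import Path

# Each loaded hwmon driver (CPU, NVMe, GPU, motherboard Super I/O, ...) gets its
# own /sys/class/hwmon/hwmonN directory with a "name" file and numbered sensors.
for hwmon in sorted(Path("/sys/class/hwmon").glob("hwmon*")):
    chip = (hwmon / "name").read_text().strip()
    print(f"== {chip} ({hwmon.name}) ==")
    for sensor in sorted(hwmon.glob("temp*_input")) + sorted(hwmon.glob("fan*_input")):
        try:
            value = int(sensor.read_text())
        except OSError:
            continue  # some inputs fail to read if nothing is wired to them
        if sensor.name.startswith("temp"):
            print(f"  {sensor.name}: {value / 1000:.1f} °C")  # reported in millidegrees Celsius
        else:
            print(f"  {sensor.name}: {value} RPM")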
Now that sensors-detect has found all your sensors, confirm that sensors outputs what you'd expect it to. In my case there was a major malfunction. On boot, the following happened when the system started sensord (in case you didn't know, kernel stuff can be read via dmesg):
systemd[1]: Starting lm_sensors.service - Hardware Monitoring Sensors...
kernel: nct6775: Enabling hardware monitor logical device mappings.
kernel: nct6775: Found NCT6793D or compatible chip at 0x2e:0x290
kernel: ACPI Warning: SystemIO range 0x0000000000000295-0x0000000000000296 conflicts with OpRegion 0x0000000000000290-0x0000000000000299 (_GPE.HWM) (20221020/utaddress-204)
kernel: ACPI: OSL: Resource conflict; ACPI support missing from driver?
systemd[1]: Finished lm_sensors.service - Hardware Monitoring Sensors.
This conflict resulted in no available mobo readings! NVMe, GPU and CPU cores were ok, but the part I was mostly looking for, fan RPMs and mobo temperatures to verify my system health, was missing. No such joy. Uff.
It seems this particular Linux kernel module has issues. Or, to state it another way: mobo manufacturers have trouble implementing the Nuvoton chip on their mobos. On the Gentoo forums, there is a helpful thread: [solved] nct6775 monitoring driver conflicts with ACPI
Disclaimer: For the ROG Maximus X Code mobo, adding acpi_enforce_resources=no to the kernel parameters is the correct solution. Results will vary depending on what mobo you have.
Such an ACPI setting can be enforced permanently by first querying which Linux kernel is in use (I run Fedora): grubby --info=$(grubby --default-index). The kernel arguments can then be updated with: grubby --args="acpi_enforce_resources=no" --update-kernel DEFAULT. After a reboot the fix is in effect, the ACPI warning is gone and mobo sensor data can be seen.
As a next step you'll need userland tooling to interpret the raw data into human-readable information with semantics. A few years back, I wrote about Improving Nuvoton NCT6776 lm_sensors output. It's mainly about bridging the flow of zeros and ones into something having meaning to humans. This is my LM-Sensors configuration for the ROG Maximus X Code:
chip "nct6793-isa-0290"
# 1. voltages
ignore in0
ignore in1
ignore in2
ignore in3
ignore in4
ignore in5
ignore in6
ignore in7
ignore in8
ignore in9
ignore in10
ignore in11
label in12 "Cold Bug Killer"
set in12_min 0.936
set in12_max 2.613
set in12_beep 1
label in13 "DMI"
set in13_min 0.550
set in13_max 2.016
set in13_beep 1
ignore in14
# 2. fans
label fan1 "Chassis fan1"
label fan2 "CPU fan"
ignore fan3
ignore fan4
label fan5 "Ext fan?"
# 3. temperatures
label temp1 "MoBo"
label temp2 "CPU"
set temp2_max 90
set temp2_beep 1
ignore temp3
ignore temp5
ignore temp6
ignore temp9
ignore temp10
ignore temp13
ignore temp14
ignore temp15
ignore temp16
ignore temp17
ignore temp18
# 4. other
set beep_enable 1
ignore intrusion0
ignore intrusion1
I'd like to credit Mr. Peter Sulyok for his work on the ASRock Z390 Taichi. This mobo happens to use the same Nuvoton NCT6793D chip for LPC/eSPI SI/O (I have no idea what those acronyms stand for, I just copy/pasted them from the chip data sheet). The configuration is in GitHub for everybody to see: https://github.com/petersulyok/asrock_z390_taichi
Also, I'd like to state my ignorance. After reading less than 500 pages of the NCT6793D data sheet, I have no idea what the following are:
- Cold Bug Killer voltage
- DMI voltage
- AUXTIN1, or exactly what temperature measurement it serves
- PECI Agent 0 temperature
- PECI Agent 0 Calibration temperature
Remember, I did mention semantics. From the sensors command output I can read a value; what it translates into, no idea! Luckily, some of the readings are easy to understand and interpret. As an example, fan RPMs are really easy to verify by removing a fan from its connector. Here is an excerpt from my mobo manual explaining the fan connectors:
With data quality taken care of and the output meaningful, the next step is to start recording data. In LM-Sensors, there is sensord for that. It is a system service taking a snapshot (you can define the frequency) and storing it for later use. I enrich the stored data points with system load averages; this enables me to estimate the relation between high temperatures and/or fan RPMs and how hard my system is working.
Finally, all data gathered into an RRDtool database can be easily visualized with rrdcgi into HTML + a set of PNG images to present a web page like this:
Nice!
Postman Making it Mandatory to Store All Information Into Postman's Servers
Monday, June 19. 2023
Postman. Not the guy delivering letters to your mailbox, the software which makes API-development much easier.
Earlier this year I was mighty shocked to learn Postman is tightening its grip on developers.
First they made using a Postman account mandatory. Any new installation on a workstation MUST log in to a Postman account. Earlier it was a strong suggestion, but there was a skip button. Anybody with an old installation can keep their way of working; whenever there is a need for a new installation, a surprise is imminent.
For the record: I do have a Postman account. On some of my machines, I do log in to it to collaborate with other developers. There are a number of machines where I do not.
The traditional way-of-working is called "Scratch Pad" in the new world order. This scratchpad will contain any locally stored information such as collections of requests, URL endpoints, variables and secrets. Not sharing such delicate information with anybody is my preferred approach.
Hint: For any logged in user, getting to your scratchpad is surprisingly easy. Access to it is well hidden, but available:
Settings sprocket-wheel will contain the option for Scratch Pad.
There is a change coming.
In September 2023 Postman will force storing all passwords, API keys, secrets and customers' endpoints on their servers. There is a blog post from May 2023: Announcing the new lightweight Postman API Client and sunsetting Scratch Pad.
It remains to be seen if this "lightweight" Postman is a piece of useless crap or not. Somehow I don't see storing the precious secrets my customers handed me for safekeeping with a 3rd party, who can do whatever they want with them, as an option. In some cases, contractually, I'm pretty sure I'm not even allowed to do that!
HP Color LaserJet custom X.509 certificate - Part 2
Sunday, June 18. 2023
Earlier this year, I wrote about how it was impossible to replace the X.509 certificate on an HP printer.
Let's state the good news first: Me and many other people suspected it was broken firmware. Indeed, that was the case. Now there is a more recent firmware which contains a fix for the issue.
Determining current firmware version
The version can be verified in the System --> Administration menu.
How Laserjet's on-line firmware check works
While tinkering on this, it became evident that my HP Laserjet unit isn't especially good at figuring out whether a new firmware is available or not. After some wiresharking, it was possible to determine the logic.
The main endpoint is at http://h19005.www1.hp.com/pub/laserjet/updates/me/manifest. Note: yes, it is HTTP, not HTTPS. Also, if you click the link, a file will be downloaded to your machine. This is because most browsers refuse to display octet-streams on screen.
The response contains a human-readable proprietary application/octet-stream with information about available Laserjet firmwares. Among the data is a mapping table:
20190819: 20211122
20200324: 20211122
20200603: 20211122
20201022: 20211122
20211122: 20230321
20211221: 20230321
20220224: 20230321
20220309: 20230321
20220411: 20230321
20220729: 20230321
My interpretation is as follows:
- Firmware versions 20190819, 20200324, 20200603 and 20201022 can be upgraded to version 20211122.
- Any firmware version from 20211122 up to the most recent one can be upgraded to 20230321.
As the latest firmware, 20230321, contains the fix for certificate uploading, there is an upgrade path from any firmware to the latest and brightest version.
If I wanted to, using this URL it would be possible to create a daily/weekly/monthly check to see if I should take a look at my printer. The printer's own mechanism isn't especially good, and I'd rather do the upgrading manually anyway.
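A rough sketch of what such a check could look like (the parsing is based purely on my interpretation of the excerpt above, and the current version would be whatever your printer's System menu reports):
import re
import requests

MANIFEST_URL = "http://h19005.www1.hp.com/pub/laserjet/updates/me/manifest"
CURRENT_FIRMWARE = "20211122"  # whatever the printer reports in System --> Administration

def available_upgrade(current: str) -> str | None:
    """Return the target version the manifest maps the current firmware to, if any."""
    manifest = requests.get(MANIFEST_URL, timeout=30).text
    # Mapping lines in the manifest look like "20211122: 20230321" (from-version: to-version).
    mapping = dict(re.findall(r"^(\d{8}):\s+(\d{8})\s*$", manifest, flags=re.MULTILINE))
    target = mapping.get(current)
    return target if target and target != current else None

if __name__ == "__main__":
    target = available_upgrade(CURRENT_FIRMWARE)
    print(f"Upgrade available: {target}" if target else "No newer firmware mapped for this version.")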
Certificate / PKCS #12
When the bug prevented upload of any new certificates, I did investigate pretty much all available options OpenSSL has for packing PKCS#12-files.
The fixed firmware does accept any reasonable PKCS#12 file. It will work with the full certificate chain in the file or without it. The Laserjet picks and uses only the certificate; any intermediate CA certs will be ignored.
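If you don't feel like wrestling with openssl pkcs12 -export, here is roughly the same packing done with Python's cryptography library (a sketch only: the file names and the export password are placeholders for whatever your setup uses):
from cryptography import x509
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.serialization import pkcs12

# Placeholder file names: the printer's private key, its certificate and an
# optional intermediate CA certificate (which the Laserjet ignores anyway).
key = serialization.load_pem_private_key(open("printer.key", "rb").read(), password=None)
cert = x509.load_pem_x509_certificate(open("printer.crt", "rb").read())
intermediate = x509.load_pem_x509_certificate(open("intermediate.crt", "rb").read())

p12_bytes = pkcs12.serialize_key_and_certificates(
    name=b"printer",
    key=key,
    cert=cert,
    cas=[intermediate],  # or None to leave the chain out entirely
    encryption_algorithm=serialization.BestAvailableEncryption(b"changeit"),
)
with open("printer.p12", "wb") as f:
    f.write(p12_bytes)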
Finally
It took HP half a year to fix this. Unacceptable!
Custom authentication with Postman Pre-request script
Friday, May 19. 2023
Postman. Not the guy delivering letters to your mailbox, the software which makes API-development much easier.
With APIs, there are a number of authentication mechanisms. All kinds of passwords and keys are used to make sure I'm allowed to make the request. Also, modern authentication mechanisms include OAuth 2.0, OpenID Connect and such. Postman does support OAuth 2.0 out-of-the-box. For the less informed, here is a brief description by Auth0:
OAuth 2.0, which stands for “Open Authorization”, is a standard designed to allow a website or application to access resources hosted by other web apps on behalf of a user.
For the record, OAuth 2.0 is a good protocol. Getting a JWT and making subsequent requests with it is, among many things, secure & sensible. If you didn't guess it by now, yes, there is a glitch. RFC 6749 states the following in subsection 4.3.2. Access Token Request:
The client makes a request to the token endpoint by adding the following parameters using the "application/x-www-form-urlencoded" format per Appendix B with a character encoding of UTF-8 in the HTTP request entity-body:
grant_type REQUIRED.
Value MUST be set to "password".
username REQUIRED.
The resource owner username.
password REQUIRED.
The resource owner password.
Since 1999, when RFC 2617 was written, we have kinda grown out of sending URL-encoded payloads as form responses. Today it is the 20s and JSON is the way to go. This is the glitch I mentioned; the computer says: NO! The spec doesn't state "you may use JSON if you prefer". I do. Many others do too. As Postman goes by the spec, they don't support JSON-encoded request bodies in OAuth 2.0 in their boxed authentication code. They do support all the bells and whistles OAuth 2.0 has, but only per spec.
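To make the difference concrete, here are both variants side by side outside of Postman (a Python sketch; the endpoint URLs and credentials are made up):
import requests

# Per RFC 6749, section 4.3.2: credentials go in a form-encoded body.
requests.post(
    "https://auth.example.com/oauth/token",
    data={"grant_type": "password", "username": "alice", "password": "hunter2"},
)  # sent as application/x-www-form-urlencoded

# What the APIs I work with actually expect: the same credentials as a JSON body.
requests.post(
    "https://auth.example.com/api/authenticate",
    json={"username": "alice", "password": "hunter2"},
)  # sent as application/json
Postman's built-in OAuth 2.0 helper only speaks the former; the pre-request script below takes care of the latter.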
Again, Postman is a good product and it does support freedom in the form of a Pre-request script. I chose to exercise my freedom and crafted the following plan:
- Write a couple of lines of JavaScript to be run as a Postman Pre-request script to handle authentication
- The script will be run for every request.
- It is possible to override this mechanism by writing a per-request pre-request script, in which case the collection-level one will be ignored.
- A per-request script can be as short as "return" to do a no-op.
- The required authentication credential, in the form of a JWT, is stored in Postman collection variables and is generally available for authenticating any request in the collection
- The pre-request script needs to be smart:
- It contains logic to determine whether any further authentication steps are required or whether an already existing token can be used, to save time and resources.
- We don't want to flood the authentication provider by requesting a new JWT every single time. They might get angry and kick us out if we're too greedy.
- The script reads the required username and password credentials from the Postman environment.
- NOTE! Credentials are not read from the Collection, but from the Environment.
- These credentials will be exchanged for a token. The token will be used for the actual authentication.
- The authentication protocol used is OAuth 2.0-ish, using a JSON body and expecting a JSON response.
- That's it. Done!
This is my Postman request-collection with the pre-request script:
Further, I have collection variables and the environment (the name I have for this environment is Production):
Here is one request using the pre-request -script from collection:
There isn't much setup needed. It just inherits the authorization setting from the collection. Simple!
The full script I have is as follows:
const authTokenCollectionVariableName = "AuthToken";
// Verify, an OAUTH access token exists
if (pm.collectionVariables.get(authTokenCollectionVariableName)) {
const tokStr = atob(pm.collectionVariables.get(authTokenCollectionVariableName).split('.')[1]);
const tok = JSON.parse(tokStr);
const expiry = new Date(tok.exp * 1000);
const now = new Date();
console.log(`Expiry: ${expiry}`);
if (expiry > now) {
console.log(`Auth - DEBUG: Has existing token. It is still valid. No need to renew.`);
return;
}
console.log(`Auth - DEBUG: Has existing token. Renewing expired one.`);
} else {
console.log(`Auth - DEBUG: No existing token found. Requesting one.`);
}
// Refresh the access token and set it into environment variable
pm.sendRequest({
url: pm.collectionVariables.get("Account_API_BaseURL") + "/api/authenticate",
method: 'POST',
header: {
'Accept': 'application/json',
'Content-Type': 'application/json'
},
body: {
mode: 'raw',
raw: JSON.stringify({
username: pm.environment.get("User"),
password: pm.environment.get("Password")
})
}
}, (error, response) => {
pm.expect(error).to.equal(null);
pm.expect(response).to.have.property('code', 200);
pm.collectionVariables.set(authTokenCollectionVariableName, response.json().accessToken);
console.log(`Auth - INFO: Saved token ok.`);
});
To repeat: This script will run only inside Postman as an embedded script. If you just whip up a random JavaScript environment, it most certainly will not have a pm object in it, and even if it does, you cannot do all those fancy things I'm doing there.
Now the problem is solved. My requests run as smoothly and effortlessly as they would with 100% per-RFC OAuth 2.0.
AMD Software: Adrenalin Edition - Error 202
Saturday, April 29. 2023
GPU drivers are updated often. The surprise comes from the fact that the update fails. What! Why! How is this possible?
Error 202 – AMD Software Installer Cannot Proceed as a Windows Update Is Currently in Progress
To make sure this wasn't a fluke, I did retry. A few times, actually. Nope. Didn't help.
I'm not alone with this. In the AMD community, there is a thread Error 202 but there's no pending Windows Update. Also, on my Windows, there was nothing pending:
As this was the first time I had a hiccup, I realized I knew nothing about the installer. On Windows, it always pays off to run setup.exe with the switch /?. This is what will be displayed:
Haa! Options. Going for: .\Setup.exe -install -boot -log inst.log
After the expected failure, the log reveals:
InstallMan::isDisplayDriverEligibleToInstall :6090 Display driver Eligible to Install
isWindowsServiceStopped :4102 Queried Windows Update Status: 4
pauseWindowsUpdate :5244 drvInst.exe is currently running
InstallMan::performMyState :5177 ERROR --- InstallMan -> Caught an AMD Exception. Error code to display to user: 202. Debug hint: "drvInst.exe is currently running"
No idea where the claim of drvInst.exe is currently running comes from. It isn't! Obviously something is reading the Windows Update status wrong. Let's see what happens if I bring the Windows Update service down with a simple PowerShell command: Stop-Service -Name wuauserv
Ta-daa! It works! Now the installation will complete.
Assembly of my new PC 2023
Friday, April 28. 2023
It's funny how fast you stop paying much attention to your car or sofa. First you spend a reasonable amount of money on it. Then it just exists as an everyday item. Then, slowly but surely, there is the urge: should I get a new one?
Exactly the same thing happens with your PC. Maybe one of those precious resources starts running out, SSD or RAM. Maybe the CPU or GPU starts feeling a bit slow. Then you remember how good everything was when you first got the thing. Yup, the urge is there. Must obtain new.
Roughly five years ago, I assembled my previous PC live on Twitch. As Larpdog doesn't stream anymore, no public appearances this time. I did assemble a new rig, though.
During those years, things have evolved. This happens all the time.
- EATX is a real thing, see Wikipedia for ATX info
- The fact that both cases and motherboards are sold in this size is a big deal.
- This is not a real standard!
- CPU is LGA1700
- Back in the day, I thought 1200 pins connecting my CPU to the motherboard was a lot.
- DDR-5
- Having slight overclock on DDR-4 made it pretty fast.
- Now the slowest DDR-5 begins way above overclocked DDR-4.
- PCIe is 5.0
- I don't own any extension cards supporting 5.0. Good thing 4.0 and 3.0 cards work ok.
- M.2 NVMe with 5.0 is generally available.
- Every single case has a plexi-glass side
- What! Why? For what purpose!!
- If I wanted to see into my PC, I'd also want to see into my own stomach and would install a plexiglass six-pack into myself.
- Well. I don't want either. Stupid idea!
- There are almost no USB-connectors in cases
- There used to be. Plenty of those connectors.
- Motherboard box contains coded messages in swag
I ordered the first case I found without a ridiculous see-through side. It is a Meshify 2 from the Swedish manufacturer Fractal. The entire case is steel mesh, like full of tiny holes. It's bad if you spill Pepsi into it, but it's good for ventilation.
This is what the Republic of Gamers mobo box has in it, some swag with a message:
The first thing I had to do was to put 25°07'29.5"N 121°28'15.6"E into Google Maps. It results in this link: https://goo.gl/maps/diiBipMcFfz5oBgz7
Ah! Those coordinates point to Asus headquarters in Taiwan. It was a nice gift. Also, it was a fun "spy game" trying to figure out what those numbers translate into. 👍
I/O performance of M.2 is satisfactory:
Those ATTO Benchmark results are very good! I'm happy with my investment.
This is a solid PC for my next 5-year period. I know I will upgrade the GPU in the next couple of years. At the moment, my existing ATI does a fine job running Dead Island 2.