Google Drive spam
Friday, April 2. 2021
A completely new type of spam has been flooding my mailbox. Ok, not flooding, but during the past week I've received 7 different ones. The general idea of this spam delivery method is for the spam to originate from Google. In detail, the operation works by either exploiting some innocent person's Google Account or creating a ton of brand-new Google Accounts to be used briefly and then thrown away. What the scammers do with the account is create a presentation on Google Drive. There is no content in the presentation, it is completely empty, and then they share the document with me. Ingenious!
The shared presentation looks like this (hint: it's completely blank):
The trick is in the comment of the share. When you add a new user to work on the same shared file, you can add your own message. These guys put their spam into it.
When the mail arrives, it would contain something like this:
This approach will very likely pass a lot of different types of spam-filtering. People work with shared Google Drive documents all the time as their daily business, and those share notifications are not spam; it's just day-to-day business for most.
Highlights from the mail headers:
Return-Path: <3FDxcYBAPAAcjvttlu0z-uvylws5kvjz.nvvnsl.jvt@docos.bounces.google.com>
Received-SPF: Pass (mailfrom) identity=mailfrom;
client-ip=209.85.166.198; helo=mail-il1-f198.google.com;
envelope-from=3fdxcybapaacjvttlu0z-uvylws5kvjz.nvvnsl.jvt@docos.bounces.google.com;
receiver=<UNKNOWN>
DKIM-Filter: OpenDKIM Filter v2.11.0 my-linux-box.example.com DF19A80A6D5
Authentication-Results: my-linux-box.example.com;
dkim=pass (2048-bit key) header.d=docs.google.com header.i=@docs.google.com header.b="JIWiIIIU"
Received: from mail-il1-f198.google.com (mail-il1-f198.google.com [209.85.166.198])
(using TLSv1.3 with cipher TLS_AES_128_GCM_SHA256 (128/128 bits)
key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
(No client certificate requested)
by my-linux-box.example.com (Postfix) with ESMTPS id DF19A80A6D5
for <me@example.com>; Thu, 25 Mar 2021 09:30:30 +0200 (EET)
Received: by mail-il1-f198.google.com with SMTP id o7so3481129ilt.5
for <me@example.com>; Thu, 25 Mar 2021 00:30:30 -0700 (PDT)
Reply-to: No Reply <p+noreply@docs.google.com>
Briefly, for those not fluent in RFC 821:
Nothing in the mail headers would indicate scam, fraud or even a whiff of spam. It's a fully legit, digitally signed (DKIM) email arriving via encrypted transport (TLS) from a Google-designated SMTP-server (SPF).
Given the trusted source of the mail, the only feasible way to detect this type of spam is via content analysis. Note: as an example of detecting and blocking unsolicited email, I've previously written my thoughts on how easy it is to block spam.
Well, until now it was. Darn!
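For the curious, here is a minimal sketch of what such content analysis could look like as a shell filter hooked into mail delivery (e.g. called from a procmail or maildrop rule). The keywords are made-up examples, not taken from the actual messages; tune them to whatever lands in your own mailbox:

#!/bin/bash
# Read the raw message from stdin and flag Google Docs/Drive share notifications
# whose content matches suspicious keywords. Exit 0 = treat as spam, exit 1 = deliver normally.
msg=$(cat)
if echo "$msg" | grep -qi 'docs\.google\.com' && \
   echo "$msg" | grep -Eqi 'bitcoin|investment|lottery|prize'; then
    exit 0
fi
exit 1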
Behind the scenes: Reality of running a blog - Story of a failure
Monday, March 22. 2021
... or any (un)social media activity.
IMHO the mentioned "social" media isn't. There are statistics and research to establish the un-social aspect of it. The dopamine loop in your brain keeps feeding regular doses to make a person's behaviour addicted to an activity and keep the person leeching for more material. This very effectively disconnects people from the real world and makes them dive deeper into the rabbit hole of (un)social media.
What most of the dopamine-dosed viewers of any published material keep ignoring is the tip-of-the-iceberg phenomenon. What I mean is that a random visitor gets to see something amazingly cool, a video or picture depicting something very impressive, and assumes that person's life consists of a series of such events. Also, humans tend to compare. What that random visitor does next is compare the amazing thing to his/her own "dull" personal life, which does not consist of such an imaginary sequence of wonderful events. Imaginary, because reality is always harsh. As most of the time we don't know the real story, it is possible for 15 seconds of video footage to take months of preparation, numerous failures, a reasonable amount of money and a lot of effort to happen.
As an example of harsh reality, here is the story of me trying to get a wonderful piece of tech blogging published.
I started tinkering with a Raspberry Pi 4B. That's something I had planned for a while; I ordered some parts and will most probably publish the actual success story later. The current status of the project is: planned, underway, but nowhere near finished.
What happened was that the Linux console output looked like this:
That's "interesting" at best. Broken, to say the least.
To debug this, I rebooted the Raspi into the previous Linux kernel, 5.8, and ta-daa! Everything was working again. Most of you are running Raspbian, which has Linux 5.4. As I have energy to burn on hating all of those crappy Debians and Ubuntus, my obvious choice is the Fedora Linux Workstation AArch64 build.
To clarify the naming: the ARM build of Fedora Linux is a community-driven effort; it is not run by Red Hat, Inc. nor by The Fedora Project.
Ok, enough name/org talk, back to the Raspi.
When graphics go that wrong in a Linux, I always disable the graphical Plymouth boot splash-screen. Running plymouth-set-default-theme details --rebuild-initrd
will do the trick of making the boot all-text. However, it did not fix the problem on my display. Next came a string of attempts at all kinds of kernel parameter tinkering: deactivating the frame buffer, learning all I could about KMS or Kernel Mode Setting, attempting to build Raspberry Pi's userland utilities to gain insight into the EDID information just to realize they'll never build on a 64-bit Linux, and failing with nomodeset and vga=0 as kernel parameters. No matter what I told the kernel, the display would fail. Every. Single. Time.
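As a side note, I later realized the EDID can be read without Raspberry Pi's userland utilities at all. A sketch, assuming the edid-decode package is installed and your HDMI connector shows up under a similar name:

# List the DRM connectors the kernel knows about, then decode the EDID of one.
# The connector name (card1-HDMI-A-1 here) varies per board and port.
ls /sys/class/drm/
edid-decode /sys/class/drm/card1-HDMI-A-1/edid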
It hit me quite late in the troubleshooting. While observing the sequence of the boot process, during the early stages of boot everything worked and the display was un-garbled. Then, later, when Fedora was starting system services, everything fell apart. Obviously something funny happened in that particular Linux build when the GPU driver for the Broadcom BCM2711 chip's VideoCore 4, aka vc4, was loaded. Creating the file /etc/modprobe.d/vc4-blacklist.conf
with the contents blacklist vc4
to prevent the VideoCore 4 driver from ever loading did solve the issue! Yay! Finally found the problem.
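For reference, the whole workaround condenses to roughly this. A sketch; the initramfs rebuild is my own assumption and only needed if the vc4 module gets pulled into early boot:

# Blacklist the vc4 GPU driver so it never loads.
echo 'blacklist vc4' > /etc/modprobe.d/vc4-blacklist.conf
# Rebuild the initramfs so the blacklist also applies during early boot.
dracut --force
reboot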
All of this took several hours, I'd say 4-5 hours of straight work. What happened next was surprising. Now that I had the problem isolated to the GPU driver, on IRC's #fedora-arm channel people said the vc4 HDMI output was a known problem and had already been fixed in Linux 5.11. Dumbfounded by this answer, I insisted that 5.10 was the latest version and that 5.11 wasn't available. They insisted back. A couple of hours before I asked, 5.11 had been deployed to the mirror sites for everybody to receive. This happened while I was investigating, failing and investigating some more.
dnf update
, reboot and pooof. The problem was gone!
There is no real story here. In pursuit of getting the thing fixed, it fixed itself with time. All I had to do was wait (which obviously I did not do). Failure after failure, but no juicy story on how to fix the HDMI output. In a typical scenario, this type of story would not get published. No sane person would shine any light on a failure and time wasted.
However, this is what most of us do with computers. Fail, retry and attempt to get results. No glory, just hard work.
Book club: Formula 1 All The Races - The First 1000
Sunday, March 21. 2021


This one is a simple directory or reference manual of first thousand F1 races. If you want your copy, go for Formula 1 All The Races - The First 1000 @ Veloce Publishing.
For a non-fan, the book is as dry as a phone book. For a true fan like me, there are short descriptions of the seasons and of every single grand prix driven. Personally, I love reading about the early days. TV had barely been invented and wasn't such a huge part of F1 as it is now. What I do is fix a reference point from the book and fill in the gaps by googling for additional information. This works well, as not all of the races are that interesting.
In short: Definitely not for everyone, but only for fans (pun intended).
Windows 10 Aero: shaker minimize all windows - disable
Friday, February 26. 2021
I'm not sure, but Windows 10 minimizing all other open windows when you drag a window back and forth has to be the worst feature.
Sometimes I love to arrange my windows properly to make work flow better in a multi-display environment. As the nerds at Microsoft read my adjusting of a window's position as "shaking", they decide to minimize all of my open windows. So far I've been looking hard both left and right, but found no actual use for this "feature" (bug). The gesture guessing is inaccurate at best. Also, IF (emphasis on if) I want to minimize all of my currently open windows, I'd love to clearly indicate my wish to do so. I hate these artificial stupidity systems which try to out-smart me. They never do.
If you're like me and want to hear nothing of it, there is no actual option for getting rid of the madness. The ONLY option seems to be editing the registry, so let's do that.
The registry file is very trivial:
Windows Registry Editor Version 5.00
[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced]
"DisallowShaking"=dword:00000001
Import that and you're done. For further info, read the article How to Enable or Disable Aero Shake in Windows 10 @ TenForums.
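If you'd rather not create a .reg file, the same value can be set from a Command Prompt. A sketch; you'll likely need to sign out and back in (or restart Explorer) for it to take effect:

rem Set DisallowShaking=1 for the current user (same effect as importing the .reg above).
reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced" /v DisallowShaking /t REG_DWORD /d 1 /f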
Camera tripod-connector thread
Sunday, January 31. 2021
I'm sure everybody who has ever held a camera knows there is a thread for a tripod somewhere on the bottom side of the camera. This is what my Canon EOS looks like from the belly side:
The reason all cameras have that specific threading is that manufacturers want to be ISO 1222:2010 compliant. Reading the Wikipedia article Tripod (photography), Screw thread, reveals the spec specifies a 1/4-20 UNC or 3/8-16 UNC thread. There is also the phrase "Most consumer cameras are fitted with 1/4-20 UNC threads."
As the mandatory Pulp Fiction reference goes, in Europe we have the metric system. Personally, I have no idea what the 1/4 inch Whitworth UNC thread mentioned in the Stackexchange article Why aren't tripod mounts metric? is. Following up on the history, to my surprise, that particular camera thread can be traced back at least to the year 1901. There are suggestions of circa 1870 and the name T. A. Edison is mentioned, but none of that can be corroborated.
Time warping back to today.
My work has been remote for many, many years. Given the global pandemic, everybody else is now doing the same. As every meeting is done over the net, I've made the choice to run with some serious hardware. If you're interested, there is a blog post about a microphone upgrade from 2019.
The camera is a Logitech StreamCam. What I rigged it onto is the cheapest microphone table stand I could order online. The one I have is a Millenium MA-2040 from Thomann. The price is an extremely cheap 20 €. Cheap it may be, but it does the job well.
It doesn't require much thinking to realize that the ISO-standard thread in the StreamCam is 1/4", but a microphone stand will have a 3/8" or 5/8" thread, so the fitting requires an adapter. The Thomann page states "With 5/8" threaded connector for recording studios and multimedia workstations". Logitech provides options in their package for camera setup. The typical use would be the monitor/laptop bracket, which makes the camera sit on top of your display. The second option is the 1/4" setup. To state the obvious: for that you'll need some sort of tripod/stand/thingie.
Here are pics from my solution:
When you go shopping for such adapter, don't do it like I did:
The above pic is proof that I seriously suck at non-metric threads. From left to right:
- female 1/2" to male 1/4" adapter (bronze), not working for my setup
- female 3/8" to male 1/4" adapter (black, short), not working for my setup
- female 5/8" to male 1/4" adapter (black, long), yes! the one I needed
- female 5/8" to male 1/4" adapter /w non-UNC thread (silver), not working for my setup
For those wondering:
Yes. I ordered four different adapters from four different stores before I managed to find the correct one.
Also, there is nothing wrong with my laptop's camera. I simply want to position the camera a bit higher than a regular laptop camera would be.
Python Windows: Pip requiring Build Tools for Visual Studio
Wednesday, January 13. 2021
Update 26th Feb 2023:
This information has been obsoleted during 2022. Information in this article regarding Python and Visual Studio Build Tools download location is inaccurate. For latest information, see my 2023 update.
Python is one of the most popular programming languages today. Every even remotely useful language depends on extensions, libraries and stuff already written by somebody else, which you'll need to get your code to do its thing.
In Python these external dependencies are installed with command pip
. Some of them are installed as eggs, some as wheels. About the latter, read What are wheels? for more information.
Then there is the third kind: the kind containing CPython extension code, needing a C compiler on your machine to build, with the install following once the binaries are done. What if your machine doesn't have a C compiler installed? Yup. Your pip install
will fail. This story is about that.
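As a side note, you can make pip fail fast instead of attempting a source build at all. A sketch, with some-package as a placeholder name; the flag refuses anything that doesn't ship a prebuilt wheel for your Python version and architecture:

# Only accept prebuilt wheels; error out instead of compiling from source.
pip install --only-binary=:all: some-package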
Duh, it failed
I was tinkering with some Python code and with some googling found a suitable library I wanted to take for a spin. As I had a newly re-installed Windows 10, pip install
failed on a dependency of the library I wanted:
building 'package.name.here' extension
error: Microsoft Visual C++ 14.0 is required. Get it with "Build Tools for Visual Studio": https://visualstudio.microsoft.com/downloads/
Yes, nasty error that. I recall seeing this before, but how did I solve it the last time? No recollection, nothing.
The link mentioned in the error message is obsolete. There is absolutely nothing useful available by visiting it. I'm guessing back in the day there used to be. Today, not so much.
What others have done to navigate around this
Jacky Tsang @ Medium: Day060 — fix “error: Microsoft Visual C++ 14.0 is required.” Nope. Fail.
Stackoverflow:
- How to install Visual C++ Build tools? Nope. Fail.
- Microsoft Visual C++ 14.0 is required (Unable to find vcvarsall.bat) Yesh!
This problem is spread far and wide! Lots of people suffer from the same thing, and lots of misleading answers have been spreading for multiple years back.
Visual Studio 2019 Build Tools
Page is at https://visualstudio.microsoft.com/visual-cpp-build-tools/
(The link is in the vast river of comments in the last Stackoverflow-question)
Click Download Build Tools, get your 2019 installer and ...
BANG! Nope, it won't work. The failure will indicate a missing include file:
c:\program files\python38\include\pyconfig.h(205): fatal error C1083: Cannot open include file: 'basetsd.h': No such file or directory
My solution with 2017 tools
The download link given to you by Microsoft's website is https://visualstudio.microsoft.com/thank-you-downloading-visual-studio/?sku=BuildTools&rel=16 (2019). As it happens, the 2019 installer contains the 2015 build tools, so we can assume the 2017 one does the same.
If you hand-edit the URL to contain release 15 (2017): https://visualstudio.microsoft.com/thank-you-downloading-visual-studio/?sku=BuildTools&rel=15
Yaaash! It works: pip
will end with a Successfully installed message.
Finally
Tricky subject, that. Looks like the years have brought so many changes nobody is able to keep good track of them all. What a mess! Uh.
New SSD for gaming PC - Samsung 970 EVO Plus
Tuesday, January 12. 2021
My gaming PC's Windows 10 started acting out; it wouldn't successfully run any updates. I tried a couple of tricks, but even a data-preserving re-install wouldn't fix the problem. It was time to sort the problem out, for good. The obvious sequence would be to re-install everything and get the system up and working properly again.
Doing such a radical thing wouldn't make any sense if I'd lose all of my precious data while doing it. So, I chose to reach for my wallet and go shopping for a new SSD. That way I could copy the files from the old drive without losing anything.
A Samsung 970 EVO Plus SSD. It was supposed to be much faster than my old one, which was pretty fast already.
Installation into Asus motherboard:
The M.2 slot is behind a block of aluminium acting as a heat sink. All cards, including the GPU, needed to be yanked off first, then three screws undone before getting a glimpse of the old 1 TiB M.2 SSD.
Note: In my MoBo, there is a second, non-heatsink slot for what I was about to do: transfer data from the old drive to the new one. I think it is meant to be used as a temporary thing, as the drive sticks out and isn't properly fastened.
Putting it all together, installing Windows 10 20H2 and running a benchmark:
Twice the performance! Any time you can boost your PC to 2X of anything, you'll be sure to notice it. And yup! Booting, starting applications or games. Oh, the speed.
Note: In a few years, the above benchmark numbers will seem very slow and obsolete. Before that happens, I'll be sure to enjoy the doubled speed.
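If you want a quick command-line sanity check of the new drive without installing a separate benchmark tool, Windows ships with winsat. A sketch, run from an elevated Command Prompt; the drive letter is an assumption:

rem Run the built-in disk assessment against drive C:.
winsat disk -drive c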
Let's Encrypt Transitioned to ISRG's Root
Sunday, January 10. 2021
My previous post @ Let's Encrypt Transitioning to ISRG's Root.
Let's Encrypt's announcement: Standing on Our Own Two Feet. Also note their support plan for old Android phones, Extending Android Device Compatibility for Let's Encrypt Certificates, to maintain their IdenTrust root as an alternative source.
To put it shortly, the plans made back in 2019 were refined and put into execution. The certificates issued by Let's E now chain to their own root CA certificate and no longer depend on a partner organization's root.
I'm getting certs from Let's E, why should I care?
If you're lucky, no need to.
If you're like me, stuff stops working.
An example: my OpenLDAP slapd
is configured to serve LDAPS (that's TLS-wrapped LDAP) on TCP/636 with a Let's E cert I'm getting from them every 60 days. Last week I ran the update, and my recent cert was issued by their new R3 intermediate CA, as X3 has been phased out.
To state the obvious, a lot of stuff in Linux depends on the ability to access users. Now that it was gone, the previously mentioned "lot of stuff" ceased to function.
The exact message I managed to dig out with slapd -d 3
was:
TLS trace: SSL_accept:TLSv1.3 early data
TLS trace: SSL_accept:error in TLSv1.3 early data
5ff9a0ec connection_get(23): got connid=1008
5ff9a0ec connection_read(23): checking for input on id=1008
TLS trace: SSL3 alert read:fatal:unknown CA
TLS trace: SSL_accept:error in error
TLS: can't accept: error:14094418:SSL routines:ssl3_read_bytes:tlsv1 alert unknown ca.
5ff9a0ec connection_read(23): TLS accept failure error=-1 id=1008, closing
Note: fatal:unknown CA
Ok, my stuff got broken, what now?
Luckily the fix is easy: go get the new R3 cert from https://letsencrypt.org/certs/lets-encrypt-r3.pem. To see all of the certs in Let's E's chain of trust, go to https://letsencrypt.org/certificates/.
When targeting specifically OpenLDAP and slapd
, I went to /etc/openldap/certs/
and symlinked lets-encrypt-r3.pem
from /etc/pki/tls/certs/
, which is the standard Fedora/CentOS/RedHat location for certificate PEM files.
After downloading the cert, the next thing was to get the hash of the downloaded R3 cert:
# openssl x509 -hash -noout -in /etc/pki/tls/certs/lets-encrypt-r3.pem
Which outputs the following (your result must be identical to this):
8d33f237
This hash needs to point to the actual PEM-file, so create a symlink:
# ln -s /etc/pki/tls/certs/lets-encrypt-r3.pem 8d33f237.0
That should be the fix. Next, systemctl restart slapd
and observe a functioning OpenLDAP server.
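If you want to double-check the TLS handshake itself before pointing any LDAP clients at the server, something like this should do. A sketch; the hostname is an example and I'm assuming the stock Fedora ca-certificates bundle already contains the ISRG root. A working setup reports Verify return code: 0 (ok):

# Verify the LDAPS handshake against the system CA bundle.
openssl s_client -connect my.ldap.server.example.com:636 \
    -CAfile /etc/pki/tls/certs/ca-bundle.crt </dev/null | grep 'Verify return code'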
After any changes to my LDAP configuration, I'll verify the result with a query similar to what my Linux system would do, starting with a direct LDAP search:
$ ldapsearch -H "ldaps://my.ldap.server.example.com/" -x \
-b ou=People,dc=example,dc=com \
"(cn=Jari Turkia)"
and the system wrapper for the above:
$ getent passwd jatu
Confirmed as working!
Done.
Final thoughts
I, and everybody else, had over a year to prepare for this. Did I? Nope. I had other, more "important" things to do instead. Fail!
Advent of Code 2020
Saturday, December 26. 2020
As I didn't have too many projects on my hands during this COVID-19-ridden year, I decided to go for the ultimate time-sink of AoC 2020.
For the curious, here are my stats:
----Part 1----- ----Part 2-----
Day Time Rank Time Rank
23 03:35:13 5086 - -
19 09:53:13 8934 09:53:26 5961
18 03:08:25 6521 04:04:25 6063
17 16:12:13 16057 16:12:23 15108
16 03:01:21 9251 03:52:53 6641
15 02:14:07 8224 02:16:33 6855
14 02:54:23 8940 03:52:58 7359
13 04:20:46 13423 06:15:57 7818
12 04:26:10 12452 04:55:22 10616
11 02:34:45 9354 03:22:14 8110
10 02:46:44 15237 04:17:26 10408
9 01:52:12 11970 02:13:22 11396
8 01:49:09 12056 03:06:07 12907
7 04:12:28 14520 04:12:38 11238
6 03:30:29 17152 03:46:03 16033
5 04:28:02 18252 05:15:07 19367
4 02:17:40 14478 02:38:02 10416
3 02:41:11 16008 02:53:35 15164
2 04:30:05 23597 04:37:14 21925
1 >24h 77025 >24h 72031
My weapon-of-choice was Python. I'm a fan of IntelliJ, so I wrote my code with that.
As you can see, I didn't complete all of them. It's mostly about the time required to complete the latter ones. As an example, day 19 took way too many hours of a Saturday, and I chose to opt out at that point. I did have time to complete the first part of day 23.
Days 1-9 were really trivial. The task in 7 was really badly worded, but manageable after a couple of failures. 10 was very tricky because of the optimization requirement: it is possible to populate an entire tree, but that is so heavy on resources and time-consuming that going for the math was the better way. Day 11 and anything after it was far from trivial. 13 was a huge math problem and it took a while to solve. 17 was a 3D game-of-life (a 2D GoL was already done in 11) and required really careful work. 18 involved solving reverse Polish notation calculations, which I considered rather easy. Then came 19, which involves parsing a set of rules; but given the references to other rules, the approach becomes tricky and tangled fast. I completed it and decided it would take too much of my daily hours to complete any subsequent tasks. However, for 23 I did spend a couple of minutes, just to realize my approach was badly optimized for any large set of data. At that point I gave up.
Initially I did enjoy the tasks, but when the complexity ramped up I was torn. I didn't want to skip them just because of the complexity, but on the other hand, writing throwaway code for hours wasn't the best use of my time while Christmas was nearing. At that point I didn't enjoy the tasks anymore; they were more like chores I "had" to do.
Next year, the AoC will probably be arranged as it has been every year since 2015. I may not participate in that one.
Merry Christmas 2020!
Friday, December 25. 2020
Merry Christmas!
Happy Holidays!
Hyvää Joulua!
Btw. as the maps by Jakub Marian are so cool, here is another one:
Full attribution to his work. Go see the originals at https://jakubmarian.com/merry-christmas-in-european-languages-map/ and https://jakubmarian.com/christmas-gift-bringers-of-europe/. Mr. Marian fully deserves all possible credit, both for the permission to use his material with attribution and for the really cool stuff he has made. Check it out yourself!
podman - Running containers in Fedora 31+
Tuesday, November 10. 2020
To clarify, I'll put the word here: Docker
Naming confusion
Next, I'll go and fail at explaining why Docker isn't Docker anymore. There is an article from 2017, OK, I give up. Is Docker now Moby? And what is LinuxKit?, trying to do the explaining, nearly with success. In that article, the word "docker" is presented a number of times in different contexts. The word "docker" might mean the company, Docker Inc., or the commercial technology with its open source packaging Docker CE or its paid version Docker EE. I'll add my own twist: there might be a command docker
in your Linux which may or may not have something to do with Docker Inc.'s product.
In short: what you and I both call Docker isn't anymore. It's Moby.
Example, in Fedora 33:
# rpm -q -f /usr/bin/docker
moby-engine-19.03.13-1.ce.git4484c46.fc33.x86_64
Translation: the command docker
, located in /usr/bin/
, is provided by an RPM package called moby-engine.
Further, running dnf info moby-engine
in Fedora 33:
Name : moby-engine
Version : 19.03.13
Release : 1.ce.git4484c46.fc33
Architecture : x86_64
Size : 158 M
Source : moby-engine-19.03.13-1.ce.git4484c46.fc33.src.rpm
Repository : @System
From repo : fedora
Summary : The open-source application container engine
URL : https://www.docker.com
License : ASL 2.0
Description : Docker is an open source project to build, ship and run any
: application as a lightweight container.
This moby-thingie is good old docker
after all!
Fedora confusion
Installing Docker into a Fedora 33 with dnf install docker
, making sure the daemon runs with systemctl start docker
, pulling an image and, in an attempt to debug what the container image about to be debugged has eaten, going with a classic:
docker run -it verycoolimagenamehere /bin/bash
... will blow up in your face! What!?
The error message you'll see states the following:
docker: Error response from daemon: OCI runtime create failed: this version of runc doesn't work on cgroups v2: unknown.
Uh. Ok?
- Docker-daemon returned an error.
- OCI runtime create failed (btw. What's an OCI runtime?)
- runc failed (btw. What's a runc?)
- doesn't work on cgroups v2 (btw. What's cgroups and what other versions exist than v2?)
Lot of questions. No answers.
Why is there Fedora confusion?
A google-search will reveal the following information: cgroups is the mechanism which makes Docker tick. There exist versions 1 and 2 of it.
The real nugget is the article Fedora 31 and Control Group v2 by Red Hat. I'm not going to copy/paste its contents here, but to put it briefly: in Fedora 31 a decision was made to fall forward to cgroups v2. However, there is a price for doing this, and part of it is broken backwards-compatibility: cgroups v1 and v2 cannot co-exist at the same time. Running v2 has lots of benefits, but the major drawback is that the specific software by Docker Inc. will not work with this newer tech and apparently will not start working in the near future.
Part of the confusion is that nobody else besides Fedora has the balls to do this. All the other major distros are still running cgroups v1. This will probably change at some point, but not soon. Whenever the most popular distros go for v2, all the others will follow suit. We've seen this happen with systemd and other similar advances.
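A quick way to check which cgroup version a system is actually running is to look at the filesystem type mounted at /sys/fs/cgroup. A sketch:

# Prints "cgroup2fs" on a cgroups v2 system (Fedora 31+), "tmpfs" on a v1 system.
stat -fc %T /sys/fs/cgroup/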
Mitigating Fedora confusion
When the Fedora people chose to fall forward, they had some backing for it. They didn't simply throw us users out of the proverbial airplane without a parachute. For Fedora 31 (and 32 and 33 and ...) there exists a software package that is a replacement for docker
. It is called podman
. The website is at https://podman.io/ and contains more details. The source code is at https://github.com/containers/podman and it carries the explanation: "Podman (the POD MANager): A tool for managing OCI containers and pods". Shortly: it's docker
by Red Hat.
Installing podman and running it feels like running Docker. Even the commands and their arguments match!
The example from earlier:
podman run -it verycoolimagenamehere /bin/bash
... will work! No errors! Expected Bash-prompt! Nice.
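For tooling and scripts that insist on the command being called docker, Fedora also ships, if I recall correctly, a podman-docker package that provides a docker shim pointing at podman; a plain shell alias does roughly the same:

# Option 1: the packaged shim (provides /usr/bin/docker wrapping podman).
dnf install podman-docker
# Option 2: a plain alias in your shell profile.
alias docker=podman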
Mitigating differences
There exists a lot of stuff in this world with the full expectation of the command docker
and its configuration ~/.docker/config.json
.
A good example is the Google Cloud Platform SDK accessing GCP Container Registry. (Somebody from the back row is yelling: AWS ECR! ... which I'll be skipping today. You'll have to figure out how aws ecr get-login-password
works by yourself.)
Having installed the GCP SDK, running the command gcloud auth configure-docker
(note! in Fedora 33: CLOUDSDK_PYTHON=python2 gcloud auth configure-docker
, to make sure Python 2.x is used) will modify the Docker config file with the appropriate settings. Podman won't read any of that! Uff. Doing a podman pull
or podman login
against GCR will politely ask for credentials. And nope, don't enter them. That's not a very secure way of going forward.
Throwing a little bit of GCP-magic here:
- (skip this, if you already logged in) Log into GCP: gcloud auth login
- Display logged-in GCP users with: gcloud auth list
- Display the (rather long) OAuth2 credential: gcloud auth print-access-token '<account-id-here!>'
- Glue this into a podman command:
podman login \
-u oauth2accesstoken \
-p "$(gcloud auth print-access-token '<account-id-here!>')" \
https://gcr.io
- Success: Login Succeeded!
Now you have successfully authenticated, and a podman pull
will work from your private container repo.
Finally
Lot of confusion.
Lot of questions.
Hopefully you'll find some answers to yours.
Getting rid of Flash from Windows - For good
Wednesday, October 28. 2020
Today, Microsoft released KB4577586, Update for the removal of Adobe Flash Player. This is wonderful news! I've been waiting for this moment for many, many years. Many organizations will cease to support Adobe Flash at the end of this year, and this release is an anticipated step on that path. Goodbye Flash!
The updates need to be manually downloaded from the Microsoft Update Catalog, link: https://www.catalog.update.microsoft.com/search.aspx?q=kb4577586. The reason for this is the earliness. If you're not as impatient as I am, the update will go through its natural flow and eventually be offered automatically by your Windows.
As you can see from the listing, you need to be super careful when picking the correct file to download. Also, to make things worse, there are some discrepancies in the descriptions:
To get the exact version of your Windows, run winver
. It will output something like this:
In that example, the package required is for Windows 10 release 1909. To download the correct package, you also need to know the processor architecture. Whether it's AMD-64, Intel-32 or ARM-64 cannot be determined from the above information. If you have no idea, just go with x64 for the processor architecture; that's a very likely match. Those not running x64 will know the answer without querying.
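If you do want to query it, one quick way is to check the PROCESSOR_ARCHITECTURE environment variable from a Command Prompt; a sketch:

rem Prints AMD64, x86 or ARM64 depending on your Windows.
echo %PROCESSOR_ARCHITECTURE%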
As a software engineer, I obviously wanted to extract all the required information programmatically. I automated the process of finding out the exact version your particular Windows is running by writing and publishing a helper script for PowerShell Core. If you are able to run PowerShell Core, the script is available at: https://gist.github.com/HQJaTu/640d0bb0b96215a9c5ce9807eccf3c19. The result will look something like this:
The result will differ on your machine, but that's the output on my Windows 10 Pro release 1909 (x64) build 18363
. A couple of other examples:
Windows 10 Pro release 2004 (x64) build 20241
on my Windows Insider preview
Windows 8.1 Pro with Media Center (x64) build 9600
on my Windows 8.1
If you managed to pick the correct .msu
file, running it will ask whether you want to proceed with the installation:
In case of a mismatch, the error will say something about that particular update not being applicable to your computer:
After installation, go verify that Flash was indeed removed. Running your regular Chrome (or Firefox) won't do the trick; they don't support Flash anyway. What you just did was remove Flash from Windows, aka. Internet Explorer. Go to https://www.whatismybrowser.com/detect/is-flash-installed and observe the results:
On a Windows where KB4577586 hasn't been successfully applied, the message will indicate the existence of Adobe Flash Player:
That's it. Flash-b-gone!
Tracking your location via Mobile network
Sunday, October 25. 2020
Privacy, especially yours/mine/everybody's, has been an ever-topical subject ever since The Internet began its commercial expansion in the beginning of the 90s. Being an important topic, it is generally thought everybody should have the right to privacy. In the EU, the latest regulation is called "on the protection of natural persons with regard to the processing of personal data and on the free movement of such data", or the General Data Protection Regulation. In California, USA, they call it the California Consumer Privacy Act.
Privacy protection gone wrong
Both of the above-mentioned regulations have good intentions. Forcing websites to bombard all of their users with stupid questions is a complete waste of bandwidth! Everybody is tired of seeing popups like this while surfing:
Somehow the good intention turned on itself. Literally nobody wants to make those choices before entering a site. There should be a setting in my browser, and all those sites should read that setting and act on it without bothering me.
Mobile phone tracking
Meanwhile, your cell service provider is using your data, and you won't be offered a set of checkboxes to tick.
As an example, Telia's anonymized location data helps the Finnish Government fight the coronavirus (April 2020). This corporation has a product called Crowd Insights. Anybody with a reasonable amount of money (the actual price of the service is not known) can purchase location data of actual persons moving around a city. There is a brief investigation by the Finnish Chancellor of Justice stating that the service is legal and won't divulge any protected data. The decision (unfortunately, only in Finnish), Paikannustietojen hyödyntäminen COVID-19 -epidemian hillinnässä ("Utilizing location data in curbing the COVID-19 epidemic"), states that the service's data is daily or hourly, and while a reasonably accurate location of a mobile device can be obtained, if that location data doesn't identify a person, it obviously isn't protected by any laws.
On the topic of COVID-19, Future of Privacy Forum has published an article A Closer Look at Location Data: Privacy and Pandemics, where they raise points from ethics and privacy perspective of such tracking. A good read, that!
Application of mobile movement tracking
Here is one. A newspaper article titled "Puhelinten sijaintitiedot paljastavat, kuinka ihmismassojen liikkuminen yöllisessä Helsingissä muuttui" ("Phone location data reveals how the movement of crowds in night-time Helsinki changed", https://www.hs.fi/kaupunki/art-2000006698336.html):
For non-Finnish-speaking readers: this article is about the movement of people in the center of Helsinki. As a reference point, there is Crowd Insights data from September 2019, a time before COVID-19. The movement data from the pandemic is from September 5th, October 10th and October 17th. To state the obvious: in 2019, between Saturday 9 pm and Sunday 9 am, people moved a lot. What's also visible is how this global pandemic changed that behaviour. In September 2020 there were no strict regulations for night clubs and bars, which is clearly visible in the data.
Anyway, this is the kind of data that you, walking around your hometown streets with your mobile in your pocket, can easily generate. Doing the same walk without a cell-network-connected mobile device wouldn't show in that data set.
What! Are they tracking my movements via my cell phone?
Short: Yes.
Long: Yes. Your cell network provider knows, every second, which cell tower every device in their network is connected to. They also know the exact location of that cell tower. This coarse location information may or may not be used by somebody.
Everybody knows the Hollywood movie cliché where a phone call is being traced and somebody throws out the phrase "Keep them talking longer, we haven't got the trace yet!". In reality, they'll know your location whenever your phone is turned on. It doesn't need an ongoing call or a message being received. This is how all cell networks are designed and that's how they have always operated. They know your coarse location at all times. How exact a "coarse" location is, depends. Who has access to the location information is protected by multiple laws. The point is: they have the information. At all times.
Example illustration from 3GPP spec TR 23.912 Technical report on Super-Charger:
I'm skipping most of the TLAs (Three-Letter Acronyms) in the pic, but the main concept is the car (with a phone in it) moving around the network. An HLR (Home Location Register) will always keep track of which BSS (note: I think it's called Radio Network Subsystem, RNS, in UMTS and LTE) the mobile device talks to. This BSS (or RNS) will send updates on any jumping between the serving cells.
To simplify this further: just replace the car with the phone in your pocket, and this fully applies to people bar-hopping in the center of Helsinki.
Database of cell towers
As the cell tower locations are the key component when pinpointing somebody's location, we need to know which cell towers exist and their exact locations. Unfortunately, telcos think that's a trade secret and won't release such information to the general public. At the same time, from our phones we can see the identifier of the cell tower the phone is connected to, plus some hints about neighbouring cells. I wrote about the iPhone field test mode a couple of years back. What a phone also has is a GPS, pinpointing the exact location where a particular cell tower and a couple of its friends are located. Add the knowledge that a phone typically connects to the tower with the best signal, and it is possible to apply some logic. Gather a couple more data points, and it is possible to calculate a coarse location for the cell tower your phone connects to.
Being an iPhone user, I'm sorry to say an iPhone is not technically suitable for such information gathering. Fortunately, Android, being much more open (to malware), is. The necessary interfaces exist in the Android system to query for cell tower information with an app like Tower Collector. With this kind of software it is possible to create records of cell tower information and send them to OpenCelliD to be further processed and distributed to other interested parties.
OpenCelliD website contains an interactive map:
The above example is from my home town of Lappeenranta, Finland. What it depicts is the approximate location of an LTE cell tower having the following attributes:
- MCC: 244
- MNC: 91
- LAC: 37322
- CID: 7984918
MCC and MNC indicate the telco. CellID Finder has the following information for Finland at https://cellidfinder.com/mcc-mnc#F:
MCC | MNC | Network | Operator or brand name | Status |
244 | 3 | DNA Oy | DNA | Operational |
244 | 5 | Elisa Oyj | Elisa | Operational |
244 | 9 | Finnet Group | Finnet | Operational |
244 | 10 | TDC Oy | TDC | Operational |
244 | 12 | DNA Oy | DNA | Operational |
244 | 14 | Alands Mobiltelefon AB | AMT | Operational |
244 | 15 | Samk student network | Samk | Operational |
244 | 21 | Saunalahti | Saunalahti | Operational |
244 | 29 | Scnl Truphone | | Operational |
244 | 91 | TeliaSonera Finland Oyj | Sonera | Operational |
What a LAC (Location Area Code) and CID indicate cannot be decoded without a database like OpenCelliD. The Wikipedia article GSM Cell ID gives some hints about LAC and CID. The page also lists other databases you may want to take a look at.
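For scripted lookups, OpenCelliD also offers an HTTP API. The sketch below is based on my reading of their docs, so treat the endpoint and parameter names as assumptions and check their API reference before relying on it; you'll also need a (free) API key:

# Look up the approximate location of the cell shown above (YOUR_API_KEY is a placeholder).
curl 'https://opencellid.org/cell/get?key=YOUR_API_KEY&mcc=244&mnc=91&lac=37322&cellid=7984918&format=json'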
COVID-19 tracking
Apple and Google, being the manufacturers of the major operating systems for mobile devices, combined forces and created Exposure Notifications. This technology does NOT utilize cell towers nor GPS. It works only on Bluetooth LE.
As mentioned in appropriate Wikipedia article, the protocol is called Decentralized Privacy-Preserving Proximity Tracing (or DP-3T for short).
Finally
The key takeaway from all this is:
The location of your mobile device is always known.
Your location, your movements between locations and the timestamps of when you did the moving (or not moving) are actively being used to track every one of us. That's because the technology in mobile networks requires that information.
Whether this information is shared with somebody else is a completely different story.
Mountain biking in Lappeenranta /w GoPro
Friday, October 23. 2020
To test my new GoPro, I published a track of some bicycling to Jälki.fi.
GPS-track is at https://jalki.fi/routes/4070-tyyskan-rantareitti-2020-09-24.
4K video is at https://youtu.be/TUIbstiFisE.
Advance-fee scam - 2.0 upgrade /w Bitcoin
Thursday, October 22. 2020
From Wikipedia https://en.wikipedia.org/wiki/Advance-fee_scam:
An advance-fee scam is a form of fraud and one of the most common types of confidence tricks. The scam typically involves promising the victim a significant share of a large sum of money, in return for a small up-front payment, which the fraudster requires in order to obtain the large sum.
Any Internet user knows this beloved scam is very common, actively used all the time, and has a number of aliases, including the Nigerian scam and the 419 scam.
I was on my computer minding my own business when a bot approached me on Discord (that chat-thing gamers use):
Basically, what they're saying is that I'd get ~7000 € worth of Bitcoin by going to their scam site, registering as a new user and applying the given promo code.
Ok. For those whose bullshit detector isn't dinging already, think about this for a second:
Why would a complete stranger offer you 7k€ in Discord!!
no
they
wouldn't.
Being interested in their scam, I went for it. I clicked the link to their website, registered a new account, followed the instructions and applied the promo code. Hey presto! I was rich!
I was the proud owner of 0.711 BTC. Serious money, that!
Further following the instructions:
Obviously I wanted to access my newly found riches. Those precious Bitcoins were calling my name, and I wanted to transfer them out to a wallet I controlled so I could use them for buying new and shiny things.
Not minding the 0.005 BTC transaction fee, this is what happens when you try to access your Bitcoin giveaway prize:
Now they're claiming my new account has been frozen because they think I'm a bot. Thawing my funds would be easy: simply transfer ~300 € worth of my own money to them! As I wanted to keep my hard-earned money, I did not send them the requested 0.03 BTC. I'm 100% sure they'd keep inventing "surprising" costs and other things requiring subsequent transfers of funds. I would never, ever be able to access the fake prize they awarded me.
Nice scam!