Practical Internet Bad Neighborhoods with Postfix SMTPd
Sunday, April 26. 2020
Bad what?
A neighbourhood. On The Internet. There is a chance you live in one. It is not likely, but entirely possible. Most ISPs offering services to you are NOT bad ones.
This idea of a "bad neighbourhood" comes from the University of Twente. Back in 2013 Mr. Moura wrote his PhD thesis titled "Internet Bad Neighborhoods". Summary:
Of the 42,000 Internet Service Providers (ISPs) surveyed, just 20 were found to be responsible for nearly half of all the internet addresses that send spam. That is just one of the striking results of an extensive study that focused on “Bad Neighborhoods” on the internet (which sometimes correspond to certain geographical areas) that are the source of a great deal of spam, phishing or other undesirable activity.
In plain text for those allergic to scientific abstracts: based on the 2013 study, 0.05% of all Internet Service Providers are responsible for 50% of the trouble. (Some of you will say 99% of all studies are crap, but this one falls into the 1%.)
How can you detect a "bad neighbourhood"?
Oh, that's easy! Somebody from such a neighbourhood will send you spam. They'll send you links to malware. They port-scan you. They try to guess your passwords in SMTPd, SSHd or whatever their scanning revealed about you.
The theory is: that person "lives" (not necessarily IRL, but on the Internet) near other such criminals and wanna-be-criminals. Such people find safe harbour within ISPs or hosting providers with lax security and/or low morals. Offenders find other like-minded cohorts and commit their cyber crime from that "neighbourhood".
Ok, I know an IP-address of a spam sender, what next?
Today, most people get their email from Google or Microsoft or whatever cloud service. Those companies hire smart people to design, develop and run software attempting to separate bad emails from good emails. Since I run an email server of my own, I have the option (luxury!) of choosing what to accept. One acceptance criterion is the connecting client's IP-address.
Off-topic: Another acceptance criterion I have is standards compliance. Any valid email from Google or Microsoft or another cloud service provider will meet a bunch of standards, RFCs and de-facto practices. I can easily demand that a connecting client adhere to those. A ton of spammers don't care and are surprised when I just kick them out for not meeting my criteria. Obviously it won't stop them, but it will keep my mailbox clean.
On a connection to my SMTPd I'll get the connecting client's IP-address. With that I do two things:
- Check if it's already blacklisted by spam blockers (a Postfix snippet for this follows after this list)
- I'm mostly using Zen by Spamhaus. Read https://www.spamhaus.org/zen/ for more.
- Check if it's already blacklisted by me, based on an earlier spam that passed both the standards criteria and the spam blockers' blacklists
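In Postfix, the Zen check is a stock feature; in smtpd_client_restrictions it is typically a one-liner like the following (exact placement among your other restrictions matters):

reject_rbl_client zen.spamhaus.org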
For spam that passes both checks, a manual process is started. From the connecting client's IP-address, I'll first determine the AS-number (see: Autonomous system (Internet) @ Wikipedia). Second, from the AS-number I'll do a reverse listing of the CIDRs covered by that ASN.
Example illustration:
Credit: The above picture is borrowed from https://www.noction.com/knowledge-base/border-gateway-protocol-as-migration.
In the above picture, I'm Customer C in green. The CIDR assigned to me by my ISP B is 192.168.2.0/24. The spammer attempting to send me mail is at Customer D in red. As an example, let's assume the sending mail server has the IP-address 192.168.1.10. Doing an ASN-query with that IPv4 will result in AS 64499. Then I'll do a second ASN-query for all the networks assigned to AS 64499. That list will contain one entry: 192.168.1.0/24. Now I can block that entire CIDR as a "bad" one. Effectively I have found one "Internet Bad Neighborhood" to be added into my blocking list. In reality I don't know if all IP-addresses in that network are bad, but I can assume so and won't get punished for doing so.
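Those two queries are easy to script. Here is a minimal Python sketch of the idea, using the ipwhois-library discussed later in this post and the real-world spammer IP from the example further down; error handling omitted:

# Minimal sketch: IP -> AS-number -> list of CIDRs for that ASN
from ipwhois.net import Net
from ipwhois.asn import IPASN, ASNOrigin

ip = '185.162.126.236'                    # confirmed spam source
net = Net(ip)

asn_info = IPASN(net).lookup()            # step 1: IP -> ASN
asn = 'AS' + asn_info['asn']              # e.g. 'AS56378'

origin = ASNOrigin(net).lookup(asn=asn)   # step 2: ASN -> networks (from RADb)
for network in origin['nets']:
    print(network['cidr'], network['description'])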
The only real "punishment" for me is the cardinal sin of assuming. I do assume that everybody in that neighbourhood is as evil as the one trying to spam me. Unfortunately for the "good ones" in that network, blocking the entire network is very, very effective. The level of trouble originating from that neighbourhood comes down really fast.
Practical approach with Postfix
Doing all the queries and visiting the different databases manually is obviously tedious and time-consuming, and can be easily automated.
A tool I've used, developed and run for many years is now polished enough and made publicly available: Spammer block, or https://github.com/HQJaTu/spammer-block
In the README.md, I describe my Postfix configuration. I add a file /etc/postfix/client_checks.cidr which I maintain with the list of CIDRs of known spammers. The restriction goes into Postfix main.cf like this:
smtpd_client_restrictions =
permit_mynetworks
permit_sasl_authenticated
check_client_access cidr:/etc/postfix/client_checks.cidr
An example output of running ./spammer-block.py -i 185.162.126.236 going into client_checks.cidr would be:
# Confirmed spam from IP: 185.162.126.236
# AS56378 has following nets:
31.133.100.0/24 554 Go away spammer! # O.M.C.
31.133.103.0/24 554 Go away spammer! # O.M.C.
103.89.140.0/24 554 Go away spammer! # Nsof Networks Ltd (NSOFNETWORKSLTD-AP)
162.251.146.0/24 554 Go away spammer! # Cloud Web Manage (CLOUDWEBMANAGE)
185.162.125.0/24 554 Go away spammer! # O.M.C.
185.162.126.0/24 554 Go away spammer! # O.M.C.
That's it. No need to run postmap on the CIDR-file. Just remember to reload Postfix after changing the CIDR-list, to make sure the new list is in effect in the Postfix listener.
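On a systemd-based box that reload would be:

systemctl reload postfix

(A plain postfix reload does the same thing.)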
Problem with ipwhois-library
This is the reason I didn't publish the Spammer block -project a year ago. My Python-tool uses the ipwhois-library for the heavy lifting (or querying). It is a well-written, handy library doing just the thing I need in this project. However (yes, even this one has a however in it), ipwhois uses RADb (https://www.radb.net/) as its source of information. My experience is that when specifically talking about the list of CIDRs contained in an AS-number, RADb is not a perfect source of information. To fix this, I wrote an improvement to ipwhois adding ipinfo.io as an alternate ASN data source. A developer can make a choice and use the source suiting the project.
There is a catch: ipinfo.io doesn't allow free users more than 5 ASN-queries / day / IP-address. By creating an account and paying for their API-usage, that limitation can be lifted. Also, their information about the CIDRs in an AS-number is accurate. This is a prime example of why I love having options available.
Unfortunately for me and most users requiring ASN-queries, the author of ipwhois didn't approve my improvement. You may need to install my version from https://github.com/HQJaTu/ipwhois instead. Obviously, this situation is not ideal. Having a single version would be much better for everybody.
Finally
That's it. If you receive spam from an unknown source, run the script and add the results to your own block-list. Do that for a couple of weeks and the amount of spam you receive will drop. Drastically!
Beware! Sometimes even Google or Microsoft will send you spam. Don't block reliable providers.
I'm open for any of your comments or suggestions. Please, don't hesitate to send them.
An end is also a beginning, Part II
Thursday, April 9. 2020
Today I'm celebrating. A number of reasons to pop the cork from a bottle of champagne.
- I cut my corporate AmEx in half
- I'm having a sip from this "open source of water"
- I'm having a number of sips from the bubbly one and waiting for next Tuesday!
Adobe Flash - Really?
Tuesday, March 31. 2020
I was updating a Windows Server 2019:
Adobe Flash!! In 2020!! Really?
Hey Microsoft: Get rid of Flash Player already. Make it optional. I won't be needing it anytime soon.
JA3 - TLS fingerprinting with Wireshark
Sunday, February 9. 2020
TLS-fingerprinting. Yes. Most regular users don't even have a concept for it.
Any regular Jane User browsing around her favorite websites can be "easily" identified as a ... well ... regular user. That identification is done by a number of factors:
- Announced User-Agent of the HTTP-request
- Executed JavaScript-code: what's announced, and what's detected by running library code probing for available features
- Traffic patterns: how greedy the attempt to chomp a website is
- Behaviour: timing of interactions; machines tend to be faster than humans
When putting regular users aside and concentrating on the malicious ones, things start to get tricky. A bot can be written to emulate a human well enough to pass detection without raising any concerns. The only known effective method is to place a CAPTCHA in front of the user and make them pass it. There are known cases where a human will guide the bot through and then hand control back to the bot for it to continue its mischief. Given that, there's always room for new options for identifying requests.
There are a number of organizations and people (including Google, Edward Snowden and the US Government) encouraging people to encrypt everything. An example of this would be The HTTPS-Only Standard by the US Government. Early 2020 figures indicate that the entire Internet is ~60% HTTPS, and ~93% of requests arriving at Google are using HTTPS instead of non-encrypted HTTP. We're getting there; nearly everything is encrypted.
Short history of TLS fingerprinting
The thing with TLS-encryption is that the way the encryption is implemented can be fingerprinted. This fingerprint can be added to the list of factors used to determine if you are who you say you are. A really good presentation about the history of TLS/SSL fingerprinting can be found in DETECTION ENGINEERING: Passive TLS Fingerprinting Experience from adopting JA3, but to summarize it here:
- 2009: Apache module mod_sslhaf
- 2012: p0f, TCP/IP fingerprinting tool
- 2015: SquareLemon TLS fingerprinting
- and the good ones from Salesforce: 2017: JA3 for clients, and 2018: JA3S for servers
- a latecomer, 2019: Cisco Joy
What's missing from the list are the techniques used by various government agencies and proprietary systems. It is obvious to me they hire a bunch of super-smart individuals, and some of them MUST have found the fingerprinting and never told us about it. The point here is: there have always been methods to fingerprint your encryption, you just didn't know about it.
In this blog post I'll be focusing on JA3, as it is gaining a lot of traction in the threat-actor-detecting field of security. This post is not about the dirty details of JA3, but if you want, you can study how it works. The source code is at https://github.com/salesforce/ja3 and a video of two out of the three J.A.s presenting JA3 @ Shmoocon 2018 can be found at https://youtu.be/oprPu7UIEuk?t=407. Unlike the source code (which has some prerequisites), the video alone should give most of you an idea of how it works.
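To give a rough idea of what JA3 boils down to: five fields are picked from the TLS ClientHello, the values of each field are dash-separated, the five fields are joined with commas and the resulting string is MD5-hashed. A minimal Python sketch of that idea (extracting the fields from an actual capture is the hard part and omitted here; the real implementation also filters out GREASE values):

import hashlib

def ja3(tls_version, ciphers, extensions, curves, point_formats):
    # JA3 string: "TLSVersion,Ciphers,Extensions,EllipticCurves,PointFormats",
    # each field being a dash-separated list of decimal values.
    fields = [
        str(tls_version),
        "-".join(str(c) for c in ciphers),
        "-".join(str(e) for e in extensions),
        "-".join(str(c) for c in curves),
        "-".join(str(p) for p in point_formats),
    ]
    return hashlib.md5(",".join(fields).encode("ascii")).hexdigest()

The JA3-strings shown later in this post (771,4865-4867-...) are exactly such strings before hashing.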
Theory of detecting Threat Actors (TAs)
Remember, this fingerprinting originates from intrusion detection. It can be used for other purposes, but a lot of the background is in IDS. So, when studying a system for Indicators of Compromise, the theory is as follows:
A Threat Actor is able to change various aspects of the attack. At the bottom of the pyramid-of-pain there are things that are trivial to alter. You can change the hash of your attack tool (if running on a system trying to detect you), approach the target from another IP-address, or move your Command&Control server to a new domain. All of these changes would throw off any attempts to detect you. Food for thought: typical OS-commands won't change, unless upgrades are being applied. If a tool changes constantly, it's a tell.
When placing TLS-fingerprinting on the same pyramid:
Encrypting one's traffic obscures some of the handy points used to detect activity. So, we need to shift focus to the top of the pyramid. At the top there are the tools and the attacker's ways of working. Those are really difficult to change. That's exactly where TLS-fingerprinting steps in. What if your tools leave breadcrumbs? Enough for a vigilant detection-system to realize somebody is probing, or has already compromised the system. At minimum there is weird traffic whose fingerprint doesn't look good, warranting a closer look by a human.
JA3 on Wireshark
My beef with JA3 has (so far) been the fact that my favorite network analysis tool, Wireshark, doesn't support it. Now it does:
(I took a still frame from JA3 Shmoocon presentation video and pasted Wireshark logo on top of it)
There is a Wireshark dissector for JA3. It is written in Lua by an anonymous GitHub contributor (who chose not to divulge any public details of him/herself) and me: the original version didn't do JA3S, and I contributed that code to the project. The Wireshark plugin is available at https://github.com/fullylegit/ja3. To get it working in Wireshark, my recommendation is to install the Lua-plugin into your personal plugins-folder. On my Windows, the folder is %APPDATA%\Wireshark\plugins. Note: The sub-folder plugins\ didn't exist on my system. I created it and then placed the required .lua-files into it.
In action a JA3-fingerprint would look like this:
What we see in the Wireshark-capture is a TLS 1.2 client hello sent to a server. That particular client has a JA3-hash of "66918128f1b9b03303d77c6f2eefd128". There is a public JA3-database at https://ja3er.com/; if you do any fingerprinting, go check your matches there. Based on JA3er, the captured hash typically identifies a Windows Chrome 79. Not suspicious, that one.
Trying to evade JA3 fingerprinting
Security is always a chase. As we already learned, a lot of (bad?) people have known about this fingerprinting for many, many years. They may already be ahead; we don't really know yet.
A good primer on how to make your fingerprint match on demand is a post JA3/S Signatures and How to Avoid Them.
Let's try some manual tinkering! A simple

curl --tlsv1.2 https://ja3er.com/

will indicate de4c3d0f370ff1dc78eccd89307bbb28 as the JA3-hash; that's a known curl/7.6x. For example, a Windows Firefox 72 gets a JA3-hash of b20b44b18b853ef29ab773e921b03422, so it should be easy to mimic that, right?
Ignoring all other parts of JA3: Windows Firefox 72 will, on TLS 1.2, use an 18-cipher set in the prioritized order of:
- 0x1301/4865 TLS_AES_128_GCM_SHA256
- 0x1303/4867 TLS_CHACHA20_POLY1305_SHA256
- 0x1302/4866 TLS_AES_256_GCM_SHA384
- 0xc02b/49195 TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
- 0xc02f/49199 TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
- 0xcca9/52393 TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256
- 0xcca8/52392 TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256
- 0xc02c/49196 TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
- 0xc030/49200 TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
- 0xc00a/49162 TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA
- 0xc009/49161 TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA
- 0xc013/49171 TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA
- 0xc014/49172 TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
- 0x0033/51 TLS_DHE_RSA_WITH_AES_128_CBC_SHA
- 0x0039/57 TLS_DHE_RSA_WITH_AES_256_CBC_SHA
- 0x002f/47 TLS_RSA_WITH_AES_128_CBC_SHA
- 0x0035/53 TLS_RSA_WITH_AES_256_CBC_SHA
- 0x000a/10 TLS_RSA_WITH_3DES_EDE_CBC_SHA
Info:
The full list of all TLS/SSL cipher suites is at IANA: https://www.iana.org/assignments/tls-parameters/tls-parameters.xhtml#tls-parameters-4. Beware! The list is a living thing, and cipher suites are known to be deprecated for old TLS-versions. Also, new ones might pop up.
That 18-cipher list in the OpenSSL-format used by curl is:
- TLS_AES_128_GCM_SHA256
- TLS_CHACHA20_POLY1305_SHA256
- TLS_AES_256_GCM_SHA384
- ECDHE-ECDSA-AES128-GCM-SHA256
- ECDHE-RSA-AES128-GCM-SHA256
- ECDHE-ECDSA-CHACHA20-POLY1305
- ECDHE-RSA-CHACHA20-POLY1305
- ECDHE-ECDSA-AES256-GCM-SHA384
- ECDHE-RSA-AES256-GCM-SHA384
- ECDHE-ECDSA-AES256-SHA
- ECDHE-ECDSA-AES128-SHA
- ECDHE-RSA-AES128-SHA
- ECDHE-RSA-AES256-SHA
- DHE-RSA-AES128-SHA
- DHE-RSA-AES256-SHA
- AES128-SHA
- AES256-SHA
- DES-CBC3-SHA
Info:
My Linux curl is built to use OpenSSL for TLS, and OpenSSL has a different naming for ciphers than IANA and everybody else in the known universe.
Going for a curl with --ciphers and listing all of the above ciphers colon (:) separated will, rather surprisingly, NOT produce what we intended. We're expecting to get a full JA3 starting with:
771,4865-4867-4866-49195-49199-52393-52392-49196-49200-49162-49161-49171-49172-51-57-47-53-10,
but are getting a:
771,4866-4867-4865-4868-49195-49199-52393-52392-49196-49200-49162-49161-49171-49172-51-57-47-53-10-255,
instead.
The TLS-version of 771 matches, and the first three ciphers match, but in the WRONG order. Then curl appends a 4868 into the list. 4868, or 0x1304, is TLS_AES_128_CCM_SHA256. We didn't ask for it, but because OpenSSL knows better, it appends it to the list.
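The likely explanation: with OpenSSL 1.1.1, the TLS 1.3 cipher suites are configured through a separate API, so curl's --ciphers option only controls the TLS 1.2-and-below part of the list, and OpenSSL's own TLS 1.3 defaults get prepended in OpenSSL's preferred order. A new enough curl (7.61+, built against OpenSSL 1.1.1) has a separate knob for those, so something like this should at least get the TLS 1.3 suites into Firefox's order (the trailing ... stands for the rest of the 18-cipher list above):

curl --tlsv1.2 --tls13-ciphers TLS_AES_128_GCM_SHA256:TLS_CHACHA20_POLY1305_SHA256:TLS_AES_256_GCM_SHA384 --ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:... https://ja3er.com/

I haven't verified the resulting full JA3, and the trailing 255 (TLS_EMPTY_RENEGOTIATION_INFO_SCSV) in the actual list would still be OpenSSL's doing.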
Miserable failure!
Going forward: getting all the 18 ciphers right with OpenSSL can be done, it's just too much work for me to even attempt. And I haven't gotten to extensions or elliptic curve parameters yet! Uff... way too difficult to forge the fingerprint. I might have a go at that with Python. If I have time.
Finally
This is fun stuff!
Understanding how TLS works and why it can be used as a very good indicator of your chosen tool is critical to your success. Your Tor-client WILL have a distinct fingerprint. It will be different from your Chrome, iOS Safari, Python 3 or PowerShell. Anybody having access to your encrypted traffic (like your ISP or the server you're surfing to) will be able to detect what you're arriving with.
Embrace that fact!
Wangiri Call Scam - Missed Call from International Number
Thursday, January 30. 2020
This is what happened to me:
A missed call from Papua New Guinea. Well... I don't know anybody there, so they shouldn't be calling me.
It doesn't take too much investigation to realize IT'S A SCAM!
Examples: Have you been getting unexpected overseas calls? from Australia, and How to identify and report Wangiri fraud from Vodafone in the UK.
The Vodafone article says:
What’s Wangiri fraud? It’s receiving missed calls from international numbers you don’t recognise on either a mobile or a fixed-line phone.
The fraudsters generating the missed calls hope that their expensive international numbers will be called back
so that they can profit.
Looks like that scam has been going on for years. The reason is obvious: it's way too easy! Making automated calls and hanging up when the other side starts ringing costs the fraudster nothing. The seriously expensive number the unsuspecting victims call back will apparently play you some music, keeping you waiting for as long as you're willing to hang on. Every minute, the criminals get a slice of your money.
How is this possible? How can you change the number you're calling from? Well, easy! You can do it too: https://www.spooftel.com/
"SpoofTel offers you the ability to spoof caller ID and send SMS messages. You can change what someone sees on their call display when they receive a phone call to anything you like!"
The entire world is using the ancient telecommunications protocol SS7 (Signalling System No. 7). If you're really interested, read the Wikipedia article about it. There are a number of flaws in it, as it is entirely based on the assumption that only non-criminals have access to the global telecommunications network. That used to hold true at the time it was created, but after that? Not so much. And we have that unchangeable thingie to thank for this and multiple other scams and security flaws.
Data Visualization - Emotet banking trojan
Monday, January 27. 2020
Emotet is a nasty piece of malware. It has been around The Net for a number of years now and, despite all the efforts, it is still stealing money from unsuspecting victims who log into their online bank with their computers and suddenly lose all of their money to criminals.
Last month, I bumped into a "historical" Emotet-reference: a document containing the URLs of the malicious distribution endpoints for the documents and binaries used to spread the malware. It also contains IPv4-addresses of the Command & Control servers. There are hundreds of endpoints listed, and every single one I tested was already taken down by ISPs or appropriate government officials. Surprisingly, only 20% of the URLs were for WordPress. Given its popularity and all the security flaws, I kinda expected the percentage to match its market share: 35% of all the websites in the entire world run WordPress. If you're reading this in the future, I'd assume the percentage to be higher.
As a coding exercise, I analysed the listed endpoints for all three variants (or Epochs, as this malware's generations are called) of Emotet and created a heatmap of them. It would be really fun to get a list of all the infected computers and a list of those computers money was stolen from but, unfortunately for my curious mind, that data wasn't available.
So, no victims, only hijacked servers in this map:
The actual Google Maps application I wrote is at https://blog.hqcodeshop.fi/Emotet-map/map.html; go investigate it there.
This is a simple project, but if anybody wants to learn data visualization with the Google Maps JavaScript API, my project is at https://github.com/HQJaTu/emotet-malware-mapper. Note: You will need an API-key from Google for your own projects. My API-key is publicly available, but restricted. It won't work for you.
As for the analysis of the hijacked distribution points and C2-servers: there is a lot of heat in the obvious places, Europe and North America. But as you can see, there are lots of servers in use all around the globe. That should give everybody an idea of why fighting cybercrime is so difficult.
Update 30th Jan 2020:
Emotet seems to be picking up speed; there is a US CISA warning about Increased Emotet Malware Activity. Apparently it is currently the #1 malware in the world.
State of Ubisoft in 2020
Saturday, January 18. 2020
Is Ubisoft really going to pull a Nokia?
My obvious reference is to every Finn's favorite corporation, Nokia. They ruled the mobile phone industry by having the best network equipment and the best handhelds. Ultimately they managed (pun: unmanaged) to lose their status in both. For those who don't know all the dirty details, an excellent summary is, for example, Whatever Happened to Nokia? The Rise and Decline of a Giant.
Ubisoft's recent actions can be well summarized by what happened today (18th Jan 2020). I opened Uplay on my PC:
And started my recent favorite game, Ghost Recon Breakpoint:
Obviously, given how Ubisoft manages their business, I couldn't play it.
On Twitter, they're stating "We're aware of the issues affecting connectivity on multiple titles and the ability to access Cloud Save data on Uplay PC." Looks like that statement isn't entirely true. Other comments include "Its just not pc players, it's on the xbox too." My and other gamers' frustration is summarized well in this tweet: "When you finally get a Saturday to just play and grind it out there's an outage..AN OUTAGE ON A Saturday.... WTF!!!"
This isn't an especially rare incident. They were down on Ghost Recon Breakpoint's launch day, too. PC Gamer has an article about that. A quote:
"Ghost Recon Breakpoint is out today, but unfortunately you may not be able to play it right now.
Users trying to connect to the game are experiencing a few different errors: The servers are not available at this time."
Not only did they make a serious mess of the latest Ghost Recon by incorporating game mechanics from The Division: you will be looting constantly, and most of your "precious" loot is pure junk. You won't need it or want it, but new unnecessary items keep piling into your inventory. Unlike The Division (and Division 2), Ghost Recon Breakpoint doesn't offer you any tools or mechanisms to maintain your ever-filling inventory. Also, they added a character level to the game, something Ghost Recon Wildlands completely lacked. To me that's blasphemy! The Ghost Recon series isn't a role playing game!
When talking about other recent Ubisoft releases: the Far Cry series has always been very high on my list of favorite games. Then came Far Cry 5, and one of the worst episodes in entire video gaming history. In comparison, I think E.T. on the Atari 2600 is a good game! In Far Cry 5, there are three regions to free from oppressors (mostly by killing, destroying and blowing up their stuff). One of the regions is illegal-drugs related. Gameplay in that region includes hours and hours of dream sequences, which you cannot bypass! To make it worse, some of the utterly stupid dream sequences are "interactive": you have to press forward for 15 minutes to make the game progress. If you don't, nothing will happen (I tested and waited for an hour). What sane person designed that!!? What sane gamer would think that's fun?
I'm also hugely annoyed by the fact that in every one of the three regions, when the story progresses, you will be kidnapped. The kidnapping will happen and you cannot prevent it. Oh, why!? The kidnapping will happen even if you're in the middle of flying an aircraft. A person will come, while you're mid-air, and grab you! That kinda yanks you away from immersing in the game's universe. I know games aren't supposed to be realistic, but there have to be SOME rules and limits.
This string of mistakes and errors by Ubisoft didn't go unnoticed. A recent announcement, Ubisoft revamps editorial team to make its games more unique, was issued. A quote: "However, following the disappointing sales performance of 2019 titles The Division 2 and Ghost Recon Breakpoint". Well, that's what you end up with. Bad games will result in bad sales. The Division 2 didn't offer anything new over The Division. Instead of roaming around New York's winter, you're in Washington DC during summer. That's it. Many gamers were seriously let down by the lack of new elements in the game. And Ghost Recon Breakpoint they seriously messed up. Again a serious letdown, resulting in bad sales.
When talking about Ubisoft's "ability" to mismanage technical issues, they're world class in that! World class fuckups, that is. Given my expertise in managing applications on arrays of servers, I personally know what can go wrong with a set of game servers. I've been there, done it and even got the T-shirt. What's surprising here is their ability to make the same mistakes over and over again. Their SRE (or Site Reliability Engineering) is utter crap. Heads should roll there and everybody incompetent should be fired.
In the above announcement to make things better, they do mention the word "quality" a couple of times. Ghost Recon Breakpoint is an excellent example of non-quality. I simply cannot comprehend what the QA-people were thinking when they played the game on the hard difficulty level. To increase difficulty, all bots, turrets and non-player characters gain the ability to see through smoke, dust, trees and foliage. That's really discouraging for a gamer like me. I'm a huge fan of the Sniper Elite series, where hiding is a huge part of the game. Here you revert back to the Far Cry 2 game mode, where visual blocking simply doesn't happen: if the material you're hiding behind allows bullets to pass, you're toast! The second really serious issue is ladders. I have no idea why they didn't implement ladders the same way ALL the other Ubisoft titles do. In Breakpoint, it takes multiple attempts to use ladders, up or down. And this doesn't include the elevation bug introduced in the November 2019 update: if you traverse to a high enough location, your game will crash. Every. Single. Time. Given that this game's map has mountains and some missions are on mountain peaks, it would make sense to fix the dreaded altitude bug fast. But nooooooooh! As of today (18th Jan 2020), the bug has already celebrated its two-month birthday. No update on WHEN this will be fixed, no information, no fix, just crash.
Also, I won't even begin to describe why my Uplay account doesn't have its email address verified; apparently their email vendor cannot handle TLDs with more than three characters. By contacting Ubisoft support, I got instructions to use a Google Mail account for Uplay. I did that. I had gotten myself a .gaming-domain and registered that as my email. Doesn't work. They cannot send me any verification mails. As an example, I'd love to post this rant to the Ubisoft Forums. I cannot, as my email address, to which they can sell me games and send suggestions from support, cannot be verified.
To conclude: I won't be sending any of my money to Ubisoft. Not anymore.
Far Cry series, for me, is finished. I won't be eyeballing any more ridiculously long and stupid dream sequences.
The Division, been there, done it. Adding difficulty to the game by introducing huge amounts of almost indestructible enemies really won't cut it for me.
Ghost Recon... well... I'll be playing Wildlands. That's the last good one in that series.
Watch Dogs: a remake of Far Cry 3 into a modern urban world. Even the missions were the same. Watch Dogs 2 wasn't that bad, but still. I think I've seen that universe; it's not evolving anymore.
Ultimately, I won't be anticipating any of the upcoming Ubisoft releases. Ubisoft's ability to produce anything interesting has simply vanished.
Take my suggestion: Don't send any of your money to them either.
PS. Connectivity issues still persist. In three hours Ubisoft hasn't been able to fix the issue, nor provide any update on ETA.
iPhone USB-C fast charging
Thursday, January 16. 2020
Now that the EU is doing yet another round on the Common charger for mobile radio equipment 2019/2983(RSP), it inspired me to take a closer look at USB-C, or USB 3.1, cables.
One USB-C cable is for micro-USB and another is for Apple's Lightning connector. More details about the Apple-cable can be found in the support article About the Apple USB-C to Lightning Cable. They claim that some iPads/iPhones would go up to 96W on a compatible charger. Qualcomm Quick Charge @ Wikipedia has more details on that.
From left to right:
- Celly TCUSBC30WWH (30W)
- Exibel 38-9107 (18W), note: Exibel is a Clas Ohlson -brand
- A generic Huawei USB2.0 (10W)
To get a really slow rate of charging, an ancient 2.5W USB-charger could also have been measured. As an impatient person, I don't think I own such a device anymore, so I couldn't measure its slowness.
The battery wasn't completely drained. The thing with Li-Ion charging is to avoid heating the battery. Given the chemical reaction in a Li-Ion cell during charging, it is not possible to pump very high current into a cell while maintaining efficiency in both energy and time. A typical charging cycle follows a very careful formula, charging the cell more when it's drained and less when it's reaching full capacity.
My testing was around 20% capacity. Here are the measurements:
Note: Obviously my measurements are from the wall socket. Not all the energy goes to the iPhone, as there will be some loss in the charger itself.
- Huawei 10W charger measured 9W, which is nice!
- Exibel 18W charger measured 14W, which is ~20% less than expected
- Celly 30W charger measured 18W, which is ~40% less than expected
Conclusions:
An iPhone 8 won't be using the Apple-mentioned 96W, no matter what. The measured 18W is a lot more than USB 2.0 can do, meaning the actual charging will be a LOT faster on a near-empty battery. Note: it is not possible to drain a Li-Ion cell completely; your phone will shut down before that happens. If I'm happy to get 80% capacity into my iPhone, charging to that will happen in half the time I can get with a regular 10W charger. During charging, as the capacity increases, the rate of charging will decline, a lot. For the remaining 20% I won't benefit from a USB-C charger.
Additional note:
iPhone 8 won't sync data via USB-C. That's really weird. For data, a USB 2.0 Lightning cable is required. On my iPad, a USB-C cable works for both charging and data.
HDMI Capture with AVerMedia Live Gamer Portable 2 Plus
Wednesday, January 8. 2020
HDMI or High-Definition Multimedia Interface is the de-facto connector and signaling for almost everything at your home having picture and sound. As the data in the pipe is digital, it is relatively easy to take some of it for your own needs. Typical such needs would include streaming your activities to The Net or extracting the data to your computer.
As the devices required are relatively inexpensive, I got one. Here it is, an AVerMedia Live Gamer Portable 2 Plus (or GC513, as AVerMedia people would call it):
It is a relatively small unit with a couple of connectors on it:
The obvious ones are HDMI in and out. The micro-USB is for powering the unit and extracting the data out of it to a computer. If you want to do some live streaming of your fabulous gaming, there are 3.5mm jacks for headphones/mic. The last slot is for an SD-card if you want to make any recordings with the unit not connected to a computer.
Windows software is called RecCentral. A sample session with a Super NES Classic Edition (https://www.nintendo.com/super-nes-classic/) connected to it would look something like this:
The software is capable of producing high-quality JPEG stills, but mostly people are interested in the unit's capability of producing AVC (aka H.264, aka MPEG-4 Part 10) video from your precious HDMI input.
Just looking at the red triangle-shaped box isn't much fun, so I took the unit for a real-life test. I did some video/still capturing, which resulted in footage of some Super Metroid gameplay. The actual game is from 1994, and HDMI didn't exist at that time. But the modern SNES does have HDMI-output in it, making it an interesting target for this test.
With help of Adobe Premiere Pro (Audition, After Effects, Photoshop) software:
I edited a nice all-bosses video from the game. The resulting video has been uploaded into YouTube as Super Metroid - All bosses:
All artistic decisions seen in the video were made by my son, on whose YouTube channel the video is. In this video-project I merely acted as a video technician (obviously, my son's video and sound editing skills aren't quite there yet). Also, the actual gameplay on the video is by my son. IMHO his gameplay is excellent, given his age doesn't have two digits in it. Especially the last two bosses took a lot of practice, but he seems to have mastered them now.
Finally:
I totally recommend the AVerMedia Live Gamer Portable 2 Plus for your trivial H.264 capturing. It really won't cut it for any serious work, but packs enough punch for the simple things.
Also, if you enjoy the YouTube-video we made, give it your thumbs up. My son will appreciate it!
Running PHP from Plesk repo
Sunday, December 8. 2019
In 2013 I packaged PHP versions 5.4, 5.5 and 5.6 into RPMs suitable for installing and running in my Plesk-box. PHP is a programming language, more about that @ https://www.php.net/. Plesk is a web-hosting platform, more about that @ https://www.plesk.com/.
I chose to distribute my work freely (see the announcement at https://talk.plesk.com/threads/centos-6-4-php-versions-5-4-and-5-5.294084/), because there wasn't that much innovation there: I just enabled multiple PHP versions to co-exist on a single server, as I needed that feature myself. In 2015 Plesk decided to take my freely distributed packages and started distributing them as their own. They didn't even change the names! A sucker move, that.
However, I said it then and will say it now: that's just how open-source works. You take somebody else's hard work, bring something of your own and elevate it to the next level. Nothing wrong with that. However, a simple "Thanks!" would do it for me. Never got one from the big greedy corporation.
In this case, the faceless corpo brought in stability, continuity and sustained support. Something I would never even dream of providing. I'm a single man, a hobbyist. What they have is teams of paid professionals. They completed the parts I never needed and fixed the wrinkles I made. Given the high quality of my and their work, ultimately all my boxes have been running PHP from their repo ever since.
This summer, something changed.
My /etc/yum.repos.d/plesk-php.repo had something like this for years:
baseurl=http://autoinstall.plesk.com/PHP_7.2/dist-rpm-CentOS-$releasever-$basearch/
I was stuck at PHP 7.2.19, something that was released in May 2019. Six months had passed and I had no updates. On investigation, I bumped into https://docs.plesk.com/release-notes/obsidian/change-log/#php-191126. It states that on 26th November 2019, PHP 7.2.25 became available for Plesk. That's like a big WHAAAAAAT!
More investigation was needed. I actually got a fresh VM, downloaded the Plesk installer and started installing it to get the correct URL for the PHP repo. It seems to be:
baseurl=http://autoinstall.plesk.com/PHP73_17/dist-rpm-CentOS-$releasever-$basearch/
Ta-daa! Now I had PHP 7.3.12:
# rpm -q -i plesk-php73-cli
Name : plesk-php73-cli
Epoch : 1
Version : 7.3.12
Release : 1centos.8.191122.1343
Architecture: x86_64
Source RPM : plesk-php73-cli-7.3.12-1centos.8.191122.1343.src.rpm
Build Date : Fri 22 Nov 2019 01:43:45 AM EST
Build Host : bcos8x64.plesk.ru
Packager : Plesk <info@plesk.com>
Vendor : Plesk
Summary : Command-line interface for PHP
Description :
The php-cli package contains the command-line interface
executing PHP scripts, /usr/bin/php, and the CGI interface.
Actually, PHP 7.4 is also available; just replace PHP73_17 with PHP74_17 to get the desired version.
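For reference, a complete repo-file stanza could look something like this; the section name and the gpgcheck-setting are my guesses here, not anything documented by Plesk:

[plesk-php74]
name=Plesk PHP 7.4
baseurl=http://autoinstall.plesk.com/PHP74_17/dist-rpm-CentOS-$releasever-$basearch/
enabled=1
gpgcheck=0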
PS.
Most of you are super-happy with the Apache/PHP pair your distro vendor provides. If you're like me and ditched Apache, getting Nginx to run PHP requires some more effort. And if your requirements are to run a newer version of PHP than your vendor can provide, then you're really short on options. Getting tailored PHP from Plesk's repo and pairing that with your Nginx takes one stressor out.
Blog server upgrade to CentOS 8
Saturday, December 7. 2019
Since the inception of this blog back in January 2013, my weapon-of-choice has been CentOS Linux. Looking at the release chart @ https://en.wikipedia.org/wiki/CentOS#Latest_version_information, it becomes obvious this is the 3rd major version of CentOS I'm running my blog on. In 2013 only version 6 was available, I must have upgraded to version 7 during 2014, and now in 2019 I'm running on version 8. Given how RedHat and their organization(s) operate, the base Linux for my system is Fedora 28. See Fedora Project releases @ https://fedoraproject.org/wiki/Releases.
The only motivation for me to upgrade is technology. RHEL/CentOS almost never upgrade their component versions. They do back-port security patches, even when the authors of the originals have given up on their obsoleted stuff; RedHat does not give up. For people loving things the way they are, that's a good thing. For people like me, it's not that good.
Absolutely necessary things I had earlier, but lost, and now have again:
- HTTP/2
- For how and why this differs from the HTTP/1.1 everybody else is still using, dive into Mr. Curl's book http2 explained. It's freely available @ https://http2-explained.haxx.se/content/en/
- TLS 1.3
- TLS versions 1.0 and 1.1 have been obsoleted. That leaves TLS 1.2 as the almost-only viable secure protocol.
- Obvious disclaimer for TLS 1.3: as of writing, it is still considered experimental. In reality, not so much: Chrome and Firefox (among other platforms) support TLS 1.3 fully.
- Cloudflare's Head of Research Nick Sullivan is a known 1.3 enthusiast. Read his thoughts @ https://blog.cloudflare.com/rfc-8446-aka-tls-1-3/.
Other highlights:
- PHP 7.3
- My blog software runs on PHP. I upgraded from 7.2, but am too scared to go for 7.4 yet.
- Native dual-stack IPv6/IPv4 networking. This is courtesy of my service provider.
- TLS 1.2 configured to not support any CBC-ciphers, for details see Why did TLS 1.3 drop AES-CBC? as an example
- Inspiration for this taken from Cipherli.st and Security/Server Side TLS on Mozilla wiki.
- Apologies to anybody using IE 11 on Windows Phone 8.1, or Safari versions 6-8 on iOS 6-9/OS X 10.9 or 10.10. You won't see this text, as your devices/operating systems don't support my reasonably secure settings.
- For everybody else: Congratulations on having a decently secure device to do your Internet browsing with.
- tmux
- Terminal multiplexer, https://github.com/tmux/tmux/wiki
- Most of you just SSH into a server and are happy with it. I almost always run my sessions through something that will keep my work safe if a disconnection occurs. To my surprise, I keep bumping into sysadmins who either don't know about this or don't see it as a necessary approach.
- I've run GNU Screen for over 25 years now. Not anymore. Uff!
- nftables (https://wiki.nftables.org/), courtesy of RHEL 8 / CentOS 8
- the new packet classification framework that replaces the existing {ip,ip6,arp,eb}_tables infrastructure
- I've run IPchains / IPtables for 21 years now. Not anymore. Arf! (A small taste of the new syntax follows below.)
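For the curious, a minimal nftables ruleset (my own illustration, not my actual firewall; loadable with nft -f) to show the flavor of the new syntax:

table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;
        iif lo accept
        ct state established,related accept
        tcp dport { 22, 80, 443 } accept
    }
}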
The Qualys report on my blog now:
Nice!
Next up: CentOS Stream.
A new attempt to allow changing of software versions. This will effectively detach CentOS from RHEL and gear it towards Fedora. This enables CentOS to get newer software as a rolling-release Linux-distro, but keeps the changes not-so-aggressive.
I won't run this on my blog server yet. It is so new at this point, but I'll have it running on a devel-box.
Schneier's Six Lessons/Truisms on Internet Security
Monday, November 11. 2019
Mr. Bruce Schneier is one of the influencers I tend to follow on The Net. He is a sought-after keynote speaker and speaks on a regular basis. More on what he does on his site @ https://www.schneier.com/.
Quite a few of his keynotes, speeches and presentations are made publicly available, and I regularly tend to follow up on those. For the past couple of years he has given a keynote with roughly the same content. Here is what he wants you, an average Joe Internet-user, to know, in the form of lessons. To draw this with a crayon: the lessons are from Bruce Schneier, the comments are mine.
These are things that are true and are likely to be true for a while now. They are not going to change anytime soon.
Truism 1: Most software is poorly written and insecure
The market doesn't want to pay for secure software.
Good / fast / cheap - pick any two! The market has generally picked fast & cheap over good.
Poor software is full of bugs, and some bugs are vulnerabilities. So, all software contains vulnerabilities. You know this is true, because your computers are updated monthly with security patches.
Whoa! I'm a software engineer and I don't write my software poorly, and it's not insecure. Or ... that's what I claim. On second thought, that's how I want my world to be, but is it? I don't know!
Truism 2: Internet was never designed with security in mind
That might seem an odd thing to say in 2019. But back in the 1970s, 80s and 90s it was conventional wisdom.
Two basic reasons: 1) back then the Internet wasn't being used for anything important, 2) there were organizational constraints on who could access the Internet. For those two reasons the designers consciously made the decision not to add security into the fabric of the Internet.
We're still living with the effects of that decision. Whether it's the domain name system (comment: DNS), the Internet routing system (comment: BGP), the security of Internet packets (comment: IP), or the security of email (comment: SMTP) and email addresses.
This truism I know from my early days on The Net. We used completely insecure protocols like Telnet or FTP. Those were later replaced by SSH and HTTPS, but the fabric of the Internet is still insecure. It's the applications taking care of the security. This is by design and cannot be changed.
Obviously there are known attempts to retrofit fixes in. DNS has an extension called DNSSEC, IP has extensions called IPsec, mail transfer has a secure variant, SMTPS. Not a single one of those is fully in use. All of them are accepted, even widely used, but not every single user has them in their daily lives. We still have to support both the insecure and the secure Net.
Truism 3: You cannot constrain the functionality of a computer
A computer is a general purpose device. This is the difference between computers and the rest of the world.
No matter how hard you try, a (landline) telephone cannot be anything other than a telephone. (A mobile phone) is, of course, a computer that makes phone calls. It can do anything.
This has ramifications for us. The designers cannot anticipate every use condition, every function. It also means these devices can be upgraded with new functionality.
Malware is a functional upgrade. It's one you didn't want, it's one that you didn't ask for, it's one that you don't need, one given to you by somebody else.
Extensibility. That's a blessing and a curse at the same time. As an example, owners of Tesla electric cars get firmware upgrades like everything else in this world. Sometimes the car learns new tricks and things it can do, but it can also be crippled with malware like any other computer. Again, there are obvious attempts to exercise some level of control over what a computer can do at the operating system level. However, that layer is too far from the actual hardware. The hardware can still run malware, regardless of what security measures are taken at the higher levels of a computer.
Truism 4: Complexity is the worst enemy of security
The Internet is the most complex machine mankind has ever built, by a lot.
The defender occupies the interior position, defending a system. And the more complex the system is, the more potential attack points there are. The attacker has to find one way in; the defender has to defend all of them.
We're designing systems that are getting more complex faster than our ability to secure them.
Security is getting better, but complexity is growing faster, and we're losing ground as we try to keep up.
Attack is easier than defense across the board. Security testing is very hard.
As a software engineer designing complex systems, I know that to be true. A classic programmer joke is "this cannot affect that!", but due to some miraculous chain, never designed by anyone, everything seems to depend on everything else. Dependencies and requirements in a modern distributed system are so complex that not even the best humans can ever fully comprehend them. We tend to keep some sort of 80:20-rule in place: it's enough to understand 80% of the system, as it would take too much time to learn the rest. So, we don't understand our systems, and we don't understand the security of our systems. Security is done by plugging all the holes and hoping nothing happens. Unfortunately.
Truism 5: There are new vulnerabilities in the interconnections
Vulnerabilities in one thing, affect the other things.
2016: The Mirai botnet. Vulnerabilities in DVRs and CCTV cameras allowed hackers to build a DDoS system that was aimed at a domain name server, dropping about 20 to 30% of popular websites.
2013: Target Corporation, a USA retailer, attacked by someone who stole the credentials of their HVAC contractor.
2017: A casino in Las Vegas was hacked through their Internet-connected fish tank.
A vulnerability in Pretty Good Privacy (PGP): it was not actually a vulnerability in PGP, it was a vulnerability that arose because of the way PGP handled encryption and the way modern email programs handled embedded HTML. Those two together allowed an attacker to craft an email message, modifying an existing email message that he couldn't read, in such a way that when it was delivered to the recipient (the victim in this case), a copy of the plaintext was sent to the attacker's web server.
Such a thing happens quite often. Something is broken and it's nobody's fault. No single party can be pinpointed as responsible or guilty, but the fact remains: stuff gets broken because an interconnection exists. Getting the broken stuff fixed is hard or impossible.
Truism 6: Attacks always get better
They always get easier. They always get faster. They always get more powerful.
What counts as a secure password is constantly changing, just because the speed of password crackers keeps getting faster.
Attackers get smarter and adapt. When you have to engineer against a tornado, you don't have to worry about tornadoes adapting to whatever defensive measures you put in place; tornadoes don't get smarter. But attackers against ATMs, cars and everything else do.
Expertise flows downhill. Today's top-secret NSA programs become tomorrow's PhD theses and the next day's hacker tools.
Agreed. This phenomenon can be easily observed by everybody. Encryption always gets more bits, just because today's cell phone is capable of cracking last decade's passwords.
If you want to see The Man himself talking, go see Keynote by Mr. Bruce Schneier - CyCon 2018 or IBM Nordic Security Summit - Bruce Schneier | 2018, Stockholm. Good stuff there!
New Weather Station - Davis Vantage Vue - Part 2 of 2: Linux installation
Tuesday, November 5. 2019
This is part two of two of my Davis Vantage Vue weather station installation story. The previous part was about the hardware installation.
Datalogger expansion
The indoors console has a slot for expansion devices. I went for the RS-232 WeatherLink datalogger expansion:
RS232 Cabling
The datalogger has a very short cable and an RJ-11 connector at the other end. The obvious good thing is the common availability of telephone extension cords to get past the 5 cm cable length of the datalogger. A regular landline telephone typically has such RJ-11 connectors in it, and what I did was get an inexpensive extension cord of suitable length.
For computer connectivity, the datalogger box comes with a blue RJ-11 to RS232 converter. The four connected pins of the converter are as follows:
RS232 and Linux
Most computers today don't have an RS232-port in them. To overcome this, years ago I bought a quad-RS232-port USB-thingie:
If you look closely, Port 1 of the unit has a DIY RS232-connector attached into it. That's my Vantage Vue cable connected to the indoors console. Also, note the lack of the blue RJ-11 to RS232 converter unit. I reverse engineered the pins and soldered my cable directly to a D-9 connector to get the same result.
Now the hardware part is done. All the connectors are connected and attached to a PC.
Software
Half of the work is done. Next, some software is needed to access the data in the datalogger.
For Windows and macOS
For those not running Linux, there is the WeatherLink software, freely available at https://www.davisinstruments.com/product/weatherlink-computer-software/. It goes without saying that the software is useless without the appropriate hardware it gets its inputs from. I never even installed the software, as using a non-Linux was never an option for me. So, I have no idea if the software is good or not.
For Linux
As you might expect, when going to Linux, there are no commercial software options available. However, a number of open-source ones are. My personal choice is WeeWX; it's available at http://www.weewx.com/ and the source code is at https://github.com/weewx/weewx.
Install WeeWX:
- Download software:
git clone https://github.com/weewx/weewx.git
- In git-directory, create RPM-package:
make -f makefile rpm-package SIGN=0
- As root, install the newly created RPM-package:
rpm --install -h dist/weewx-3.9.2-1.rhel.noarch.rpm
- That's it!
Configure WeeWX:
- (dependency) Python 2 is a requirement. Given the Python 2 deprecation (see https://pythonclock.org/ for details), at the time of writing there is less than two months left of Python 2 lifetime, so this project really should get an upgrade to 3. No such joy yet.
- (dependency) pyserial-package:
pip install pyserial
- Run autoconfig:
wee_config --install --dist-config /etc/weewx/weewx.conf.dist --output /etc/weewx/weewx.conf
- In the array of questions, when being asked to choose a driver, go for Vantage (weewx.drivers.vantage)
- Inspect the resulting /etc/weewx/weewx.conf and edit if necessary:
- Section [Station]: week_start = 0
- Section [StdReport]: HTML_ROOT = /var/www/html/weewx
- (optional for Weather Underground users) Section [StdRESTful], subsection [[Wunderground]]: enter your station name and password, and set enable = true
- Configuring done!
Now you're ready (as root) to start the thing with a systemctl start weewx.
On a working system, you should get LOOP-records out of weewxd instantly. After a little while of gathering data, you should start having a set of HTML/CSS/PNG-files in /var/www/html/weewx. It's a good idea to set up a web server to publish those files for your own enjoyment. This is something you don't have to do, but I strongly advise enabling an HTTP-endpoint for your results. It will immensely help in determining whether your system works or not.
One reason I love having a weather station around is to publish my data to The Net. I've been tagging along with Weather Underground for years. There have been good years and really bad years, when Wunderground's servers have been misbehaving a lot. Now that IBM owns the thing, there have been some improvements. The most important thing is that there is somebody actually maintaining the system and making it run. Obviously, there have been some improvements in the service, too.
Aggregation problem
When I got my system stable and running, I realized my wind data was a flatline. In nature, it is almost impossible for it not to be windy for multiple days. I visually inspected the wind speed and direction gauges; they were working unobstructed. However, my console did not indicate any wind at all.
After multiple days of running, just as I was about to give up and RMA the thing back to Davis for a replacement, the console started working! That was totally unexpected. Why: 1) the system did not work, 2) the system started working without any actions from me. As the problem has not occurred since, it must have been some kind of newness issue.
What next
Now that everything is up and running, and stays that way, there's always something to improve. What I'm planning to do is pump the LOOP-records to Google BigQuery.
As a side note: when new information is available, it will be emitted by weewxd as LOOP: <the data here> into your system logs. However, that data isn't used as-is. After a period of time, your Vantage Vue will aggregate all those records into a time-slot record, and that is what is used as your measurement. Since those LOOP-records are simply discarded, I thought it might be a good idea to base some analytics on them. I happen to know Google BigQuery well from past projects; all I need is to write a suitable subsystem into WeeWX to pump the data into the correct place. Then it would be possible to do some analytics of my own on those records.
New Weather Station - Davis Vantage Vue - Part 1 of 2: Hardware installation
Monday, November 4. 2019
Last July, there was a severe hailstorm in my area. At the time, I wrote a blog post about it, because the storm broke my weather station. Since fighting with nature is futile, I'm blaming myself for relying on the La Crosse approach of using zipties to attach the components to a steel pole. Given my latitude, 61° north, during summer it can be +30° C and during winter -30° C. That huge variation in temperature, combined with all the sunlight directed at the nylon straps, will eventually make them weak. So, here is my mental note: next time use steel straps!
I did that when I got my new weather station. This post is about the hardware and getting it fastened securely. The next part is about getting the device talking to my Linux and pumping the results to The Net.
Davis Vantage Vue
After careful consideration, I went for a Vantage Vue from Davis Instruments.
Since this is a serious piece of hardware, in their ranking this 500,- € piece is a "beginner" model. In the Davis catalog, there are "real" models for enthusiasts costing thousands of $/€. I love my hardware, but not that much.
What's in the box
For any weather station, from the cheapest 10,- € junk to Davis, there's always an indoor unit and an outdoor unit.
In the box, there are paper manuals, although I went for the PDF from their site. The indoors unit is ready-to-go, but the outdoors unit has a number of sensors needing attaching. Even the necessary tools are included for those parts which require them. Most parts just click into place.
The Vantage Vue console is really nice. There are plenty of buttons for button-lovers and a serious antenna for free placement of the indoor unit. The announced range between units is 300 meters. I didn't test the range, but the signal level I'm getting in my setup is never below 98%. Again, a serious piece of hardware.
Installing sensors
Some assembly is required for the outdoors sensor unit. I began my assembly by attaching the replaceable lithium-ion battery.
The Li-Ion battery is charged from the solar cell of the sensor unit. I have reports from Finland that, given our lack of sunshine during winter, the sensor unit does survive our long and dark winter ok. I have no idea how long the battery will last, but I'm preparing to replace it after a couple of years.
Next up, the anemometer's wind vane:
The wind direction vane requires fastening a hex screw, but the tool is included in the box. The wind speed gauge just snaps into place. Unlike the wind vane, which hangs from the bottom of the unit, the speed gauge sits at the top of the unit, so gravity will take care of most of the fastening.
On the bottom of the sensor unit, there is a slot for the tipping-bucket seesaw. When a bucket has enough weight in it, in the form of rainwater, the seesaw tilts and empties the bucket. Rain is measured by counting how many seesaw tips occur (see the sketch below). Again, this is a real scientific measurement unit: there is an adjustment screw for the rain gauge, allowing you to calibrate it. On top of the rain gauge there is a scoop, and on the bottom of the scoop you'll need to attach a plug that keeps all the unwanted stuff out of the bucket:
Also note three details in the pic: the solar cell, the transmitter antenna and a bubble level helping you during installation of the sensor unit.
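To make the tip-counting arithmetic concrete, here is a toy sketch. The 0.2 mm per tip bucket size is my assumption for a metric unit; check your own spec, and remember the calibration screw changes the effective value:

    # Tipping-bucket arithmetic: rainfall is just tips times bucket size.
    MM_PER_TIP = 0.2  # assumed metric bucket volume; verify against your unit

    def rainfall_mm(tip_count: int) -> float:
        return tip_count * MM_PER_TIP

    print(rainfall_mm(57))  # 57 tips of the seesaw -> 11.4 mm of rain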
Outdoor sensor unit done
This is how the final result looks for me:
The idea during installation is to make sure the sensor unit's solar cell points south (180°), allowing it to capture maximum sunlight in the Northern Hemisphere. This also serves as the direction reference for the wind vane. Obviously, you can adjust the deviation from the console if needed; people living in the Southern Hemisphere definitely need to do that. Also, if you have something special in your install, the option is there. Remember, this is a serious piece of scientific measurement hardware.
Indoor unit
A running console looks like this:
On the screen, the top half is always the same. The buttons "temp", "hum", "wind", "rain" and "bar" switch the bottom half of the screen to display temperature, humidity, wind, rain or barometric pressure details.
As you can see, I took the picture on 22nd July (2019) at 16:12. It was a warm summer day, +26° C both indoors and outdoors. The day was calm and there was no wind. The moon on 22nd July was a waxing crescent. Given my brand-new installation, there were no measurements yet for outdoor humidity. The lack of humidity also results in the lack of a dew point.
People writing the console software have a sense of humor:
The console display indicates rain (an umbrella) and the text "Its raining cats'n'dogs". Hilarious!
As you can see, the indoor temperature is rather high, +28° C. In reality, that didn't reflect the truth. Later I realized that keeping the light on heats the console's indoor sensor, throwing it off by many degrees. Now I tend to keep the light off and press the "light" button only when it is dark.
Powering indoor unit
Most people would get the required three C-cell batteries, whip them in and forget about it... until the batteries run out and need to be replaced.
Davis planned an option for that. What you can do is skip the batteries, plug the accompanying 5 VDC 1 A transformer into a wall socket and never change a battery again. I love quality hardware: you have options to choose from.
Datalogging
The primary purpose of me owning and running a weather station is to extract the measurements to a Linux PC. El-cheapo weather stations offer no option for computer connectivity. Mid-range units have a simple USB connector (limited to a max. cable length of 5 meters), some even RS-232 connectivity (max. cable length up to 100 meters). With these heavy-hitter models from Davis Instruments, out of the box there is nothing.
To be absolutely clear: Davis Vantage Vue can be attached to a computer, but it is an extra option that is not included in the box.
Here is a picture of USB datalogger expansion unit for Vantage Vue:
Optimal installation in my house requires the console to be in a place where it's easily visible and accessible. But that's nowhere near a computer, so RS-232 is the way I'll go.
I'll cover the details of datalogger expansion, cabling and Linux setup in the next post.
Deprecated SHA-1 hashing in TLS
Sunday, November 3. 2019
This blog post is about the TLS protocol. It is a constantly evolving beast. Mastering TLS is absolutely necessary, as it is one of the most common and widely used components keeping our information secure in The Net.
December 2018: A lot of obsolete TLS-stuff deprecated
Nearly a year ago, Google, as the developer of the Chrome browser, deprecated SHA-1 hashes in TLS. In Chrome version 72 (December 2018) they also deprecated TLS 1.0 and TLS 1.1, among other things. See the Chrome 72 release notes for all the details.
Making TLS 1.2 mandatory actually isn't too bad, given the Qualys SSL Labs SSL Pulse stats:
96,0% of the servers tested (n=130,000) by the SSL Labs tester support TLS 1.2. So IMHO, with reasonable confidence, we're at the point where the remaining 4% can be ignored.
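If you want to check which side of that statistic a given server lands on, a quick probe is enough. A minimal sketch, assuming Python 3.7+; www.example.com is a placeholder for the host you care about:

    import socket
    import ssl

    HOST = "www.example.com"  # placeholder

    # Refuse anything below TLS 1.2; the handshake simply fails on old servers.
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2

    with socket.create_connection((HOST, 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            print(tls.version())  # e.g. 'TLSv1.2' or 'TLSv1.3'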
Transition
The change was properly announced early enough, and a lot of servers already supported TLS 1.2 at the time of the transition, so the actual deprecation went well. Not many people complained on The Net. Funnily enough, people (like me) make noise when things don't go their way, and (un)fortunately on The Net that noise gets carried far. The amount of noise typically doesn't correlate with anything, but the phrase where there is smoke, there's fire seems to hold. In this instance, deprecating old TLS versions and SHA-1 hashing made no smoke.
Aftermath in Chrome
Now that we have been in the post-SHA-1 world for a while, funny stuff starts happening. On rare occasions, this happens in Chrome:
In Chrome developer tools, the Security tab has a complaint: Connection - obsolete connection settings. The details state: The server signature uses SHA-1, which is obsolete. (Note this is different from the signature in the certificate.)
Seeing that warning is uncommon, but others besides me have bumped into it. An example: How to change signature algorithm to SHA-2 on IIS/Plesk
I think I'm well educated in the details of TLS, but this one baffled me and lots of people. Totally!
What's not a "server signature" hash in TLS?
I reached out to a number of people regarding this problem. 100% of them suggested checking my TLS cipher settings. In TLS, a cipher suite describes the algorithms and hashes used for encryption/decryption. The encrypted blocks are authenticated with a hash function.
Here is an example of a cipher suite used in loading a web page:
That cipher suite is code 0xC02F in the TLS protocol (aka ECDHE-RSA-AES128-GCM-SHA256). Lists of all available cipher suites can be found, for example, at https://testssl.sh/openssl-iana.mapping.html.
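That name-to-code mapping can also be looked up locally from OpenSSL's own cipher list. A small sketch; the 0x0300 prefix on OpenSSL's internal ids is an implementation detail, and I'm assuming the suite is enabled in your build's defaults:

    import ssl

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    for suite in ctx.get_ciphers():
        if suite["name"] == "ECDHE-RSA-AES128-GCM-SHA256":
            # Mask off OpenSSL's 0x0300 prefix to get the TLS code point.
            print(hex(suite["id"] & 0xFFFF))  # -> 0xc02f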
But as I already said, the obsoletion error is not about a SHA-1 hash in a cipher suite. There are a number of cipher suites with SHA-1 hashes in them, but I wasn't using one.
The second clue is in the warning text itself: Note this is different from the signature in the certificate. So this error wasn't about X.509 certificate hashing either. A certificate signature looks like this:
That is an example of the X.509 certificate used by this blog. It has a SHA-256 signature in it. Again: that is a certificate signature, not the server signature the complaint is about. SHA-1 hashes in certificates were obsoleted already in 2016; the SHA-1 obsoletion of 2018 is about the rest of TLS.
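By the way, checking which hash your certificate signature uses doesn't require a browser. A sketch, assuming the third-party cryptography package; www.example.com is again a placeholder:

    import socket
    import ssl

    from cryptography import x509  # pip install cryptography

    HOST = "www.example.com"  # placeholder

    ctx = ssl.create_default_context()
    with socket.create_connection((HOST, 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            der = tls.getpeercert(binary_form=True)  # leaf certificate in DER form

    cert = x509.load_der_x509_certificate(der)
    print(cert.signature_hash_algorithm.name)  # e.g. 'sha256'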
What is a "server signature" hash in TLS?
As I'm not alone with this, somebody else has asked this question on Stack Exchange: What's the signature structure in TLS server key exchange message? That's the power of Stack Exchange in action: some super-smart person has written a proper answer to the question. From the answer I learned that TLS 1.0 and TLS 1.1 used MD5 and/or SHA-1 signatures; the algorithms were hard-coded into the protocol. In TLS 1.2, the great minds writing the protocol spec decided to soft-code the chosen server signature algorithm, allowing secure future options to be added. Note: later in 2019, the IETF announced Deprecating MD5 and SHA-1 signature hashes in TLS 1.2, leaving the good ones still valid.
On the wire, a TLS server signature looks like this; note how SHA-1 is used, which triggers the warning in Chrome:
During the TLS connection handshake, the Server Key Exchange block specifies the signature algorithm used. That particular server loved signing with SHA-1, making Chrome complain about the chosen (obsolete) hash algorithm. Hard work finally paid off: I managed to isolate the problem.
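If you'd rather not stare at packet captures, openssl s_client prints the negotiated hash on its Peer signing digest line. A rough sketch wrapping that; I'm assuming OpenSSL 1.1.1 or newer on the PATH, since older versions don't print the line at all:

    import subprocess

    def peer_signing_digest(host: str, port: int = 443) -> str:
        proc = subprocess.run(
            ["openssl", "s_client", "-connect", f"{host}:{port}"],
            input=b"",  # close stdin so s_client exits right after the handshake
            capture_output=True,
            timeout=15,
        )
        output = (proc.stdout + proc.stderr).decode(errors="replace")
        for line in output.splitlines():
            if line.startswith("Peer signing digest"):
                return line.split(":", 1)[1].strip()  # e.g. 'SHA256' or 'SHA1'
        return "not reported"

    print(peer_signing_digest("www.example.com"))  # placeholder host

A server answering SHA1 here is exactly the case Chrome flags.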
Fixing server signature hash
The short version is: It really cannot be done!
There is no long version. I looked long and hard: neither Apache nor Nginx on Linux, nor IIS on Windows, has a setting for the server signature algorithm. Since I'm heavily into cloud computing and work with all kinds of load balancers doing TLS offloading, this hit me even harder: there I can barely affect the available TLS protocol versions and cipher suites, and changing the server signature is out of reach for everybody. After studying this, I still have no idea how the server signature algorithm is chosen in common web servers like Apache or Nginx; my best guess is that the underlying TLS library negotiates it from the client's offered signature algorithms on its own, with no server-side knob exposed.
So, you either have this working or you don't, and there is nothing you can do to fix it. Whoa!