Open source software - root of all evil?
Saturday, July 22. 2017
In Harvard Business Review, a buffoon called Joshua Gans wrote an article Could Open-Source Code Make Our Y2K Fears Finally Come True?, which I recommend everyone read.
In case of TL;DR, I'll post a totally inappropriate summary of the article here, with words and sentences taken out-of-context to make my point. So here goes:
A recent workshop organized by the Ford and Sloan Foundations showed that Y2K-type concerns are far from over.
The base of all this is open-source code.
The Network Time Protocol, or NTP, was invented by David Mills. It's not stable. The entire NTP relies on the sole effort of 61-year-old Harlan Stenn, who works on a shoestring budget and has pretty much volunteered his own time for the last 30 years.
Also, Azer Koçulu "broke" the entire Internet (yes, all of it, the entire thing!) by removing the npm package left-pad from npm.
Yup. Sounds pretty bad. Based on that, a second Y2K is going to hit us any time now! Also, given the above proof, I think all software should be as closed-source as possible! NOOOOOOT!
Questions
Now, I'm asking you, yes you!, the reader of this blog post, to think about this for a bit.
To help your thinking, I'm posting some open questions to which you must try to come up with some kind of answer:
- Only open-source software has flaws. True or false?
- Is it possible for any flaw/flaws in open-source software to be repaired/fixed/mended by anybody else than the original author?
- If you use a piece of software to fulfill a specific task, is it possible to swap it for any suitable alternative performing the same or a similar task?
- Is it mandatory for all open-source software authors to be acknowledged in an appropriate Wikipedia-page to keep jealous pundits from ridiculing them in an article missing most of the facts?
- Must all open-source projects have a huge monetary budget?
- Is it mandatory for any self-respecting open-source project to fulfill any incoming user requests immediately?
- ... even if there are no known bugs and all the necessary functionality has existed in the project for past 30 years and all the new features are mostly nice-to-have?
- Does using any open-source software open your entire ICT infrastructure to a risk of catastrophic cascading failure when an angry author chooses to remove all his work from the known universe?
- Y2K was the most horrific event in the entire history of computer software. True or false?
- Is Mr. Joshua Gans an idiot, or is he just faking it?
(the answers are on the other side of this paper)
Moment - Tracking your mobile phone usage
Saturday, July 15. 2017
I mostly use my mobile phone as a pocket computer. When I get back home, I mostly just hang it on a wall:
However, that's not how the majority of people do it. They spend 24/7 with the phone talking, messaging or just watching TV on it. On the Tunnelbana, everybody except me is like this:
In today's world bullshit is just bullshit and hard facts are hard facts (which can be alleged to be fake news). So, I installed an app called Moment. See https://inthemoment.io/ for details about the app. The idea is that it tracks how much I use my phone, and then I can view my screen time.
Yesterday Moment gave me a notification informing me that it had something to tell me:
The text reads:
You log more screen time than 9% of the other people using Moment. You're averaging 12M per day.
... right. If my usage is in the lowest 9% of the entire user population and my average screen time is 12 minutes per day, that's not much! Looks like the app is designed only with heavy-duty phone users in mind.
For the record, my stats as seen by Moment are:
Not that bad. But then again, isn't that what every addict says?
Importing SVG-files into Google Drive - Illustrated Guide
Friday, July 14. 2017
I had a simple task at hand: to draw a flowchart of how information is exchanged in a distributed system. Since I didn't have my Microsoft Visio installed on that machine and the task was rather simple, I chose to try Google Drawings and learn it. Easy as pie, right? Nope.
One of the first things I wanted in my flowchart was a "regular Jane User", and I wanted the shape of a laptop computer to represent her. Well, Google's shape library didn't have a laptop, so why not go google for one, and ta-daa! Found a suitable one in a couple of mouse clicks at http://www.flaticon.com/free-icon/apple-laptop-computer_22791. A vector version was available in multiple formats, so I downloaded the SVG into my Google Drive and ...
An hour later I simply gave up. Nothing in G Suite knew how to use that. And I did attempt a lot of things, including converting the file into a .wmf as suggested by the StackExchange article Import SVG files to Google Docs as a drawing.
It. Simply. Does. Not. Work!
Given the vast amount of proof around, there must have been a time when it did work, but it doesn't anymore. However, another hour later I found the article How to import SVG (or any vector) into GoogleDocs on the Google Docs Help Forum, which claimed that .emf would work. And oh joy! It does!
So, this is my illustrated guide to importing SVG into Google Docs/Drive/Suite ... whatever they choose to be called today. I'm sure this information will eventually be as invalid as so many pages around The Web already are at the time of writing, but I'll leave my mark on The Net with this one.
In this guide I'm doing everything in Google Drive. In reality you have lots of options and could take a completely different path and still end up with the same result, but I'll try to keep this as simple as possible.
- Upload all the required .svg files into Google Drive
- Right-click a .svg file and choose Open with > CloudConvert
- (one-time-task) Accept CloudConvert OAuth request
- (if returning from CloudConvert account creation) right-click the .svg file again and choose Open with > CloudConvert
- At CloudConvert, select vector > emf, and make sure Save file to my Google Drive is checked. Then hit Start Conversion:
- You can convert any number of files in one run; when the conversion is done, close CloudConvert:
- Return to Google Drive. You will find the .emf version of your file in the same folder where the original .svg conversion was started:
- Right-click the .emf file, notice how you CAN open it in Google Drawings:
- In Google Drawings, copy the file: Ctrl-c in Windows or ⌘-c in Mac:
- If needed, you can paste (Ctrl-v in Windows or ⌘-v in Mac) the converted symbol into any other type of Google document, for example presentation:
- Looking at your Google Drive, there are now three versions of the same file (from left to right: .emf, Google Drawings and .svg). To get rid of the not-so-useful ones, you need to know which two to delete:
- To make sure you're keeping the Google Drawings one, right-click any file and select View Details:
- Keep the one saying Google Drawings, delete the rest:
- Done!
I don't know why the people at Google think this is a fun sequence to go through for a dozen or so symbols. They could easily do this conversion pretty much automatically for me.
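If you would rather skip the CloudConvert round trip, a local batch conversion is one alternative. This is only a sketch under the assumption that LibreOffice is installed; its headless mode can convert SVG to EMF from the command line:

```shell
# Batch-convert every .svg in the current directory to .emf using
# LibreOffice's headless mode (assumes LibreOffice is on the PATH).
if command -v libreoffice >/dev/null 2>&1; then
    mkdir -p converted
    for f in *.svg; do
        [ -e "$f" ] || continue            # no .svg files present
        libreoffice --headless --convert-to emf --outdir converted "$f"
    done
    result="done"
else
    result="libreoffice not installed"
fi
echo "$result"
```

The resulting .emf files can then be uploaded to Google Drive in one go, skipping the CloudConvert steps above.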
Btw. The above laptop icon needs to be attributed:
Icons made by Freepik from www.flaticon.com is licensed by CC 3.0 BY
Blog in Azure: IPv6 fail on some users
Thursday, July 13. 2017
This is what I got from an IPv6-user:
The TLS handshake succeeds, but after that all hell breaks loose: TCP packets arrive out of order and there are retransmissions. The above packet capture starts at second 23 and there is a failing retransmission at second 53, so obviously my website isn't providing much of a service.
There are perfectly working IPv6 users too; I have requests from 14 separate IPv6 addresses in the log, so it works perfectly for somebody. Ultimately I have no idea what's going on, or how to fix it. If you know, drop me a comment.
Blog transferred to Azure
Wednesday, July 12. 2017
I had a disk failure. Luckily the server has RAID, so no immediate data loss occurred. However, it is a RAID-5 setup, so system performance was impacted heavily.
Now I have transferred the entire system to Microsoft Azure and it seems to be running a little bit better. The project was a huge one and I had to tinker with it a lot. I guess I'll have to do a full disclosure about that later.
Hopefully this thing stays working this time.
Microsoft Azure: The remote server returned an error: (400) Bad Request - explained
Thursday, July 6. 2017
This article is specific to an attempt to upload a .vhd image. I did my tinkering in PowerShell, running the Add-AzureRmVhd command.
My attempt to upload went something like this (the backtick ` is just a line-continuation character in PowerShell):
Add-AzureRmVhd -ResourceGroupName blog `
-Destination https://blobs.blob.core.windows.net/vmimages/server.vhd `
-LocalFilePath '.\disk.vhd'
... and it failed. Exactly like many other people have experienced.
After a while of googling and reading docs, the entire problem/solution became apparent. I was reading the article Introduction to Microsoft Azure Storage, which has the following note in it:
Note
Blob storage accounts support only block and append blobs, and not page blobs.
Understanding the full gravity of that is beyond this blog post, but the idea is that all .vhd images need to be on a page blob. So, I had simply created my Azure Storage Account wrong. This is the correct way of doing it:
Make sure the Storage Account type is "General Purpose". To check what you have, go to properties:
Your storage "Kind" needs to be "Storage" for General Purpose. If you see "Blob Storage", you'll need to take another approach.
There are plenty of good and valid uses for a "Blob Storage" Storage Account, but running your VMs isn't one of them.
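For the record, the same check can be scripted. This is a sketch using the Azure CLI, with the resource names taken from the upload example above (adjust them to your own):

```shell
# Query the Kind of a Storage Account; "Storage" means General Purpose
# (supports page blobs), "BlobStorage" does not. Resource names are examples.
if command -v az >/dev/null 2>&1; then
    kind=$(az storage account show --resource-group blog --name blobs \
               --query kind --output tsv 2>/dev/null) \
        || kind="(query failed - check the Properties blade in the portal)"
else
    kind="(Azure CLI not installed - check the Properties blade in the portal)"
fi
echo "$kind"
```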
Mobile speeds - Summer 2017
Thursday, June 29. 2017
Somebody has got new toys. I was just doing a casual Speedtest on my mobile subscription to see if it would have any oomph in it. This is the result:
Holy cow! Nearly 150 Mbit/s download. On an iPhone 7! Whaat?
I was just having a burger in Stockholm; my subscription is Finnish, so all the traffic exits from a Finnish IP address. That makes the ping bad, but the download speed is through the roof.
Here are a couple of other measurements from Finland (thanks, guys, for these!):
Similar results in both cases.
I don't know what changed, but Finnish telcos have really amped it up. No complaints from me! Nice!
SixXS - Thank you for your service! Let there be native IPv6 for everybody
Monday, June 5. 2017
Ok, we've established earlier that IPv6 isn't getting traction. ISPs are simply too lazy and they don't care about their customers; only their profits matter. It's really bad for profit to make improvements to their systems and networks. Meanwhile, IANA ran out of IPv4 addresses, but ISPs don't care about that either: they stockpiled addresses and have plenty to go with.
To get IPv6 on my systems, I've been using the free-of-charge service SixXS for almost 10 years. They provide IPv6-over-IPv4 tunnels using IP protocol 41, or 6in4. The tunnels I've been using in Finland have been provided by a local ISP, DNA, again free of charge. During those years of service, I managed to accumulate almost 7000 ISK; that's 5 ISK per week per tunnel, if the tunnel is running without any problems.
On IPv6 day (6th of June) 2017 SixXS will shut down all services. See the sunset announcement for their rationale. They pretty much say that they've run tunnels for 17 years and don't want to do it anymore; ISPs should provide native IPv6 to every single customer they have. I totally agree with them. Still, I'd like to keep my tunnels running.
It is what it is; decisions have been made and it's not going to change. So, my sincere thanks go to SixXS and DNA, and especially to all the hard-working people in those organizations. Thank you for your service!
Handling /run with systemd, Part II
Sunday, June 4. 2017
It took me less than 4 years to finally revisit this subject. I'd like to thank all the people who commented on the original blog post. It looks like during those years SystemD (am I writing it wrong?) has been in constant evolution, with new features added.
This is what I'm running in production, showing only the [Service] part and omitting the [Unit] and [Install] parts, as they are unchanged:
[Service]
Type=forking
PrivateTmp=yes
User=nobody
Group=nobody
RuntimeDirectory=dhis
RuntimeDirectoryMode=0750
ExecStart=/usr/sbin/dhid -P /run/dhis/dhid.pid
PIDFile=/run/dhis/dhid.pid
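For clarity, here is roughly what those two RuntimeDirectory lines make systemd do at service start, expressed as plain shell. This is only a sketch: the real unit uses /run/dhis, systemd also chowns the directory to the unit's User=/Group=, and it removes the directory again when the service stops:

```shell
# Equivalent of RuntimeDirectory=dhis + RuntimeDirectoryMode=0750,
# i.e. the commands my RPM spec-file no longer needs to arrange.
RUNDIR="${RUNDIR:-$PWD/run-dhis}"     # the real unit uses /run/dhis
mkdir -p "$RUNDIR"
chmod 0750 "$RUNDIR"
# chown nobody:nobody "$RUNDIR"       # systemd does this too; needs root
mode=$(stat -c '%a' "$RUNDIR")
echo "$RUNDIR has mode $mode"
```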
This also makes my RPM spec-file simpler; I got to remove stuff there, because the temporary directory creation is taken care of. Finally, I think this one is done and ready!
If you want to download my source RPM-package, go here.
If you want to know more about RPM-specs, read Maximum RPM - Taking the RPM Package Manager to the Limit.
Mini-PC for a router
Tuesday, May 23. 2017
My network setup is almost done: we have 1) an Internet connection, 2) a wireless access point to pass some mobile device traffic through it, but something is missing: a router to tie it all together. In my post about my Swedish ISP, it became evident that running the router they threw at me for free wasn't an option. A second, completely viable option would be to run my new Wi-Fi AP with DD-WRT as a router. Totally doable. It has all the suitable ports, and DD-WRT is fully equipped to act as an internet router and all that.
Me being a total nerd, of course I wanted to build a real router. Suitable hardware would be something tiny with enough ports, packing a suitable CPU/RAM/SSD to run a real Linux. So, my choice is:
Qotom Barebone Mini PC Linux Ubuntu Wintel Nano ITX Celeron j1800/1900 Mini Computer Desktop PC Fanless x86 pc Industrial PC Computer. The marketing people at Qotom chose an appropriate name for their product, huh! If I were choosing the name, I'd go for anything with fewer than 19 words, remove "Ubuntu", "Wintel", the double "mini" and the double "PC", and then start cutting it down to something like: Barebone Fanless Mini ITX PC j1800/1900, or so. But that's just me.
So, at Aliexpress.com it looks like this:
As said, the form factor is mini-ITX, making it a tiny box on the desk. In real life it looks the same as the above marketing material (sorry, the pics aren't too good, no DSLR available):
I have no idea who would need 3 x DE-9 RS-232 ports, but there they are. My own spec was only to have at least two RJ-45 Ethernet ports. This puppy has them at 1 Gbit/s, adding HDMI and USB 3 on top of that, which are really handy.
As you can see, there isn't much space around the 17x17 cm mini-ITX board. The biggest thing in the pic is the black cooling block on top of the Celeron® J1900 CPU. A close-up of the internals:
From left to right:
- Broadcom BCM943224HMP Wi-Fi adapter
- 64 GiB Hoodisk mSATA SSD
- Realtek RTL8168evl dual Ethernet connectors
There is some airspace for cooling inside the box:
For PSU there is an external transformer pushing in 12 VDC at 3 amps:
There is a commonly used IEC C13 connector on the other end of the transformer to make it easier to plug the thing into your country's choice of wall socket.
I've been running the box for almost a month now, and I'm very pleased with it. The green power-on LED is way too bright; in a dark room it illuminates everything with green, but other than that I got the perfect box for a router. As these low-budget boxes are easily available, it's mostly about choosing the most suitable one. Apart from having more than one Ethernet RJ-45, one of my selection criteria was that the manufacturer didn't take any chances with the CPU cooling. It's easily the biggest block I could find among the competition. On top of that, the manufacturer delivered the unit from a UK warehouse quite rapidly. What I missed on the seller's page was the fact that deliveries were made from a tax-free zone; I had to pay Swedish VAT to DHL on top of the purchase price.
New toys: iPhone 7
Monday, May 22. 2017
My employer was kind enough to issue me new toys. Any proper nerd loves new toys, I know I do!
So, I got an upgrade for my old(ish) iPhone 6. To my rather big surprise, they're exactly the same thing. Here is a quiz for you iPhone fans. Tell me which one is the 6 and which one is the 7:
For the first three pics, I honestly don't know which one is which. The fourth one is easy: on the iPhone 6 there is a white line right below the camera lens, and the lens is bigger on the 7. The fifth one is a no-brainer: no 3.5 mm headphone jack exists on the new one.
Other than the missing jack, there isn't much to tell. The upgrade is almost a no-upgrade. Everything is the same, except I had to spend a couple of hours restoring the backup. Actually, there was a quirk: my old phone had the latest iOS, but the un-boxed one didn't, so I first had to go through the out-of-box experience and then upgrade it to the latest firmware. Only then was it possible to do the restore, and after the restore the thing could start installing my apps.
List of things that didn't transfer:
- Touch ID fingerprints
- Apple support HT204136 says that it should, but ...
- Apps from my employer's app store
- Keyboard dictionary
- Ringtone selections
- I have a dozen or so own-made ringtones for incoming callers.
- The ringtones did transfer ok, but the selection of which one is used didn't survive the transfer.
- Google Authenticator keys
- Read the article "Is there an easy way to transfer Google Authenticator codes to a new device?" for reasoning.
- ... maybe something else I forgot to mention here
Ultimately I have to say this wasn't worth it. I got the same phone without a headphone jack, and I had to spend a couple of hours of work to get a 32 GiB thing to the point I started at with a 64 GiB one. It's a shame there is no 64 GiB iPhone 7. They didn't let me get the 128 GiB one, because it's too expensive. The new A10 CPU should be more energy efficient, but in reality it doesn't show. Only after upgrading to iOS 10.3.2 was there some improvement in battery usage. Before that, I had to charge the phone more often than my old one.
The only positive thing is that now I have a fresh battery in my iPhone. I guess I should find some positive things to say about the new and improved "best ever" camera, but I simply cannot.
DocuSign hacked: E-mail addresses leaked
Tuesday, May 16. 2017
In my previous post, I was (not so) politely asking DocuSign to admit that e-mail addresses had leaked.
Finally, they did! They posted Update 5/16/2017 - Update on Malicious Campaign.
Q: What information was impacted?
A: It was a list of email addresses stored in a separate, non-core system used for service-related announcements.
That is something they should have done a week ago, but I guess we have to settle for that.
Btw. I got my hands on the payload, VB:Trojan.VBS.Downloader.ACR as named by F-Secure. As many have reported before, it's an MS Word document with a macro in it. The VBA thingie de-obfuscates a "picture" embedded into the .doc and injects it directly into memory to be executed. I really didn't want to waste a ton of time investigating the actual malware; it's bad, I know that without looking. It's just another reminder of how dangerous VBA macros are: they can call any Windows system call and do really complex hacks, like any real executable would.
DocuSign hacked: Officially any data leakage is denied
Sunday, May 14. 2017
On the 9th and 10th of May I got really weird spam alleging to originate from DocuSign. The attempt to lure me into hitting the link was lame and I didn't believe the communication to be valid at any point. There was no DKIM signature in the e-mail headers, and I know for a fact that real DocuSign e-mails have one. So, a quick analysis of the mail, originating from the USA and Canada, was yelling SPAM! instantly.
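Checking for the header is trivial. A sketch with a made-up message (note: a missing DKIM-Signature doesn't prove spam on its own, and a present one still needs cryptographic verification, e.g. with opendkim-testmsg):

```shell
# Write a made-up suspect message to disk and look for a DKIM-Signature
# header. Real DocuSign mail carries one; this sample does not.
cat > suspect.eml <<'EOF'
From: dse@docusign.net
Subject: Completed: Wire Transfer Instructions
Content-Type: text/html

Click here to view your document...
EOF
if grep -qi '^DKIM-Signature:' suspect.eml; then
    verdict="has DKIM header - verify the signature next"
else
    verdict="no DKIM header - treat as spam"
fi
echo "$verdict"
```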
The subject of the first two was "Completed: Wire Transfer Instructions for docusign Document Ready for Signature" and the third one "subpoena from WEX inc". All of them attempted to get me to download a file from already shut-down websites. All three domains used were 12-character .ru domains that looked like random strings to me. I don't know if the words had an actual meaning; they just looked random. The URLs had my e-mail address Base64-encoded in them to allow the perps to track incoming clicks for active e-mail addresses.
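The tracking trick is trivial to reproduce. A sketch with made-up names (the address, domain and URL path are all assumptions, not the actual spam URLs):

```shell
# The recipient address is plain Base64 inside the link; anyone clicking
# confirms to the sender that the address is live and read by a human.
addr="jane.user@example.com"
token=$(printf '%s' "$addr" | base64)
url="http://abcdefghijkl.ru/doc?id=$token"
echo "$url"
# The perps simply decode the token on every incoming click:
decoded=$(printf '%s' "$token" | base64 -d)
```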
Of course plenty of others got the same junk as me; there is a support thread on DocuSign's site: Strange email from Docusign - is it legit? In an answer to that one, the official statement says: "However, DocuSign’s core platform has not been hacked, and our customer data remains secure". In that comment thread there are other people who don't believe that. I'm one of them.
Why would I make such a bold claim? Well... easy. The e-mail address I use with DocuSign is unique; I'm not using that specific address ANYWHERE else. So, people of DocuSign, explain to me where the address leaked from, if not from your system! I guess they'd claim it leaked from one of my systems. But then the same thing must have happened to a number of other people too. Also, I use hundreds of different addresses for this exact purpose: to reliably determine which system leaked my information. Any even remotely regular user has their joe.user@mail.com address registered to dozens and dozens of systems, so they have no way of knowing who leaked the data. I do.
I'm urging DocuSign to step forward with a truthful statement about the breach. Their current lies I'm not buying.
Update 15th May:
DocuSign was hacked two years ago too. The same thing happened then: they denied any data leakage. Without proof, I feel like throwing an unfounded wild allegation here: they have been compromised for over two years now.
In the support discussion thread, a couple of other people complain about the same thing as I do: their e-mail address isn't used for anything other than DocuSign, yet they're receiving junk to that particular address. How come?
Mirado Tech Talks - HTTP/2, the good, the bad and what's next
Tuesday, May 9. 2017
Mirado Consulting was kind enough to host a meetup, with appropriate food & drinks. By that I don't mean the pencil company:
... but a software development consulting agency in Stockholm.
They summoned Mr. Curl, aka. Daniel Stenberg:
... there to talk about HTTP/2 and QUIC, major improvements to the HTTP protocol. His work with network protocols at Mozilla, as the author of libcurl/curl and as a member of IETF's HTTPbis working group, gives him a pretty good picture of what's happening on the HTTP scene today. His presentation was titled HTTP/2, the good, the bad and what's next. In it he covered the shortcomings of HTTP/1.1, the benefits and shortcomings of HTTP/2, and the very likely future of moving away from TCP-based transport to UDP-based QUIC.
Two tidbits from his presentation:
- Current browser implementations use HTTP/2 only with HTTPS:
- "most client implementations (Firefox, Chrome, Safari, Opera, IE, Edge) have stated that they will only support HTTP/2 over TLS, which makes encryption de facto mandatory", Wikipedia
- "Most existing servers only speak HTTP/2 over TLS", Daniel's blog
- HTTP/2 performance is poor on flaky network
- at 2% packet loss HTTP/2 is at least twice as slow as HTTP/1.1, HTTP/2: What no one is telling you (slide 53) by Hooman Beheshti, VP Technology at Fastly
So, it looks like HTTP/2 isn't going to save us from performance bottlenecks of HTTP/1.1 after all. Hence, QUIC.
About CA certificate handling on a Linux system
Finally: I've written a number of posts about TLS/SSL/HTTPS, and one of them was about curl's really clumsy way of handling your own CA certificates. I've also always hated the fact that if I'm running Firefox and curl on Linux and want to add my own CA root cert, I need to do that three times:
- OpenSSL for the everything else in system
- curl, for libcurl-dependent apps to have it
- Firefox
IMHO that's two times too many! On macOS you do that only once, to the keychain, and even curl will use it (yes, I confirmed this from the man himself).
The reason is Mozilla's policy. NSS, or Network Security Services, a library written by Mozilla, boasts FIPS 140 validation and NISCC testing success, which a plain PEM file in OpenSSL won't provide. That's why they insist on using NSS as the storage, making us regular users suffer the pain of having multiple sources of truth.
Finally
Thank you Mirado Consulting for hosting a great event and Daniel for a great presentation!
Wi-Fi access point - TRENDnet TEW-818DRU - Part 2: Software
Monday, May 8. 2017
In my previous post, I un-boxed my new Wi-Fi access point. This is the part for running something in it.
For this to happen, the obvious prerequisite is a DD-WRT binary image built specifically for the TEW-818DRU. The DD-WRT supported devices list doesn't say much. A little bit of poking around turns up build 23720 from back in 2014 for this one. It is at: https://www.dd-wrt.com/site/support/other-downloads?path=betas%2F2014%2F03-13-2014-r23720%2Ftrendnet-818DRU%2F. As I wanted something newer, I went for the November 2016 build 30880 at: ftp://ftp.dd-wrt.com/betas/2016/11-14-2016-r30880/trendnet-818DRU/.
My typical approach to flashing new firmware is to stay connected with a wire. In practice that means I'll hook up an Ethernet cable to my laptop and the other end to the access point's LAN switch. Then I'll configure a static IP address in the laptop's operating system. This makes sure that I'm 100% connected whenever the box is running. Doing this over a wireless connection and/or using a dynamically assigned IP address may or may not work, and as these boxes are expensive enough, I didn't push my luck. The downside of this approach is that I need to know what the actual management IP address will be.
Ok, let's start!
Out of the box, the web GUI is at 192.168.10.1:
After login, there is a nice setup wizard. Which, of course, we'll just skip by acknowledging the alert:
Now we're at the normal administrator environment:
For me, the word "advanced" is like honey to a grizzly bear. I'll always home in on it; I know that all the goodies are stored there:
And this time, too, I was right: firmware upload/upgrade has its own menu item. It's clear that this device is 100% designed by engineers; they can't even seem to agree on a single terminology. The menu says "upload", the page title says "upgrade". Any self-respecting user experience designer would yell "You're confusing the user with that!", but I guess this stuff is for nerds only, and they don't care.
After selecting the trendnet-818dru-webflash.bin file to be uploaded, there is yet again a nice warning:
It will take a couple of minutes for the flashing to complete:
There is very little indication that the process completed. I didn't notice any lights blinking or anything like that. It just completed, rebooted and stayed silent.
Now the IP address will change. Out of the box, DD-WRT is at 192.168.1.1:
And that's pretty much it for the firmware upgrade. At this point I did my wireless access point setup, including:
- Admin username and password
- The AP's LAN IP address; my LAN isn't at 192.168.1.0/24
- Enable SSH-service
- Enable GUI-access for HTTPS and SSH
- Wireless network setup for 2.4 GHz and 5 GHz, WPA2 Personal with pre-shared key as security
DD-WRT is for knowledgeable administrators, no setup wizards or mumbo-jumbo. Just the settings.
Btw. configuration docs can be found at: https://www.dd-wrt.com/wiki/index.php/Configuration_HOWTOs