Why is there no real commitment for IPv6?
Saturday, October 29. 2016
I've been an active IPv6 user for many, many years. Of course my ISP doesn't offer native IPv6, so I'm using a tunnel from SixXS. They have been providing such tunnels free of charge for years, and for that I thank them and the ISPs volunteering their capacity so that us nerds can have decent IPv6 connectivity. SixXS has grown tired of IPv6 not getting any traction; the ISPs have almost zero commitment to letting people use real, native IPv6. SixXS runs a campaign called "Call Your ISP for IPv6!", but I don't think it's going to make much of an impact. When an ISP is actually asked about its IPv6 support, it stalls with "we'll announce it later" or "but we do support IPv6" (via some unusable mechanism).
Looking at what's happening on the ISP side, Telia (or Sonera, as we call it here in Finland) has enabled 6rd for their connections. It piggybacks on DHCPv4, which returns enough parameters to set up IPv6 with a /64. It kinda works, but ... it's still not the real thing I'm after. Also Elisa and DNA, two big mobile telcos in Finland, have started offering IPv6 (DNA, Elisa) to their customers, but ... I'm not going to trade my home fiber for a mobile connection. So something is happening on the telco scene. I'm just waiting for my ISP (Elisa) to act on the wired side too.
The other side of the chicken-and-egg problem is the services. There is no real commitment on their side either. For example, Amazon AWS (a really huge infrastructure provider) doesn't really support IPv6. They do have nice IPv6 support for Internet-facing load balancers, their S3 storage and their content delivery network CloudFront, to mention a few. But when it comes to running a server instance with real, native IPv6, no dice. So you can market your service as IPv6-ready, and all the critical Internet-facing pieces really do support IPv6, but your infra runs on private IPv4 addresses. Not cool.
And when it comes to services, this is a typical scenario:
That's what has been happening to LinkedIn for I don't know how long; at least all of this week.
Me being the nerd I am, I did some background investigation:
# telnet www.linkedin.com 80
Trying 2620:109:c007:102::5be1:f881...
telnet: connect to address 2620:109:c007:102::5be1:f881: Connection timed out
Trying 91.225.248.129...
Connected to www.linkedin.com.
Escape character is '^]'.
A classic.
Their IPv6 is down and they don't know about it. This is their level of commitment:
In September 2014 they announced a "Permanent launch of IPv6". But apparently none of them use it themselves, or they would have realized it has been down for a week! The really scary thing is that they seemingly cannot afford $10 a month for a Pingdom check.
That's what I recommend everybody use for monitoring on-line services. Any reputable admin needs to know the second a service is out of reach of the general public. IMHO that should also include the admins at LinkedIn.
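If a commercial service isn't an option, even a crude cron job will do. A minimal sketch (assuming curl and a working mail setup on the monitoring box, which itself needs working IPv6; the URL and the address are placeholders):

#!/bin/sh
# Check the IPv6 endpoint; send an alert if it stops answering.
URL="https://www.example.com/"
if ! curl -6 --silent --max-time 10 --output /dev/null "$URL"; then
    echo "IPv6 endpoint $URL unreachable at $(date)" | \
        mail -s "IPv6 down" admin@example.com
fi

Run it from cron every few minutes and you'll know about an outage long before a week has passed.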
When it comes to the lack of IPv6, I need to come clean: this blog isn't running on IPv6 either. Since most of you don't have it, there is no way for you to know. My co-location host cannot offer me IPv6, so there's nothing I can do about it.
But why? Why is there no real commitment to IPv6? What's blocking all sensible people from going all-in on IPv6? Everybody knows that IANA allocated the last of the free IPv4 address blocks to the regional registries back in early 2011, so there are no more to hand out. Of course the RIRs still have plenty of addresses left to allocate to regional telcos, so we're not completely bankrupt on IPv4 addresses yet. But that day is eventually coming; it's just a waiting game. Even notable efforts like the World IPv6 Launch Day have yielded no mentionable results.
So what's holding us back? I can't think of anything other than everybody taking the path of least resistance. Since there are still IPv4 addresses available, why risk a change? With change, things can break, or something may shift so that some people lose a little and others win a little. Not that much of a risk, if you ask me. But here we are, inching towards IPv6 very slowly. Speed it up, goddamnit!
Diffie Hellman key exchange (1024 bit) unreliable
Monday, October 17. 2016
Ars Technica wrote last week: NSA could put undetectable “trapdoors” in millions of crypto keys. The linked article says:
A special prime devised by the researchers, however, contains certain invisible properties that make the secret parameters unusually susceptible to discovery. The researchers were able to break one of these weakened 1,024-bit primes in slightly more than two months using an academic computing cluster of 2,000 to 3,000 CPUs.
So, there is a mathematical weakness in the DH key exchange algorithm when it is used with 1024 bits and a suitably crafted prime number.
It so happens that Diffie Hellman has taken major hits in the past, too. In May 2015 a team of researchers found an implementation weakness in the DH key exchange called the Logjam attack. There the mathematics are fine, but while the key exchange is being negotiated, a man-in-the-middle can force the number of bits used down to something ridiculously low instead of what the server suggests. And in their findings they suggested:
The Logjam researchers also discuss how "attackers with nation-state resources" could break through 1024-bit DH.
All this means that the entire Diffie Hellman algorithm is riddled with different types of flaws, and any reliability it previously enjoyed among the security community is gone. Even with a Logjam-patched server, using fewer than 1024 bits has been insane for a long time. Now 1024 bits are gone too; what next?
Impact
In practice this affects HTTPS, SSH and VPN tunnels. Ok, there is other software using DH key exchange too, but I'll try to keep this simple.
So, there is no backdoor that the NSA or anybody else can open. It's just that when client and server agree on the details of the encryption used in the communication, a listening party can calculate the encryption key. If somebody cannot capture your key exchange and your encrypted bits, they cannot decrypt the communication. However, if somebody can grab your bits and either you're using a too weak DH key exchange, or they can tamper with the connection and do a "Logjam", then your connection's security is impaired. The best option is to use some other protocol for key exchange.
There is more information about key exchange and Diffie Hellman in my previous article TLS Security recap - HTTPS (in)security up until 2016.
Diffie Hellman in TLS (SSL)
To quote the Wikipedia article about Diffie Hellman: "There are three versions of Diffie Hellman used in SSL/TLS: Anonymous Diffie Hellman, Fixed Diffie Hellman and Ephemeral Diffie Hellman". To make things confusing, there is also Elliptic curve Diffie–Hellman (ECDHE), which is not affected. For the purposes of this article it is considered a completely different key exchange protocol. Yes, it has Diffie Hellman in the name, but ... it is still not affected.
Of those four protocols, pretty much the only ones used on today's Internet are DHE (affected) and ECDHE (not affected). Looking at the stats from SSL Pulse, a survey of the SSL implementation of the most popular web sites, only 27% of the sites tested support DH/DHE with 1024 bits or fewer.
What you can do
The simple version is: nothing.
If you really want to, you can check which cipher suite your browser is using:
The string "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" means that the TLS 1.2 connection is using elliptic-curve DHE (the unaffected one) for key exchange.
If you want to make things really interesting, try from a Linux command prompt to lure the server into using DHE as the key exchange method. First get a list of suitable ciphers:
# openssl ciphers | perl -ne 'print join("\n", grep {/^DHE-/} split(/:/));'
Let's pick one with the weak key exchange but powerful crypto, DHE-RSA-AES256-GCM-SHA384, and go for it:
# openssl s_client -cipher 'DHE-RSA-AES256-GCM-SHA384' -connect www.google.com:443
As the guys at Google are smart, they won't allow that. What you get is "CONNECTED" followed by "alert handshake failure". It means that your client and their server failed to agree on a suitable cipher suite to use. Your request to use DHE was the deciding factor here.
Final words
If you are a server admin and didn't stop accepting Diffie Hellman as key exchange before this, do it now.
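A quick way to check your own server is to loop over the ephemeral-DH cipher suites and see whether any handshake still succeeds. A minimal sketch (assuming the OpenSSL command-line tools; www.example.com is a placeholder for your own host):

$ for c in $(openssl ciphers 'EDH' | tr ':' ' '); do
    echo | openssl s_client -cipher "$c" -connect www.example.com:443 2>/dev/null \
      | grep -q "Cipher is $c" && echo "still accepting: $c"
  done

If the loop prints nothing, every DHE handshake was refused and you're good.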
If you are a regular internet user, don't worry: a number of government-level organizations already have your data!
macOS Sierra upgrade from USB-stick
Friday, October 14. 2016
This is the abridged version, as most steps are exactly the same as for OS X El Capitan. See my article about that.
Step 1: Go download
As with any upgrade, go to the App Store; it should look like this:
Beware, it is quite a chunk (almost 5 GiB). If your Internet connection isn't too fast, be prepared to wait a while:
When the download is done, the installer will start automatically. It will look like this:
That's your cue. Do not proceed with the installation, but quit the installer instead:
Now all the necessary files are on your machine.
Step 2: Go USB
The recipe is a classic: insert a USB stick of suitable size, unmount it, prepare it for the install, copy the installation files to it and you're done!
Unmount (your volume name will differ, unless the stick had an Arch Linux installation in it):
sudo diskutil umount /Volumes/ARCH_201607/
Pay attention to the output; it will tell you the device identifier of your USB stick on your machine, which can and will vary. My output was:
Volume ARCH_201607 on disk3s1 unmounted
Now that we know which disk it was, partition and format the stick as JHFS+:
sudo diskutil partitionDisk /dev/disk3 1 GPT jhfs+ "MacOS Sierra" 0b
It will take a while and during operation it will say something like this:
Started partitioning on disk3
Unmounting disk
Creating the partition map
Waiting for the disks to reappear
Formatting disk3s2 as Mac OS Extended (Journaled) with name MacOS Sierra
Initialized /dev/rdisk3s2 as a 7 GB case-insensitive HFS Plus volume with a 8192k journal
Mounting disk
Finished partitioning on disk3
/dev/disk3 (external, physical):
#: TYPE NAME SIZE IDENTIFIER
0: GUID_partition_scheme *8.0 GB disk3
1: EFI EFI 209.7 MB disk3s1
2: Apple_HFS MacOS Sierra 7.7 GB disk3s2
Now the stick is good to go; transfer the installation image onto it:
cd /Applications/Install\ macOS\ Sierra.app/Contents/Resources/
sudo ./createinstallmedia \
--volume /Volumes/MacOS\ Sierra/ \
--applicationpath /Applications/Install\ macOS\ Sierra.app/ \
--nointeraction
Again, that will take a while and output something like:
Erasing Disk: 0%... 10%... 20%... 30%...100%...
Copying installer files to disk...
Copy complete.
Making disk bootable...
Copying boot files...
Copy complete.
Done.
That's it. The stick is ready.
Step 3: Go update
This is the part that you'll be repeating on each newly upgraded/installed machine.
Follow the installation procedure and boot into the newly installed macOS. If this doesn't make any sense to you, see my previous article about that.
Step 4: Finishing touches
After the boot you'll end up at the classic login screen. Log in, and the upgrade will continue from there.
My choices for those new questions are:
- no, absolutely not, I won't be storing my data in iCloud; I only have the freebie 5 GiB of space there and with my files I'd run out pretty soon
- no, absolutely not, I won't be enabling Siri (a.k.a. the spy machine)
Step 5: Done!
That's it, enjoy your upgraded operating system.
There were a couple of glitches here and there, but the most important one was that SSH agent forwarding didn't work anymore. I'm not alone with that; other people have reported the same issue:
ssh -A stopped working in macOS Sierra and keychain is not unlocked at login
The second thing is that MD5-signed root certificates aren't accepted by default anymore. Unless you're doing some weird shit like I am, you won't notice this change. This was fully documented and known beforehand. In an optimal world we wouldn't have any MD5-signed root certificates anymore.
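If you want to check whether some certificate you depend on is affected, a quick sketch with OpenSSL (the file name is just a placeholder):

$ openssl x509 -noout -text -in some-root-cert.pem | grep 'Signature Algorithm'

If that prints md5WithRSAEncryption, it's one of the certificates this change is about.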
Other than those two, I continued using my Macs as usual.
Sydney pics
Friday, September 30. 2016
I was on business in Sydney, Australia, and I'm sharing some pics I managed to snap with an iPhone. There wasn't much time for any extensive touristing around, but I managed some.
There is a pic of The Bridge, a view of the harbour from the top floor of a skyscraper on Victoria Street, the Opera House and Manly Beach.
Fixing Google's new IPv6 mail policy with Postfix, part 2
Wednesday, September 14. 2016
I got a comment from Mr. Martin that Google has changed their SMTPd, so I'll have to revisit the article.
As suggested, the new /etc/postfix/smtp_reply_filter would be:
#New 2016/09:
/^5(\d\d )5(.*. \S+ - gsmtp.*)/ 4${1}4$2
The above is working perfectly on my box.
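For completeness, a hedged sketch of how such a filter map is typically wired in (assuming pcre map support is built into your Postfix). In /etc/postfix/main.cf:

smtp_reply_filter = pcre:/etc/postfix/smtp_reply_filter

... followed by a reload:

# postfix reload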
Again, thanks to Mr. Martin for bringing this topic to my attention.
Book club: Security Engineering
Monday, September 12. 2016
Every once in a while I have enough time to read books. The kind made out of paper, with printed words and images on it. Pretty much 98% of the books I've read in the last couple of decades have something to do with my profession. There is one book that's worth mentioning: Security Engineering by Ross Anderson.
The price point is a non-issue: you can download the entire book as a PDF at no cost from http://www.cl.cam.ac.uk/~rja14/book.html (that's at the University of Cambridge). Having the book available is fully intentional; as four years have passed since publication, the author and the publisher have agreed to make the material freely available for anybody interested. I most sincerely thank Mr. Anderson for doing that.
Of course, I recommend you support this good work and purchase one. Go to Amazon or similar and get your own copy. It effectively includes a digital copy; all you have to do is go to the above link and download one.
I'd definitely recommend this book to anybody ever designing or implementing anything with a computer. As the phrase goes: “Smart people learn from their mistakes. But the real sharp ones learn from the mistakes of others.” This is your chance to get ahead and learn how some smart people blundered in their design and/or implementation of security. There is so much information in the book, but I found the case studies to be the best part. The general idea is to get an injection of experience and start thinking like hackers do.
My recommendation is that anybody working in software engineering should memorize this book and take an exam on it, so that it is crystal clear how secure systems are built.
Amazon EC2 spot prices
Friday, August 26. 2016
I was about to do some testing with a cheap Elastic Compute Cloud Linux instance, but ... AWS wouldn't allocate me one.
Here is the reason from Amazon EC2 Spot Instances Pricing:
Somebody really lost his marbles and is paying a ludicrous price for a box.
Ok, in reality that has to be the work of two (or more) automated systems bidding against each other in a situation where i2.4xlarge capacity is scarce. Any human would do what I did: just pick a bigger box and be happy about that. That instance type costs something like $3.41/hour with normal pricing, and with spot pricing it goes for about 60 cents/hour, but not when automated bidding goes haywire.
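If you want to check the going rate before bidding, a minimal sketch with the AWS CLI (assuming it is installed and configured; the region and instance type are just examples):

$ aws ec2 describe-spot-price-history \
    --region eu-west-1 \
    --instance-types i2.4xlarge \
    --product-descriptions "Linux/UNIX" \
    --start-time $(date -u +%Y-%m-%dT%H:%M:%S) \
    --query 'SpotPriceHistory[*].[AvailabilityZone,SpotPrice]' \
    --output table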
Windows 10 upgrade weeks: Done
Saturday, July 30. 2016
I wrote a couple of posts about Windows 10 upgrades, but there are no more free upgrades. See:
From now on, you'll just have to pay for it.
There are a couple of issues I bumped into while upgrading about a dozen PCs last month alone, and I'll be blogging about those later. There is no rush, as the free upgrade window has closed (pun intended).
Windows 10 weeks: Upgrading Windows 10 with a clean install
Tuesday, July 19. 2016
Before the release of Windows 10 build 10565 in October 2015, it was pretty much impossible to upgrade Windows 7 or 8 with a clean install without first running the upgrade on the target machine. The announcement said:
Device activation improvements: Microsoft has received a lot of feedback from Insiders on making it easier to activate Windows 10 on devices that take advantage of the free upgrade offer to genuine Windows by using existing Windows 7, Windows 8 or Windows 8.1 product keys. If you install this build of the Windows 10 Insider Preview on a PC and it doesn’t automatically activate, you can enter the product key from a qualifying Windows 7, Windows 8 or Windows 8.1 used to activate the prior Windows version on the same device to activate Windows 10 by going to Settings > Update & security > Activation and selecting Change Product Key.
In short: you can whip up your install USB and do a clean install on a machine. If it's not an OEM machine, just enter your Windows 7 or Windows 8 key during the install and the thing should activate. Nice and easy.
And to check which exact build your box is running, go to Settings, System, About. It should show something like this:
As suggested by Ilpo in a comment he left on my blog, there is an alternative method described in the article How to Directly Clean Install Windows 10 without having to Upgrade First. Of course I had to try that! And thanks, Ilpo, for the suggestion.
What you need is a Windows 10 install USB stick. Go create one with your favorite method; if you don't have a favorite, just go for the Media Creation Tool, which can download an install image and write it to your USB stick for later booting.
When your install stick is ready, go find <your USB-stick drive>:\support\gatherosstate.exe and copy it onto the target machine to be upgraded. Do NOT run it from the USB stick; COPY it. I just dragged the file onto my desktop. When run, the application will create an XML file containing a fingerprint of the machine it was run on. That fingerprint can be used to activate the upgraded Windows 10 installation later. Needless to say, this type of activation will only work for valid upgrade paths. You cannot upgrade Windows 8.1 with Bing into Windows 10 Pro (I tried).
The second important thing: run gatherosstate.exe as an Administrator. I tried to verify the run-as-Admin requirement after the upgrade was done, but it was too late; the generated file will differ. My thinking is that it is not strictly necessary to run it as Admin, but that's what the original instructions said.
Take a copy of the generated GenuineTicket.xml file. You will need it later. At this point you're good to go with a clean install. Replace hard drives or just re-partition the original one, whatever your install plan is. I'm guessing you wouldn't be doing the upgrade this way if it wasn't absolutely necessary.
During the Windows 10 installation do NOT enter a license key. It is possible to "skip this" and "do this later", and ultimately, when the install ends, you'll be running a perfectly good non-activated Windows 10.
Now it's time to activate the newly installed Windows 10. You need to locate your saved GenuineTicket.xml file and copy it to the directory %ProgramData%\Microsoft\Windows\ClipSVC\GenuineTicket. Like this:
Now the activation is only a reboot away. Go ahead and boot the thing, and confirm:
Done! Simple as pie.
Fedora 21 DHCP client failing to get an IP-address from Elisa [Solved!]
Monday, July 18. 2016
One of my own boxes runs Fedora Linux. A while back an upgrade failed miserably because the Fedora installer did not get an IP address from my ISP, Elisa. I had a minor skirmish with the installer for an hour or so, but to no avail; the Fedora installer beat me on that one. As I love to have that box up and running, I gave up and decided to investigate later. Now that day has come and I'm victorious!
Basics
DHCP is what pretty much everybody uses to get an IPv4 address in 2016. Mobile connections do something different, but everything else, including Wi-Fi hotspots, ADSL routers, fiber-to-the-home connections, etc., issues an IP address (mostly IPv4, sometimes IPv6) to any well-behaving customer. Today the de facto practice is that the IP address is allocated out of a well-known broadband address range, or pool. Lists of those are generally available, so that home customers can be differentiated from data centers and companies.
To put it simply, DHCP is a mechanism for allocating a unique address to your Internet connection. The Wikipedia definition of the Dynamic Host Configuration Protocol uses more words and isn't as concise as mine, but you'll get the idea.
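If you want to see the exchange for yourself, a minimal sketch (assuming ISC dhclient and an interface called eth0; both are placeholders for your own setup):

# dhclient -v -1 eth0

The verbose output walks through the DHCPDISCOVER, DHCPOFFER, DHCPREQUEST and DHCPACK steps and ends with the address the server bound you to.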
Details of the problem
In case of misuse or an unpaid internet bill, the ISP would naturally decline any DHCP requests for an IP address. Since everything else I tested, including various Windowses, OS Xes and Linuxes, worked, it wasn't about that. The connection was ok and the DHCP server issued a valid DHCP lease as it had been doing for a couple of years, but not for my Fedora installer. Duh?
At this point I went googling for the symptoms and quite soon landed in Red Hat Bugzilla. It contains bug 1154200, titled "not getting a dhcp address assigned". Mr. Krovich reports that his Fedora 21 installer won't get an IP address from his ISP. I pulled up a Fedora 20 installer, the previous version. It worked ok! Yep, they changed something in Fedora 21. The change also affects Fedora 22, 23 and the latest 24. It does not affect Red Hat nor CentOS (yet).
Fix (aka. trial and error)
In the comments of bug 1154200 they're talking about the Option 61 commit, which was introduced for the Fedora 21 release. A possible fix would be the DHCP configuration: send dhcp-client-identifier = hardware;
I downloaded the Fedora 24 installer and tested it. It didn't help at all. After a lot of wiresharking the traffic, I isolated this:
In the DHCP Discover packet, there was an Option 61 present.
More poking into man 5 dhclient-options revealed that it was possible to specify a fixed string as the identifier. So, again, I edited /etc/dhcp/dhclient.conf (btw. the file didn't exist to begin with, I had to create it) to contain:
send dhcp-client-identifier = "";
That did the trick! Now even the Fedora 24 installer got a valid IP address and it was possible to install.
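If you want to confirm what your client actually puts on the wire, a hedged one-liner (assuming tcpdump is available and the interface is eth0):

# tcpdump -vvv -n -i eth0 'port 67 or port 68'

In the decoded DHCP Discover, Option 61 shows up as a Client-ID field.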
Specs
In the early days all IP addresses were assigned manually: everybody was given an IP address and punched in all the details by hand. That was frustrating and error-prone, so somebody invented BOOTP to automate the entire setup. Quite soon that evolved into DHCP, defined by RFC 1531. When DHCP gained traction and more and more vendors joined the dynamic-allocation game, a couple of clarifying iterations later we arrived at RFC 2131 for the current breed of DHCPing. It is basically the original BOOTP, but with most of the wrinkles ironed out.
The options are defined in RFC 1533, DHCP Options and BOOTP Vendor Extensions. It looks like nobody supported Option 61 for a long time: Windows doesn't, Apple doesn't, most Linux distros don't, but RFC 4361, Node-specific Client Identifiers for Dynamic Host Configuration Protocol Version Four (DHCPv4), must have sold it to the Fedora guys. They chose to implement request 560361 (Dhclient doesn't use client-identifier; may cause issues in certain bridged environments) and make sure everybody uses it, with the assumption that any ISP that doesn't support client identifiers will merrily ignore the option. Nice!
Afterwards
I'll aim an equal share of the blame at my ISP. Option 61 is well defined and it should be possible to ignore it. It looks like they're running Alcatel-Lucent hardware there, and for some reason it is configured to choke on requests carrying Option 61.
Naturally I reported the error to my ISP, but you can guess how well that goes. A regular customer-facing clerk won't know what DHCP is or does, let alone its options. So all I got back was the classic "we'll investigate" style of response. I'm not keeping my hopes up. I have more hope for my own Bugzilla request 1357469 to add an option to enable or disable the use of Option 61 in Fedora. They might even implement it one day.
Anyway, I'm hoping that this post will help somebody struggling to install their Fedora.
Windows 10 weeks: Getting an activated Windows 10 Pro for free
Sunday, July 17. 2016
I'd like to be clear on this: I did not figure this one out myself, I just bumped into Philip Yip's page about this. All the credit should go to him, I just tested this and confirmed it to be working.
One of the upgrades I did earlier this month was for a Windows Vista. Computer geeks will point out two things immediately: 1) Vista! Really? Who would use that voluntarily? 2) Well, it cannot be upgraded to Windows 10. Both very true. It just so happened that there was an OEM Vista running on a computer. Vista will stop receiving security updates on 11th April 2017, so I suggested upgrading the thing to 10 while I did some other work on it. Obviously the person owning the poor computer didn't want to pay for the upgrade, so I chose to go with Mr. Yip's method.
Prerequisites
For this to happen you will need:
- A USB stick, minimum size 4 GiB
- This will be used to host the Windows 10 installation image
- I went to a supermarket and got a brand new 16 GiB USB3 stick for the speed improvements
- Ability to boot the target computer from the above USB-stick
- Most computers I've worked with can do this
- Windows 10 ISO, build 10130
- Ok, this one is harder to come by, but it is generally available out on the Big Net. I'm talking about software piracy here.
- You will need this specific build. Older or newer ones won't do the trick.
- A piece of software to write this ISO-file into the above USB-stick and make it bootable.
- My favorite tool for that is Rufus. It is a solid piece of software for writing images to USB sticks.
- Windows 10 media creation tool
- You can get one from Microsoft's website
- The ability to disable the internet connection on the target machine and enable it when needed
- Finally: the ability to change the BIOS/UEFI date on the target computer.
- This trick relies solely on the fact that you can pretend the date is in the past.
Also please note that there is NO UPGRADE possibility here. This is effectively a fresh install. If you choose to re-install on top of your existing Vista installation, please take care not to lose any precious data. I installed a bigger hard drive in the computer, so the old drive stayed intact during this process.
Steps
This is how the entire process goes:
- Use Rufus to create Windows 10 build 10130 USB-stick
- Boot into BIOS/UEFI setup, set date to anything around September 2015. I just bumped back the year and did my installation in "July 2015".
- Make sure, there is no Internet connection during the installation. THIS IS IMPORTANT!
- Boot from the Windows 10 build 10130 USB. Make sure there is no Internet connectivity on the machine, so that it is impossible for it to update the date via NTP or talk to Microsoft.
- Create a recent Windows 10 install USB
- I had my own computer for media creation and didn't have to use the target computer for this. This can be a parallel process.
- When asked about the edition, choose Win 10 Pro
- Turn off internet time when the install is done. I found that one to be a little tricky, but the setting is there.
- Set a specific license key. For this you'll need to open CMD.exe as Administrator. If you attempt this as a regular user, i.e. without right-clicking cmd.exe and selecting "Run as administrator", you'll notice it soon enough. The command to execute is:
slmgr /ipk 6P99N-YF42M-TPGBG-9VMJP-YKHCF
- Connect to the internet. THIS is the first time the install machine is allowed to get there. If your box connected to the Net before this, you have failed.
- Activate Windows (remember: Admin prompt):
slmgr /ato
- Now you have an activated Windows 10 running. You can go to My Computer and Properties to confirm. However, it's an old build and you don't want to keep it. The important thing is that your computer's fingerprint has now been stored on Microsoft's servers, and any subsequent Windows 10 installation will proceed without entering a license key.
- Insert the USB install stick with the newer Windows 10 installer (the one you created in step 5) and run setup.exe on it. There is no need for a reboot or anything, just run the installer.
- Install it again! This is your second installation.
- I chose to keep nothing. This just makes sure that all the old garbage is gone after you're done.
- Please note that when asked for a license key, don't enter one. Always select "Do this later" or "Skip". It is not needed, and your newly installed Windows 10 will be activated at the end.
- Wait 15 minutes
- You're done! Your Windows is ready to be used.
- Remove the first installation stored in C:\Windows.old, you don't need to keep any of that. However, the files are protected by the operating system and deleting the entire directory proved to be a little tricky. I ended up booting from the USB installation stick into rescue mode and chose cmd.exe from there. At the prompt I entered two commands:
del /S /F /Q c:\Windows.old
rd /S /Q c:\Windows.old
Finally
This process worked for me on 5th July 2016. I don't think it has anything to do with the July 29th free upgrade limitation.
Also, I'm sure the guys at Microsoft keep track of their installations. They'll know exactly how many licenses they have given out using this method. My thinking is that they simply don't care. Pretty much all of the competition gives out operating systems for free, and for some strange reason they cannot. Anybody choosing this method needs to be aware that in an unfortunate scenario Microsoft could deactivate all licenses distributed this way.
Happy installing! I know I created one ex-Vista user very happy with this.
Windows 10 upgrade weeks
Friday, July 15. 2016
Recently I've been doing a lot of Windows 10 upgrades. The free upgrade offer is about to end in less than 15 days:
There is a counter in Windows.com.
My own adoption of Win10 is pretty high; only one box is left to upgrade. I'll have to resort to virtual machines for the things that I really, really need Windows 7 for.
The real-time Windows 10 adoption gauge @ gosquared.com shows one third:
That's pretty good, but not the number Microsoft guys are aiming for.
I regularly follow Mr. Gordon Kelly, a writer who publishes @ forbes.com. In December 2015 he wrote a piece titled 'Free' Windows 10 Has An Expensive Secret. There he pretty much speculates that Microsoft really wants to get to one billion (1,000,000,000) Windows 10 installations as fast as possible, and not giving away free upgrades to pretty much everybody wouldn't help with that. However, his post after Microsoft's May 2016 press release (Three Reasons Microsoft Stopped Free Windows 10 Upgrades) indicates that even he has given up hope of the free upgrades being prolonged. I guess they didn't want to get to one billion that badly. That's something Mr. Richard Hay @ winsupersite.com predicted last December.
It seems that it is possible to prolong your free upgrade offer by enabling assistive technologies; see Microsoft's blog post about that. But unless you do that, it's time to act now, or do some trickery after July 29th.
For my Windows 10 weeks, I'll be posting a couple of articles about what I did earlier this month with my various upgrades.
Megazoning (or Laser Tagging)
Wednesday, June 29. 2016
I'll post something not about computers for a change. It's pretty close, but still, not about computers.
Any self-respecting nerd (such as myself) loves video games. Playing first-person shooters IRL is always both fun and a lot more difficult than on a computer.
We had a company activity and went to a nearby Megazone for a couple of rounds of always-fun laser tag. Since it was my first time ever doing that, it was like a slap in the face. I'm a 2nd lieutenant in the FDF reserve, so I have a basic understanding of battle tactics. I've also played video games since the early 80s and FPS games since the first Wolfenstein. On top of that, I've been paintballing enough to know that there is enough realism in video games and paintballing to match real military tactics. However, everything I knew about combat, tactics and fighting was useless there.
In the game there were three teams plus you, so pretty much everybody you see is an enemy. Megazone is mostly about movement and speedy tags of any visible opponents. The worst thing that can happen to you there is being unable to fire your weapon for 8 seconds. During a 25-minute round that's not too dangerous. In paintball or war you're out on the first "tag"; here you aren't, it's just a game of accumulating points.
Here are my stats from the first round:
I sucked!
My handle in the game was Macro (in the Red team), so being 8th out of 14 wasn't that good. Tactically the maze was a nightmare! In the original Wolfenstein it was possible to be hit only from the front, back or sides. In Megazone there were two floors, with a metal grid walkway that made it possible to shoot through, adding a fifth direction you could be hit from. At best I found a couple of locations where it was possible to get some cover and be hit only from two directions. The only even semi-functional tactic I found was to ignore all defence, cover and caution; just going recklessly forward and out-gunning everybody on reaction seemed to work well. Sniping people at long range was also a really good tactic; sometimes I could tag 4-5 people from a single position and they never saw me. I also tried attacking enemy bases and defending our own base, but those were totally pointless exercises in futility, and I spent too much time trying to figure out their value.
Quite an improvement in points and ranking. I was best in our team and 3rd in total!
Megazone was great fun, but with my background it'll never be my favorite thing. I want to see my opponents suffer when I hit them; in laser tag that doesn't happen.
Fixing curl with Go Daddy Secure Certificate Authority G2 CA root
Tuesday, June 7. 2016
Curl is one weird piece of software. Every time I use it, I get chills. The reason is that it almost works, and when it doesn't there isn't a damn thing you can do to fix it. The entire design of that software is ... I'm at a loss for words here. I'm looking for words that mean cumbersome, shitty, unorthodox, non-functional, and so on.
Since the lib version is used by a number of libraries and other software to provide an HTTP protocol implementation, I run into curl issues often. Many times I didn't even know that I was ultimately using libcurl for access until one of these obscure errors popped up. For this reason my weapon of choice is wget: it uses OpenSSL's crypto and is fully compatible with pretty much everything else in a Linux distro.
Anyway, this time I chose to research this to the bitter end. It took me about a month (calendar time) to resolve. Of course I didn't pour all my time and energy into this; it just took a very long time to get this one done properly and right.
The problem
One day I was just tinkering with something and ran a command:
$ curl --verbose https://packetstormsecurity.net/
... and it pulled a curl on me.
* About to connect() to packetstormsecurity.net port 443 (#0)
* Trying 198.84.60.198...
* Connected to packetstormsecurity.net (198.84.60.198) port 443 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
* CAfile: /etc/pki/tls/certs/ca-bundle.crt
CApath: none
* Server certificate:
* subject: CN=packetstormsecurity.com,OU=Domain Control Validated
* start date: May 31 18:04:40 2015 GMT
* expire date: May 31 18:04:40 2016 GMT
* common name: packetstormsecurity.com
* issuer: CN=Go Daddy Secure Certificate Authority - G2,
OU=http://certs.godaddy.com/repository/,
O="GoDaddy.com, Inc.",L=Scottsdale,ST=Arizona,C=US
* NSS error -8179 (SEC_ERROR_UNKNOWN_ISSUER)
* Peer's Certificate issuer is not recognized.
* Closing connection 0
curl: (60) Peer's Certificate issuer is not recognized.
More details here: http://curl.haxx.se/docs/sslcerts.html
curl performs SSL certificate verification by default, using a "bundle"
of Certificate Authority (CA) public keys (CA certs). If the default
bundle file isn't adequate, you can specify an alternate file
using the --cacert option.
If this HTTPS server uses a certificate signed by a CA represented in
the bundle, the certificate verification probably failed due to a
problem with the certificate (it might be expired, or the name might
not match the domain name in the URL).
If you'd like to turn off curl's verification of the certificate, use
the -k (or --insecure) option.
I hate it when that happens. I don't know who forgot to do what, but it just won't work and nobody on the entire Internet seems to know how to handle it.
Figuring out the details of the issue
This is the easy part:
* NSS error -8179 (SEC_ERROR_UNKNOWN_ISSUER)
* Peer's Certificate issuer is not recognized.
In human language that reads: the problem is with HTTPS. The certificate used by the remote site was issued by a Certificate Authority (CA) that we don't know of, and because we don't know it, we won't trust any certificates it has issued.
Further:
* Initializing NSS with certpath: sql:/etc/pki/nssdb
* CAfile: /etc/pki/tls/certs/ca-bundle.crt
CApath: none
In human language that reads: there were three (3) places curl tried looking for the root certificate:
- NSS is the Network Security Services library created by Mozilla. Its database is located at /etc/pki/nssdb.
- OpenSSL is the library/toolkit used by pretty much the rest of your Linux for HTTPS encryption. It has the trusted root certificates at /etc/pki/tls/certs/ca-bundle.crt.
- OpenSSL didn't use a directory (CApath) for certificates.
After checking all three of those, no suitable CA was found, so curl had to disconnect from the server and inform the user about this lack-of-trust issue.
Possible solutions
Ignore the issue and force curl to continue
There is a suggestion:
turn off curl's verification of the certificate, use the -k (or --insecure) option
This goes right out the window, because I'd rather chew off my left arm than force a perfectly valid check to be ignored. I'd rather spend the time investigating the fix. But that's me. Most of you don't care. I do.
Add the missing certificate to ca-bundle.crt
Good idea, but ... that file is reserved for the operating system / distro by OpenSSL. Actually, curl only uses this file to make its own life easier: curl doesn't have to distribute a set of trusted CA certificates, it just piggybacks on something OpenSSL has.
Since this approach is OpenSSL-specific, there is a procedure for adding your own CA root certificates to your system. When a new CA root needs to be installed, the mechanism is not to copy the certificate into this big file. We'll get to how to actually do it later.
The obvious problem with this approach is that the next time your distro ships a new CA bundle, one of two things will happen: 1) your changes will be overwritten and lost, and you'll have to add the CA root again, or 2) the new CA bundle won't be installed, because somebody messed with a file they shouldn't have. This is definitely not a good approach.
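For the record, Red Hat-style distros do ship a supported mechanism for local CA anchors. A hedged sketch (assuming the ca-certificates package with its update-ca-trust tool; the file name is a placeholder):

# cp my-extra-ca.pem /etc/pki/ca-trust/source/anchors/
# update-ca-trust extract

That regenerates the extracted bundles without anybody touching ca-bundle.crt by hand.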
Explicitly specify the CA root-certificate file
Aa-ha! There is a command-line option for this purpose:
--cacert <CA certificate>
(SSL) Tells curl to use the specified certificate file to verify
the peer. The file may contain multiple CA certificates. The
certificate(s) must be in PEM format. Normally curl is built to
use a default file for this, so this option is typically used to
alter that default file.
That's the one I could use, if I were willing to do it every goddamn single time I curl for something. First, I don't want to do that every time, and second, that command-line option isn't available to me, as I was using a piece of software wrapping libcurl.
Add the missing CA root-certificate into NSS database to establish trust
This is the one I chose. It is also the one nobody seems to get solved.
If you can find precise information on the web about how to fix this, please tell me. I've browsed through enough partial and non-working solutions not to care for half-assed fixes that fail in the end.
Getting the missing certificate
Whatever we do (except just ignore the problem), the missing root certificate needs to be located. With a little bit of googling I found the page Repository, "Here's a collection of important certificate documentation" (https://certs.godaddy.com/repository/) on GoDaddy's server. Sure, the initial impression was "whoa, that was easy!", but when I landed on the page, I realized that the following certificates were available for download under GoDaddy Certificate Chain - G2:
- GoDaddy Class 2 Certification Authority Root Certificate - G2
- GoDaddy Secure Server Certificate (Intermediate Certificate) - G2
- Microsoft to GoDaddy G2 Cross Certificate
- GoDaddy G2 Code Signing Intermediate
- GoDaddy Secure Extended Validation Code Signing CA - G2
- GoDaddy Certificate Bundle for Microsoft Windows Driver Signing - G2
- GoDaddy Certificate Bundles - G2
- GoDaddy PKCS7 Certificate Intermediates Bundle (for Windows IIS) - G2
- GoDaddy Certificate Bundles - G2 With Cross to G1
- GoDaddy Certificate Bundles - G2 With Cross to G1, includes Root
Ok, which one will I need? Darn!
Luckily I know a thing or two about X.509 certificates and especially certificate extensions. There should be an AIA, or Authority Information Access, section in the cert (see RFC 5280 section 4.2.2.1 for details). At least most CAs provide that information to make people's lives easier.
First download the cert with a one-liner:
$ echo | \
openssl s_client -connect packetstormsecurity.net:443 \
> /tmp/packetstormsecurity.net.cert
Btw. the one-liner will say dumb things like:
depth=0 OU = Domain Control Validated, CN = packetstormsecurity.com
verify error:num=20:unable to get local issuer certificate
verify return:1
depth=0 OU = Domain Control Validated, CN = packetstormsecurity.com
verify error:num=27:certificate not trusted
verify return:1
depth=0 OU = Domain Control Validated, CN = packetstormsecurity.com
verify error:num=21:unable to verify the first certificate
verify return:1
That's just vomit from the fact that the certificate isn't trusted. The important thing is that the one-liner results in a text file with a lot of garbage, but also the server certificate PEM. Luckily OpenSSL will ignore all the garbage when running the command:
$ openssl x509 -noout -text -in /tmp/packetstormsecurity.net.cert
That one will output a lot of stuff, most of which is irrelevant for this purpose. The relevant parts are:
Certificate:
Signature Algorithm: sha256WithRSAEncryption
Issuer: C=US, ST=Arizona, L=Scottsdale, O=GoDaddy.com, Inc.,
OU=http://certs.godaddy.com/repository/,
CN=Go Daddy Secure Certificate Authority - G2
X509v3 extensions:
X509v3 Certificate Policies:
Policy: 2.16.840.1.114413.1.7.23.1
CPS: http://certificates.godaddy.com/repository/
Authority Information Access:
OCSP - URI:http://ocsp.godaddy.com/
CA Issuers -
URI:http://certificates.godaddy.com/repository/gdig2.crt
X509v3 Authority Key Identifier:
keyid:40:C2:BD:27:8E:CC:34:83:30:A2:33:D7:FB:6C:B3:F0:B4:2C:80:CE
Exactly what we needed! There is an AIA-block with a direct URL of http://certificates.godaddy.com/repository/gdig2.crt in it.
A download:
$ wget http://certificates.godaddy.com/repository/gdig2.crt \
-O "/etc/pki/tls/certs/Go Daddy Secure Certificate Authority - G2.pem"
... and check that certificate's key identifiers:
$ openssl x509 -noout -text \
-in /etc/pki/tls/certs/Go\ Daddy\ Secure\ Certificate\ Authority\ -\ G2.pem
... will reveal:
Certificate:
Signature Algorithm: sha256WithRSAEncryption
Issuer: C=US, ST=Arizona, L=Scottsdale, O=GoDaddy.com, Inc.,
CN=Go Daddy Root Certificate Authority - G2
Subject: C=US, ST=Arizona, L=Scottsdale, O=GoDaddy.com, Inc.,
OU=http://certs.godaddy.com/repository/,
CN=Go Daddy Secure Certificate Authority - G2
X509v3 extensions:
X509v3 Subject Key Identifier:
40:C2:BD:27:8E:CC:34:83:30:A2:33:D7:FB:6C:B3:F0:B4:2C:80:CE
X509v3 Authority Key Identifier:
keyid:3A:9A:85:07:10:67:28:B6:EF:F6:BD:05:41:6E:20:C1:94:DA:0F:DE
Oh yes x 2!! The CA certificate's subject key identifier matches the authority key identifier of the failing certificate, so this is the CA that issued it. That is proof of the correct CA chain. We found the correct file.
Establishing trust to the new CA root-certificate in OpenSSL
This is the easy part. This one I have done hundreds of times.
First get a hash of the certificate:
$ openssl x509 -hash -noout \
-in /etc/pki/tls/certs/Go\ Daddy\ Secure\ Certificate\ Authority\ -\ G2.pem
For this particular certificate, the hash is 27eb7704. The next thing is to tell OpenSSL that our server trusts this newly downloaded certificate. It can be done like this:
$ ln -s /etc/pki/tls/certs/Go\ Daddy\ Secure\ Certificate\ Authority\ -\ G2.pem \
/etc/pki/tls/certs/27eb7704.0
The idea is to symlink the downloaded file to a filename derived from the hash, suffixed with .0 (dot-zero).
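As a side note, a hedged alternative (assuming the c_rehash script shipped with OpenSSL, typically in the openssl-perl package) is to let it create the hash symlinks for every certificate in the directory in one go:

$ c_rehash /etc/pki/tls/certs/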
Now we can verify that our setup was done correctly (remember the "garbage" file we downloaded earlier):
$ openssl verify /tmp/packetstormsecurity.net.cert
The only valid output would be:
/tmp/packetstormsecurity.net.cert: OK
Anything else, and you fumbled it.
Additional step: Add all hashes of the certificate chain
The command-line openssl tool is now squared away; however, that's not how applications access certificates. This is where the CLI command and the library functionality differ. My box has /usr/lib64/libssl.so.10 doing the work for an application.
Looking at the SSL_CTX_use_certificate documentation, it's evident that there are functions to load a known certificate bundle (/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem on my box, symlinked via /etc/pki/tls/certs/ca-bundle.crt), or a single certificate file, but not a directory of certificates the way the openssl command does it. An application has to iterate over the directory and add every certificate individually. For example in Perl, HTTPS connections are typically created via the IO::Socket::SSL library. It accepts two options: SSL_ca_path and SSL_ca_file.
As the option of modifying the ca-bundle.crt file was already abandoned, using the SSL_ca_file option is out. That leaves us with SSL_ca_path, which requires every certificate hash to be symlinked to the appropriate certificate PEM file. That's why the symlink we already made won't do it completely; two additional ones are required to get IO::Socket::SSL working properly:
$ ln -s ca-bundle.crt cbf06781.0
$ ln -s ca-bundle.crt f081611a.0
Where those two hashes come from is a bit complex, but here goes:
The intermediate CA certificate we downloaded, Go Daddy Secure Certificate Authority - G2, was issued by:
$ openssl x509 -noout -issuer_hash
-in Go\ Daddy\ Secure\ Certificate\ Authority\ -\ G2.pem
... a certificate with the hash cbf06781, which is already packed into ca-bundle.crt. Here things get weird: the Go Daddy Root Certificate Authority - G2 with hash cbf06781 is self-signed. However, during web access that exact same certificate (same serial number and all) is issued by a certificate with the hash f081611a, and in ca-bundle.crt that one has the subject Go Daddy Class 2 Certification Authority. So we need to add both to keep applications happy. It looks like somebody at Go Daddy really dropped the ball. Why should there be two separate CA certificates? Insane.
Actually, the openSUSE Linux distro, for example, does that automatically for all bundle certificates. The system is so stupid that symlinking all certificates is the only working method.
Establishing trust to the new CA root-certificate in NSS
Ok, this is the impossible part.
With a lot of googling, poking around, failing, reading docs, tracing curl, etc. I found out that there is a tool called certutil, the utility to manipulate NSS certificate databases. It seems to belong to the nss-tools package. There is a man page and some documentation at Network Security Services. But what's happening and how I should proceed remained a bit foggy.
There is /etc/pki/nssdb/, which we found at the beginning of all this. That directory contains the NSS database in the form of a bunch of files. I found out that cert8.db and key3.db are completely obsolete, and any access method of the form certutil -d /etc/pki/nssdb/ is completely useless, because it touches only those files. Nobody and nothing uses them. Why are they there?
The files that actually have an effect are cert9.db and key4.db. The correct way of accessing those is certutil -d sql:/etc/pki/nssdb. Notice the sql: prefix; that's the part causing most of the confusion.
To get the certificate into the DB, run this command on a single line:
certutil -d sql:/etc/pki/nssdb -A -t "C,C,C"
-n "Go Daddy Secure Certificate Authority - G2"
-i /etc/pki/tls/certs/Go\ Daddy\ Secure\ Certificate\ Authority\ -\ G2.pem
Now your NSS DB should list:
# certutil -d sql:/etc/pki/nssdb -L
Certificate Nickname Trust Attributes
SSL,S/MIME,JAR/XPI
Go Daddy Secure Certificate Authority - G2 C,C,C
The three Cs mean that the certificate in the DB is a trusted authority for servers, e-mail and code signing. The certutil docs say that using 'C' for intermediate certificates is discouraged, and I didn't bother to check whether that 'C' is needed at all. But having it doesn't break anything now that the setup is done.
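Should you later want to tone the trust down or remove the entry altogether, the same tool does that too; a hedged sketch (the nickname must match the one used above):

certutil -d sql:/etc/pki/nssdb -M -t ",," -n "Go Daddy Secure Certificate Authority - G2"

... or to delete the entry completely:

certutil -d sql:/etc/pki/nssdb -D -n "Go Daddy Secure Certificate Authority - G2"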
Testing
Now, running exactly the same command:
$ curl --verbose https://packetstormsecurity.net/
... will result in:
* About to connect() to dl.packetstormsecurity.net port 443 (#0)
* Trying 198.84.60.200...
* Connected to dl.packetstormsecurity.net (198.84.60.200) port 443 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
* CAfile: /etc/pki/tls/certs/ca-bundle.crt CApath: none
* SSL connection using TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
* Server certificate:
* subject: CN=packetstormsecurity.com,OU=Domain Control Validated
* start date: May 31 18:04:40 2015 GMT
* expire date: May 31 18:04:40 2016 GMT
* common name: packetstormsecurity.com
* issuer: CN=Go Daddy Secure Certificate Authority - G2,
OU=http://certs.godaddy.com/repository/,
O="GoDaddy.com, Inc.",L=Scottsdale,ST=Arizona,C=US
< HTTP/1.1 200 OK
Yesh! It works!
One hell of a thing to get fixed, but now the trust has been established so that it also reaches curl and any application using libcurl.
Final words
This is a multi-mess.
First: Go Daddy messes up their certs. Why isn't their Go Daddy Secure Certificate Authority - G2 in ca-bundle.crt? Why are there two versions of Go Daddy Root Certificate Authority - G2?
Second: having NSS on a Linux box is insane! Nobody else uses it for certificate storage. libcurl's support for your own CAs is completely messed up and unusable.
CentOS 7.2 network install fail [Solved]
Sunday, June 5. 2016
I was about to upgrade an old CentOS 6 box to 7. It was all planned: backups taken, the necessary information gathered and a USB stick prepared with the 7.2 DVD image on it. A shutdown and a boot from the installation USB, a bunch of settings, date/time, keyboard, network, but Däng! No dice.
My initial attempt was to install from the USB stick, but for some reason the server didn't see the USB volume as a valid installation source. No problem, I thought, let's go for a network install then. The interface was already up and the box could reach the Internet ok; installing from a mirror shouldn't take too long. But no. All I could accomplish was an "Error setting up base repository". I went googling about this and found CentOS 7.2 Netinstall Guide – Network Installation Screenshots.
First I set the installation source as On the network: http://mirror.centos.org/centos/7.2.1511/os/x86_64/ and This URL refers to a mirror list: checked. No avail. It took about 8 minutes to get the error, but this approach failed miserably. What /tmp/packaging.log had was:
ERR packaging: failed to grab repo metadata for anaconda: Cannot find a valid baseurl for repo: anaconda
ERR packaging: metadata download for repo anaconda failed after 10 retries
Argh! 8 minutes to determine that the thing didn't work.
There was plenty of time to plan the next move. I went to the CentOS mirror list and picked the local Finnish mirror at nic.FUNET. Setting that as the source: http://ftp.funet.fi/pub/mirrors/centos.org/7.2.1511/os/x86_64/ with This URL refers to a mirror list: unchecked got me bupkis; /tmp/packaging.log had:
ERR packaging: failed to grab repo metadata for anaconda: failure: repodata/6990209f63a9fd811f13e830ac3c6de4c5d70a42b1c6873e4329b523d394c3bd-primary.xml.gz from anaconda: [Errno 256] No more mirrors to try.
http://ftp.funet.fi/pub/mirrors/centos.org/7.2.1511/os/x86_64/repodata/
6990209f63a9fd811f13e830ac3c6de4c5d70a42b1c6873e4329b523d394c3bd-primary.xml.gz: [Errno 14] HTTP Error 404 - Not Found
Finally a tangible result. Obviously the HTTP 404 was correct; there is no such file in that directory. It took me about 15 seconds to determine that the URL should be http://ftp.funet.fi/pub/mirrors/centos.org/7.2.1511/os/x86_64/repodata/0e54cd65abd3621a0baf9a963eafb1a0ffd53603226f02aadce59635329bc937-primary.xml.gz. Something was off in the installer metadata. But where?
I checked the treeinfo at http://ftp.funet.fi/pub/mirrors/centos.org/7.2.1511/os/x86_64/.treeinfo, but to no avail. Then my poking around landed me at /var/run/install/repo/repodata. Among others, it has a file named repomd.xml. Comparing it with the network version from http://ftp.funet.fi/pub/mirrors/centos.org/7.2.1511/os/x86_64/repodata/repomd.xml made everything crystal clear.
- Drive had:
- revision 1449702798
- 6990209f63a9fd811f13e830ac3c6de4c5d70a42b1c6873e4329b523d394c3bd, the file that doesn't exist in the mirror
- Network had:
- revision 1449700451
- 0e54cd65abd3621a0baf9a963eafb1a0ffd53603226f02aadce59635329bc937, the file that does exist
But how to fix this?
My initial attempt was to wget http://ftp.funet.fi/pub/mirrors/centos.org/7.2.1511/os/x86_64/repodata/repomd.xml into /var/run/install/repo/repodata and retry, but that didn't change anything; still the same frustrating error after a 10-minute delay.
I rebooted the box and realized that my change had persisted on the USB drive. Whoa! Anyway, I got things cooking this time: finally the base repository was accepted, I got to make my installation selections and the install moved forward.
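To recap the working sequence as a sketch (assuming you can reach the installer's shell on tty2 with Ctrl+Alt+F2, and substituting your own mirror URL):

cd /var/run/install/repo/repodata
wget http://ftp.funet.fi/pub/mirrors/centos.org/7.2.1511/os/x86_64/repodata/repomd.xml -O repomd.xml

... then reboot and point the installer at the same mirror again.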
What the hell was going on there? Where did the incorrect repomd.xml come from? It isn't in the installation image. Or it is, but it comes from a place I didn't find. Whatever it is, there is something seriously off in the process. Why doesn't the installer try to get the most recent version from the network? It is a network install, after all!! A frustrating couple of hours later than anticipated, I finally got the box upgraded. Hopefully this information saves you that time.