Xyloband - What's inside one
Sunday, March 18. 2018
If you're lucky enough to get to go to a really cool event, the organizers may be handing out a Xyloband to everybody attending.
For those who've never heard of a Xyloband, go see their website at http://xylobands.com/. It has some sample videos, which this screenshot was taken from:
See those colourful dots in the above pic? Every dot is a person wearing a Xyloband on their wrist.
As you can see, mine is from King's Kingfomarket, Barcelona 2017. There is a YouTube video from the event, including some clips from the party, at https://youtu.be/lnp6KjMRKW4. In the video, for example at 5:18, there is our CEO wearing the Xyloband on his right wrist, and at 5:20 one of my female colleagues with a flashing Xyloband. Because the thing on your wrist can be remote controlled, it creates an extremely cool effect: flashing to the beat of the music, or painting colourful patterns across the crowd. So, ultimately you get to participate in the lighting of the venue.
After the party, nobody wanted those bands back, so of course I wanted to pop the cork on one. I had never even heard of such a thing and definitely wanted to see what makes it tick. The back of a Xyloband has a bunch of Phillips-head screws:
Given the size of the circular bottom, my guess that there would be a CR2032 battery inside proved correct:
After removing the remaining 4 screws, I found two more batteries, CR2016 this time:
The pic has only two batteries visible, but the white tray indeed has two cells in it. The button cell naming (https://en.wikipedia.org/wiki/Button_cell) encodes the dimensions: a CR2032 is 20 mm in diameter and 3.2 mm high, while a CR2016 is 20 mm by 1.6 mm. So, if you need 6 VDC instead of the 3 VDC a single cell can produce, just put in two CR2016s instead of one CR2032. They take exactly the same space as a CR2032, but provide double the voltage. Handy, huh! My thinking is that 9 VDC is a bit high for such a system, but one part running at 6 volts and another part at 3 volts would make sense to me.
With the plastic cover removed, the board of a Xyloband looks like this:
With the nylon wristband removed, there is a flexible 4-wire cable carrying 8 RGB LEDs:
The circuits driving the thing are:
The upper one is an Atmel PLCC-32 chip with the text Atmel XB-RGB-02 on it. If I read the last line correctly, it says ADPW8B. Very likely an 8-bit Atmel microcontroller tailored for Xylobands to drive the RGB LEDs.
The radiochip at the bottom is a Silicon Labs Si4362. The spec is at https://www.silabs.com/documents/public/data-sheets/Si4362.pdf. A quote from the spec says:
Silicon Labs Si4362 devices are high-performance, low-current receivers covering the sub-GHz frequency bands from 142 to 1050 MHz. The radios are part of the EZRadioPRO® family, which includes a complete line of transmitters, receivers, and transceivers covering a wide range of applications.
Given this, they're just using Silicon Labs off-the-shelf RF modules to transmit data to the individual devices. This data is fed to the microcontroller, which makes the RGB LEDs work exactly how the DJ of the party wants them lit.
While investigating this, I found a YouTube video by Mr. Breukink at https://youtu.be/DdGHo7BWIvo?t=1m33s. He manages to "reactivate" a different model of Xyloband in his video. Of course he doesn't hack the RF protocol (which would be very, very cool, btw.), but he makes the LEDs light up in a color of your choosing. In a real-life situation, driven by the Atmel chip, the RGB LEDs can of course produce any color. Still, a nice hack.
Arch Linux failing to start network interface, part 2
Saturday, March 17. 2018
I genuinely love my Arch Linux. It is a constant source of mischief. In a positive sense. There is always something changing that makes the entire setup explode. The joy comes when I get to put the pieces back together.
In the Wikipedia article of Arch Linux, there is a phrase:
... and expects the user to be willing to make some effort to understand the system's operation
That is precisely what I use my Arch Linux for. I want the practical experience and understanding of the system. And given its rolling-release approach, it explodes plenty.
Back in 2014, Arch Linux implemented Consistent Network Device Naming. At that time the regular network interface names changed; for example, my eth0 became ens3. My transition was not smooth. See my blog post about that.
Now it happened again! Whaat?
Symptoms:
- Failure to access the Linux box via SSH
- Boot taking a very long time
- Error message about service sys-subsystem-net-devices-ens3.device failing on startup
Failure:
Like the previous time, the fix is about the DHCP client failing.
A vanilla query for DHCP client status:
systemctl status dhcpcd@*
... resulted in nothingness. A more specific query for the failing interface:
systemctl status dhcpcd@ens3
... results:
* dhcpcd@ens3.service - dhcpcd on ens3
Loaded: loaded (/usr/lib/systemd/system/dhcpcd@.service; enabled; vendor pre>
Active: inactive (dead)
Yup. DHCP failure. Like previously, running ip addr show revealed the network interface name change:
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 52:54:52:54:52:54 brd ff:ff:ff:ff:ff:ff
There is no more ens3; it is enp0s3 now. Ok.
Fix:
A simple disable for the non-existent interface's DHCP, and enable for the new one:
systemctl disable dhcpcd@ens3
systemctl enable dhcpcd@enp0s3
To test that, I rebooted the box. Yup. Working again!
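The disable/enable juggling above can be sketched as a small script: it finds the first non-loopback interface under /sys/class/net and prints the unit commands to run. The OLD_IF value and the printing-instead-of-executing are my assumptions here; on a real box you'd run the systemctl commands directly.

```shell
# Minimal sketch: detect the current interface name and print the
# dhcpcd unit commands needed to move DHCP over to it.
OLD_IF="ens3"   # assumption: the name that just stopped existing
NEW_IF=""
for dev in /sys/class/net/*; do
    [ -e "$dev" ] || continue        # no sysfs entries? just skip
    name=$(basename "$dev")
    [ "$name" = "lo" ] && continue   # ignore loopback
    NEW_IF="$name"
    break
done
echo "systemctl disable dhcpcd@${OLD_IF}"
if [ -n "$NEW_IF" ]; then
    echo "systemctl enable dhcpcd@${NEW_IF}"
fi
```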
Optional fix 2, for the syslog:
Debugging this wasn't as easy as I expected. dmesg had nothing on DHCP clients, and there was no kernel message log at all! Whoa! Who ate that? I know that a default installation of Arch does not have a syslog. I did have one running (I think) and now it was gone. Weird.
Documentation is at https://wiki.archlinux.org/index.php/Syslog-ng, but I simply did a:
pacman -S syslog-ng
systemctl enable syslog-ng@default
systemctl start syslog-ng@default
... and a 2nd reboot to confirm that the syslog existed and contained boot information. Done again!
What:
The subject of Consistent Network Device Naming is described in more detail here: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/networking_guide/ch-consistent_network_device_naming
Apparently, there are five different approaches to actually implementing CNDN. Given that the old ens device was named by PCI hotplug slot index number (enS for slot, Scheme 2), the new name was apparently chosen by the physical location of the connector (enP for physical, Scheme 3).
When, what and why the naming scheme change was made eludes me. I tried searching the Arch discussion forums at https://bbs.archlinux.org/, but found nothing there. But anyway, I got the pieces back together. Again!
Update 30th March 2018:
Yup. The interface naming rolled back. Now ens3 is the interface used again. Darnation, this naming flapping!
Microsoft Virtual Security Summit
Wednesday, March 14. 2018
I got an ad from Microsoft about a security summit they were organizing. Since it was virtual I didn't have to travel anywhere, and the agenda looked interesting, so I signed up.
Quotes:
- Michael Melone, Microsoft
- Jim Moeller, Microsoft, about infosec referring to Michael Melone sitting next to him
- Patti Chrzan, Microsoft
Discussion points:
- Security hygiene
- Run patches to make your stuff up-to-date
- Control users' access
- Invest into your security, to make attackers ROI low enough to attack somebody else
- Security is a team sport!
- Entire industry needs to share and participate
- Law enforcement globally needs to participate
- Attacks are getting more sophisticated.
- 90% of cybercrime starts with a sophisticated phishing email
- Once breached, new malware can steal the domain admin's credentials and infect secured machines as well.
- Command & control traffic can utilize stolen user credentials and the corporate VPN to pass through the firewall.
- Attackers are financially motivated.
- Ransomware
- Bitcoin mining
- Petya/NotPetya being an exception; it just caused massive destruction
- Identity is the perimeter to protect
- Things are in the cloud, there is no perimeter
- Is the person logging in really who he/she claims to be?
- Enabling 2-factor authentication is vital
Finally:
Goodbye CAcert.org - Welcome Let's Encrypt!
Sunday, March 11. 2018
A brief history of CAcert.org
For almost two decades, my primary source for non-public-facing X.509 certificates has been CAcert.org. They were one of the first orgs ever to hand out free-of-charge certificates to anybody who wanted one. Naturally, you had to pass a simple verification to prove that you actually controlled the domain you were applying a certificate for. Once you did that, you could issue certificates with multiple hostnames or even wildcard certificates. And all that cost nothing!
The obvious problem with CAcert.org has always been that they are not included in any of the major web browsers. Their inclusion list at https://wiki.cacert.org/InclusionStatus is a sad read. It doesn't even have Google Chrome, the most popular browser of the current time, in the list. The list simply hasn't been updated during the lifetime of Chrome! On second thought, looking at it a bit closer, the browser inclusion status list is an accurate statement of how CAcert.org is doing today. Not so good.
The Wikipedia page https://en.wikipedia.org/wiki/CAcert.org has a brief history. Their root certificate was included in the initial versions of Mozilla Firefox. When the CA/Browser Forum was formed back in 2005 by some of the certificate vendors of that time, to put a set of rules and policies in place regarding web site certificates and their issuance policies, it kicked pretty much everybody out by default. Commercial vendors made the cut back in, but CAcert.org simply couldn't (or wouldn't) comply and withdrew their application for membership. The reason CAcert.org cannot act on anything is that the entire org is (and has been for a long time) pretty much dead. All the key persons are simply busy doing something else.
Today, the status of CAcert.org is that their certs are not trusted and are signed with non-accepted hash algorithms. Over a year ago, there was a blip of activity and they managed to re-sign their root certificate with SHA-256, but to me it looks like they exhausted all their energy on the actual signing, and the newly signed root certs were never published. I wrote a post about how to actually get your hands on the new root certificate and install it on your machines.
Today the CA/Browser Forum is mostly controlled by Google and Mozilla, a stalled CAcert.org would never be accepted as a member, and at the same time there is huge pressure to use properly signed and hashed certificates for all web traffic. I've run out of road with CAcert.org. So, for me, it's time to move on!
A brief history of Let's Encrypt
Two years ago, the certificate business had been hit hard. I collected a number of failures in my blog post about What's wrong with HTTPS. Roughly at the same time, the Electronic Frontier Foundation (EFF) also saw the situation as unacceptable. Businesses wanted serious money for their certs, but were not doing an especially good job of maintaining trustworthy business practices. With the help of some major players (including Mozilla Foundation, Akamai Technologies, Cisco Systems, etc.) they created a non-profit organization called Let's Encrypt, for the sole purpose of issuing free-of-charge certificates to anybody who wanted one.
They managed to pull that off: in a very short period of time, they became the most prominent certificate authority on the Internet. At least their product price is right, €0 for a cert. And you can request and get as many as you want. There are some rate limits, so you cannot denial-of-service them, but for any practical use you can get all your certs from them. For free!
The practical benefit of Let's Encrypt's operation is that the number of web servers on the Net with HTTPS enabled has been rising. Both the EFF and the CA/Browser Forum strongly suggest that all HTTP traffic should be encrypted, aka HTTPS. But the obvious hurdle in achieving that is that everybody needs a certificate on their web server to enable encryption. Given Let's Encrypt, now everybody can have one! The EFF has stated for a long time that having secure communications shouldn't be about money; it should be about you simply wanting it. The obvious next move is that in the coming years the CAB Forum will announce that all web traffic MUST be encrypted. However, we're not quite there yet.
Briefly on Let's Encrypt tech
Since they wanted to disrupt the certificate business, they abandoned existing operating procedures. Their target is to run the entire request/issue process fully automated, and to do it in a secure manner. To achieve that, they created a completely new protocol, the Automatic Certificate Management Environment, or ACME. The RFC draft can be seen at https://datatracker.ietf.org/doc/draft-ietf-acme-acme/.
Their chain of trust (taken from https://letsencrypt.org/certificates/) is a bit confusing to me:
As the root certificate, they use Digital Signature Trust Co. (red line with IdenTrust DST Root CA X3). That's something your browser has had for years, so ultimately they didn't have to add anything. When you request a certificate, it is issued by the intermediate authority Let's Encrypt Authority X3. And at the bottom level is your very own server certificate.
However, I don't understand why ISRG Root X1 is there to double-sign all the intermediates while signing only their OCSP data. My computers don't have that root certificate installed. So, what's the point of it?
As a note:
This is the recommended and typical way of setting up your certificate chain; nothing funny about it. To a layman it will look overly complex, but in the X.509 world things are done like that for security reasons: to be able to "burn" an intermediate CA in a split second and roll forward to a new one, without interruptions and without needing to install anything on your machines.
What the actual connection between Let's Encrypt and Digital Signature Trust Co. (IdenTrust) is remains unclear to me. If somebody knows, please drop a comment clarifying it.
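One way to see the root-vs-leaf distinction concretely: a root CA is just a certificate that signs itself, so its subject and issuer are identical. A throwaway self-signed cert, generated here with the openssl CLI (assuming it is installed; this has nothing to do with Let's Encrypt's actual keys), demonstrates the property:

```shell
# Generate a throwaway self-signed "root" and show that subject == issuer.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=Demo Root" \
    -keyout /tmp/demo-root.key -out /tmp/demo-root.crt 2>/dev/null
SUBJECT=$(openssl x509 -in /tmp/demo-root.crt -noout -subject)
ISSUER=$(openssl x509 -in /tmp/demo-root.crt -noout -issuer)
echo "$SUBJECT"
echo "$ISSUER"
```

A server certificate, by contrast, would show an issuer naming the intermediate (Let's Encrypt Authority X3), which in turn is signed by the DST root your browser already trusts.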
My beef with Let's Encrypt
Short version: It's run by idiots! So, Let's not.
Long version:
- Tooling is seriously bad.
- The "official" python-based software won't work on any of my machines. And I don't want to try fixing their shit. I just let those pieces of crap rot.
- To me, Acme Inc. is heavily associated with Wile E Coyote
- Mr. Coyote never managed to catch the Road Runner. Many times it was because the equipment manufactured by Acme Inc. failed to function. https://en.wikipedia.org/wiki/Acme_Corporation
- Did I mention the bad tools?
- Every single tool I've ever laid my hands on wants to mess up my Apache and/or Nginx configuration.
- I have a very simple rule about my configurations: DON'T TOUCH THEM! If you want to touch them, ask me first (at which point I can reject the request).
- There seems to be no feasible way of running those crappy tools without them trying to break my configs first.
- Most 3rd-party written libraries are really bad
- There are libraries/tools for all imaginable programming languages implementing parts/all of ACME-protocol.
- Every single one of them is either bad or worse.
- Certificate life-span is 90 days. That's three (3) months!
- The disruptive concept is that you need to renew your certs every 60 days (or at most every 90 days). For that to happen, you will need to learn automation. A noble thought, that. Not so easy to implement for all possible usages. The good thing is that, if you ever achieve it, you won't have to suffer from certificates expiring without you knowing about it.
- As an alternative, you can get a completely free-of-charge SSL certificate from Comodo, valid for 90 days, but you have to do it manually using Comodo's web GUI. See https://ssl.comodo.com/free-ssl-certificate.php if you want one.
- I won't list here the commercial CAs who can issue you a 30-day trial certificate for free, because the list is too long. So, if you just want a short-lived cert, you have been able to get one for a long time.
- They're not able to issue wildcard certificates, yet.
- This is in the works, so mentioning it is hitting them below the belt. Sorry about that.
- From their test API, you can get a wildcard certificate if you really, really, really need one. It won't be trusted by your browser, but it will be a real wildcard cert.
- The original release date for the ACME v2 API in production was 27th Feb 2018, but for <insert explanation here> they were unable to make their own deadline. The new schedule is during Q1/2018. See the Upcoming Features page for details.
My solution
It's obvious that I need to ditch CAcert.org, and the only viable alternative is Let's Encrypt (run by idiots). To make this bitter choice work for me, after evaluating a number of solutions, I found a reasonable/acceptable tool called acme.sh. This tool is written in Bash and uses curl for the ACME access. The author publishes the entire package on GitHub, and I have my own fork of it, which doesn't have a line of code that even thinks about touching my web server configurations. You can get it at https://github.com/HQJaTu/acme.sh
Since I love Rackspace for their Cloud DNS, I even wrote a Rackspace Cloud DNS API plugin for acme.sh to make my automated certificate renewals work seamlessly. After carefully ruling out all other options for domain verification, it is 100% obvious that I will do all of my ACME domain verifications via DNS only. Actually, for wildcard certs, it is the only allowed approach. Also, some of the certificates I'm using are for appliances, which I carefully firewall away from the wild wild web. Those ridiculous web site verifications wouldn't work for me anyway.
And for those who are wondering: why Rackspace Cloud DNS? The answer is simple: price. They charge $0 per domain. That is unlike most cloud/DNS service providers, who want actual money for their service. With Rackspace you get a premium GUI, a premium API, and premium anycast DNS servers at the right price: free of charge. You will need to enter a valid credit card when you create the account, but as they state: they won't charge it unless you subscribe to a paid service. I've been running their DNS for years, and they have never charged me once. (If they see this blog post, they probably will!)
What I'm anxiously waiting for is the ACME v2 API release, which is due any day now. That will allow me to get my precious wildcard certificates using these new scripts.
Now that my chips are in, it's just a matter of converting my systems to use the Wile E. Coyote stuff for getting/renewing certs. And I will need to add tons of automation with Bash, Perl, Ansible, SaltStack, ... whatever, to keep the actual servers running as intended. Most probably I will post some of my automation in this blog. Stay tuned!
Azure payment failure
Thursday, March 8. 2018
Since last July, this blog has been running in Microsoft Azure.
In January, Microsoft informed me that I need to update my payment information or they'll cut off my service. Ever since, I've been trying to do just that. To my amazement, I still cannot! There are JavaScript errors in their payment management panel which seem to be impossible to get fixed.
So, eventually I got a warning that they will discontinue my service unless I pay. Well ... I'd love to pay, but ... For the time being, all I can do is back up the site and plan for setting up shop somewhere else. This is so weird!
EBN European Business Number scam - Part 3 - Gorila's findings
Friday, March 2. 2018
Update 25th June 2019: EBN scammers bankrupt
The text below is a comment from Mr.(?) Gorila on my previous EBN scam post; he kindly translated a German article from 2011 for the English-reading audience. Given the length of the text, I'm posting the unaltered comment here. I did add the emphasis on subtitles to make the article easier to read.
Why this is important, is of course the legal precedent. EBN-scammers sued somebody and lost!
So: DO NOT PAY! You will win your case in court.
So, to repeat: the text below is not mine, but I think it is very valuable for the people following the EBN scam case.
Indeed, legal German is very difficult to translate into English. However, in 2010 there were many German articles addressing this court decision. The language of journal articles is simpler and easier to understand, yet the article is informative and precise enough to convey the court's ruling to the general public.
Here is a good one, with a translation below (please note that the translation is not literal, to avoid German idioms and phrases inconsistent with English):
http://www.kostenlose-urteile.de/LG-Hamburg_309-S-6610_LG-Hamburg-zu-Branchenbuchabzocke-Eintragungsformular-Datenaktualisierung-2008-des-DAD-Deutscher-Adressdienst-erfuellt-Straftatbestand-des-Betrugs.news11513.htm
Judgement of Regional Court Hamburg (Urteil vom 14.01.2011 - 309 S 66/10)
Regional Court Hamburg on Business directory rip-off:
Registration form "Data update 2008" of the DAD German Address Service constitutes criminal offense of the fraud
Based on an overall view, the court considers there to be an intent to deceive
The Regional Court of Hamburg has confirmed, in the second instance, the ruling for a customer who had sued the DAD Deutscher Adress Dienst. DAD had entered the customer into the Internet address register at www.DeutschesInternetRegister.de without making it clear that the entry was subject to a charge. The customer was to pay 2,280.04 euros for the entry. The customer hired a lawyer, with whom he went to court. There, he sued for a declaration that he was not required to pay, and for reimbursement of his legal fees. The district court Hamburg-Barmbek ruled in the customer's favour. The Regional Court of Hamburg confirmed the judgment on appeal.
The defendant, DAD, is a business directory with about 1.2 million registered companies. The vast majority of the registrations are free entries, which DAD has copied from publicly available sources. The customer received a letter from DAD entitled "Data Update 2008". The letter requested a review of the existing data, to be updated if necessary. It also said: "The registration and updating of your basic data is free."
Only at the end of the form was an indication of the costs
An employee of the company then entered the missing data on the pre-printed form and sent it to DAD, as did a large number of authorities and tradespeople. Later, the company received an invoice for 2,280.04 euros for the entry, with reference to a cost indication in the lower quarter of the form (an annual price of 958 EUR plus VAT).
Deception about actual costs is fraud
The district court Hamburg evaluated this in its judgment as fraud. This is not changed by the fact that DAD did, in some form, indicate the cost-bearing nature of the offer in the letter. Rather, it is decisive that the act of deception in fraud is not limited to the pretending of false facts or the distortion or concealment of existing facts. Any other behavior also counts as deception, provided that it may provoke a mistake on the part of the other person and influence the decision to make the desired declaration of intent.
Deception exists when the victim of fraud, knowing all the circumstances, would act differently
On the other hand, it was not decisive whether the deceived person exercised the care required in business dealings, or even acted negligently with regard to the omission of certain contractual information, insofar as the error on the part of the customer was triggered by a legally relevant deception. The customer's claims do not fail because the error was caused by his own negligence in dealing with advertising mail.
The nature and design of the form produce erroneous ideas
This applies in particular to cases where the author of a contract offer, by presentation and formulation, chooses a design meant to give the addressee erroneous ideas about the actual terms on offer. A deception can be assumed even if the true character of the letter could be recognized after careful reading. This also follows from a judgment of the Federal Court of Justice of 26.04.2001, Az. 4 StR 439/00. The deception in question must have been employed according to plan; it was not merely a consequence, but the purpose of the action.
Costs notice at the end of the form could be overlooked by the customers
According to the Federal Court of Justice, in the case of a merely misleading presentation in an offer letter, it is above all a matter of how strongly significant contractual parameters are presented, distorted or obscured. In the present case of DAD, the non-binding appearance of the request for review and correction of already-known data could cause the price to be overlooked by at least some customers.
Form gave the impression of already existing contractual relations
Finally, another indication of intended deception was that the form had already been pre-filled with the customer's data. Such an approach was apt to give the recipient the impression that this was not a new business relationship, but rather the maintenance or extension of an existing contractual relationship.
Simple online entries usually cost no 2,280 euros
It is also crucial that no addressee has to expect a total cost of over 1,900 euros for a simple online registration. With this reasoning, the district court Heilbronn also decided, by resolution of 23.06.2010, Az. 3 S 19/10, in a similar case.
Destiny 2 Nightingale error [Solved!]
Thursday, February 22. 2018
As employees of (a subsidiary of) Activision Blizzard, those of us who wanted got keys for Destiny 2 last year.
It never worked! I never got to play it.
... and BANG! The dreaded Nightingale error:
For past couple of months, that's how much I saw Destiny 2. That isn't much. Darn!
Actually, the Internet is full of people having the same problem. There are various solutions to it, which have worked for some people, and for others, not so much.
After doing all the possible things, including throwing dried chicken bones into a magical sand circle, I ran out of options. I had to escalate the problem to Blizzard Support. Since this wasn't a paid game, it obviously didn't reach their highest-priority queue. But ultimately the cogs of bureaucracy aligned and my problem got the required attention. But ... it was unsolvable. Or it seemed to be.
Today, after the problem was escalated on to Bungie, they pointed out the cause. My computer didn't manage to reach their CDN, so the game got angry and spat the Nightingale in my face. They also hinted at what my computer did instead and ...
Somewhere in the guts of Destiny 2, there is a component reading the value of the environment variable HTTP_PROXY. I had that set on the PC because of ... something I did for software development years ago.
After deleting the variable, the game started. WHOA!
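The same check-and-clear on a Unix-like shell looks like the sketch below (on Windows, where Destiny 2 actually runs, the equivalent is inspecting "set HTTP_PROXY" in cmd and deleting the variable via System Properties). The proxy URL is a made-up stand-in for whatever stale value is set:

```shell
# Simulate a stale proxy setting, then clear it the way the fix requires.
HTTP_PROXY="http://old-dev-proxy.example:8080"   # made-up stale value
echo "before: ${HTTP_PROXY:-<unset>}"
unset HTTP_PROXY
echo "after: ${HTTP_PROXY:-<unset>}"
```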
So, it wasn't my router, DNS, firewall, or ... whatever I attempted before. Problem solved!
MaxMind GeoIP database legacy version discontinued
Sunday, February 11. 2018
MaxMind GeoIP is pretty much the de facto way of doing IP-address-based geolocation. I've personally set up the database updates from http://geolite.maxmind.com/download/geoip/database/ on at least a dozen different systems. In addition, there is a lot of open-source software which can utilize those databases if they are available: Wireshark, IPtables, BIND DNS, to mention a few.
The announcement on their site says:
We will be discontinuing updates to the GeoLite Legacy databases as of April 1, 2018. You will still be able to download the April 2018 release until January 2, 2019. GeoLite Legacy users will need to update their integrations in order to switch to the free GeoLite2 or commercial GeoIP databases by April 2018.
In three months' time, most software won't be able to use freshly updated GeoIP databases anymore, for the sole reason that NOBODY bothered to update to their new .mmdb DB format.
To make this clear:
MaxMind will keep providing free-of-charge GeoIP-databases even after 1st April 2018. They're just forcing people to finally take the leap forward and migrate to their newer libraries and databases.
This is a classic case of human laziness. No developer saw an incentive to update to the new format, as it offers precisely the same data as the legacy format; it's just a new file format, more suitable for the task. Now the incentive is there, and there isn't much time to make the transition. What we will see (I guarantee you this!) in 2019 and 2020 and onwards is software still running on the legacy format, using outdated databases, providing completely incorrect answers.
This won't happen often, but these outdated databases will reject your access on occasion, or claim that you're a fraudster.
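For what it's worth, migrating is not a big job: libmaxminddb ships an mmdblookup CLI that reads the new format directly. A guarded sketch (the DB filename is an assumption about what you downloaded; the lookup is skipped when the tool or file isn't present):

```shell
# Query the new-format DB for an IP's country code, if tool and DB exist.
DB="GeoLite2-Country.mmdb"   # assumption: already downloaded to this path
if command -v mmdblookup >/dev/null 2>&1 && [ -f "$DB" ]; then
    mmdblookup --file "$DB" --ip 8.8.8.8 country iso_code
    STATUS="looked up"
else
    STATUS="skipped: mmdblookup or ${DB} not available"
fi
echo "$STATUS"
```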
Replacing MacBook Air SSD
Saturday, February 10. 2018
I've never disassembled an Apple laptop before. A while back I got a chance to peek into one, as I was summoned to add more SSD capacity to an Air. As you can see from the spec, it is a rather early model of the Air. But hey, Apple hardware is quite robust and the value you get for your money is huge. So, after 7 years of usage, it is still worth replacing the SSD with a bigger one.
Instructions for this can be found at EveryMac.com, but I'll describe my experiences here. Also, video-only instructions are not something I like; a high-quality picture has its value too.
Prepare job
- Make sure there is a restorable Time Machine backup of the laptop.
- If you don't care about the data on the machine, this is not necessary.
- Make sure you can boot the device from a USB-stick:
- You can restore entire SSD content from Time Machine.
- You can do a fresh install to the new SSD
Part job
A new SSD is needed. Given the availability and low price, a Transcend 240GB JetDrive 500 SATAIII 6Gb/s Solid State Drive Upgrade Kit for Select MacBook Air Models (TS240GJDM500) was selected.
Getting one from Amazon.de was a painless and fast operation. Initially, I was wondering why anybody would be selling an "external" replacement part, because the SSD I wanted to replace is surely inside the Mac. It surely has something to do with making an Apple spare part available; normally such parts are not available to the general public.
To pop open a MacBook Air, you will need a pentalobe TS4 screwdriver, and to disconnect the SSD from the motherboard, a Torx T5 screwdriver. Both are well known to hardware hackers, but not commonly owned by the general public. Luckily somebody knows this, and both come in the box with the new SSD:
Replacement job
Popping the hood of a Mac is almost a no-operation. There are 10 pentalobe screws; remove them and the lid opens easily. After all the years of usage there was some dust in the CPU fan, so I carefully removed all of that before proceeding.
With the guts visible, the first task is to disconnect the battery connector. This ensures that the motherboard and SSD are powered down during the operation. The connector can be disconnected quite easily without any tools:
The target of the day, the SSD, is located right next to the CPU:
The arrow points to the Torx T5 screw. That is the only screw holding the SSD in place. Since I had a cool aluminium external case for the SSD, of course I placed the old SSD in it:
The new Transcend SSD looks like this. Exactly like the original Samsung:
After that, it's just a matter of flipping the lid back on and tightening the 10 pentalobes.
The entire hardware job was completed in 15 minutes. That part was designed to be replaced!
macOS job
Now we have a laptop with a completely blank SSD in it. Of course I had to test it, but it really doesn't do much on boot. The only real option was to insert a USB-stick and press the Option-key to get the boot menu, select the USB-stick and then get to the Install/Upgrade-menu. Select Disk Utility from there:
As expected, the new drive is uninitialized, and you cannot select a device with that status as a macOS install destination.
Since Apple engineers are quite smart, the operation needed to convert an uninitialized drive into an initialized one is to erase it. WHAT! Yes. To repeat: the operation is to erase the drive. The drive is completely empty; it does not have a partition table, a file system or any data on it, but in their infinite wisdom Apple engineers have decided that the only way I can initialize the new SSD is by making double-sure there is nothing on it.
The erase takes a lot of time. For most of it, the installer tries to do something erase-related, but keeps failing miserably, mainly due to the lack of a partition table. After the erase completes, there is a file system and the regular macOS install can proceed. I chose to restore the machine from a TM-backup:
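For those who prefer the Terminal (it is available in the Recovery menus too), the same initialization can be sketched with diskutil. This is just a sketch, not what I did: the device identifier disk0 and the volume name are assumptions, so check diskutil list first:

```shell
# List all disks to find the blank SSD (assumption: it shows up as disk0)
diskutil list

# "Erase" the blank disk: write a GUID partition table and create a
# journaled HFS+ volume (disk0 and the volume name are assumptions)
diskutil eraseDisk JHFS+ "Macintosh HD" GPT disk0
```

This does in one command what the Disk Utility GUI does behind its Erase button.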
That really took a couple of hours and I didn't want to sit there eyeballing it all the way through.
Job well done!
The first boot from the newly installed SSD, from the restored machine image, was a success. The first login indicated no changes at all. It was just as if this machine had been running as-is all along.
Ultimately a great success (like Borat would say)!
Arch Linux failing to update man-db
Friday, January 26. 2018
This week seems to be especially hard on my Linuxes. Doing a regular pacman -Syu started spitting crap on me:
error: failed to commit transaction (conflicting files)
man-db: /usr/bin/accessdb exists in filesystem
man-db: /usr/bin/apropos exists in filesystem
man-db: /usr/bin/catman exists in filesystem
man-db: /usr/bin/convert-mans exists in filesystem
man-db: /usr/bin/lexgrog exists in filesystem
man-db: /usr/bin/man exists in filesystem
man-db: /usr/bin/mandb exists in filesystem
man-db: /usr/bin/manpath exists in filesystem
man-db: /usr/bin/whatis exists in filesystem
man-db: /usr/lib/man-db/globbing exists in filesystem
man-db: /usr/lib/man-db/libman-2.7.6.1.so exists in filesystem
man-db: /usr/lib/man-db/libman.so exists in filesystem
man-db: /usr/lib/man-db/libmandb-2.7.6.1.so exists in filesystem
...
A simple query for what's wrong:
# pacman -Qkk man-db
man-db: 363 total files, 0 altered files
So, nothing wrong with it. It just loves busting my balls!
Using a bigger hammer:
# pacman -S --force man-db
...
:: Processing package changes...
(1/1) upgrading man-db [######################] 100%
New optional dependencies for man-db
gzip [installed]
:: Running post-transaction hooks...
(1/2) Creating temporary files...
(2/2) Arming ConditionNeedsUpdate...
Now my pacman -Syu works. Weird case, that.
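Worth noting for future readers: in later pacman releases (5.2 and newer, if I recall right) the --force switch was removed entirely. The equivalent escape hatch is --overwrite, which takes a glob of paths pacman is allowed to clobber. A sketch, assuming the same man-db conflict as above:

```shell
# --force is gone in pacman >= 5.2; --overwrite takes a glob of paths
# that pacman may clobber (paths taken from the error output above)
pacman -S --overwrite '/usr/bin/*' --overwrite '/usr/lib/man-db/*' man-db
```

Same bigger hammer, new handle.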
Open Management Infrastructure in Azure gone wild
Thursday, January 25. 2018
I opened my mail, and I had 730 new e-mails there! Whaat!
One of my Azure boxes has (for a reason unknown to me) the following entry in root's crontab:
* * * * * [ \( ! -f /etc/opt/omi/creds/omi.keytab \) -o
\( /etc/krb5.keytab -nt /etc/opt/omi/creds/omi.keytab \) ] &&
/opt/omi/bin/support/ktstrip /etc/krb5.keytab /etc/opt/omi/creds/omi.keytab
/opt/omi/bin/support/ktstrip keeps failing, because /etc/krb5.keytab is missing. And that command is run every single minute on my machine. So, every single minute I get a new notification about the failure. Nice!
The sequence of events is totally unclear to me. I haven't touched anything, but this morning an influx of e-mails started pouring in.
OMI, or Open Management Infrastructure, is something Linux-images in Azure have, so it shouldn't be anything dangerous.
The obvious fix was to remove that stupid line.
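Removing it can be done with crontab -e, or non-interactively by filtering the dump. A sketch, simulated on a plain file here; on the real box the pipe would be crontab -l | grep -v ktstrip | crontab -, and the backup entry below is made up for illustration:

```shell
# Simulate root's crontab in a file (the backup line is a made-up example,
# and the OMI condition is simplified from the real entry)
printf '%s\n' \
  '0 3 * * * /usr/local/bin/backup.sh' \
  '* * * * * [ ! -f /etc/opt/omi/creds/omi.keytab ] && /opt/omi/bin/support/ktstrip /etc/krb5.keytab /etc/opt/omi/creds/omi.keytab' \
  > /tmp/root-crontab.txt

# Drop every line mentioning ktstrip, keep the rest
grep -v ktstrip /tmp/root-crontab.txt > /tmp/root-crontab.clean
cat /tmp/root-crontab.clean
```

Only the harmless backup line survives the filter.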
Monopoly
Sunday, January 21. 2018
Since most of the revenue earned by my employer comes from Candy Crush, a lot of the swag around the office is somehow related to it. Very understandable, that.
This time we all got a Monopoly-game, but not the regular one everybody has been playing. Ok, to be entirely clear on this, there are a minimum of 37 different language editions of the game, so the country where you spent your childhood determines which streets the version you used to play has. Anyway, this is the Candy Crush -edition of Monopoly:
Also very understandably, we discussed different aspects of the game, one of them being winning tactics.
I started thinking whether it would be possible to just go to The Net, get some source code for a Monopoly-simulator, program some rules into a bot, have it play against other bots, and see which approach would be more likely to win in the long run.
Quite easily, I found a Python-version of the game by Juan Manuel Contreras on GitHub. Unfortunately, his work is quite unfinished, so I chose to fork it and continue from that point. I didn't touch the .py-files, only the Jupyter notebook -version of the game.
As a result of some weekend coding, I have:
- Class for user game logic
- Lots of game functionality, moving around, collecting rent, buying, selling, going bankrupt
- Winning the game
The resulting notebook is quite bug-free, but there are still some game rules that aren't properly followed. Especially the situation where a player defaults: all the assets need to go to the creditor. If the creditor is another player, they go to that player; if it is the bank, there needs to be an auction where the other players may purchase the defaulting player's properties. Doing auctions between AI-players is not that easy, so none of that is implemented.
The same AI-module would be needed in the situation where a player lands on an unowned property. According to the rules, it needs to be auctioned if the player doesn't want to buy it. Not so easy to do with Python.
Also, the Chance-cards are not implemented, so it is impossible to get a Get-out-of-jail -card. Yet.
By running the game in the sim a couple hundred times, I found that with 5 players it is very likely to get into a situation where nobody can build houses/hotels. And if there are not many houses/hotels, the other players are unlikely to land on those properties, so the game will run forever with all players accumulating wealth indefinitely. To get around that, I'm detecting such a situation and artificially doubling the rents. That rule is not in the original Monopoly-rules. Human players would solve this situation by trading/bartering properties with each other to gain building rights. A thing not so easy to implement in any programming language.
That Python Monopoly -project was just a fun thing to do. I'm not sure if I'll continue it any further. The code is out there, if you're interested.
HOWTO: Configuring a router on a Mini-PC with CentOS
Thursday, January 18. 2018
Over half a year later, I realized that I never published my article about the operating system and software setup of my Mini-PC router. This is a follow-up to the post about the Qotom-hardware I wrote earlier. So, it's probably about time to do that!
To get the ball rolling on a new CentOS installation, a good start is to download it, Rufus it onto a USB-stick and install the minimal setup onto the router-PC. The CentOS installation is such a well-documented and trivial process that I won't go into any details of it. Read something like Installing Red Hat Enterprise Linux 7.4 on all architectures for the details.
Goal
Every project needs a goal. In any kind of engineering there is a specification and criteria for when the goal has been met.
The goal of this project is to create a Linux-server capable of securing a local network from the Internet and allow traffic to pass from the LAN to the wild-wild-net.
Spec:
- There is a working CentOS Linux running on the MiniPC
- ISP's cable modem is configured as bridge, no double NATting done
- MiniPC gets a public IP-address from ISP
- MiniPC can be accessed from the Net via the IP-address
- Configurations persist a reboot on the MiniPC
- MiniPC issues dynamic IP-addresses to LAN-clients
- MiniPC acts as a caching nameserver to LAN-clients
- Any requests from the Net are not served
- Wireless access point is configured not to do any routing, aka. it is in access point mode
- The setup is secure with attack surface minimized
- LAN IP-address range is 192.168.1.0/24
Definition of done:
- Internet works!
- MiniPC can connect to net
- MiniPC can be connected from net and LAN via SSH
- Wired clients can connect to net via Ethernet cable without any manual configuration
- Wireless clients can connect to the net via Wi-Fi without any manual configuration
Step 1: Packages
After minimal installation, the set of tools and packages required includes:
net-tools bind-utils screen tcpdump policycoreutils-python setools
- net-tools: mostly for netstat; using route or ifconfig is deprecated
- bind-utils: for dig and nslookup
- screen: a full-screen window manager that multiplexes the terminal
- tcpdump: for taking a look into Ethernet and TCP/IP-packets; when something goes wrong, getting a detailed view is very important
- policycoreutils-python setools: for managing SELinux
Step 2: Remove NetworkManager
Packages to install: -none needed-
Why a server would have GNOME NetworkManager installed on it is beyond me. I simply cannot comprehend what the CentOS-people are thinking when they, by default, treat my server as a laptop. But the main thing is that this piece of shit needs to go! The quicker, the better!
DANGER!
When you actually run the yum-command to remove NetworkManager, your system will lose all network connectivity. So please, run this at a console, not over an SSH-connection.
DANGER!
Run command as root on console:
yum erase NetworkManager
Now your system's networking is royally messed up.
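If erasing the package feels too heavy-handed, a reversible alternative (a sketch, not what I did) is to disable and mask the service instead. This assumes the classic network.service is present, as it is on a stock CentOS 7:

```shell
# Stop NetworkManager and prevent anything from starting it again
systemctl disable --now NetworkManager
systemctl mask NetworkManager

# Hand interface management over to the classic network scripts
systemctl enable network
```

Masking keeps the package around, so you can undo the whole thing with systemctl unmask if needed.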
Step 3: Setup NICs
Packages to install: -none needed-
Check that NetworkManager created and left ifcfg-files in /etc/sysconfig/network-scripts/. If the appropriate ifcfg-files (one for each interface) are gone, you need to learn how to write one, fast. A good starting point would be the Red Hat Enterprise Linux 7 product documentation, Networking Guide, section 2.2 Editing Network Configuration Files.
LAN interface
Out of the two Ethernet-interfaces, a 50/50 coin-flip ended with enp3s0 as LAN and enp1s0 as WAN. For any practical purposes it really doesn't matter which one is which, but I'm describing my setup here. If you're using some other hardware, your interface names won't match these.
For any sensible use of your LAN-side, this interface should be connected to a network switch, so that your local network can be shared by your PC, Playstation, TV, Wi-Fi access point or whatever you have there running. Of course you can run it with only one host connected directly to the router.
This is critical: your LAN-interface MUST have a static IP-address. It really cannot act as the LAN-side of a router without one.
I chose my LAN to be private IP-range 192.168.1.0/24, so I edited /etc/sysconfig/network-scripts/ifcfg-enp3s0
to contain:
TYPE=Ethernet
BOOTPROTO=none
DEFROUTE=yes
IPV6INIT=yes
NAME=enp3s0
UUID=-don't-touch-this-
DEVICE=enp3s0
ONBOOT=yes
NETWORK=192.168.1.0
BROADCAST=192.168.1.255
USERCTL=no
IPADDR=192.168.1.1
PREFIX=24
IPV4_FAILURE_FATAL=no
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
WAN interface
For WAN, there is no need to touch the enp1s0 setup (much). When your WAN-interface (enp1s0) starts, it will obtain an IP-address from your ISP. With that, it will also get your ISP's DNS-address and overwrite your precious manual changes in /etc/resolv.conf. You don't want that to happen. So, prevent it: edit /etc/sysconfig/network-scripts/ifcfg-enp1s0 and add:
PEERDNS="no"
Well, that was easy!
IP-forwarding
For routing to work, the Linux kernel needs to have IP-forwarding enabled. It allows network packets to travel between interfaces.
Enable IP-forwarding immediately:
sysctl -w net.ipv4.ip_forward=1
Enable IP-forwarding on boot:
sysctl net.ipv4.ip_forward > /etc/sysctl.d/1_ip_forward.conf
Finalize network setup
When your network interface configurations are ok, restart everything by running the following as root:
systemctl enable network
systemctl restart network
Now your system:
- has both interfaces on-line
- is reachable from a machine on your wired LAN using a static IP-address other than 192.168.1.1. Note: your router doesn't have DHCPd running yet, so you need to figure out how to configure a static IP-address on your device
- still gets an IP-address from your ISP on your external interface
- can reach IP-addresses via both the external and the internal interface
If these criteria are not met, there is simply no point in proceeding. Your system won't work as a router without those prerequisites.
Finally, make sure that your IPtables-rules are in effect. Your box is connected to the Internet and can be accessed/bombarded from there, so run the following to secure your setup:
systemctl restart firewalld
Now your system is ready to become a router.
Step 4: Firewalld
Packages to install: -none needed-
Zones
Out-of-box, CentOS has firewalld enabled. It has only one zone defined, for the public wild-wild-net, and TCP/22 (SSH) is open to the world. The following needs to be run as root. First, split the LAN off into its own zone, home:
# firewall-cmd --zone home --change-interface enp3s0 --permanent
Check the zones and their assigned interfaces:
# firewall-cmd --get-active-zones
home
interfaces: enp3s0
public
interfaces: enp1s0
Set up network address translation (NAT) and allow traffic to flow from your LAN to the outside world. Only related/established traffic is allowed to flow back in from the Internet to your LAN. Commands to run:
# firewall-cmd --permanent --direct --add-rule ipv4 nat POSTROUTING 0 -o enp1s0 -j MASQUERADE
# firewall-cmd --permanent --direct --add-rule ipv4 filter FWDI_home_allow 0 -o enp1s0 -j ACCEPT
# firewall-cmd --permanent --direct --add-rule ipv4 filter FWDI_public_allow 0 -o enp3s0 -m state --state RELATED,ESTABLISHED -j ACCEPT
Enable the DNS-server we'll set up later, and disable any outgoing DNS-queries from your LAN (a security measure):
# firewall-cmd --permanent --zone home --add-service dns
# firewall-cmd --permanent --direct --add-rule ipv4 filter FWDI_home_deny 0 -p udp -m udp --dport 53 -j REJECT
At this point do a reload:
# firewall-cmd --reload
... and test your firewall setup from router:
- You still must be able to access Internet from your router
- Your LAN does work at this point. A client with a static IP must be able to access Internet.
Step 5: Named
Packages to install: bind-chroot
You can continue to use your ISP's nameserver, but I never do that. It makes much more sense to have a caching nameserver running on your own router. This allows your box to go directly to the Internet root servers and do all the name queries for you. In many countries ISPs intentionally drop some domains, or are forced by the government to do so. Running your own resolver makes sure that you get all the records as-is, and in case of changes you can flush the cache whenever you want instead of waiting for a record to expire.
Out-of-box, BIND 9.9.4 does not serve anybody other than localhost. To fix this, find the following two lines in /etc/named.conf:
listen-on port 53 { 127.0.0.1; };
allow-query { localhost; };
Edit them to contain:
listen-on port 53 { 127.0.0.1; 192.168.1.1; };
allow-query { localhost; 192.168.1.0/24; };
Finally, change your system's default name resolver by editing /etc/resolv.conf
to contain a single line:
nameserver 127.0.0.1
Start the server and enable it to start on boot:
systemctl start named-chroot
systemctl enable named-chroot
Now you're ready to test the setup. Just run host www.google.com, or query your favorite site. A successful reply will include the IP-address(es) for your query.
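A slightly more explicit check is to point dig straight at the router's LAN address; a sketch (the answer addresses will of course vary):

```shell
# Query the router's BIND directly and print only the answer section
dig @192.168.1.1 www.google.com +short

# Confirm recursion is actually offered to the LAN: the header flags
# of the reply should include "ra" (recursion available)
dig @192.168.1.1 www.google.com | grep -i 'flags:.*ra'
```

If the first command times out, re-check the listen-on and allow-query edits above and the firewalld dns service in the home zone.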
Step 6: DHCP
Packages to install: dhcp
Edit /etc/dhcp/dhcpd.conf and have it contain:
ddns-update-style interim;
ignore client-updates;
authoritative;
default-lease-time 14400;
max-lease-time 86400;
option subnet-mask 255.255.255.0;
option broadcast-address 192.168.1.255;
option routers 192.168.1.1;
option domain-name "my.own.lan";
option domain-name-servers 192.168.1.1;
subnet 192.168.1.0 netmask 255.255.255.0 {
range 192.168.1.50 192.168.1.99;
}
That piece of configuration will use your router as the DNS for the clients and issue them addresses from the range .50 - .99.
Start the server and enable it to start on boot:
systemctl start dhcpd
systemctl enable dhcpd
At this point, configure your client to use DHCP for IP-addressing. You must get an IP from the above range; DNS-resolution and NAT should also work, but testing that is the next step. Test it all.
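On a Linux client this can be eyeballed roughly like below (eth0 is an assumption — substitute your own interface name):

```shell
# Ask for a lease verbosely; the DHCPACK should come from 192.168.1.1
dhclient -v eth0

# The address should land in the 192.168.1.50-99 range
ip addr show eth0

# And the router should have been pushed as the resolver
cat /etc/resolv.conf
</pre>
```

(On Windows or macOS clients, ipconfig /all and the Network preference pane tell the same story.)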
Step 7: Testing it
Make sure:
- A client in your LAN gets an IP-address from DHCP
- A client in your LAN can ping your router at 192.168.1.1
- A client in your LAN can ping something in the Internet, like Google's name server at 8.8.8.8
- A client in your LAN resolves names, for example: nslookup www.google.com returns IP-addresses
- A client in your LAN can access https://www.google.com/ via a web-browser
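The checklist above can be bundled into a throwaway smoke-test script, run from a LAN client. Purely a sketch; the addresses are the ones used throughout this post:

```shell
#!/bin/sh
# Quick LAN-client smoke test for the new router
ping -c 1 192.168.1.1             && echo "OK: router reachable"
ping -c 1 8.8.8.8                 && echo "OK: Internet reachable (NAT works)"
nslookup www.google.com 192.168.1.1 >/dev/null \
                                  && echo "OK: DNS resolves via router"
curl -sI https://www.google.com/ >/dev/null \
                                  && echo "OK: HTTPS works end-to-end"
```

Four OK-lines and you're done.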
That's it! What else would you need?
Done!
Congratulations on your new router!
What I did next was set up my own DNS-zone so that my hosts had FQDNs. But that's beyond this blog post. Read something like How To Configure BIND as a Private Network DNS Server on CentOS 7 by DigitalOcean for that.
Com Hem offering IPv6 /56 prefix to its customers
Monday, January 15. 2018
UUUJEA!
Com Hem has been rolling out native SLAAC/DHCPv6-based IPv6 to its entire customer base, and they just took a very important step 2 in their project: they started honoring Prefix Delegation -requests! To a non-network person that means absolutely nothing, but to a network administrator it is a real game changer!
Normally I don't use many of the features my Sagemcom cable-TV -router has; it's just set to bridge:
Since I had failed earlier to get a prefix from my ISP, I was toying around with my router and set it to router-mode, and to my great surprise:
The thing issued my LAN a /64 IPv6-network! Nice.
After putting everything back and eye-balling the DHCPv6 lease file on my Linux-router:
lease6 {
interface "enp1s0";
ia-pd c4:d0:0a:85 {
starts 1515858667;
renew 302400;
rebind 483840;
iaprefix 2a04:ae00::/56 {
starts 1515858667;
preferred-life 604800;
max-life 2592000;
}
option dhcp6.status-code success;
}
}
Yes, it contains an ia-pd -section! The iaprefix in the file is mine, only mine, my precious address space!
I have no idea how long they have been honoring my PD-requests, but on December 17th they didn't.
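For the record, on my Linux-router the delegation is requested by the ISC DHCP client; a sketch of the invocation (enp1s0 is my WAN interface, as in my router setup posts — yours will differ):

```shell
# -6: DHCPv6 mode, -P: request a prefix delegation (IA_PD), -v: verbose
dhclient -6 -P -v enp1s0
```

The resulting ia-pd section ends up in the lease file shown above.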
A generally accepted IPv6-deployment principle is to follow the suggestions of RFC 5375 [IPv6 Unicast Address Assignment Considerations] and issue a minimum of a /64 (18E IPv6 addresses) to customers. Since a single /64 is completely useless for building your own LANs, a second suggestion is to issue a /48 (1.2 million E IPv6 addresses) or a /56 (4700E IPv6 addresses) prefix for the consumer's own LANs.
Note: E is the SI-prefix for exa, aka 10^18. A /64 IPv6 network has 18,446,744,073,709,551,616 (about 18E) unique addresses in it.
The numbers are astronomically big and it's quite easy to get confused and lose perspective on what they actually mean. A more concrete way to look at it: having been issued a /56, I can now split my prefix into 256 separate /64 networks as I please. It's not like I need 256 LANs; I'd be totally happy to get even one of them working!
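Where does 256 come from? A /56 delegation leaves 64 - 56 = 8 bits of subnet ID, i.e. 2^8 = 256 possible /64s. Enumerating a few of them for a hypothetical delegation 2a04:ae00:0:100::/56 (not my real prefix):

```shell
# The first 56 bits = 14 hex digits of the prefix (hypothetical value)
prefix56="2a04:ae00:0:1"

# Append each 8-bit subnet ID as two hex digits to form a /64
for id in 0 1 2 255; do
  printf '%s%02x::/64\n' "$prefix56" "$id"
done > /tmp/subnets.txt
cat /tmp/subnets.txt
```

That prints 2a04:ae00:0:100::/64, ...:101::/64, ...:102::/64 and, for the last subnet ID, 2a04:ae00:0:1ff::/64.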
Now I have my hands full configuring and testing everything on my LAN. I need to make sure that native IPv6 works for the wired and wireless toys I have here.
Intel CPU bug: Meltdown aftermath, Part 2
Sunday, January 14. 2018
When the first information about a severe security flaw in Intel CPUs came out, it was immediately obvious that this one is huge. This is on the scale of a meteorite hitting The Earth.
Here is just one example of what can happen when fixed Linux kernels are deployed. I've been pumping data points from my weather station to Weather Underground for a couple of years. Now this is how they are able to display my data:
If you don't see any graphs or data points there, that's my point here! WU does not work!
Yesterday I got an e-mail from them explaining the problems they're facing:
The interruption of service was related to the hot patches applied to our servers to correct the recent Intel hardware security flaws. We can say with 100% confidence that the data you share with us is completely safe. The patches required systems to be rebooted and, as these systems came back online, many of them did not boot up cleanly.
That is most definitely not the entire truth. The quality of their service at the time of writing this blog post is simply crap! Looks like the Meltdown mitigation hit them harder than they anticipated.