Apple ID Scam
Sunday, December 7. 2014
Looks like somebody in Moldova was following The Fappening and got bright ideas. I received an e-mail like this at one of my honeypot addresses:
The fake e-mail goes like this:
Subject: Your apple id has been disabled 05/12/2014 09:44:30
Dear Customer;
We need to ask you to complete a short and brief step to securing and validating your account information.
https://appleid.apple.com
Failure to complete our validation process will result in a suspension of your Apple ID.
We take every step needed to automatically validate our users; unfortunately in your case we were unable to. The process only takes a couple of minutes and will make sure there is no interruption to your account.
I wasn't much surprised by that, because I don't use that address for anything serious (like an Apple ID). I checked the link before clicking; obviously it didn't go to apple.com, but to a hijacked site located in Moldova. Somebody innocent was running an unpatched WordPress, and the crooks added some "bonus" content to the site. The HTML said: <meta name="generator" content="WordPress 3.5.1" />. The fake "Apple ID" site looked pretty good (except there was no HTTPS and the address bar didn't match):
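By the way, a quick way to spot the same tell-tale sign on a suspicious site without opening it in a browser is to grep for the generator tag (a sketch; the URL is a placeholder and curl is assumed to be available):
curl -s http://hijacked-site.example/ | grep -i '<meta name="generator"'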
At the time of publishing this post, the victim-site has been pulled off the air, so there is no point in going there anymore.
Anyway, this is yet another reminder to be careful out there. On the Internet, most things aren't what they seem.
Network appliance and hard-coded passwords
Tuesday, August 26. 2014
Trend Micro reported that they found a backdoor in Netis/Netcore firmware. It is quite a serious one, allowing remote code execution from the Internet side. Sure, the backdoor is "protected" by a password. As you may expect, the password is hard-coded, cannot be changed and is exactly the same in every unit. Nice "security", huh!
Why doesn't this surprise me? Mr. Ronkainen, a really keen B593 hacker, found the Huawei internal document Log_Capturing_Guide_of_LTE_CPE_B593_V1.2.docx (available to the entire Internet, of course). It describes the following: "Step 5 Enter admin after Login and press Enter. Then enter the password -removed- and press Enter". According to Mr. Ronkainen, the same password is also the hard-coded password of the serial console. In reality, some soldering is required to get the serial console working, but if you do ... there goes your security.
B593 hacking keeps revealing hard-coded encryption keys and passwords. My conclusion: poor security in these produced-as-cheaply-as-possible devices is by design, and it cannot be changed. Not too many samples in my "research", though. I don't mind fixed default passwords, since you can go and change them. These Chinese units have fixed passwords that cannot be changed, which is a different story entirely.
Again, I thank Mr. Ronkainen for sharing his findings. Even the website https://www.sec-consult.com/ credits him for his findings in SEC Consult Vulnerability Lab Security Advisory < 20140122-0 >.
Helsinki Security Meetup: SElinux presentation
Wednesday, August 20. 2014
As promised, here are my presentation slides from the Helsinki Security Meetup on August 20th, 2014. I redacted my e-mail address to prevent spammers from harvesting it. I get enough spam already.
Presentation slides
In PDF-format: 2014HelsinkiSecurityMeetup.pdf
My backdoor C-code
Here it is: backdoor.c
There is no makefile or anything; a simple gcc backdoor.c -o backdoor will do the trick.
Running backdoor
In my demo, there was an insecure directory (run ls -Z to display the file contexts):
-rwxr-xr-x. root root unconfined_u:object_r:httpd_exec_t:s0 backdoor
-rwxr--r--. root root unconfined_u:object_r:admin_home_t:s0 start.backdoor-1.sh
-rwxr--r--. root root unconfined_u:object_r:initrc_exec_t:s0 start.backdoor-2.sh
and one secured directory:
-rwxr-xr-x. root root unconfined_u:object_r:backdoor_exec_t:s0 backdoor.secure
-rwxr--r--. root root unconfined_u:object_r:initrc_exec_t:s0 start.backdoor-3.sh
When running as httpd_t, remember to add the port to Apache's allowed ports:
semanage port --add -t http_port_t -p tcp 8282
To (temporarily) change a file context, run a command like:
chcon -t backdoor_t backdoor
To permanently change the file context:
semanage fcontext -a -t backdoor_t /a_directory/backdoor
Now, the change will survive a restorecon-call.
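To see the difference in practice, here is a minimal sketch using the same placeholder path and type as above:
# Temporary: chcon sets the type, restorecon reverts it to the policy default
chcon -t backdoor_t /a_directory/backdoor
restorecon -v /a_directory/backdoor
# Permanent: record the type in the policy first, then apply it
semanage fcontext -a -t backdoor_t /a_directory/backdoor
restorecon -v /a_directory/backdoor
ls -Z /a_directory/backdoor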
What has changed after the presentation
To save one process worth of system resources, I changed the content of the start-backdoor.sh scripts from:
#!/bin/bash
./backdoor.secure
to:
#!/bin/bash
exec ./backdoor.secure
I fixed the bug in the fork child code mentioned during the presentation. Now a failing execvp() call does not leak processes. While at it, I made failures more verbose: a failure is now displayed on both the server and client ends.
During the presentation, my backdoor policy allowed binding the backdoor to any port. I tightened that and now allow binding only to backdoor_port_t. To get the secured backdoor running, you need to remove the TCP/8282 port from Apache and add it to the backdoor:
semanage port --delete -t http_port_t -p tcp 8282
semanage port --add -t backdoor_port_t -p tcp 8282
You can list the allowed ports with a command like:
semanage port -l | fgrep http_port_t
The SElinux backdoor policy files
The package is: backdoor_policy.tar.bz2
Remember to install the selinux-policy-devel package for make to work. Install the newly created policy with the following command:
semodule -i backdoor_policy.pp
The new module will survive a system reboot.
What has changed after the presentation
A lot of unnecessary permissions have been dropped. backdoor_t can bind only to backdoor_port_t, not to all ports. I also allowed backdoor_t to write to stdout; it helps to see what's going on. It is not typical for daemons to be allowed that, but especially when execvp() fails, it is so much easier to see the SElinux policy kicking in.
Any comments are welcome!
What is netsession_win.exe and what is it doing in my Windows?
Thursday, April 3. 2014
One day I came back to my computer and saw a firewall notice about netsession_win.exe requesting permission to listen for UDP requests.
Ok, I thought, that happens sometimes. An application needs permission to access The Net. This time it was permission to receive traffic. Wait a minute! I don't remember installing anything that would need to run a server on my box.
Apparently I'm not alone with my question: Mr. Scott Hanselman ponders the same issue in CSI: My Computer - What is netsession_win.exe from Akamai and how did it get on my system?. He chose to keep it running; I chose not to.
Reasoning:
- I confirmed the facts from Mr. Hanselman's blog post: the Akamai signature in the binary is valid, or at least it looks valid to me. This alone is definitely not a reason to kill the app.
- I don't know of any purpose for such an application.
- The server binary is located in my %LOCALAPPDATA%, which is definitely not the place for a piece of server software to be running from.
- Whoever installed the application and made it run did not ask my permission to do so.
- The application runs on each login based on a registry setting placed into HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Run. Mr. Hanselman found a service on his computer; mine did not have one. Either way, I don't like unknown applications running all the time.
- It does not feel legit to me. All of this combined makes the application shady, and it gives me the chills.
Guys at Akamai:
This is not the way to do it! Revise your policy of putting spyware onto people's computers.
Health-C-P scam: Danish Consumer Center officials gave a slap on the wrist
Friday, February 28. 2014
Last summer Facebook, the source of all nasty things, carried ads for the Trimatol, Corexin and Life Detox products by a really evil Danish company called Health-C-P. See my blog post about it.
Everybody knows that government officials work really hard and do what they can, but no matter what they do, their actions are really slow. Today we had news from the Finnish Consumer Center that the Nordic countries joined their efforts and got the Danish government to issue a harsh warning. The news about that is in Finnish only, sorry. The original warning about the company's misguided marketing tactics can be found here in English.
There was no punishment by the Danish government; they just instructed Health-C-P to correct its marketing. Total bullshit!
DNS testing tools
Thursday, February 27. 2014
A couple of times I've mentioned the Qualys SSL Server Test in this blog. It is a really good (and currently free) tool for evaluating your TLS/SSL server's security. Typically people run it to test their web servers, but it is in no way limited to that. Anyway, it is a really good tool for any sysadmin.
Last week I was bailing out a customer while Namecheap's FreeDNS was under a huge DDoS. The irony of DDoSing a DNS service is that the attack was partly done with badly configured DNS servers; actually, a number of UDP-based protocols were used. See US-CERT's info about that.
In general, the weak link in today's Internet is DNS. People don't recognize its importance to a business. I personally would like to see DNSSEC gain more popularity to prevent cache poisoning; perhaps I'll blog about it later. But to fight those huge DDoS attacks there is really no other way than to use the services of a provider with really big guns. The other thing any sysadmin can do is to make sure they're not part of the problem. See my earlier post about that.
Last week I found a DNS tool equal to Qualys' tool. It's called DNS Inspect and you can use it (currently) for free at http://www.dnsinspect.com/. It is a really good tool and gives the inspected domain an American school grade from A to F (mostly unknown to us European people). I warmly recommend that all admins check their domain and DNS setup with it.
Both of the tools I mentioned are really good, to the point that I'd pay money to use them. Thanks to both companies for giving them out for free.
Updating Metasploit with Packet Storm exploits
Saturday, February 15. 2014
For any computer security geek (like me), Metasploit is the de facto tool to use. What makes Metasploit especially good is that there are a number of places where you can get brand new exploits to run with it. Packet Storm is obviously the #1 place for your new dose of exploits. They release exploits in ready-made packages for Metasploit on a monthly basis. To help people catch up with their installations, there are also yearly packages combining a year's monthly updates into a single file.
Due to the popularity of Metasploit and the quality of Packet Storm's exploits, there are a number of instructions on how to do the update. I googled "update metasploit with packet storm exploits" and got a ton of garbage. Really?! A couple dozen YouTube videos showing how to do it. Videos?! Really!! Apparently all of those videos are created by script-kiddies for script-kiddies. It is soooo easy to copy/paste information from a YouTube video, or ... then again, NOT. No matter which search phrases I used, I simply could not find a sensible way of doing the updates. When you don't find the required information on the web, it must be one of two things: 1) you're wrong or 2) everybody else is wrong. My typical choice is 2), ask anybody who knows me.
The single page with a real idea in it was Mr. ljy396's blog post "How to update exploits from packetstorm website". I was looking at his scripts for collecting and installing the exploits, and thought that there should be a single easy-to-use script for doing all of this.
When I started writing my own script, the thought came to me again: "This is so wrong! There must be a real way of doing the updates; I just didn't find it. It is obvious that I'm writing an inferior duplicate." Anyway, that logic has never stopped me so far. Here are the results.
The script operates in three modes:
- Do monthly updates:
updateMetasploitFromPacketStormExploits.pl --year=2014
- Download and install a specific package:
updateMetasploitFromPacketStormExploits.pl \
    --package_url=http://packetstorm.interhost.co.il/1401-exploits/1401-exploits.tgz
- Install an existing package:
updateMetasploitFromPacketStormExploits.pl --package_file=/tmp/1401-exploits.tgz
A Metasploit user has the "secret" $HOME/.msf4/ directory. My script adds an updates/ directory there for keeping track of which monthly updates it has already installed. As Metasploit requires, all of the installed exploits go into modules/. My script renames the .rb.txt files in the packages properly during installation. NOTE: I'm not touching the Metasploit install directory. The new updates go to the per-user directory and are not available system-wide.
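For reference, the manual steps the script automates look roughly like this (a sketch only; the package URL is the one shown above, the tarball is assumed to unpack into /tmp/1401-exploits and the packetstorm/ subdirectory name is my own choice):
# Download and unpack one monthly package from Packet Storm
wget -P /tmp http://packetstorm.interhost.co.il/1401-exploits/1401-exploits.tgz
tar -xzf /tmp/1401-exploits.tgz -C /tmp
# Packet Storm ships Metasploit modules as .rb.txt, Metasploit wants them as .rb under ~/.msf4/modules/
mkdir -p ~/.msf4/modules/exploits/packetstorm
find /tmp/1401-exploits -name '*.rb.txt' | while read f; do
    cp "$f" ~/.msf4/modules/exploits/packetstorm/"$(basename "${f%.txt}")"
done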
Any bugs, comments and suggestions are welcome. Please drop a comment.
Certificate Authority setup: Doing it right with OpenSSL
Friday, December 27. 2013
In my previous post about securing HTTP connections, HTTP Secure: Is Internet really broken?, I speculated about the current state of encryption security in web applications. This article describes in detail how to actually implement a CA and what the requirements for doing so are.
During my years as a server administrator, I've been setting up CAs in Windows domains. There, many of the really complex issues I'm describing below are handled so that you don't even notice them. I'm sure that using commercial CA software on Linux would yield similar results, but I just haven't really tested any. This article is about implementing a properly done CA with OpenSSL's sample minimal CA application. Out of the box it does many things minimally (as promised), but it is the most common CA: it ships with every Linux distro and the initial investment price (0 €) is about right.
Certificate what?
In cryptography, a certificate authority or certification authority (CA), is an entity that issues digital certificates.
— Wikipedia
Why would one want to have a CA? What is it good for?
For the purpose of cryptographic communications between web servers and web browsers, we use X.509 certificates. Generally we talk about SSL certificates (based on the original, now obsolete standard) or TLS certificates (based on the currently valid standard). The bottom line is: when we do HTTP Secure, or HTTPS, we need a certificate issued by a Certificate Authority.
Historically, HTTPS was developed by Netscape back in 1994. They took X.509 as a starting point and adapted it to work with HTTP, the protocol developed by the well-known Tim Berners-Lee at CERN a couple of years before that. RFC 5280 describes X.509. It has the funny title "Internet X.509 Public Key Infrastructure Certificate and Certificate Revocation List (CRL) Profile", but it is the current version.
Wikipedia says: "In cryptography, X.509 is an ITU-T standard for a public key infrastructure (PKI) and Privilege Management Infrastructure (PMI). X.509 specifies, amongst other things, standard formats for public key certificates, certificate revocation lists, attribute certificates, and a certification path validation algorithm."
So essentially X.509 just dictates what certificates are and how they can be issued, revoked and verified. Netscape's contribution was very important at the time, as they added encryption of the HTTP data being transferred. The contribution was so important that the invention is still used today, and nobody has proposed anything serious to replace it.
So, enough history... let's move on.
Encryption: The ciphers
It is also important to understand that a certificate can be used to encrypt the transmission using various cipher suites. Run openssl ciphers -tls1 -v to get a cryptic list of all the supported ones. Examples:
RC4-MD5 SSLv3 Kx=RSA Au=RSA Enc=RC4(128) Mac=MD5
DES-CBC3-SHA SSLv3 Kx=RSA Au=RSA Enc=3DES(168) Mac=SHA1
AES128-SHA SSLv3 Kx=RSA Au=RSA Enc=AES(128) Mac=SHA1
AES256-SHA256 TLSv1.2 Kx=RSA Au=RSA Enc=AES(256) Mac=SHA256
The actual list is very long, but not all of the suites are considered secure; see the chart in Wikipedia about secure and insecure ciphers.
The columns are as follows:
- Name (or identifier)
- The protocol version which added the support
- Kx=: Key exchange method, RSA, DSA, Diffie-Hellman, etc.
- Au=: Authentication method, RSA, DSA, Diffie-Hellman or none
- Enc=: Encryption method with number of bits used
- Mac=: Message authentication (digest) algorithm, e.g. MD5, SHA1 or SHA256
Not counting any possible implementation bugs, a number of flaws have been found in TLS/SSL, especially when weak cipher suites are used. One of the recent and well-known ones is the BEAST attack, which can easily be mitigated by limiting the available cipher suites. If BEAST is successfully applied, it can decrypt cookies from request headers. Thus, it is possible for an eavesdropper to gain access to your account even if you're using HTTPS. So essentially you've been doing exactly what every instruction about safe browsing tells you to do, you've been using HTTPS, but your traffic can still be read by observing parties.
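If you want to see which suites a given cipher string actually enables on your system, OpenSSL will expand it for you. A sketch (the cipher string here is only an illustration, not a recommendation):
openssl ciphers -v 'HIGH:!aNULL:!MD5' | head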
Please note that when talking about cipher suites, there are two different entities with a number of bits: the server key and the encryption key. The server key is the server's public/private key pair; it can have, for example, 2048 bits. The server key is used only to transmit the client-generated "real" encryption key to the server; that key can have, for example, 256 bits. The server key is typically never renewed; the encryption key, on the other hand, can have a lifetime of a couple of seconds.
The easy way: Self-signed certificates
Ok. Now we have established that there is a need for certificates to encrypt the transmission and to give a (false) sense of verified identity for the other party. Why not take the path of least resistance? Why bother with the complex process of building a CA when it is very easy to issue self-signed certificates? A self-signed certificate provides the required encryption; your browser may complain about an invalid certificate, but even that complaint can be suppressed.
Most instructions tell you to run some magical openssl commands to get your self-signed certificate, but there is even a machine for that: the Self-Signed Certificate Generator. It doesn't get any easier than that.
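For reference, the "magical" command usually boils down to a single line like this (a sketch; the key size, validity period and subject are arbitrary placeholder values):
openssl req -x509 -newkey rsa:2048 -nodes -days 365 -subj '/CN=www.example.com' -keyout selfsigned.key -out selfsigned.crt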
My personal recommendation is to never use self-signed certificates for anything.
An excellent discussion about when they could be used can be found in Is it a bad practice to use self-signed SSL certificates?, What's the risk of using self-signed SSL? and When are self-signed certificates acceptable?
Practically a self-signed certificate is slipped into use like this:
- A lazy developer creates the self-signed certificate. He does it for himself and chooses to ignore any complaints by his browser. Typical developers don't care about the browser complaints anyway, nor do they know how to suppress them.
- In the next phase the real mistake happens: the software goes to internal testing and a number of people start using the bad certificate. Then a couple of the customer's people or external testers are added.
- Now a bunch of people have been trained not to pay any attention to certificate errors. They will carry that trait for decades.
I've not seen any self-signed certificates in production, so typically a commercial certificate is eventually purchased. What puzzles me is why they don't set up their own CA, or get a free certificate from CAcert or StartSSL.
CA setup
How to actually set up your own CA with OpenSSL has been discussed extensively forever. One of the best sets of instructions I've seen is by Mr. Jamie Nguyen. His multi-part instructions are: How to act as your own certificate authority (CA), How to create an intermediate certificate authority (CA) and How to generate a certificate revocation list (CRL) and revoke certificates.
I'd also like to credit the following, as they helped me with setting up my own CA:
- Howto: Make Your Own Cert And Revocation List With OpenSSL
- OpenSSL CA and non CA certificate
- Generate a root CA cert for signing, and then a subject cert
There are other, non-minimal CAs which may be interesting, but I didn't review them.
The good way: The requirements for a proper CA
My requirements for a proper CA are:
- 2-level CA (this is what Jamie instructed):
- Root CA: For security reasons, make sure this is not on the same machine as the intermediate CA. Typically it needs to be well protected and is considered off-line, except for the weekly revocation list updates.
- Intermediate CA: The active CA, which actually issues all the certificates
- Passes Windows certutil.exe -verify
- All certificates are secure and have enough bits in them
- A certificate requested for www.domain.com implicitly also has domain.com in it
- A certificate identifies the authority that issued it
- A certificate has location of revocation information in it
- CA certificates (both root and intermediate) identify themselves as CA certificates with proper usage in them
Verifying an issued certificate
In the requirements list there is only one really difficult thing: getting Windows certutil.exe to verify an issued certificate. The reason is that pretty much all of the other requirements must be met in order to achieve it.
A successful run is very long, but will look something like this (this is already a reduced version):
PS D:\> C:\Windows\System32\certutil.exe -verify -urlfetch .\test.certificate.crt
Issuer:
C=FI
Subject:
CN=test.certificate
C=FI
Cert Serial Number: 1003
-------- CERT_CHAIN_CONTEXT --------
ChainContext.dwInfoStatus = CERT_TRUST_HAS_PREFERRED_ISSUER (0x100)
ChainContext.dwRevocationFreshnessTime: 24 Days, 20 Hours, 35 Minutes, 29 Seconds
SimpleChain.dwInfoStatus = CERT_TRUST_HAS_PREFERRED_ISSUER (0x100)
SimpleChain.dwRevocationFreshnessTime: 24 Days, 20 Hours, 35 Minutes, 29 Seconds
CertContext[0][0]: dwInfoStatus=102 dwErrorStatus=0
Element.dwInfoStatus = CERT_TRUST_HAS_KEY_MATCH_ISSUER (0x2)
Element.dwInfoStatus = CERT_TRUST_HAS_PREFERRED_ISSUER (0x100)
---------------- Certificate AIA ----------------
Verified "Certificate (0)" Time: 0
---------------- Certificate CDP ----------------
Verified "Base CRL (1005)" Time: 0
CertContext[0][1]: dwInfoStatus=102 dwErrorStatus=0
Element.dwInfoStatus = CERT_TRUST_HAS_KEY_MATCH_ISSUER (0x2)
Element.dwInfoStatus = CERT_TRUST_HAS_PREFERRED_ISSUER (0x100)
---------------- Certificate AIA ----------------
Verified "Certificate (0)" Time: 0
---------------- Certificate CDP ----------------
Verified "Base CRL (1008)" Time: 0
---------------- Base CRL CDP ----------------
No URLs "None" Time: 0
---------------- Certificate OCSP ----------------
No URLs "None" Time: 0
--------------------------------
CRL 1006:
CertContext[0][2]: dwInfoStatus=10a dwErrorStatus=0
Element.dwInfoStatus = CERT_TRUST_HAS_KEY_MATCH_ISSUER (0x2)
Element.dwInfoStatus = CERT_TRUST_IS_SELF_SIGNED (0x8)
Element.dwInfoStatus = CERT_TRUST_HAS_PREFERRED_ISSUER (0x100)
---------------- Certificate AIA ----------------
No URLs "None" Time: 0
---------------- Certificate CDP ----------------
Verified "Base CRL (1008)" Time: 0
---------------- Certificate OCSP ----------------
No URLs "None" Time: 0
--------------------------------
------------------------------------
Verified Issuance Policies: None
Verified Application Policies:
1.3.6.1.5.5.7.3.1 Server Authentication
Cert is an End Entity certificate
Leaf certificate revocation check passed
CertUtil: -verify command completed successfully.
The tool walks the entire certification chain from the leaf [0][0] through the intermediate CA [0][1] to the root CA [0][2], reads the information, does the URL fetching and confirms everything it sees, knows about and learns about. Your setup really needs to be up to par to pass that!
NOTE:
Your CA root certificate must be loaded into the Windows machine store. The verify does not work properly if you install the root certificate into the user store. Something like this, run with administrator permissions, will fix the verify:
C:\Windows\System32\certutil.exe -enterprise -addstore root Root-CA.cer
You'll just need to download the certificate file Root-CA.cer (in PEM-format) as a prerequisite.
The setup
I'm not going to copy what Jamie instructed here; my work starts after you've done the basic setup.
The key to achieving all my requirements is in the v3 extensions. RFC 5280 lists the following standard extensions: Authority Key Identifier, Subject Key Identifier, Key Usage, Certificate Policies, Policy Mappings, Subject Alternative Name, Issuer Alternative Name, Subject Directory Attributes, Basic Constraints, Name Constraints, Policy Constraints, Extended Key Usage, CRL Distribution Points, Inhibit anyPolicy and Freshest CRL (a.k.a. Delta CRL Distribution Point). Also the private Internet extensions Authority Information Access and Subject Information Access are worth noting. The useful ones appear in the configuration below.
What I have is my own Bash script for approving a request on my intermediate CA. A fragment of it generates an openssl.cnf appendix:
# NOTE: Used for web certificate approval
[ v3_web_cert ]
# PKIX recommendation.
subjectKeyIdentifier=hash
authorityKeyIdentifier=keyid:always,issuer
# PKIX recommendation.
basicConstraints=CA:FALSE
# Key usage
# @link http://www.openssl.org/docs/apps/x509v3_config.html
keyUsage = digitalSignature, keyEncipherment, keyAgreement
extendedKeyUsage = serverAuth
authorityInfoAccess = caIssuers;URI:http://ca.myown.com/Intermediate-CA.cer
crlDistributionPoints = URI:http://ca.myown.com/Intermediate-CA.crl
subjectAltName = @alt_names
[alt_names]
DNS.1 = ${SUBJECT}
DNS.2 = ${ALT_NAME}
After generation, it is used in the script like this:
openssl ca -config ${CA_ROOT}/openssl.cnf \
-keyfile ${CA_ROOT}/private/intermediate.key.pem \
-cert ${CA_ROOT}/certs/intermediate.cert.pem \
-extensions v3_web_cert -extfile cert.extensions.temp.cnf \
-policy policy_anything -notext -md sha1 \
-in Input_file.req
On my root CA I have this for the intermediate CA certificate:
# Used for intermediate CA certificate approval
[ v3_int_ca ]
# PKIX recommendation.
subjectKeyIdentifier=hash
authorityKeyIdentifier=keyid:always,issuer
# PKIX recommendation.
basicConstraints = critical,CA:true
# Key usage: this is typical for a CA certificate
# @link http://www.openssl.org/docs/apps/x509v3_config.html
keyUsage = critical, keyCertSign, cRLSign
authorityInfoAccess = caIssuers;URI:http://ca.myown.com/Root-CA.cer
crlDistributionPoints = URI:http://ca.myown.com/Root-CA.crl
Also on the root CA, I have this for the root certificate itself:
# Used for root certificate approval
[ v3_ca ]
# PKIX recommendation.
subjectKeyIdentifier=hash
authorityKeyIdentifier=keyid:always,issuer
# PKIX recommendation.
basicConstraints = critical,CA:true
# Key usage: this is typical for a CA root certificate
# @link http://www.openssl.org/docs/apps/x509v3_config.html
keyUsage = critical, digitalSignature, keyEncipherment, keyAgreement, keyCertSign, cRLSign
crlDistributionPoints = URI:http://ca.myown.com/Root-CA.crl
Notice how the root CA certificate does not have an AIA (Authority Information Access) URL in it. There simply is no authority above the root level. For the intermediate CA there is, and that is reflected in the issued certificate. The CRL distribution points (for the Certificate Revocation Lists) are really important; that's the basic difference between a "minimum" CA and a properly done one.
Your next task is to actually make sure that the given URLs serve the indicated files. In case you didn't get it: you will need a web server that clients can access to retrieve the files. The revocation lists can be auto-generated daily or weekly, but a CRL has a limited life span. Out of the box, OpenSSL has default_crl_days = 30 in its .cnf file, so a month is the absolute maximum.
For brevity, I'll compile the 4 URLs into a list here:
- Certificate locations:
- authorityInfoAccess = caIssuers;URI:http://ca.myown.com/Root-CA.cer
- authorityInfoAccess = caIssuers;URI:http://ca.myown.com/Intermediate-CA.cer
- Revocation list locations:
- crlDistributionPoints = URI:http://ca.myown.com/Root-CA.crl
- crlDistributionPoints = URI:http://ca.myown.com/Intermediate-CA.crl
For simplicity, I keep the public files in a single place. My CRL generation script verifies the existing certificates and raises an alert if the old ones do not match the current ones. Perhaps somebody changed them, or I made a mistake myself.
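For the record, regenerating and publishing the intermediate CA's CRL boils down to something like this (a sketch; ${CA_ROOT} is the same variable as in the approval script above, and the crl/ output directory and web root path are placeholders of my own):
# Re-generate the CRL, it is valid for default_crl_days
openssl ca -config ${CA_ROOT}/openssl.cnf \
    -keyfile ${CA_ROOT}/private/intermediate.key.pem \
    -cert ${CA_ROOT}/certs/intermediate.cert.pem \
    -gencrl -out ${CA_ROOT}/crl/Intermediate-CA.crl
# Publish the fresh CRL at the crlDistributionPoints URL
cp ${CA_ROOT}/crl/Intermediate-CA.crl /var/www/ca.myown.com/Intermediate-CA.crl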
Summary
The requirements have been met:
- 2-level CA
- Pass!
- Passes Windows certutil.exe -verify
- Pass!
- All certificates are secure and have enough bits in them
- Pass!
- The server keys are 2048 bits minimum
- The web servers where the certificates are used have SSLCipherSuite ECDHE-RSA-AES128-SHA256:AES128-GCM-SHA256:RC4:HIGH:!MD5:!aNULL:!EDH in them
- A certificate requested for www.domain.com also has domain.com in it
- Pass!
- A script creates the configuration block dynamically to include the subjectAltName-list
- A certificate identifies the authority that issued it
- Pass!
- All certificates have the authorityInfoAccess in them
- A certificate has location of revocation information in it
- Pass!
- Intermediate and issued certificates have crlDistributionPoints in them
- CA certificates (both root and intermediate) identify themselves as CA certificates with proper usage in them
- Pass!
- CA certificates have basicConstraints=CA:TRUE in them
That's it! Feel free to ask for any details I forgot to mention here.
My experience is that only 1% of system admins have even a rough understanding of this entire issue. My favorite hobbies of cryptography, VPN tunnels and X.509 certificates don't seem to light anybody else's fire the way they light mine.
HTTP Secure: Is Internet really broken?
Wednesday, December 25. 2013
What is HTTP Secure
HTTP Secure (or HTTPS) is HTTP + encryption. For the purpose of cryptographic communications between web servers and the people accessing them, we use X.509 certificates issued by a Certificate Authority.
In cryptography, a certificate authority or certification authority (CA), is an entity that issues digital certificates.
— Wikipedia
Using HTTPS and assuming it's safe to do so
One of the most famous HTTPS critics is Mr. Harri Hursti. He is a pretty famous computer security expert and has successfully hacked a number of things. He currently works at his own company, SafelyLocked.
#SSL is so insecure it’s only worth a post-it note from the NSA.
- Harri Hursti at Slush 2013, as relayed on Twitter
In an interview with the Finnish business magazine Talouselämä he said:
SSL, one of the cornerstones in network security is broken and it cannot be fixed. It cannot be trusted.
That's a stiff claim to make.
What HTTPS certificates are for
I'm quoting a website http://www.sslshopper.com/article-when-are-self-signed-certificates-acceptable.html here:
SSL certificates provide one thing, and one thing only: Encryption between the two ends using the certificate.
They do not, and never been able to, provide any verification of who is on either end. This is because literally one second after they are issued, regardless of the level of effort that goes into validating who is doing the buying, someone else can be in control of the certificate, legitimately or otherwise.
Now, I understand perfectly well that Verisign and its brethren have made a huge industry out of scamming consumers into thinking that identification is indeed something that a certificate provides; but that is marketing illusion and nothing more. Hokum and hand-waving.
It all comes down to, can you determine that you are using the same crypto key that the server is? The reason for signing certificates and the like is to try to detect when you are being hit with a man-in-the-middle attack. In a nutshell, that attack is when you try to open a connection to your 'known' IP address, say, 123.45.6.7. Even though you are connecting to a 'known' IP address of a server you trust, doesn't mean you can necessarily trust traffic from that IP address. Why not? Because the Internet works by passing data from router to router until your data gets to it's destination.
Every router in between is an opportunity for malicious code on that router to re-write your packet, and you'd never know the difference, unless you have some way to verify that the packet is from the trusted server.
The point of a CA-signed certificate is to give slightly stronger verification that you are actually using the key that belongs to the server you are trying to connect to.
Long quote, huh. The short version of it is: do not assume that just because somebody has been verified as somebody or something, they really are. You can safely assume that the traffic between you and that party is encrypted, but you really don't know who holds the key to decrypt it.
If we ignore the dirty details of encryption keys and their exchange during encrypted connection setup and concentrate only on the encryption cipher suite, not all of the cipher suites in use are secure enough. There exist pretty good ones, but the worst ones are pretty crappy and cannot be trusted at all.
Argh! Internet is broken!
To add credit to Mr. Hursti's claims, I'll pick the cases of Comodo and DigiNotar. DigiNotar didn't survive its breach; their PR failed so miserably that they lost all of their business.
On March 15th, 2011, an affiliate company was used to create a new user account in Comodo's system. That newly created account was used to issue a number of certificates: mail.google.com, www.google.com, login.yahoo.com, login.skype.com, addons.mozilla.org and login.live.com. The gain from having those domains is immediately clear. Combined with some other type of attack (most likely DNS cache poisoning), a victim can be lured to a rogue site for Google, Yahoo, Microsoft or Mozilla without ever knowing that all of their data is compromised. Some of the possible attack vectors include injecting rogue software updates into victims' computers.
On July 1st, 2011, DigiNotar's CA was compromised by allegedly Iranian crackers. The company didn't even notice the break-in until the 19th of July. By that time there was a valid certificate issued for Google in the wild. Again, these false certificates were combined with some other type of security flaw (most likely DNS poisoning) to redirect victims to a fake "Google" site without them ever knowing that all their passwords and transmitted data were leaked.
Both of those cases scream: DON'T use certificates for verification. But in reality Microsoft Windows, Skype and Mozilla Firefox are built so that if the certificate on the other end is verified by the connecting client, all is good: the remote party is valid, trusted and verified. Remember from earlier: "They do not, and never been able to, provide any verification of who is on either end". Still, a lot of software is written to misuse certificates the way the examples above do.
About trust in the Internet
The above incidents boil down to the following questions: Who do you trust on the Internet? Whom would you consider a trustworthy party, so that you'd assume that what the other party says or does is true? Specifically, in an Internet context, if a party identifies themselves, do you trust that they are who they claim to be? Or if a party says that they'll take your credit card details and deliver the goods you purchased, do you trust that they'll keep their promise?
A lot of questions, no answers. Trust is a complicated thing, after all. Most people would say that they trust major service providers like Google, Microsoft or Amazon, but frown upon smaller ones with no reputable history to prove their actions. The big companies allegedly work with the NSA and gladly hand over your details and behavior for a US government agency to study, in case you're about to commit a serious crime. So you choose to trust them, right? No.
Well then, are smaller parties any safer? Nobody really knows. They can be, or then again not.
Regular users don't realize that the vendor of their favorite web browser and operating system has chosen, on their behalf, to trust a number of other parties. Here is a glimpse of the list of trusted certification authorities in Mozilla Firefox:
I personally don't trust companies like AddTrust AB or America Online Inc. or CNNIC, and the list goes on. I don't know who they are, what they do or why they should be worth my trust. All I know for a fact is that they paid Mozilla money to include their root certificate in Mozilla's software product, establishing implicit trust between them and me whenever I access a website they chose to issue a certificate to. That's a tall order!
There really is no reliable way of identifying the other party you're communicating with on the Internet. Many times I wish there were, but since the dawn of time the Internet has been designed to be an anonymous place, and that's the way it will stay until drastic changes are introduced.
Conclusions
No, the Internet is not broken, despite what Mr. Hursti claims. The Internet is much more than HTTP, or encryption, or HTTP + encryption. For the purpose of securing HTTP we just need some secure method of identifying the other party. Critical applications would be on-line banking and on-line shopping. Optimally, all traffic would be encrypted all the time (see the HTTP 2.0 details), but that is overkill after all. Encryption adds extra bytes to transfers, and most large files, for example, don't contain any information that would need to be hidden. When moving credit card data or passwords, the number of bytes transferred is typically small compared to downloading system updates or install images.
I'd also like to point out that keeping a connection secure is mostly about the keys: keeping the keys safe and exchanging them safely. The actual ongoing encryption during a session can be considered secure enough to be trusted. However, a great deal of thought must be put into allowing or disallowing cipher suites. And still, all of that is based on current knowledge. Once in a while some really smart people find out that a cipher that was thought to be secure isn't, because a flaw or a mathematical breakthrough has rendered the cipher suite no longer secure enough to be used.
We need encrypted HTTP, but not the way it was implemented in 1994. It is not the task of Verisign (or any other CA) to say: "The other party has been verified by us, they are who they claim to be". There must also be room for ever longer keys and new cipher suites to keep the secure Internet reliable.
SMScaster.com spam / scam
Sunday, July 21. 2013
This was the first time ever that I got an SMS spam. It said:
Visit us at SMSCaster.com for FREE ! You have Won £2,500,000.00 Pounds (GBP). Email: eurodraws23@hotmail.co.uk with your name, phone number and claim code:EU2k1
Sender: +34603147561
My initial WTF was: what the hell is SMSCaster? Apparently it is the SMS spammers' weapon of choice: a piece of software that can spit out a lot of garbage using a mobile phone connected to your PC.
Again with Google's help, I found two instances of people complaining about the same thing: 1) from Australia and 2) from the USA. It looks like SMSCaster is only a means to an end; it is the utility doing the nasty stuff.
My second WTF was: why on earth would somebody in Spain (+34) claim that I have won money there? I tried doing a reverse phone number lookup, but there doesn't seem to be one available.
Spain seems to be the primary source of the Nigerian scam, or 419 scam, nowadays. Some journalists have followed up on e-mail spam, and it seems to originate from Spain. There is also news that Spanish officials have arrested suspects running a Nigerian scam ring.
Formlife scam aka. health-c-p.com
Tuesday, July 9. 2013
The classic Formlife scam popped up again. It is widely documented by authorities around the world. For example, the Finnish Competition and Consumer Authority has two articles about Formlife and Vital Nordic (in Finnish): a Kuluttaja-lehti 2/2013 piece about the number of Formlife complaints and a bulletin about the Corex and Life Detox products from May 2012.
The Formlife scam has a very simple modus operandi: they lure you in, for example with a Facebook ad like this, and you end up on a web page. In this case the web page is http://wnmobile.com/fin/. On the page there are lots of health-related stories about how good their product is, including customer testimonials. They also offer you the opportunity to purchase a sample package for 4 €. The price is not too bad, so unsuspecting victims enter their credit card details and submit the order for a sample. Since I don't think that web site will survive very long, I have saved the web page as a PDF here: http://wnmobile.com/fin/ as a PDF
The scam site wnmobile.com is hosted by liquidweb.com in Lansing, Michigan, USA. It is obvious that Network Solutions, Inc. has nothing to do with this scam; they are simply a hosting company.
Nowhere on their page do they reveal anything about Health-C-P, or Formlife, or any of those already "burned" names. When the postal service delivers the "sample" package, it does not contain a sample but the full product, actually two separate products. It also appears that they bill your credit card twice for the amount of 165 €. A shipping manifest looks like this:
It clearly states the fact that you have been scammed! They even mention that your next shipment will arrive in three months' time. I don't think they are that consumer-oriented, and I strongly suspect the next shipment will never arrive. If it does, they'll charge you another 330 €.
The sender address of the shipment is:
Health Care Payment
Postboks 313
9100 Aalborg
Denmark
According to GoDaddy domain information, the domain of health-c-p.com is registered to:
Anders Dahl Pallesen
Health Care Payment
Lyngby Hovedgade 10
2800 Kongens Lyngby
Denmark
The above address is real. Such an address exists, but I have no means of verifying what is in the building there. These two addresses are 400 km apart. The ownership of the domain wnmobile.com is protected by WhoisGuard and cannot be accessed without government officials getting involved.
The company site of Health-C-P is (as you might expect) a rush job. At the bottom of the page it also has the Lyngby Hovedgade 10 address.
It breaks often. Especially when you try to access their customer service.
It is also worth noting that their customer support number, +44 203 598 2170, is in the UK. It was operational when I called it, but it played a recording saying that their "customer support is under maintenance and nobody can answer the call". The website of health-c-p.com is hosted by Hetzner Online AG in Gunzenhausen, Germany. Hetzner Online is well known for two things: cheap hosting of websites/e-mail/shell accounts and a ton of suspicious activity from those cheap accounts. It is generally listed as an Internet Bad Neighbourhood.
I'm sure that the same website will appear under another name and/or domain in the near future. The delivered product has the Formlife name on it, so it is confirmed that this case is part of the long-running Danish/Swedish health product scam.
Update (Sep 2013):
The Finnish police publicly announced that they won't investigate foreign scams further unless the damages exceed 5000 €. For the bad people, this is a license to keep on scamming. There is zero chance of getting caught, because there won't be an international investigation.
In the above case the credit card company informed the victim that some of the money will be returned, if possible all of it. Either way, it will take months for the credit card company to process the issue.
Google doing the same thing as Skype
Friday, May 17. 2013
The H-Security blog wrote about Skype reading all the messages you type. They had seen weird traffic to their website after posting its URL into Skype. Read all about it in their blog entry "Skype with care – Microsoft is reading everything you write". There is also Ed Bott's article about how the H-Security guys got it wrong the first time, meaning that they don't actually check your links.
Anyway, this was absolutely something I had to check out. A perfect candidate for this is our development lab test server. During the testing phase it is on the public Internet so that all parties can test our upcoming version. There are 0 links to the site, but for some reason Google has indexed a number of pages from it. That information comes from http://www.wholinks2me.com/
What's strange here is that I changed the domain names a couple of weeks ago, to make it more exact which version we're testing. Also, knowing our people, I'm pretty sure that nobody publishes the test server links on any of their websites; I know I don't. So the question arises: where did Google get the new address from?
When I changed the domain, I did send an e-mail about it. Yep, you got it right: our team is using Google Mail. For the purpose of full disclosure: the site also has Google AdSense advertisements and the Google Translate tool on it.
I investigated 10 days' worth of web server logs. In them there are 18 distinct IP addresses from which the server was accessed. A short investigation of all 18 addresses revealed that 8 of them are easily explained: they are the home and office IP addresses of our team. What was strange is that 5 of them were from Amazon EC2 and Google networks. I have no reasonable explanation for why they are accessing my site. It is very easy to come up with a couple of conspiracy theories when something like that happens.
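For the curious, pulling the distinct client addresses out of an access log is a one-liner along these lines (a sketch assuming a common/combined log format where the client IP is the first field; the log path is a placeholder):
awk '{print $1}' /var/log/httpd/access_log | sort | uniq -c | sort -rn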
Internet Bad Neighborhoods
Sunday, April 21. 2013
Earlier I've studied Chinese domain name scams (part 1 and part 2).
A while ago I read about a study by Mr. Giovane César Moreira Moura. The study is actually his PhD thesis, and it is available from his page at the University of Twente, Netherlands. He claims that roughly 50% of the crap on the Internet originates from 20 rogue networks. He researched 42.000+ ISPs and found that e-mail spam, scam attempts, etc. originate pretty much from the same places. He does not do much finger-pointing, but provides the idea of how to reproduce the results.
His study inspired me to investigate the origin networks of all the crap my honeypot was receiving and do some finger-pointing (I don't have any restrictions on that). My "honeypot" is a 20+ year old e-mail address. It is on every imaginable spammer/scammer/crap-magnet list. My results are badly skewed: when it receives junk, I manually tag it and report it to SpamCop (a spam-protection service owned by Cisco Systems, Inc.), which eventually blocks the IP address as a spammer. Since my mail server uses the SpamCop blocking list, I won't receive any more junk from that IP, which is not good for studying spammers.
There is a lot of evidence that most crap originates from hijacked computers, but not all of it. Some of the spam arriving to me originates from VPS boxes; I don't know if they are rented with real or stolen credit cards. Anyway, most spam I receive has some sort of forging attempt in the mail headers, so I trace the non-forged origin with SpamCop's reporting tool. In his thesis, Mr. Moura writes that the ultimate origin is almost never discoverable. Hiding one's real location is way too easy on the Internet. Blocking the sending IP typically helps, but leaves the criminal unidentified.
Anyway, here's my list:
- 30, IRINN-BROADCAST-ADDRESSES, India
- 9, GOOGLE, USA
- 8, PAET-FSS-IMPLI-1, USA
- 5, FR-OVH, France
- 3, 66-132-128-0-NET, USA
- 3, EGIHOSTING-4, USA
- 2, 1AN1-NETWORK, USA
- 2, DROPBOX, USA
- 2, NLYR-ARIN-BLK5, USA
The first number is the number of e-mails originating from that network, followed by the network name and country. The list was gathered over a 90-day period. There were an additional 80 networks with only a single e-mail originating from them.
Most of the crap I receive originates from India. The 2nd biggest source seems to be Google. The huge virtual server rental company OVH is in 4th place. The others I cannot explain. Another conclusion I can draw from this is that the 20 worst networks are not the ones bothering me.
Open recursive DNS-resolvers
Tuesday, April 2. 2013
Since the enemy had some help, Spamhaus joined forces with Cloudflare, a company specializing in mitigating the effects of DDoS attacks. What happened at the end of March 2013 has been described as "The DDoS That Almost Broke the Internet" on the Cloudflare blog.
The spam-blocking service Spamhaus provides technically works on top of DNS. Anybody running a receiving mail server can configure it to check the connecting client's IP address with a simple DNS query that returns funny-but-pre-determined addresses as an answer, indicating the "spamminess" level of the connecting client. The judgement of who is a spammer and who is not is made solely by Spamhaus. That's what the dispute between them and Cyberbunker is all about.
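Technically such a DNSBL query is nothing more than looking up the connecting IP with its octets reversed under the list's zone. A sketch with dig (127.0.0.2 is the documented test entry, so this query should return an answer):
dig +short 2.0.0.127.zen.spamhaus.org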
As described by Cloudflare, Cyberbunker's (alleged) DDoS technically works by amplifying an incoming 36-byte UDP packet containing a valid query for RIPE.net's zone roughly 100-fold. There are at least 30.000 open DNS servers responding to recursive queries. All the attackers have to do is spoof the original UDP packet's sender IP to be Spamhaus's, and they have harnessed a huge Internet traffic amplification machine targeting a single IP address.
Since I myself run a couple of DNS boxes, I wanted to re-verify that my servers cannot be used for such activity. I googled a bit and found The Measurement Factory's open resolver test. That appears to be a piece of crap. You punch in an IP address and get an open/closed status as a response. You can enter any invented IP address and still get the "closed" verdict. WTF?!
The second thing I found is much more convincing: the Open DNS Resolver Project. The problem with that one is that they just crawl The Net and try to find open DNS servers. For example, my boxes were not listed at all: not as open, closed or even existing. They don't publish information about properly configured DNS servers. It still leaves the original question unanswered: can my DNS server be used for attacking the innocent or not?
Here is my answer to the problem: http://opensource.hqcodeshop.com/DNStest/dnstest-cgi.pl
It caches the result of any query for 24 hours, so it cannot be used for bullying somebody. That feature I simply stole (well, copied) from The Measurement Factory. It's fully written in Perl, and even the source code is available for you to grab.
The initial feedback after putting the thing on-line was to support FQDNs. The answer is NO. My tool won't do any unnecessary DNS requests if it can avoid them. But if you have any other suggestions, please drop a comment.
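If you want a quick manual check to go with the tool, you can also ask your server for a zone it is not authoritative for, from a connection outside your own network; a properly closed resolver refuses to answer. A sketch (192.0.2.53 is a placeholder for your DNS server's public IP):
# A full answer here means the server recursed for an outsider, i.e. it is an open resolver
dig @192.0.2.53 www.ripe.net A +recurse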
Chinese domain scam - revisited
Wednesday, February 20. 2013
Earlier I wrote about an elaborate Chinese scam to shake down money from unsuspecting corporate domain owners.
This time the e-mail didn't really specify any URLs to fake companies, nor any fake company names. The e-mail was sent by clark.yang@picweb.net. There is a web site, http://www.picweb.net/, which is hosted in Los Angeles, USA. The content has lots of references to China. Unsurprisingly, most of the links are either non-existent or not actually links. In particular, the top menu entry for "Products and Services" is not a link, so they don't actually sell anything. The web site looked like this:
The e-mail for picweb.net is handled by mx168.cn4e.com, which is in the CHINANET Fujian province network. There actually is a mail server at that address, but I don't know whether it handles any e-mail for picweb.net. In the e-mail headers they tried to fake the SMTP route and point the finger at a most likely innocent Chinese IP address. They are not very good at forging headers, and the attempt is rather childish.
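Checking where a domain's mail is actually routed is a one-line lookup; this is the kind of query behind the MX observation above (a sketch, assuming dig is available):
dig +short MX picweb.net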
Also unsurprisingly, the e-mail arrived at Google via IP address 117.27.141.168, which is in the same network as the previous domain scam e-mail. The above mail server is in the same ISP's block as well. They still don't care what kind of cybercrime is going on on their wire.
I'll post the contents of the e-mail here:
(Mail to the brand holder, thanks)
Dear Brand Holder,
We are the department of Asian Domain Registration Service in China. I have something to confirm with you. We formally received an application on February 20, 2013 that a company which self-styled "HongDa International Co.,Ltd" were applying to register "hqcodeshop" as their Net Brand and some domain names through our firm.
Now we are handling this registration, and after our initial checking, we found the name were similar to your company's, so we need to check with you whether your company has authorized that company to register these names. If you authorized this, we will finish the registration at once. If you did not authorize, please let us know within 7 workdays, so that we will handle this issue better. Out of the time limit we will unconditionally finish the registration for "HongDa International Co.,Ltd". Looking forward to your prompt reply.
Best Regards,
Clark Yang
Regional Manager
AnHui Office:
Phone: +86-551 6512 0117
Fax: +86-551 6512 3308
Postal Code:230022
Address:AnGao World Cities,No. 99,WangJiang West Road,HeFei,AnHui Province,China
ShangHai Headquarters:
Postal Code:201315
Address:No.11,Lane 788,Xiupu Road,Nanhui District,ShangHai,China
The e-mail headers are here:
Delivered-To: jatu@hqcodeshop.fi
Received: by 10.64.148.67 with SMTP id tq3csp180927ieb;
Wed, 20 Feb 2013 00:58:50 -0800 (PST)
X-Received: by 10.66.243.169 with SMTP id wz9mr52609194pac.34.1361350730222;
Wed, 20 Feb 2013 00:58:50 -0800 (PST)
Return-Path: <clark.yang@picweb.net>
Received: from mail.umail168.cn4e.com (mail.umail168.cn4e.com. [117.27.141.168])
by mx.google.com with ESMTP id o4si27012995paw.72.2013.02.20.00.58.48;
Wed, 20 Feb 2013 00:58:50 -0800 (PST)
Received-SPF: neutral (google.com: 117.27.141.168 is neither permitted nor denied by best guess record for domain of clark.yang@picweb.net) client-ip=117.27.141.168;
Authentication-Results: mx.google.com;
spf=neutral (google.com: 117.27.141.168 is neither permitted nor denied by best guess record for domain of clark.yang@picweb.net) smtp.mail=clark.yang@picweb.net
Received: from clarkyangpc (localhost.localdomain [127.0.0.1])
by mail.umail168.cn4e.com (Postfix) with SMTP id 9B02BA28004;
Wed, 20 Feb 2013 16:58:46 +0800 (CST)
Received: from clarkyangpc (unknown [124.73.90.238])
by mail.umail168.cn4e.com (Postfix) with ESMTPA;
Wed, 20 Feb 2013 16:58:46 +0800 (CST)
From: "Clark Yang"<clark.yang@picweb.net>
To:
Subject: "hqcodeshop" Net Brand and domain name registration
Date: Wed, 20 Feb 2013 17:00:53 +0800
Message-Id: <DM__130220165401_37568426463@mail.picweb.net>
MIME-Version: 1.0
Content-Type: multipart/related;
boundary="----=_NextPart_13022017005237571425618_001"
X-Priority: 1
X-Mailer: DreamMail 4.6.9.2
Disposition-Notification-To: clark.yang@picweb.net
Yet again I reported this scam to Google. It looks like they are powerless against these.