New Internet connection - Fiber to the Home
Friday, May 2. 2014
My blog has been in low-maintenance mode because I had other engagements: I was moving to a new house which has an FTTH, or Fiber to the Home, connection.
The incoming connection is 1000BASE-BX Ethernet and it also carries IPTV streams. Of the available 1 Gbit/s I'm currently purchasing 250 Mbit/s downlink and 50 Mbit/s uplink. This costs me ~60 € per month. Not very expensive for such quality bits, huh? There is a one-time construction cost of 1800+ € for the connection, but that is not slowing me down.
In detail, the connection is implemented with an Ethernet fiber-to-copper media converter and a CATV module doing IPTV to DVB-C (SD channels) and DVB-C2 (HD channels) conversion. During the setup, the fiber guy first built a couple of meters of indoor cabling connected to the thick ground fiber. The indoor fiber connects to a Swedish-made Inteno FG500 box. It looks like this:
In the first picture, there are 4 gigabit Ethernet (copper) connectors and a power connector for the CATV module. Ethernet ports 1 to 3 are routed through a firewall and NAT, but luckily port 4 is bridged and that's the one I'm using. From it I get a public IP-address directly from the ISP's DHCP-server.
The box splits into two halves, the CATV part and the "regular" Ethernet part. The parts are connected via power and fiber connectors:
The router has an internal IP-address of 192.168.1.1 and it contains a very typical web interface for managing the box. The management looks like this:
The WAN side is mapped to the copper Ethernet ports with interface grouping:
It took me a while to figure all this out. It really matters which LAN-port the cable is connected to. The WLAN part can be deactivated and I did just that. I dislike those crappy integrated boxes very much; they are unreliable, insecure and what not... Being a Linux man, I want to use my favorite Linksys with DD-WRT for any wireless needs.
This has got to be the best part of having FTTH. The speed! Oh my god! Here are some sample results:
When analysing the results, two things come to mind. First, Speedtest.net and its measuring servers don't go beyond 100 Mbit/s under any conditions (the software displays a maximum of 100 Mbit/s). Second, when measuring really fast connections like this one, the measurements are very unreliable and the results vary quite a lot when running a number of tests in sequence. In all cases, the downlink measurement is never 100 Mbit/s or even near it; 88 Mbit/s is pretty far from 100, let alone from the 250 Mbit/s I'm paying for. The uplink measurements are even worse. Those servers don't expect to be hit that hard (perhaps they mistake it for a DoS burst?). Finally, a ping of 0 ms does not sound very reliable either; I'd assume 2-3 ms to be the best reading under optimal conditions.
I've been running this connection for only a couple of days, but my assumption is that the uptime it can keep will be amazing. An ADSL connection is pretty stable, but fiber should be the rock-solid way to go; that's what I'm expecting from it.
Update Oct 2014:
Speedtest.net indeed goes beyond 100 Mbit/s:
APC Smart-UPS battery change
Monday, April 14. 2014
My home Linux box is always on, and to make sure it stays on I have an uninterruptible power supply for it. APC is very well supported in Linux by the open-source apcupsd. Pretty much all other UPSes are supported by NUT, but for some strange reason APC has its own separate project.
Update 20th May 2018: There is a new post about APC UPS batteries here.
A couple of days ago the UPS started to beep. My house had electricity and there was no apparent fault, except that I got an e-mail from apcupsd: "UPS battery needs changing NOW". Aw crap! The batteries had reached their end of life. I confirmed the situation with apcaccess and got a status of:
STATUS : ONLINE REPLACEBATT NOBATT
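For the record, here is one way to keep an eye on those fields from the shell; a minimal sketch assuming apcupsd is running and answering on its default localhost port 3551:
# Show the overall status plus battery charge, runtime estimate and battery date
apcaccess status | grep -E 'STATUS|BCHARGE|TIMELEFT|BATTDATE'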
Every 12 hours the UPS releases a burst of beeps, which I find very annoying. It was time to go shopping for new batteries. Luckily my unit uses Replacement Battery Cartridge #7, which is popular and generally well available, and the ~200 € price isn't too bad. The next thing is to get the old ones out and the new ones in. The APC user's manual says the battery pack is hot-swappable, so it should be an easy thing to do. Apparently the first step is to remove the front cover:
It has nice holes on the sides. I guess the idea is to put your fingers into them and pull hard.
When the front cover is loose, a ribbon cable and a metal cover are revealed.
There are two Phillips-head screws holding the metal cover in place:
When the metal cover is removed, the battery pack is visible:
There is a plastic tab at the bottom. I guess it is there so that the batteries can be pulled out. Now that the metal cover is removed, the batteries are not attached to anything:
The final thing to do is to disconnect the battery pack:
Now the UPS keeps beeping a lot; it is annoyed by the fact that it cannot find a battery. Since I don't have any original APC software on my computer, I found out how the audible beep can be silenced with apcupsd's tools. First make sure that apcupsd is not running:
systemctl stop apcupsd.service
Now apctest can be used to change any of the UPS parameters:
# apctest
2014-04-14 20:42:05 apctest 3.14.10 (13 September 2011) redhat
Checking configuration ...
Attached to driver: usb
cable.type = USB Cable
mode.type = USB UPS Driver
Please select the function you want to perform.
6) View/Change alarm behavior
Current alarm setting: ENABLED
Press...
E to Enable alarms
D to Disable alarms
Q to Quit with no changes
Your choice: Select function: d
New alarm setting: DISABLED
Now there is blissful silence, which makes it much nicer to install the new battery pack.
Obviously the idea is to reverse the above instructions: connect the new battery, slide it in, tighten the screws and attach the front cover back.
I guess the new battery should last at least 3 years, after which I'll be getting a completely new UPS. There simply is no point in running a really old UPS with brand new batteries, and I expect UPS technology to keep evolving so that new hardware is warranted after half a decade.
Long live Windows XP: Building a Windows 7 desktop from scrap heap
Tuesday, April 8. 2014
Today, this Tuesday, is the last time Windows XP gets security updates from Microsoft. Apparently the Dutch and UK governments are willing to pay seven-figure sums per year to keep their XPs running, but the rest of us, anything smaller than a government, don't get that option. Last autumn I attended the Windows 8.1 release event in Helsinki, where a Microsoft MVP said "Microsoft is not that evil company, they just might release something really critical if it comes down to that". There was some chuckling in the crowd. Anyway, very interesting things will happen to XP; see the CVE-list for unfixed flaws.
In Finland, XP has gotten a lot of media coverage all year; I guess it got to the point of scaring regular users on national TV. This I figured out when my mother called me about her Windows 7. The other case was my neighbor's computer: she has an old XP box with really poor specs. She asked for my help, but after an initial assessment there was nothing I could do with her old computer. I happened to have an old Dell OptiPlex GX260 desktop computer on steroids, with an SSD drive in an ATA-to-SATA adapter. I was about to do the decent thing and comply with the EU 2002/96/EC directive, a.k.a. throw the electronic junk where all electronic junk ends up. The computer has only 512 MiB of memory, but given the 2.0 GHz Pentium 4 CPU, it runs the 2009-released Windows 7 in a reasonably acceptable manner. The best part for my neighbor was that I promised to donate the hardware free of charge; it was only a matter of getting a license key for Windows 7.
My old OptiPlex booted from the installation DVD without problems; I must say I was amazed by that. An old low-quality CD/DVD drive typically has issues with dust in the lens, and the read head eventually deteriorates to the point that the drive refuses to read anything. Anyway, the Windows 7 installer found the SATA-connected SSD drive, copied the files and booted into the initial Out-Of-the-Box Experience. Everything seemed to be ok; I had network connectivity and even some sound.
What every legacy PC does after install is default to some unbelievable display resolution. In this case I had VGA 640x480. It sure looked nice on my 22" LCD screen!
The thing is: there are no Windows 7 drivers for my Dell. It has a built-in Intel 82845G graphics adapter. Dell support knows about my machine and has the display driver in its Download Center, but the latest OS they support is Windows XP. The WDDM driver model was introduced in Vista, and an XDDM driver was the last one Intel ever did for the 82845G. This was confirmed in the Microsoft TechNet Windows 7 IT Pro forums discussion How do I get Windows 7 Graphic driver for the Intel 82845G Graphics Controller?
Lucky for me, somebody had posted a link to another article, For Older Hardware to Run on Windows 7 How to Install intel 82845G graphics driver on Window 7, and there was a solution for my exact problem. I didn't realize that it was still possible to use XDDM drivers, but it is. In Device Manager, there is an Add legacy hardware option:
I took the 82845G driver package 14.10.3, unzipped it and installed the Windows 2000 XDDM driver. After rebooting Windows 7, everything was ok. The driver actually claims it can go up to 2048x1536, but given the analog VGA connector I chose not to go that far.
Finally, I installed Microsoft Security Essentials and I was done. My neighbor now has a nice "new" computer good until the year 2020. I wish people would upgrade before then, but knowing that doing nothing is always the preferred action for non-hackers, plenty of people will be taken by surprise when Windows 7 updates run out.
Fixing Postfix: Make mail exit your Linux-box
Monday, April 7. 2014
We server guys are becoming a rare breed. Any server-related task can be outsourced to a free-of-charge cloud service provider. One very good example is e-mail: nobody runs their own mail server nowadays. Companies use external services, and regular Joes and Janes went for Gmail years ago. However, once in a while something funny happens and people want an NSA-free e-mail account.
The fact is that every Linux installation has a mail server installed and running. Its purpose is to deliver local in-server mail to the user(s); depending on your Internet connection it may also be possible to send outgoing mail to the wild wild Net, although most ISPs block this due to excessive spamming from consumers' computers.
Setting up a receiving Postfix MTA is outside the scope of this blog post, but my idea is to present configuration help to:
- Configure Postfix to tunnel outgoing e-mail via your ISP's SMTP
- (optional) Redirect root's e-mail to an external mailbox
- (optional) Increase mail transport security by using TLS encryption for e-mail
So ... here goes.
Configure Postfix to tunnel outgoing e-mail via your ISP's SMTP
To fight spam, most (well, all reasonable) ISPs have blocked outgoing TCP/25 traffic. This very effectively stops your computer from injecting new e-mails into receiving servers. Because of this, for example Google instructs users to use TCP ports 465 or 587 instead; see their setup instructions.
For a mail server (like your Postfix), that is not a solution: your box must be able to deliver e-mail to any server's TCP/25. All ISPs I've ever seen share an SMTP server for their customers. The idea is to route all your external e-mail to that server and let it relay the e-mail to the final destination. Remember that as a spam prevention measure, the amount of mail you can send via the ISP's SMTP is limited; for example my ISP has a 50 e-mails per hour policy. It simply refuses to serve any excess requests, based on the sender's IP-address.
The subject of routing outgoing e-mail is covered (among many other places) in the article titled Configuring postfix to forward all email to a smtp gateway. All you have to do is add:
transport_maps = hash:/etc/postfix/transport
into your Postfix main.cf. Typically the transport-file already exists, but it does not contain your ISP's configuration.
Example:
My ISP, TeliaSonera Finland, has an SMTP server running at mail.inet.fi. Initially I had the following line in my transport-file:
* smtp:mail.inet.fi
But it didn't work! Looking into the source code of src/trivial-rewrite/resolve.c reveals that Postfix keeps resolving the address via its MX-record. The correct line is:
* smtp:195.156.147.15
This setting will skip any resolving of the given address and use the given IP-address as-is.
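After editing the transport-file, the hashed lookup table needs to be rebuilt and Postfix reloaded before the change takes effect. A minimal sketch, assuming the stock /etc/postfix layout:
# Rebuild the hash:/etc/postfix/transport lookup table and reload Postfix
postmap hash:/etc/postfix/transport
postfix reload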
Test the setup by sending e-mail to one of your own external addresses. It can be achieved by running something like this:
date | mail -s "Testing outgoing mail" test@user.at.somewhere
Confirm the functionality from Postfix's log, it should read something like this:
postfix/pickup[12869]: D13F8209AF: uid=0 from=<user>
postfix/cleanup[13427]: D13F8209AF: message-id=<20140407161546.D13F8209AF@my.linux.box>
postfix/qmgr[2185]: D13F8209AF: from=<user@my.linux.box>, size=482, nrcpt=1 (queue active)
postfix/smtp[13429]: D13F8209AF: to=<test@user.at.somewhere>, relay=195.156.147.15[195.156.147.15]:25, delay=0.35, delays=0.06/0.01/0.04/0.24, dsn=2.0.0, status=sent (250 <529734CF0ADA3B46> Mail accepted)
postfix/qmgr[2185]: D13F8209AF: removed
It clearly says "Mail accepted", and Postfix's queue manager eventually removes the mail from the outgoing queue. Remember to confirm that the mail actually landed in the external mailbox.
Redirect root's e-mail to an external mailbox
A good starting point is to look at /etc/aliases. For example OpenSuse has this in it:
# It is probably best to not work as user root and redirect all
# email to "root" to the address of a HUMAN who deals with this
# system's problems. Then you don't have to check for important
# email too often on the root account.
So, I put this into my aliases:
root: test@user.at.somewhere
There is a catch... having that in /etc/aliases alone won't work for your system's internal e-mails. Since your box is not a receiving mail server, all of your mail is internal. You can confirm the non-functionality with:
date | mail -s "Testing outgoing root mail" root
Your maillog will read something like the previous example. The mail will be routed to your ISP's SMTP, but the problem is that the recipient address is wrong: it will read root@your.server.name. Your ISP does not have a clue what to do with such mail, and it will bounce back. Since your server cannot receive mail, the bounce will be dropped and lost.
There is a fix for that. The two articles Rewriting to address on postfix local aliases and Postfix masquerading or changing outgoing SMTP email or mail address contain clues on how to do it. I added the following line to my main.cf:
smtp_generic_maps = hash:/etc/postfix/generic
The /etc/postfix/generic will read:
root@your.server.name test@user.at.somewhere
Postmap the generic-file, reload the postfix-service and test again (see the sketch below). Now Postfix will rewrite the outgoing e-mail address as planned. The rewrite cannot be confirmed from the maillog; it will still display the original root@your.server.name. However, on the receiving end the e-mail address will be correct.
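For completeness, here is that sequence as a sketch; it assumes the stock /etc/postfix paths and reuses the test flow from above:
# Rebuild the generic map, reload Postfix and send a fresh test mail to root
postmap hash:/etc/postfix/generic
postfix reload
date | mail -s "Testing rewritten root mail" root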
Increase mail transport security by using TLS encryption for e-mail
The last item on my checklist is to start encrypting the mail. Note that this is pointless if your ISP does not support encryption. If it does and your Postfix is not configured to use encryption, you will get a lot of "warning: no entropy for TLS key generation: disabling TLS support" in your maillog.
As a prerequisite, you will need an SSL certificate. Any certificate will do, even a crappy self-signed one. I'd never recommend using self-signed certificates, but if you're lazy and don't want to get a free one from the net, go for the path of least resistance. This is what I have in main.cf:
# SSL/TLS
# SMTP (sending)
smtp_tls_security_level = may
smtp_tls_key_file = /etc/ssl/private/the.certificate.key
smtp_tls_cert_file = /etc/ssl/certs/the.certificate.cer
smtp_tls_CApath = /etc/ssl/certs
Confirmation that TLS is being used looks like this in a received e-mail's headers:
Received: from mail.inet.fi ([2001:15d8:172::]) by
mx.google.com with ESMTPS id 1si12730620lam.174.2014.04.07.09.57.37 for
<test@user.at.somewhere> (version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256
bits=128/128); Mon, 07 Apr 2014 09:57:37 -0700 (PDT)
Received: from your.server.name ([172.16.141.138])
(using TLSv1.2 with cipher ADH-AES256-GCM-SHA384 (256/256
bits)) (No client certificate requested) by mail.inet.fi (Postfix) with
ESMTPS id E75004355F for <test@user.at.somewhere>; Mon, 7 Apr 2014 19:57:35
+0300 (EEST)
Notice how both servers specify the TLS cipher used. In this case Google's server uses only 128-bit encryption.
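If you want to check beforehand whether your ISP's relay offers STARTTLS at all, openssl can do that from the command line. A sketch using the mail.inet.fi relay from this post; substitute your own ISP's server:
# Open an SMTP session and attempt the STARTTLS handshake; the certificate and
# negotiated cipher are printed if the relay supports encryption
openssl s_client -starttls smtp -connect mail.inet.fi:25 </dev/null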
Note:
You don't need to enable tlsmgr in master.cf for any of this to work. I've seen incorrect instructions about that. To repeat: for sending TLS-encrypted e-mail, tlsmgr is not necessary.
Polkomtel firmware for u-12
Sunday, April 6. 2014
An active contributor to this blog, Mr. nos_com71, was kind enough to notify me about a firmware he found. It is from Polkomtel, a Polish telco. You can download the package from this link at the Norton Zone cloud. I had issues with the download, but using a different browser on another machine solved it for me.
I didn't install the firmware into my s-12, but as usual I extracted it and confirmed its contents and version, V100R001C69SP103, which is quite new. In Bjørn Grønli's spreadsheet the newest one he found is SP102, but this Polish one seems to be a notch newer.
If any of you test this one out, please tell us about it.
What is netsession_win.exe and what is it doing in my Windows?
Thursday, April 3. 2014
One day I came back to my computer and saw a firewall notice about netsession_win.exe requesting permission to listen for UDP requests.
Ok, I thought, that happens sometimes. An application needs permission to access The Net. This time it was a permission to receive traffic. Wait a minute! I don't remember installing anything that would require building a server on my box.
Apparently I'm not alone with my question; Mr. Scott Hanselman, in CSI: My Computer - What is netsession_win.exe from Akamai and how did it get on my system?, ponders the same issue. He chose to keep it running; I chose not to.
Reasoning:
- I confirmed the facts from Mr. Hanselman's blog post: the Akamai signature in the binary is valid, at least it looks like a valid one to me. This alone is definitely not a reason to kill the app.
- I don't know any purpose for such an application.
- A server binary is located in my %LOCALAPPDATA%, definitely not the place for a piece of server software to be running from.
- Anybody who installed the application and made it run did not ask for my permission to do so.
- The application runs on each login based on a registry setting placed into HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Run. Mr. Hanselman located a service on his computer; my computer did not have that. I still don't like unknown applications running all the time.
- It does not feel legit to me. All of these combined make the application shady, and it gives me chills.
Guys at Akamai:
This is not the way to do it! Revise your policy of putting spyware into people's computers.
EPIC4 maildir patch
Sunday, March 30. 2014
I'm on IRC 24/7. For "idling" on my favorite channel I have used EPIC4 for a very long time, a couple of decades in fact. The project is in bad shape; anything IRC-related is. For the record: I'll be the last dinosaur to punch the clock for the last time and turn off the lights when I notice that I'm idling there alone. That won't happen for another couple of decades, though.
Based on the epicsol.org website, there is actually nobody to contact about EPIC4 bugs: no mailing list anymore (the last one died in 2009), no contact e-mail, no form. So there literally is nobody I could notify about anything. Writing about it on my own blog is pretty much all I can do for the project.
Back to business... My Linux box is a mail host, and whenever something new arrives it is really nice to get notified about it while doing absolutely nothing on the channel. However, when I stopped using mbox for storing mail on my box, my favorite IRC client stopped doing the notifying. It didn't have the code for the more efficient Maildir format. It does now.
My stuff is at http://opensource.hqcodeshop.com/EPIC/4/
It contains a 64-bit RPM for Fedora 20 and the .src.rpm if you want to do the build yourself. Note that my version is the latest EPIC4 2.10.4, not the Fedora-packaged 2.10.2.
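Rebuilding from the source RPM is a one-liner; a sketch with an illustrative file name, as the actual .src.rpm name in the directory may differ:
# Rebuild the binary RPM locally from the source package
rpmbuild --rebuild epic4-2.10.4-1.fc20.src.rpm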
To start using the Maildir-mode, say:
set mail_type maildir
in your .ircrc-file. The client relies on the $MAIL environment variable to know where your mail is stored.
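If $MAIL still points to an mbox spool, point it at the Maildir before starting the client. A sketch for a shell startup file; ~/Maildir is just an assumption, use whatever directory your delivery agent actually writes to:
# Tell mail-aware programs, EPIC4 included, where the Maildir lives
export MAIL="$HOME/Maildir"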
Update 31st March 2014:
I actually got hold of Mr. Jeremy Nelson, the author of EPIC4 and EPIC5. He took my patch and said that it will be released in 2.10.5. We had a brief conversation on the #epic channel, and he also said that he is about to publish the EPIC5 project on GitHub.
My patch (epic4-2.10.1-maildir.patch) is as follows:
diff -aur epic4-2.10.1/include/config.h epic4-2.10.1.JT/include/config.h
--- epic4-2.10.1/include/config.h 2006-06-18 20:33:51.000000000 +0300
+++ epic4-2.10.1.JT/include/config.h 2012-08-30 13:22:20.319515332 +0300
@@ -412,7 +412,7 @@
#define DEFAULT_LOGFILE "irc.log"
#define DEFAULT_MAIL 2
#define DEFAULT_MAIL_INTERVAL 60
-/* #define DEFAULT_MAIL_TYPE "mbox" */
+#define DEFAULT_MAIL_TYPE "mbox"
#define DEFAULT_MAX_RECONNECTS 4
#define DEFAULT_METRIC_TIME 0
#define DEFAULT_MODE_STRIPPER 0
diff -aur epic4-2.10.1/include/vars.h epic4-2.10.1.JT/include/vars.h
--- epic4-2.10.1/include/vars.h 2006-06-18 20:33:51.000000000 +0300
+++ epic4-2.10.1.JT/include/vars.h 2012-08-30 13:24:19.719723226 +0300
@@ -93,7 +93,7 @@
LOG_REWRITE_VAR,
MAIL_VAR,
MAIL_INTERVAL_VAR,
- /* MAIL_TYPE_VAR, */
+ MAIL_TYPE_VAR,
MANGLE_INBOUND_VAR,
MANGLE_LOGFILES_VAR,
MANGLE_OUTBOUND_VAR,
diff -aur epic4-2.10.1/source/mail.c epic4-2.10.1.JT/source/mail.c
--- epic4-2.10.1/source/mail.c 2006-06-18 20:33:51.000000000 +0300
+++ epic4-2.10.1.JT/source/mail.c 2012-08-30 15:25:05.568641118 +0300
@@ -353,7 +353,7 @@
return 0;
}
- maildir_path = malloc_strdup(tmp_maildir_path);
+ maildir_path = malloc_strdup(maildir);
maildir_last_changed = -1;
return 1;
}
@@ -375,13 +375,29 @@
{
int count = 0;
DIR *dir;
+ Filename tmp_maildir_path;
+ struct dirent* dir_data;
- if ((dir = opendir(maildir_path)))
+ strlcpy(tmp_maildir_path, maildir_path, sizeof(Filename));
+ strlcat(tmp_maildir_path, "/new", sizeof(Filename));
+ if ((dir = opendir(tmp_maildir_path)))
{
- while (readdir(dir) != NULL)
- count++;
+ while ((dir_data = readdir(dir)) != NULL) {
+ if (dir_data->d_name[0] != '.')
+ count++;
+ }
+ closedir(dir);
+ }
+
+ strlcpy(tmp_maildir_path, maildir_path, sizeof(Filename));
+ strlcat(tmp_maildir_path, "/cur", sizeof(Filename));
+ if ((dir = opendir(tmp_maildir_path)))
+ {
+ while ((dir_data = readdir(dir)) != NULL) {
+ if (dir_data->d_name[0] != '.')
+ count++;
+ }
closedir(dir);
- count -= 2; /* Don't count . or .. */
}
return count;
@@ -398,6 +414,7 @@
{
Stat sb;
Stat *stat_buf;
+ Filename tmp_maildir_path;
if (ptr)
stat_buf = (Stat *)ptr;
@@ -408,8 +425,11 @@
if (!init_maildir_checking())
return 0; /* Can't find maildir */
+ strlcpy(tmp_maildir_path, maildir_path, sizeof(Filename));
+ strlcat(tmp_maildir_path, "/new", sizeof(Filename));
+
/* If there is no mailbox, there is no mail! */
- if (stat(maildir_path, stat_buf) == -1)
+ if (stat(tmp_maildir_path, stat_buf) == -1)
return 0;
/*
@@ -547,6 +567,10 @@
update_mail_level2_maildir();
if (status == 2)
{
+ Filename tmp_maildir_path;
+ strlcpy(tmp_maildir_path, maildir_path, sizeof(Filename));
+ strlcat(tmp_maildir_path, "/new", sizeof(Filename));
+
/* XXX Ew. Evil. Gross. */
ts.actime = stat_buf.st_atime;
ts.modtime = stat_buf.st_mtime;
@@ -642,6 +666,27 @@
void set_mail_type (const void *stuff)
{
- /* EPIC4 cannot switch mailbox types (yet) */
+ const char *value;
+ struct mail_checker *new_checker;
+ char old_mailval[16];
+
+ value = (const char *)stuff;
+
+ if (value == NULL)
+ new_checker = NULL;
+ else if (!my_stricmp(value, "MBOX"))
+ new_checker = &mail_types[0];
+ else if (!my_stricmp(value, "MAILDIR"))
+ new_checker = &mail_types[1];
+ else
+ {
+ say("/SET MAIL_TYPE must be MBOX or MAILDIR.");
+ return;
+ }
+
+ snprintf(old_mailval, sizeof(old_mailval), "%d", get_int_var(MAIL_VAR));
+ set_var_value(MAIL_VAR, zero);
+ checkmail = new_checker;
+ set_var_value(MAIL_VAR, old_mailval);
}
diff -aur epic4-2.10.1/source/vars.c epic4-2.10.1.JT/source/vars.c
--- epic4-2.10.1/source/vars.c 2008-03-17 05:42:46.000000000 +0200
+++ epic4-2.10.1.JT/source/vars.c 2012-08-30 13:14:54.801014647 +0300
@@ -194,7 +194,7 @@
{ "LOG_REWRITE", STR_TYPE_VAR, 0, 0, NULL, NULL, 0, 0 },
{ "MAIL", INT_TYPE_VAR, DEFAULT_MAIL, 0, NULL, set_mail, 0, 0 },
{ "MAIL_INTERVAL", INT_TYPE_VAR, DEFAULT_MAIL_INTERVAL, 0, NULL, set_mail_interval, 0, 0 },
- / { "MAIL_TYPE", STR_TYPE_VAR, 0, 0, NULL, set_mail_type, 0, 0 }, /
+ { "MAIL_TYPE", STR_TYPE_VAR, 0, 0, NULL, set_mail_type, 0, 0 },
{ "MANGLE_INBOUND", STR_TYPE_VAR, 0, 0, NULL, set_mangle_inbound, 0, 0 },
{ "MANGLE_LOGFILES", STR_TYPE_VAR, 0, 0, NULL, set_mangle_logfiles, 0, 0 },
{ "MANGLE_OUTBOUND", STR_TYPE_VAR, 0, 0, NULL, set_mangle_outbound, 0, 0 },
@@ -350,7 +350,7 @@
set_string_var(HIGHLIGHT_CHAR_VAR, DEFAULT_HIGHLIGHT_CHAR);
set_string_var(LASTLOG_LEVEL_VAR, DEFAULT_LASTLOG_LEVEL);
set_string_var(LOG_REWRITE_VAR, NULL);
- /* set_string_var(MAIL_TYPE_VAR, DEFAULT_MAIL_TYPE); */
+ set_string_var(MAIL_TYPE_VAR, DEFAULT_MAIL_TYPE);
set_string_var(MANGLE_INBOUND_VAR, NULL);
set_string_var(MANGLE_LOGFILES_VAR, NULL);
set_string_var(MANGLE_OUTBOUND_VAR, NULL);
Fixing inaccurate Windows 7 NTP-client
Saturday, March 29. 2014
I don't have a Windows-domain at home, so the Internet time client (NTP) is on relaxed settings. Your typical Microsoft documentation about NTP will have phrases like: "The default value for domain members is 10. The default value for stand-alone clients and servers is 15" in it. So, it really makes a difference if the computer is in a domain or not.
It is a well-established fact that the hardware clock in your computer is quite inaccurate. On a modern computer there is no point in using expensive hardware to make the clock run precisely; you can always set the time from a reliable clock source on the Internet. That's what NTP was made for decades ago: to make sure that everybody has the same time in their boxes.
The real question here is: Why does my Windows 7 clock skew so much? I have set up the internet time, but it still is inaccurate.
As a Linux guy I love doing my stuff on the command line. To investigate the clock skew I'll run:
w32tm /monitor /computers:-the-NTP-server-
... and it will respond with something like NTP: -0.7900288s offset from local clock. So it's almost a second behind the accurate time source.
The initial fix is easy, force it to get the accurate time from the configured time server:
w32tm /resync
But I cannot be doing that all the time. Why can't the computer maintain a well-disciplined clock like I configured it to do? There must be something fishy going on.
A command like:
w32tm /query /status
will say Poll Interval: 10 (1024s), but I cannot confirm that it polls every 1024 seconds (or less). It simply does not do that. There is a TechNet article titled Windows Time Service Tools and Settings describing a registry setting MaxPollInterval located in HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time\Config, but that has no real relevance here; the update mechanism does not obey that setting.
However, Microsoft's knowledge base article 884776, titled How to configure the Windows Time service against a large time offset, gives more insight into the update interval. It describes a registry value SpecialPollInterval located in HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time\TimeProviders\NtpClient for manual peers. I'm guessing I have a manual peer, whatever that means: I don't have a domain and I did set the server manually. The original value seems to be 604800 seconds, which is 7 days or a week. Whoa! Way too much for me.
While sniffing the network traffic with Wireshark, I could indeed confirm that putting a small value there makes my Windows 7 poll at that interval. I put 10 seconds there and it worked. For any real-life scenario a 10 second update interval is ridiculous; for a computer in a domain the value is 3600 seconds, making the updates hourly. I chose to use that.
Please note that changing the registry value requires a restart of the Windows time service. From a command line:
net stop w32time
net start w32time
will do the trick and make it use the newly set registry value. You can also restart the Windows Time service from the GUI.
Now my computer's time seems to stay reasonably accurate. I'm still considering purchasing a GPS time box of my own; they seem to be quite expensive, though.
3 Italy firmware for u-12
Friday, March 28. 2014
I got a comment from Mr. nos_com71 about 3 Italy's firmware for u-12.
The download link is to Mediafire, which definitely is not my favorite place to pick up something I'd like to run on any of my computers. But if you think you can handle it, go for https://www.mediafire.com/?jut00ju7uov988z to get it.
A little bit of FMK:ing revealed that 3 Italy is using a classic V100R001C12SP104 (see the article about 3 Denmark's firmware with exactly the same version). The important thing is that he pointed out that the SSH passwords are stored unencrypted in /var/sshusers.cfg. This is exactly what my version does too.
So, those people who are able to use the exploit are able to find out what the SSH passwords are. A command like
B593cmd.pl "cat /var/sshusers.cfg"
will give you an immediate answer, and a regular
ssh admin@-the-IP-here- /bin/sh
will do the rest. As I previously stated, the catch is that you need to know the admin password of the web console and have firmware old enough to contain the exploit to be able to do any of that.
Wrangling permissions on an enforcing SElinux setup
Saturday, March 22. 2014
Most people don't care much about their Linux boxes' security. You install it, you run it, you use it and occasionally apply some system updates. Not me. When I have a box facing the wild wild Net, I absolutely positively plan to make the life of anybody cracking into one of my boxes as difficult as possible (with some usability left for myself). See Mr. Tan's article about the Security-Functionality-Usability Trade-Off.
So, my choice lies on the Functionality-Security axis with less emphasis on ease of use. The rationale is that a web application needs to run as safely as possible while still offering ease of use to its users; the system administrator is a trained professional and doesn't need the easy part as much. However, there is a point when things are set up too tight:
Image courtesy of Dilbert by Scott Adams
So, I voluntarily run software designed and implemented by the NSA: SElinux. I even run it in the Enforcing mode, which any even remotely normal system administrator considers totally insane! Even a tiny slip-up from the set security policy will render things completely useless; Mordac steps in and stuff simply does not work anymore.
On my Fedora box there was a bug in BIND, the name server, and an update was released to fix it. After running the update, the DNS was gone. As in: it didn't function, it didn't respond to any requests, and the service didn't start. All it said was:
# systemctl status named-chroot.service --full
named-chroot.service - Berkeley Internet Name Domain (DNS)
Loaded: loaded (/usr/lib/systemd/system/named-chroot.service; enabled)
Active: failed (Result: timeout)
Any attempt to start the service resulted in a 60-second wait and a failure. The dmesg log had nothing about the issue, nor did BIND's own log. So I started suspecting an SElinux permission issue. My standard SElinux debugging always starts with:
cat /var/log/audit/audit.log | audit2allow -m local
... to see if SElinux's audit logger has logged any permission-related faults. Indeed it had:
require {
type named_conf_t;
type named_t;
class dir write;
}
#============= named_t ==============
allow named_t named_conf_t:dir write;
That reads:
A process running in the named_t security context is trying to get write access to a directory with the named_conf_t security context, but is denied.
It is obvious that the process in question must be the BIND name server; no other process runs in the named_t security context. When starting up, the BIND name server was about to write into its own configuration directory, which is a big no-no! When you write, you write only to designated directories, nowhere else (remember: running in enforcing mode is insanity).
That is definitely a reason for a daemon not to start, or to time out while starting. Further investigation showed that Fedora's SElinux policy had also been updated a week earlier: selinux-policy-3.12.1-74.19.fc19.
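One quick way to check when the policy package changed on a Fedora/RHEL box, in case you want to correlate it with the breakage:
# List installed selinux-policy packages, most recently installed first
rpm -q --last selinux-policy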
At this point I had all the pieces of the puzzle; it was simply a matter of putting them together. The recently released SElinux policy has a bug in it, and nobody else was going to fix it for me.
The exact audit-log line is:
type=AVC msg=audit(1395481575.712:15239): avc:
denied { write } for
pid=4046 comm="named" name="named" dev="tmpfs" ino=14899
scontext=system_u:system_r:named_t:s0
tcontext=system_u:object_r:named_conf_t:s0 tclass=dir
So, my chrooted BIND daemon was trying to write into a tmpfs. There aren't that many of those in a system. I've even touched the tmpfs subject earlier when I wrote a systemd configuration for my own daemon. To find the tmpfs usage, I ran:
# mount | fgrep tmpfs
tmpfs on /var/named/chroot/run/named type tmpfs
BIND's chroot environment has one. That is very likely the culprit, which can be confirmed:
# ls -Z /var/named/chroot/run/
drwxrwx---. named named system_u:object_r:named_conf_t:s0 named
Yep! That's it. The directory has an incorrect security context. Compare it to the system's non-chrooted one:
# ls -Zd /run/
drwxr-xr-x. root root system_u:object_r:var_run_t:s0 /run/
There is a difference between named_conf_t and var_run_t: you can write temporary files into the latter, but not into the former. The fix is very simple (assuming you speak fluent SElinux):
semanage fcontext -a -t var_run_t "/var/named/chroot/run(/.*)?"
restorecon -R -v named/
The two commands do the following: first, declare a better security context for the directory in question, and then apply the new definition to the filesystem. After that my BIND started and was fully operational! Nice.
My investigation went further, since I needed to report this to the Fedora people. I looked into the policy file /etc/selinux/targeted/contexts/files/file_contexts and found the faulty line:
/var/named/chroot/var/run/named.* system_u:object_r:named_var_run_t:s0
That line almost works. The directory in question has only two files in it, and one of them even has a matching name. The problem, obviously, is that the other one does not:
# ls -l /var/named/chroot/run/named/
total 8
-rw-r--r--. 1 named named 5 Mar 22 12:02 named.pid
-rw-------. 1 named named 102 Mar 22 12:02 session.key
See Bug 1079636 at Red Hat Bugzilla for further developments with this issue.
Disabling non-disableable Internet Explorer add ons
Friday, March 21. 2014
One day my laptop shut itself down while I was getting a cup of coffee. No big deal, I thought; I'll just plug it into the charger and things will be ok again. It took me by surprise to see that the battery was 80% charged and the laptop had done a "crash landing". Apparently it chose to turn itself off, I'm guessing to avoid overheating.
A couple of weeks later I realized that a machine that does not do anything chews about 25% CPU constantly. The natural guess would be a virus scanner, but it turned out to be a process called IEWebSiteLogon.exe:
I've never heard of such an application. Google didn't reveal anything useful, but the process properties revealed that the file was located at C:\Program Files\Lenovo Fingerprint Reader\x86\, so the conclusion is that my fingerprint reader's software is running a component that eats up a lot of CPU resources to do exactly nothing.
The file name gave me a hint, that it has something to do with Internet Explorer. I was running IE 11:
I opened the add ons manager:
and there it was. My initial idea of disabling the stupid thing didn't pan out: the Disable button is grayed out. Searching The Net revealed two interesting pieces of information. The first, How to Remove Unneeded Plug-Ins in Internet Explorer by Andy Rathbone from Windows 8 For Dummies, proved to be useless, as it simply instructs you to disable the add-on. The second yielded results: Can't remove Internet Explorer Add-On. It described a way to track down the component by its class ID. Nice, but not nice enough: even with the component gone, somewhere there is a piece of code that attempts to load it. Why not remove the requirement?
The details of the add on are:
Now I had the class ID: {8590886E-EC8C-43C1-A32C-E4C2B0B6395B}. According to SystemLookup.com it is a valid piece of software; they say: "This entry is classified as legitimate". That class ID can be found in my Windows system's registry in the following locations:
- HKEY_CLASSES_ROOT\CLSID\
- HKEY_CLASSES_ROOT\Wow6432Node\CLSID\
- HKEY_LOCAL_MACHINE\SOFTWARE\Classes\Wow6432Node\CLSID\
- HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\Browser Helper Objects\
- HKEY_CURRENT_USER\Software\Microsoft\Internet Explorer\Approved Extensions
- HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Ext\Settings\
The interesting ones are the system-level Browser Helper Objects setting and the user-level Approved Extensions setting. Removing the helper object will surely disable the add-on completely. It is also a good idea to make it a non-approved extension and to un-register the component. All that should give the stupid add-on a decisive blow and stop it from wasting my precious CPU cycles.
The following PowerShell-commands run with administrator permissions will do the trick:
Remove-Item -path "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\Browser Helper Objects\{8590886E-EC8C-43C1-A32C-E4C2B0B6395B}"
Remove-Item -path "HKCU:\Software\Microsoft\Windows\CurrentVersion\Ext\Settings\{8590886E-EC8C-43C1-A32C-E4C2B0B6395B}"
Remove-ItemProperty -path "HKCU:\Software\Microsoft\Internet Explorer\Approved Extensions" -name "{8590886E-EC8C-43C1-A32C-E4C2B0B6395B}"
If you don't have admin permissions, the commands will fail. Also please note that every time Internet Explorer starts, it makes sure that the permissions on the registry key HKEY_CURRENT_USER\Software\Microsoft\Internet Explorer\Approved Extensions are set so that the user is denied any modification access. See this:
I tried to remove the deny ACL with PowerShell, but it seems to be impossible. The API is not mature enough.
After removing the deny ACL, running the PowerShell commands and finally stopping and starting Internet Explorer, the add-on was gone. I managed to "disable" it completely.
PHP large file uploads
Thursday, March 20. 2014
Here I bumped into a really popular subject. My ownCloud had a really small upload limit of 32 MiB and I was aiming for the 1+ GiB range. The "cloud" is in a tiny box running a 32-bit Linux, so 2 GiB is the absolute maximum for a file that can pass through Apache and PHP. The limits are documented in the ownCloud Administrators Manual - Dealing with Big File Uploads.
Raising the file size limits is something I could do myself. Here is a reference for you: How to Upload Large Files in PHP. It's simply about finding the limit parameters and setting them to bigger values.
I created sample files of different sizes and tested with them. I found out that past a certain point Apache started the upload, uploaded for a while and then exited with an HTTP/500. In my case a 600 MiB file passed ok, but an 800 MiB file did not. I later found out that it wasn't about the file size itself, but the max input time; I had missed that one in my setup.
The max input time is a classic; for example a conversation with the topic "PHP file upload affected or not by max_input_time?" discusses the issue in detail. The conclusion is that the actual upload speed (or available network bandwidth) has nothing to do with the input processing or its maximum time. The PHP manual page at http://php.net/manual/en/features.file-upload.common-pitfalls.php clearly says:
max_input_time sets the maximum time, in seconds, the script is allowed to receive input;
this includes file uploads. For large or multiple files, or users on slower connections,
the default of 60 seconds may be exceeded.
But that simply is not true! In another section of the PHP manual the integer directive max_input_time is defined as:
This sets the maximum time in seconds a script is allowed to parse input data, like POST and GET. Timing begins at the moment PHP is invoked at the server and ends when execution begins.
When is PHP invoked? Let's say you're running Apache. You're actually uploading the file to Apache, which, after receiving the file, passes control to a handler, PHP in this case. So the input processing does not start at the point where the upload starts.
Test setup
The upload is affected by following PHP configuration directives:
- file_uploads: The master switch. This one is rarely disabled as it makes any file upload processing impossible on PHP.
- Changeable: PHP_INI_SYSTEM
- upload_max_filesize: Max size of a single file.
- PHP_INI_PERDIR
- post_max_size: Max size of the entire upload batch. A HTTP POST can contain any number of files. In my test only one file is used.
- PHP_INI_PERDIR
- max_input_time: As discussed above, the time to parse the uploaded data and files. This would include populating $_FILES superglobal.
- PHP_INI_PERDIR
- max_execution_time: The time a script is allowed to run after its input has been parsed. This would include any processing of the file itself.
- PHP_INI_ALL
- memory_limit: The amount of memory a script is allowed to use during its execution. Has absolutely nothing to do with the size of the file uploaded, unless the script loads and processes the file.
- PHP_INI_ALL
- upload_tmp_dir: This is something I threw in based on testing. None of the articles ever mention this one. This defines the exact location where the uploaded file initially goes. If the PHP-script does not move the uploaded file out of this temporary location, the file will be deleted when script stops executing. Make sure you have enough space at this directory for large files!
- PHP_INI_SYSTEM
A PHP script cannot change all of the introduced configuration values. The changeable limits are defined as:
- PHP_INI_USER: Entry can be set in user scripts (like with ini_set())
- PHP_INI_PERDIR: Entry can be set in php.ini, .htaccess, httpd.conf
- PHP_INI_SYSTEM: Entry can be set in php.ini or httpd.conf
For testing purposes I chose the POST and upload max sizes to be 1 GiB (or 1024 MiB). To test the timeout values, I chose relatively small values of 2 seconds both for input parsing and script execution. Also to prove that memory limit does not limit the file upload, I chose the available memory for the script to be 1 MiB. The memory limit is not an issue, as my script does not touch the file, does not try to load or process it.
My test script carefully verifies the above limits just to make sure that there are no configuration mistakes.
Sample files were generated out of randomness with a command:
dd if=/dev/urandom of=900M bs=1024 count=921600
A number of files of different sizes were used, but since the POST limit was set to 1 GiB, or 1073741824 bytes, it is impossible to upload a file of exactly that size; there is always some overhead in an HTTP POST request. So the maximum file size I successfully used with these parameters was 900 MiB. Interestingly, it was the 2 second input processing time which caused problems.
The sample code:
<?php
// Adapted by JaTu 2014 from code published in
// http://stackoverflow.com/questions/11387113/php-file-upload-affected-or-not-by-max-input-time
$iniValues = array(
    'file_uploads' => '1',            // PHP_INI_SYSTEM
    'upload_max_filesize' => '1024M', // PHP_INI_PERDIR
    'post_max_size' => '1024M',       // PHP_INI_PERDIR
    'max_input_time' => '2',          // PHP_INI_PERDIR
    'max_execution_time' => '2',      // PHP_INI_ALL
    'memory_limit' => '1M',           // PHP_INI_ALL
);
$iniValuesToSet = array('max_execution_time', 'memory_limit');
$upload_max_filesize_inBytes = 1073741824; // 1 GiB

foreach ($iniValues as $variable => $value) {
    $cur = ini_get($variable);
    if ($cur !== $value) {
        if (in_array($variable, $iniValuesToSet)) {
            $prev = ini_set($variable, $value);
            if ($prev === false) {
                // Assume the previous value was not FALSE, but the set failed.
                // None of those variables can reasonably have a boolean value of FALSE anyway.
                die('Failed to ini_set() a value into variable ' . $variable);
            }
        } else {
            die('Failed to ini_set() a value into variable ' . $variable . ' and make it stick.');
        }
    }
}

if (!empty($_FILES) && isset($_FILES['userfile'])) {
    switch ($_FILES['userfile']["error"]) {
        case UPLOAD_ERR_OK:
            $status = 'There is no error, the file uploaded with success.';
            break;
        case UPLOAD_ERR_INI_SIZE:
            $status = 'The uploaded file exceeds the upload_max_filesize directive in php.ini.';
            break;
        case UPLOAD_ERR_FORM_SIZE:
            $status = 'The uploaded file exceeds the MAX_FILE_SIZE directive that was specified in the HTML form.' .
                ' Value is set to: ' . $_POST['MAX_FILE_SIZE'];
            break;
        case UPLOAD_ERR_PARTIAL:
            $status = 'The uploaded file was only partially uploaded.';
            break;
        case UPLOAD_ERR_NO_FILE:
            $status = 'No file was uploaded.';
            break;
        case UPLOAD_ERR_NO_TMP_DIR:
            $status = 'Missing a temporary folder.';
            break;
        case UPLOAD_ERR_CANT_WRITE:
            $status = 'Failed to write file to disk.';
            break;
        case UPLOAD_ERR_EXTENSION:
            $status = 'A PHP extension stopped the file upload. PHP does not provide a way to ascertain which extension caused the file upload to stop; examining the list of loaded extensions with phpinfo() may help.';
            break;
        default:
            $status = 'No idea. Huh?';
    }
    print "Status: {$status}<br/>\n";
    print '<pre>';
    var_dump($_FILES);
    print '</pre>';
}
?>
<form enctype="multipart/form-data" method="POST">
<input type="hidden" name="MAX_FILE_SIZE" value="<?php print $upload_max_filesize_inBytes ?>" />
File: <input name="userfile" type="file" />
<input type="submit" value="Upload" />
</form>
Test 1: PHP 5.5.10 / Apache 2.4.7
This is a basic Fedora 19 box with standard packages installed. PHP reports Server API as Apache 2.0 Handler.
To get the required setup done I had a .htaccess-file with following contents:
php_value upload_max_filesize "1024M"
php_value post_max_size "1024M"
php_value max_input_time 2
I used the time command from the bash shell combined with a cURL request like this:
time curl --include --form userfile=@800M http://the.box/php/upload.php
Timing results would be:
real 0m7.595s
user 0m1.044s
sys 0m3.259s
That is 7.5 seconds of wall-clock time to upload an 800 MiB file. The time includes the transfer over my LAN and the processing done on the other side. No failures were triggered by the 2 second time limits or the memory limit.
Errors would include:
- PHP Warning: POST Content-Length of 1073742140 bytes exceeds the limit of 1073741824 bytes in Unknown on line 0
- When POST-limit was exceeded
- PHP Fatal error: Maximum execution time of 2 seconds exceeded in Unknown on line 0
- When input processing took too long
Warning!
Apache paired with PHP was especially difficult in situations where an HTTP/500 occurred for any reason. The temporary file would NOT be cleaned up immediately after the PHP script died; the cleanup happens when the Apache worker process is recycled. Sometimes my temp drive ran out of disk space and I had to manually trigger an Apache service restart to free up the space. But if you're in the server-exploiting business and manage to find a server that allows large file uploads, it is possible to exhaust its disk space simply by uploading very large files repeatedly: when an upload fails, the space is not immediately freed.
Test 2: PHP 5.4.26 / Nginx 1.4.6
To confirm that this is not an Apache thing or limited to the latest version of PHP, I did a second run with a different setup: my trustworthy Nginx equipped with PHP-FPM running on a virtualized CentOS. This time I didn't use standard components, only packages compiled and tailored for my own web server. PHP reports the Server API as FPM/FastCGI.
My /etc/php-fpm.d/www.conf had:
php_admin_value[upload_max_filesize] = "1024M"
php_admin_value[post_max_size] = "1024M"
php_admin_value[max_input_time] = "2"
php_admin_value[max_execution_time] = 2
php_admin_value[memory_limit] = 1M
PHP's own ini_set() function was unable to set any of the values, including those it should have been allowed to change. I didn't investigate the reason further and chose to declare all of the required settings in the worker definition; most likely the cause is that php_admin_value explicitly forbids overriding a value at runtime.
To get large POSTs into Nginx, my /etc/nginx/nginx.conf had:
location ~ \.php$ {
client_max_body_size 1024M;
}
Timing results would be:
real 0m16.170s
user 0m1.060s
sys 0m2.854s
That is 16.1 seconds of wall-clock time to upload an 800 MiB file. The time includes the transfer over my LAN and the processing done on the other side. No failures were triggered by the 2 second time limits or the memory limit.
Errors would include:
- 413 Request Entity Too Large
- On the browser end
- *22 client intended to send too large body: 838861118 bytes
- On the Nginx error log
Both appear when the maximum POST size is exceeded.
Conclusions
As stated around the Net, max_input_time and max_execution_time have nothing to do with the network transfer; both limits affect only the server's processing after the bytes have been transferred.
Trivial mod_rewrite: Redirect to another file in the same directory
Wednesday, March 19. 2014
I found a funny quote at Htaccess Rewrites - Rewrite Tricks and Tips, it says:
"Despite the tons of examples and docs, mod_rewrite is voodoo. Damned cool voodoo, but still voodoo."
-- Brian Moore
bem@news.cmc.net
The quote originally comes from http://httpd.apache.org/docs/2.0/rewrite/, now-obsolete documentation for an old Apache version.
I'll have to second Brian's opinion. I've touched the subject earlier at Advanced mod_rewrite: FastCGI Ruby on Rails /w HTTPS.
My YUM repo definition RPM had a bug in it (see: CentOS 6 PHP 5.4 and 5.5 yum repository) and I had to release a new version of it. A couple of links to the old file already exist. Why didn't I think of the situation where an update is released? Darn! So, let's keep the URL alive even when a new version of the file with a different name is released. That way everybody stays happy.
Attempt 1: Failure
An over-enthusiastic "hey, that should be simple!" type of naive solution. Create a .htaccess file in the appropriate directory with the content:
RedirectPermanent oldname.rpm newname.rpm
Well ... no. The result is an HTTP/500, and the error log contains:
/home/the/entire/path/here/.htaccess: Redirect to non-URL
Ok. It didn't work.
Attempt 2: Failure
Let's ramp this up. Forget the simple tools, hit it with mod_rewrite! Make .htaccess contain:
RewriteEngine on
RewriteRule ^oldname\.rpm$ newname.rpm [R=301]
Well ... no. The result is an HTTP/404, because the redirect goes really wrong: the browser is sent to http://my.server.name/home/the/entire/path/here/newname.rpm, which is pretty far from correct. There is a funny mix of URL and the actual filesystem path.
The reason can be found from the Apache docs at RewriteRule PT-flag:
"The target (or substitution string) in a RewriteRule is assumed to be a file path, by default. The use of the [PT] flag causes it to be treated as a URI instead."
and
"Note that the PT flag is implied in per-directory contexts such as <Directory> sections or in .htaccess files."
That phrase can be translated as:
- Internally RewriteRule works with filesystem paths
- When using RewriteRule from a .htaccess-file it does not use filesystem paths, but URLs
- A .htaccess-file really messes things up
Something more elegant is obviously needed.
Attempt 3: Failure
I studied the Apache docs and found a perfect solution! What if there was a way to discard the filesystem path entirely? Nice! Let's go that way; make .htaccess contain:
RewriteEngine on
RewriteRule ^oldname\.rpm$ newname.rpm [R=301,DPI]
Well ... no. I have the DiscardPathInfo flag there, but it changes absolutely nothing; the behavior is the same with or without the flag. The docs clearly say that "The DPI flag causes the PATH_INFO portion of the rewritten URI to be discarded". Apparently the flag is meant for a completely different thing (which I'm having a hard time comprehending), but the point is that I cannot use it to fix my redirect.
Attempt 4: Success!
After browsing the Apache-docs even more I struck gold. The docs for RewriteBase-directive say:
"This directive is required when you use a relative path in a substitution in per-directory (htaccess) context"
and
"This misconfiguration would normally cause the server to look for an "opt" directory under the document root."
That's exactly what I'm doing here. I have a relative path. I'm using a substitution in a .htaccess-file. It even mis-behaves precisely like in the example from the docs.
The solution is to make .htaccess contain:
RewriteEngine on
RewriteBase /path/here/
RewriteRule ^oldname\.rpm$ newname.rpm [R=301]
Now it works exactly as I want it to do! Nice!
When a request is made for the old filename, Apache does an external redirect and notifies the browser about the new location. wget fails to save the file under the new name (by default it uses the old name), but for example Firefox handles it correctly.
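For what it's worth, wget can be told to honour the file name of the redirect target; a sketch using the placeholder names from above:
# Save the file under the name of the final, redirected URL instead of the requested one
wget --trust-server-names http://my.server.name/path/here/oldname.rpm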
Conclusion
Darn, that voodoo is hard.
mod_rewrite's complexity simply laughs at any system administrator. I consider myself one of the experienced ones, but still... I find myself struggling with the subject.
Cisco ASA protected SSH-connection hangs - [Fixed]
Thursday, March 13. 2014
A couple of my users were complaining that their SSH connections die when idling for a while. The client does not notice that the server is gone; it cannot regain communications and dies only after a transmission is attempted, fails and times out.
My initial reaction was that a firewall disconnects any "unused" TCP connections. Unused may or may not be true, but the firewall thinks so, and since it can make the decision, it disconnects the socket. There is one catch: if the TCP socket were truly disconnected, both the server and the client should notice that and properly end the SSH session. In this case they don't. For readers not familiar with the details of TCP/IP, see the state transition diagram and think of a half-closed connection as being ESTABLISHED but unable to move into FIN_WAIT_1 because the firewall is blocking all communications.
Googling got me to a discussion thread at Cisco's support forums titled SSH connections through asa hanging. There Mr. Julio Carvaja asks the original poster: "Can you check the Timeout configuration on your firewall and also the MPF setup. What's the Idle time you have configured for a TCP session?" So I did the same: I went to the box and on the IOS CLI ran the classic show running-config, which contained the timeout values:
timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02
From that I deduce that any TCP connection is dropped after one hour of idling and moved into the half-closed state after 10 minutes of idle. The 10 minutes is in the time range of my users' complaints; one hour is not. So essentially the Cisco ASA renders the TCP connection unusable and unable to continue transmitting data.
In the discussion thread there is a suggestion to either prolong the timeout or enable SSH keepalives. I found a way of defining a policy for SSH in the ASA: there is an article titled ASA 8.3 and Later: Set SSH/Telnet/HTTP Connection Timeout using MPF Configuration Example, which describes the procedure in detail.
However, I chose not to do that, but to employ keepalive packets in my OpenSSH daemon instead. I studied my default configuration at /etc/ssh/sshd_config and deduced that keepalives were not in use. In the man page sshd_config(5) there are three essential configuration directives:
- TCPKeepAlive: The master switch to enable/disable the mechanism.
- This is on by default, but this alone does not dictate whether keepalives will be used or not
- ClientAliveInterval: The interval (in seconds) at which a keepalive packet is transmitted
- By default this is 0 seconds, meaning that no packets will be sent.
- ClientAliveCountMax: The number of packets a client may fail to respond to before the connection is declared dead
- By default this is 3. Still, with the default interval no packets are ever sent, so a client is never declared M.I.A. based on this criterion.
So to fix the failing SSH session problem, the only thing I changed was the client alive interval. Since the Cisco ASA messes up the connection after 10 minutes of idling (600 seconds), I chose half of that, 300 seconds. A sketch of the change is below.
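This is only a minimal sketch; the heredoc appends the two directives to sshd_config (CountMax 3 is the OpenSSH default and is shown only for clarity), and the restart command assumes a systemd-based distribution:
# Send a server-side keepalive every 300 s; drop the session after 3 unanswered probes
cat >> /etc/ssh/sshd_config <<'EOF'
ClientAliveInterval 300
ClientAliveCountMax 3
EOF
systemctl restart sshd.service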
After restarting sshd, opening a connection and idling for 5 minutes while snooping the transmission with Wireshark, I found that my SSH server and client exchanged data every 300 seconds. The best thing about the fix is that it works! It solves the problem, and the SSH connection stays functional after a long period of idling.
Using own SSL certificate in Cisco ASA
Tuesday, March 11. 2014
Yesterday I was mighty pissed about Oracle's Java breaking my stuff. Then it occurred to me: I shouldn't be using self-signed certificates in the first place! See my post about Certificate Authority setup: Doing it right with OpenSSL, where I wrote "My personal recommendation is to never use self-signed certificates for anything". And there I was! Darn.
I figured that there must be a way to stop the stupidity and install my own certificate on the network appliance. Then I bumped into ASA 8.x: Renew and Install the SSL Certificate with ASDM, a PDF document from Cisco instructing how to achieve that. Nice! Exactly what I needed.
This is how to do it. Log into ASDM and go to Configuration -> Device Management -> Certificate Management -> Identity Certificates. It looks like this:
There you can find Add-button:
You should add a new identity certificate. I used the Default-RSA-Key, but Cisco's own documentation says to generate a new key pair. If the SSH keys ever need to be regenerated, the SSL certificate won't work anymore; in my case I can simply recreate the certificate too, so it is not an issue for me. After you click Add Certificate:
You will get a dialog to save the Certificate Signing Request (CSR) to a local drive, to be sent to a Certificate Authority (CA) for processing.
After your CSR has gone through and you have your certificate in PEM format, go back to ASDM and select Install this time. You will get a dialog:
Upload or copy/paste the PEM certificate there and click Install Certificate. After that you'll have an identity:
Now the next thing to do is to start using the newly created identity. In Configuration -> Device Management -> Advanced -> SSL Settings there is an option to choose an identity to use when ASDM is being accessed via HTTPS or ASDM-IDM.
To get better results from the Qualys SSL Labs server test I made the following changes:
- The SSL version is set to TLS V1 Only, that is TLS 1.0 only; neither 1.1 nor 1.2 is available.
- For encryption I'm only using:
- 112-bit 3DES-SHA1
- 128-bit RC4-SHA1
- 128-bit RC4-MD5
- The 128-bit and 256-bit AES ciphers failed on my box for some reason. If they work for you, please use them! The three chosen algorithms provide reasonable security, but the AES ones are better.
After an Apply, the new certificate is in use. You can verify that with a web browser via the HTTPS interface, and then go to the Control Panel's Java security settings and remove the self-signed certificate from the secure site certificates list. The ASDM login will work again.
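For a quick command-line check that the appliance now presents the CA-signed certificate, openssl works here too; asa.example.net is a placeholder for your ASA's management address:
# Fetch and display the certificate chain served on the HTTPS management interface
openssl s_client -connect asa.example.net:443 -showcerts </dev/null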