CentOS 6 PHP 5.4 and 5.5 yum repository
Wednesday, February 19. 2014
I maintain RPM packages for PHP 5.4 and 5.5; see my earlier post about them.
As any sysadmin would expect, distributing plain RPMs made running the updates too much trouble. Since the native CentOS 6 way is a yum repository, I created one.
Kick things off by installing the repo definition:
yum install \
http://opensource.hqcodeshop.com/CentOS/6%20x86_64/Parallels%20Plesk%20Panel/plesk-php-repo-1.0-1.el6.noarch.rpm
After that, a simple yum install command:
yum install plesk-php55
... will yield something like this:
/opt/php5.5/usr/bin/php -v
PHP 5.5.9 (cli) (built: Feb 9 2014 22:04:05)
Copyright (c) 1997-2014 The PHP Group
Zend Engine v2.5.0, Copyright (c) 1998-2014 Zend Technologies
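To sanity-check the result, a couple of commands help. This is a sketch; the grep pattern is an assumption, so check the installed .repo file for the real repository id:
# confirm the repository is active
yum repolist enabled | grep -i plesk
# list the modules compiled into the new PHP
/opt/php5.5/usr/bin/php -m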
I'll be compiling new versions to keep my own box in shape.
Parallels Plesk Panel: Bug - DNS zone twice in DB
Sunday, February 16. 2014
Earlier I had trouble with a disabled DNS zone not staying disabled. I'm running version 11.5.30 Update #32.
The problem bugged me and I kept investigating. To debug, I enabled the DNS zone and actually transferred it with AXFR to an external server. There I realized that the SOA record was a bit strange. Further trials revealed that among the NS records there was always an extra one. With that in mind, I went directly to the database to see what I had stored there for the zone.
To access the MySQL database named psa, I had to get the password for it; see KB article ID 170, "[How to] How can I access MySQL databases in Plesk?", for details. The database schema is not documented, but it has become familiar to me over all the years I've been sysadmining Plesk Panels. To get the ID of the DNS zone I did:
SELECT *
FROM dns_zone
WHERE name = '-the-zone-';
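For the record, the same query can be run straight from the shell with the standard Plesk admin-credential trick from the KB article above. A sketch, assuming a stock Linux install where the password is readable from /etc/psa/.psa.shadow:
mysql -uadmin -p`cat /etc/psa/.psa.shadow` psa -e "SELECT * FROM dns_zone WHERE name = '-the-zone-';"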
And what do you know! There were two IDs for the given name. That is a big no-no. It's like having two heads: a freak of nature. It cannot happen. It is so illegal that there aren't even laws about it. To fix it, I renamed the one with the smaller ID:
UPDATE dns_zone
SET name = '-the-zone-_obsoleted', displayName = '-the-zone-_obsoleted'
WHERE id = -the-smaller-ID-;
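While in there, it is worth checking whether any other zone has the same problem. A small sketch of a query listing every zone name stored more than once:
SELECT name, COUNT(*) AS copies
FROM dns_zone
GROUP BY name
HAVING copies > 1;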
After that, a manual refresh of the BIND records from the DB:
/usr/local/psa/admin/bin/dnsmng --update -the-zone-
And confirmation from the raw BIND-file:
less /var/named/chroot/var/-the-zone-
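To be sure the extra NS record was really gone, a quick grep on the same file does it:
grep -w NS /var/named/chroot/var/-the-zone-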
Now everything was in order. I'm hoping this will keep the zone disabled. To me it is now obvious why the zone would not stay disabled: the database had become badly skewed.
Triggering Adobe Flash Player update manually
Wednesday, February 12. 2014
No matter how much I think about it, it simply does not make any sense to me. Why on earth isn't there a button to manually update Adobe Flash Player? What good does it do to download the whole installer every single time you want it updated? All the parts are already there on your computer, but there is no reasonable way of telling it to:
Go! Update! Now!
With the help of an excellent tool, Windows Sysinternals Process Explorer, I snooped out the location and parameters of the update application.
On a 64-bit Windows
It is highly likely that your browser is 32-bit. You need to be some sort of hacker (like me) not to have a 32-bit browser. So the assumption is that this applies to you.
All the good parts are in C:\Windows\SysWOW64\Macromed\Flash\
On a 32-bit Windows
If your PC is old, then you'll have this. (Or, in the alternate case, you are a hacker running a 64-bit browser.)
All the good stuff is in C:\Windows\System32\Macromed\Flash\
Triggering the update
NOTE:
The version number of the application changes on each update. What I demonstrate here was valid at the time of writing, but I assure you, the exact name of the application will be something else next month.
The files live in the directory given above for your flavor of Windows.
For all other browsers than Internet Explorer:
FlashUtil32_12_0_0_43_Plugin.exe -update plugin
For Internet Explorer:
FlashUtil32_12_0_0_44_ActiveX.exe -update activex
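Since the exact file name changes with every update, a wildcard saves you from looking it up each time. A hedged cmd.exe one-liner, using the 64-bit Windows path from above (double the percent sign if you put this into a .bat file):
for %f in (C:\Windows\SysWOW64\Macromed\Flash\FlashUtil32_*_Plugin.exe) do "%f" -update plugin
The same trick works for the ActiveX variant by matching *_ActiveX.exe and passing -update activex.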
Running that command as a regular user triggers the same process that would be triggered during a Windows login. Since I log in rarely, the update almost never triggers for me. I typically just put the computer to sleep, wake it up, and unlock the screen, which does not trigger the version check either.
This isn't the only stupid thing Adobe does. They don't take your security seriously. Shame on them!
Advanced mod_rewrite: FastCGI Ruby on Rails /w HTTPS
Friday, February 7. 2014
mod_rewrite comes in handy on a number of occasions, but when the rewrite deviates from the most trivial cases, understanding how exactly the rules are processed is very, very difficult. The documentation is adequate, but the information is spread across a number of configuration directives, and it is a challenging task to put it all together.
RewriteRule order of processing
Apache has the following levels of configuration, from top to bottom:
- Server level
- Virtual host level
- Directory / Location level
- Filesystem level (.htaccess)
Typically configuration directives take effect from bottom to top: a lower-level directive overrides any upper-level directive. This is also the case with mod_rewrite. A RewriteRule in a .htaccess file is processed first, and any rules on the upper layers follow in reverse order of the levels. See the documentation of the RewriteOptions directive; it clearly says: "Rules inherited from the parent scope are applied after rules specified in the child scope". Rules on the same level are executed from top to bottom within a file. You can think of all the Include directives as combining everything into one large configuration file, so the order can be determined quite easily.
However, this order of processing rather surprisingly works against the effective order of execution inside Apache. The technical details documentation of mod_rewrite says:
Unbelievably mod_rewrite provides URL manipulations in per-directory context, i.e., within .htaccess files, although these are reached a very long time after the URLs have been translated to filenames. It has to be this way because .htaccess files live in the filesystem, so processing has already reached this stage. In other words: According to the API phases at this time it is too late for any URL manipulations.
This results in a looping approach for any .htaccess rewrite rules. The documentation of the RewriteRule directive's PT|passthrough flag says:
The use of the [PT] flag causes the result of the RewriteRule to be passed back through URL mapping, so that location-based mappings, such as Alias, Redirect, or ScriptAlias, for example, might have a chance to take effect.
and
The PT flag implies the L flag: rewriting will be stopped in order to pass the request to the next phase of processing.
Note that the PT flag is implied in per-directory contexts such as <Directory> sections or in .htaccess files.
What that means:
- The L flag does not stop anything; in particular, it does not stop RewriteRule processing in a .htaccess file.
- All RewriteRules, yes, all of them, are matched over and over again in a .htaccess file. That results in an endless loop if they keep matching. A RewriteCond should be used to stop that.
- A RewriteRule with the R flag pointing to the same directory just makes another loop. The R flag can be used to exit the loop by redirecting to some other directory.
- Outside the .htaccess context, the implied PT/L behaviour and the looping do not happen.
So, the moral of all this is that doing any rewriting at the .htaccess level performs really badly and will cause unexpected results in the form of looping.
Case study: Ruby on Rails application
The requirements are the following:
- The application is using Ruby on Rails
- The interface to Ruby is mod_fcgid, implementing FastCGI
- All non-HTTPS requests should be redirected to HTTPS for security reasons
- There is one exception to that rule: a legacy entry point for status updates must not be forced to HTTPS
- The legacy entry point uses HTTP Basic authentication, which does not work very well with FastCGI
That does not sound like much, but in practice it is.
Implementation 1 - failure
There are plenty of examples and other information about getting a Ruby on Rails application running via FastCGI. Something like this in .htaccess will do the trick:
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^(.*)$ /dispatch.fcgi/$1 [QSA]
The dispatch.fcgi comes with the RoR application; mod_rewrite is only needed to make the Front Controller pattern, required by the application framework, function properly.
To get FastCGI (via mod_fcgid) working, a simple AddHandler fastcgi-script .fcgi will do the trick.
With these, the application does work. Then there is the HTTPS part. The hosting setup allows editing parts of the virtual host template, so I added my own section of configuration; the rest of the file cannot be changed:
<VirtualHost _default_:80>
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteCond %{HTTP_HOST} ^www.my.service$ [NC]
RewriteRule ^(.*)$ http://my.service$1 [L,R=301]
</IfModule>

RewriteCond %{HTTPS} !=on
RewriteCond %{REQUEST_URI} !^/status/update
RewriteRule ^(.*)$ https://%{HTTP_HOST}$1 [R=301,QSA,L]
</VirtualHost>
The .htaccess file was taken from the RoR application:
# Rule 1:
# Empty request
RewriteRule ^$ index.html [QSA]

# Rule 2:
# Append .html to the end.
RewriteRule ^([^.]+)$ $1.html [QSA]

# Rule 3:
# All non-files are processed by Ruby-on-Rails
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^(.*)$ /dispatch.fcgi/$1 [QSA]
It failed, mainly because the HTTPS rewriting was done too late. There was a lot of repetition in the rewritten URLs, and the HTTPS redirect was the last thing done after /dispatch.fcgi/ had been prepended, so the result looked rather funny and was not even close to what I was hoping for.
Implementation 2 - success
After the failure I started really studying how the rewrite mechanism works.
The first thing I did was drop the HTTPS redirect out of the virtual host configuration down to the not-so-well-performing .htaccess level. The next thing was to get rid of the looping dispatch.fcgi/dispatch.fcgi/dispatch.fcgi prefix. During testing I also noticed that I hadn't accounted for the Basic authentication in any way.
The resulting .htaccess file is here:
# Rule 0:
# All requests should be HTTPS-encrypted,
# except: message reading and internal RoR-processing
RewriteCond %{HTTPS} !=on
RewriteCond %{REQUEST_URI} !^/status/update
RewriteCond %{REQUEST_URI} !^/dispatch.fcgi
RewriteRule ^(.*)$ https://%{HTTP_HOST}/$1 [R=301,QSA,skip=4]

# Rule 0.1:
# Make sure that any HTTP Basic authorization is transferred to FastCGI env
RewriteRule .* - [E=HTTP_AUTHORIZATION:%{HTTP:Authorization}]

# Rule 1:
# Empty request
RewriteRule ^$ index.html [QSA,skip=1]

# Rule 2:
# Append .html to the end, but don't allow this thing to loop multiple times.
RewriteCond %{REQUEST_URI} !\.html$
RewriteRule ^([^.]+)$ $1.html [QSA]

# Rule 3:
# All non-files are processed by Ruby-on-Rails
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_URI} !/dispatch.fcgi
RewriteRule ^(.*)$ /dispatch.fcgi/$1 [QSA]
Now it fulfills all my requirements and it works!
Testing the thing
To develop the thing and make sure all the RewriteRules worked as expected and didn't interfere with each other in a bad way, I took a test-driven approach. I created a set of "unit" tests in the form of manually executed wget and curl requests. There was no automation; I simply eyeballed the results. My tests were:
- Test the index-page, must redirect to HTTPS:
- wget http://my.service/
- Test the index-page, no redirects, must display the page:
- wget https://my.service/
- Test the legacy entry point, must not redirect to HTTPS:
- curl --user test:password http://my.service/status/update
- Test an inner page, must redirect to HTTPS of the same page:
- wget http://my.service/-an-inner-page-
- Test an inner page, no redirects, must return the page:
- wget https://my.service/-an-inner-page-
Those tests cover all the functionality defined in the above .htaccess-file.
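Had I wanted to automate the eyeballing, a small shell sketch along these lines would have done it. Everything in it is recycled from the tests above; the host name, the test credentials and the -an-inner-page- placeholder are all assumptions:
#!/bin/sh
# Sketch: request each URL and print the HTTP status code next to the expected one.
check() {
    expected=$1; shift
    got=$(curl -s -o /dev/null -w '%{http_code}' "$@")
    echo "$got (expected $expected): $*"
}
check 301 http://my.service/
check 200 https://my.service/
check 200 --user test:password http://my.service/status/update
check 301 http://my.service/-an-inner-page-
check 200 https://my.service/-an-inner-page-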
Logging rewrites
The process of getting all this together would have been impossible without rewrite logging. The caveat is that logging must be defined at the virtual host level. This is what I did:
RewriteLogLevel 3
RewriteLog logs/rewrite.log
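The log path is relative to ServerRoot, so the easiest way to inspect it is to tail it live while firing the test requests (the absolute path here is an assumption; adjust for your ServerRoot):
tail -f /etc/httpd/logs/rewrite.log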
Reading a level-3 logfile is very tedious. The rows are extremely long and all the good parts are at the end. Here is a single line of log split into something humans can read:
1.2.3.219 - -
[06/Feb/2014:14:56:28 +0200]
[my.service/sid#7f433cdb9210]
[rid#7f433d588c28/initial] (3)
[perdir /var/www/my.service/public/]
add path info postfix:
/var/www/my.service/public/status/update.html ->
/var/www/my.service/public/status/update.html/update
It simply says that a .htaccess file in the per-directory context /var/www/my.service/public/ was being processed and contained a rule which was applied. The log file clearly shows the order in which the rules are executed. However, most of the regular expressions are '^(.*)$', so it is impossible to distinguish the rules from each other simply by reading the log file.
Final words
This is an advanced topic. Most sysadmins and developers never have to face complexity of this magnitude. If you do, I'm hoping this helps. It took me quite a while to put all those rules together.
SplashID wasted my entire password database
Wednesday, February 5. 2014
I've been using SplashID as my password solution; see my earlier post about it. Today I tried to log into the application to retrieve a password, but it turned out my user account had been changed into null. Well... that's not reassuring.
After the initial shock I filed a support ticket with them, but I wasn't expecting any miracles; in my bookkeeping the database was already lost. The next thing I did was check my trustworthy(?) Acronis True Image backups. I have them running on a daily rotation, and this turned out to be the first time I actually needed them in a real situation.
They hid the "Restore files and directories" option well. My laptop is configured to back up the entire disk, so the default recovery option is to restore the entire disk. In this case that seemed a bit of an overkill. But behind the gear icon there actually is such an option. After discovering it (it took me a while reading the help), the recovery was user-friendly and intuitive enough. I chose to restore yesterday's backup to the original location. The recovery went fine, but the SplashID database was already flawed at that point. So I restored the two-day-old backup instead, and that one turned out to be intact.
Luckily I don't recall making any additions or changes to my passwords during the last two days. It looks like I walked away from this incident without harm.
Update 7th Feb 2014:
I got a reply to my support ticket. What SplashData is saying is that the password database was lost due to a bug (they didn't actually use that word, but they cannot fool me). The bug has been fixed in a later version of SplashID. Luckily I had a backup to restore from. IMHO the software should notify better about new versions.