Zend Framework 2: Abusing ZeDb to manage multiple connections
Tuesday, February 18. 2014
My favorite way of approaching DAL in ZF2 is ZeDb. It uses a lot of existing functionality, but adds a nice layer of its own stuff, and that stuff is really good. However, ZeDb has one single flaw: it does not support multiple database connections. The typical thinking is: who on earth would want that? I do. I want other weird things too, but in an everyday real application you simply need more than one connection.
I came up with a simple solution by abusing ZF2's ServiceManager. Typically you gain access to a model via something like this:
$manager = $this->getServiceLocator()->get('ZeDbManager');
$model = $manager->get('Application\Entity\Prices');
To get that working you need to do the config in module.config.php: declare ZeDb's DatabaseManagerFactory, override Zend\Db\Adapter\Adapter with AdapterFactory and finally declare your DB-configuration and list the models. It's all in the ZeDb docs.
My idea exploits all three of those. I'll instantiate multiple DatabaseManagers via the ServiceManager. Also, every model will select an existing DB-adapter for itself. To get that working there will be a separate configuration for each connection.
Example module configuration:
return array(
    'service_manager' => array(
        'factories' => array(
            'ZeDbManager' => 'ZeDb\Service\DatabaseManagerFactory',
            'Zend\Db\Adapter\Adapter' => 'ZeDb\Service\AdapterFactory',
        ),
    ),
    'zedb_db_in' => array(
        'adapter' => array(
            'driver' => 'pdo_pgsql',
        ),
        'models' => array(
            'Application\Model\Int\Products' => array(
                'tableName' => 'products',
                'entityClass' => 'Application\Entity\Int\Products',
            ),
        ),
    ),
    'zedb_db_out' => array(
        'adapter' => array(
            'driver' => 'pdo_mysql',
        ),
        'models' => array(
            'Application\Model\Internal\Customers' => array(
                'tableName' => 'customers',
                'entityClass' => 'Application\Entity\Internal\Customers',
            ),
        ),
    ),
);
Here I declare the obligatory parts, and two separate configurations with names zedb_db_in and zedb_db_out.
I'll have to prime both the DatabaseManagers and Adapters in Module's onBootstrap(). After the objects exist, they are made accessible with:
public function getServiceConfig()
{
    return array(
        'factories' => array(
            'zedb_db_in' => function (ServiceLocatorInterface $sm) {
                return Module::getInDbManager();
            },
            'ZeDbAdapter_in' => function (ServiceLocatorInterface $sm) {
                return Module::getInDbAdapter();
            },
        ),
    );
}
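Neither ZF2 nor ZeDb will do the priming for you, so Module.php has to carry a couple of static properties and getters for them. This is only a sketch of one way to wire it: the adapter construction is stock Zend\Db, but the DatabaseManager construction depends on your ZeDb version, so mirror whatever ZeDb\Service\DatabaseManagerFactory does, just feeding it the 'zedb_db_in' configuration instead of the default one.

namespace Application;

use Zend\Db\Adapter\Adapter;
use Zend\Mvc\MvcEvent;

class Module
{
    protected static $inDbAdapter;
    protected static $inDbManager;

    public function onBootstrap(MvcEvent $evt)
    {
        $sm     = $evt->getApplication()->getServiceManager();
        $config = $sm->get('Config');

        // One adapter per connection, built from the per-connection config.
        // (In real life the adapter array needs the dsn/username/password too.)
        self::$inDbAdapter = new Adapter($config['zedb_db_in']['adapter']);

        // Prime the DatabaseManager for the 'in' connection here, the same way
        // ZeDb\Service\DatabaseManagerFactory builds the default one, e.g.
        //   self::$inDbManager = new \ZeDb\DatabaseManager(...);
        // Check the factory source for the exact constructor arguments.
    }

    // getServiceConfig() from above goes here too.

    public static function getInDbAdapter()
    {
        return self::$inDbAdapter;
    }

    public static function getInDbManager()
    {
        return self::$inDbManager;
    }
}

Repeat the same pattern for the 'out' connection.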
The model has to do some heavy lifting to get rid of the default Adapter:
class Products extends ZeDb\Model
{
    public function __construct(Adapter $adapter, $options = null)
    {
        // Ignore the default adapter that gets passed in and
        // fetch the connection-specific one instead.
        $adapter = $this->getDatabaseManager()
            ->getServiceLocator()
            ->get('ZeDbAdapter_in');
        parent::__construct($adapter, $options);
    }

    public function getDatabaseManager()
    {
        return Module::getInDbManager();
    }
}
Now the setup is ready. In an action you can simply do:
$inDbManager = $this->getServiceLocator()->get('zedb_db_in');
$productsModel = $inDbManager->get('Application\Entity\Int\Products');
$products = $productsModel->SearchProductByCode('ABC123', null);
There is no limit on how many DB-connections you can have. Note that the ZeDb-module bootstrap will still initialize the default manager and adapter, but the idea is not to use them at all. Anyway, I'll stick with this until something better comes along.
It's alive! Printer ran out of paper
Monday, February 17. 2014
This one really made me laugh. The entire situation was like something out of T2: "Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m."
My new printer ran out of paper and it sent me an e-mail about it.
Subject: [CLP-410 Series]Paper Empty
System Error: M1-5112 Tray 1 is empty. User intervention required to add paper to Tray 1. Printing can continue if the required media is available from other trays, all other machine services (if installed) are unaffected. .
The e-mail had all the information about the printer: its IP-address, serial number and printed page counts. If I had 17 of those, it would be possible to tell which one is in trouble.
Looks like they just might be taking over after all. I tried looking up my printer's Instagram, Facebook and Twitter accounts, but could not find them.
Parallels Plesk Panel: Bug - DNS zone twice in DB
Sunday, February 16. 2014
Earlier I had trouble with a disabled DNS-zone not staying disabled. I'm running version 11.5.30 Update #32.
The problem bugged me and I kept investigating. To debug, I enabled the DNS-zone and actually transferred it with AXFR to an external server. There I realized that the SOA-record was a bit strange. Further trials revealed that among the NS-records there was always an extra one. With that in mind, I went directly to the database to see what I had stored there for the zone.
To access the MySQL database named psa, I have to get the password for that. See KB article ID 170 [How to] How can I access MySQL databases in Plesk? for details about that. The database schema is not documented, but it has become familiar to me during all the years I've been sysadmining Plesk Panels. To get the ID for the DNS-zone I did:
SELECT *
FROM dns_zone
WHERE name = '-the-zone-';
And what do you know! There were two IDs for the given name. That is a big no-no. It's like you having two heads. A freak of nature. It cannot happen. It is so illegal that there aren't even laws about it. To fix it I renamed the one with the smaller ID:
UPDATE dns_zone
SET name = '-the-zone-_obsoleted', displayName = '-the-zone-_obsoleted'
WHERE id = -the-smaller-ID-;
After that a manual refresh of the BIND records from the DB:
/usr/local/psa/admin/bin/dnsmng --update -the-zone-
And confirmation from the raw BIND-file:
less /var/named/chroot/var/-the-zone-
Now everything was in order. I'm hoping that will help and keep the zone disabled. To me it is now obvious why that happened: the database had become badly skewed.
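If you suspect the same has happened to other zones, a query along these lines against the same psa database (a sketch, using only the dns_zone table shown above) should list every zone name that is stored more than once:

SELECT name, COUNT(*) AS copies
FROM dns_zone
GROUP BY name
HAVING COUNT(*) > 1;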
Updating Metasploit with Packet Storm exploits
Saturday, February 15. 2014
For any computer security geek (like I am), Metasploit is the de facto tool to use. What especially makes Metasploit so good is that there are a number of places where you can get brand new exploits to run with it. It is obvious that Packet Storm is the #1 place for your new dose of exploits. They release exploits in ready-made packages for Metasploit on a monthly basis. To help people catch up in their installations, there are also yearly packages combining the year's monthly updates into a single file.
Due to the popularity of Metasploit and the quality of Packet Storm's exploits, there are a number of instructions for doing the update. I googled "update metasploit with packet storm exploits" and got a ton of garbage. Really?! A couple dozen YouTube-videos showing how to do it. Videos?! Really!! Apparently all of those videos are created by script-kiddies for script-kiddies. It is soooo easy to copy/paste information from a YouTube-video, or ... then again, NOT. No matter which search phrases I used, I simply could not find a sensible way of doing the updates. When you don't find the required information from the web, it must be one of these two: 1) you're wrong or 2) everybody else is wrong. My typical choice is 2), ask anybody who knows me.
The single page with a usable idea was Mr. ljy396's blog post "How to update exploits from packetstorm website". I was looking at his scripts for collecting and installing the exploits and thought that there should be a single easy-to-use script for doing all this.
When I started writing my own script, the thought came to me again: "This is so wrong! There must be a real way of doing the updates, I just didn't find it. It is obvious that I'm writing an inferior duplicate for this." Anyway, that logic has never stopped me so far. Here are the results.
The script operates in three modes:
- Do monthly updates:
  updateMetasploitFromPacketStormExploits.pl --year=2014
- Download and install a specific package:
  updateMetasploitFromPacketStormExploits.pl \
  --package_url=http://packetstorm.interhost.co.il/1401-exploits/1401-exploits.tgz
- Install an existing package:
  updateMetasploitFromPacketStormExploits.pl --package_file=/tmp/1401-exploits.tgz
A Metasploit user has the "secret" $HOME/.msf4/ directory. My script adds an updates/ directory there for keeping track of which monthly updates it has already installed. As Metasploit requires, all of the installed exploits go to modules/. My script renames the .rb.txt-files in the packages properly during the installation. NOTE: I'm not touching the Metasploit install directory. The new updates really go to the per-user directory and are not available system-wide.
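For the record, the manual steps that the script automates go roughly like this (a sketch only; the package URL is the January 2014 one from the example above, and dropping everything flat under exploits/ is a simplification of the directory layout the script maintains):

# Download a monthly package and unpack it
wget http://packetstorm.interhost.co.il/1401-exploits/1401-exploits.tgz -O /tmp/1401-exploits.tgz
mkdir -p /tmp/1401-exploits
tar -xzf /tmp/1401-exploits.tgz -C /tmp/1401-exploits

# Copy the Metasploit modules into the per-user tree,
# renaming .rb.txt to .rb on the way
mkdir -p ~/.msf4/modules/exploits
find /tmp/1401-exploits -name '*.rb.txt' | while read f; do
    cp "$f" ~/.msf4/modules/exploits/"$(basename "${f%.txt}")"
done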
Any bugs, comments and suggestions are welcome. Please drop a comment.
AMI BIOS iKVM (Asus Z8PE-D12) failing to open console redirection
Friday, February 14. 2014
I was trying to see the remote console of one of my servers:
It said: "Could not create token for this session. Max session limit might have reached". This was yet another reminder that Asus iKVM isn't the sharpest tool in the box, it generally responds slowly and sometimes it does weird things. Thanks to Java, it wasn't easy to get the console application running, it said a number of things, including a complaint:
... but after sorting out a number of Java issues it spat out an error-message with XML-content of:
<jnlp spec="1.0+" codebase="http://192.168.1.11/Java">
<information>
<title>JViewer</title>
<vendor>American Megatrends, Inc.</vendor>
<description kind="one-line">JViewer Console Redirection Application</description>
<description kind="tooltip">JViewer Console Redirection Application</description>
<description kind="short"> JViewer enables a user to view the video display of managed
server via KVM. It also enables the user to redirect his local keyboard, mouse for
managing the server remotely. </description>
</information>...
<resources>
<j2se version="1.5+"/>
<jar href="release/JViewer.jar"/>
</resources>
<application-desc>
<argument>192.168.1.11</argument>
<argument>7578</argument>
<argument>
<html>
<head>
<title>Document Error: Page not found</title>
</head>
<body>
<h2>Access Error: Page not found</h2>
<p>Could not create token for this session. Max session limit might have reached
</p>
</body>
</html>
</argument>
</application-desc>
</jnlp>
I found somebody with the same issue in the Asus discussion forums article [Problem]iKVM Remote Control Console w/ Java exception error. There was a suggestion to unplug the power cord. In my case the server is a couple hundred kilometers away in a secure data center where I certainly do not have any access. Sure, I could write to the support and ask them to do that but ... I'm pretty sure there has to be a way to reset the BMC remotely.
Then I remembered something I did a while ago with ipmitool on a Linux-shell. Running:
# ipmitool bmc help
MC Commands:
reset <warm|cold>
guid
info
There is a reset-command in the list. Google led me to an article "Restarting an IBM BMC without restarting the server itself". I wasn't sure what would happen. My first guess was that it would do the same thing as typing "google" into a Google search - the entire world would implode into a singularity. So it is very important not to do that. But I really needed the remote console. I was willing to try:
# ipmitool bmc reset cold
Sent cold reset command to MC
Nothing serious happened immediately. Linux stayed responsive and the world didn't come to an end. After a minute or so, the ipmitool bmc info command yielded a response! The IPMI was back on-line. A logon to the web-console:
Yep. I had my console back.
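If you would rather not keep retyping the command after issuing the reset, a small loop does the polling for you (a sketch using only the ipmitool commands shown above):

until ipmitool bmc info > /dev/null 2>&1; do
    echo "BMC not back yet, waiting ..."
    sleep 10
done
echo "BMC is back on-line."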
I sure hope that all remote console manufacturers would stop using Java for their apps. It is such a bitch to get a Java application working after 1.7.0_51 was released. See "Java 7u51 will not accept JNLP with self-signed certificate?" for other people complaining about that. Anyway, all ended well and I didn't have to drive to the server to do some maintenance.
Zend Framework 2: Disable layout
Thursday, February 13. 2014
This topic pops up every once in a while. By default every rendered page implicitly renders a layout too. This is especially bad for the JSON, XML or binary responses your application may generate. So far the best explanation and some helpful insights are in Mr. Chris Schreiber's blog scribles.com, in the article Disabling Layout in Zend Framework 2.
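If only a single action needs this, the quick fix is to mark that action's ViewModel as terminal; a minimal sketch (the action name and the variable are made up):

public function statusAction()
{
    $view = new \Zend\View\Model\ViewModel(array(
        'status' => 'ok',
    ));
    // A terminal view model is rendered without the surrounding layout.
    $view->setTerminal(true);
    return $view;
}

For a whole machine-to-machine application that gets tedious fast, which is where the module-wide approach comes in.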
I was working on a web application, which was a machine-to-machine app and not meant for humans at all. So a layout is completely unnecessary. As Chris instructs, I went directly to the module-class and copy/pasted his code. It failed. He has some sort of typo in the code:
$sharedEventManager->attach(
'Zend\Mvc\Controller\AbstractController',
function(\Zend\Mvc\MvcEvent $event) {
When I looked at the SharedEventManagerInterface PHP-code, it says:
/**
* Attach a listener to an event
*
* @param string|array $id Identifier(s) for event emitting
* @param string $event
* @param callable $callback PHP Callback
* @param int $priority Priority at which listener should
* @return void
*/
public function attach($id, $event, $callback, $priority = 1);
There clearly are three mandatory parameters. Chris' code passes the identifier, the callback and the optional priority; the event name is missing. This is the fix:
$sharedEventManager->attach(
    'Zend\Mvc\Controller\AbstractController',
    MvcEvent::EVENT_DISPATCH,
    function(\Zend\Mvc\MvcEvent $event) {
Now it works! However, as my application was in its early stages, I was still missing the default controller with the class name IndexController. Adding the code into onBootstrap() didn't help: none of the callback's code was run during event dispatch. More debugging revealed that my code never triggered MvcEvent::EVENT_DISPATCH; it triggered MvcEvent::EVENT_DISPATCH_ERROR instead. The reason is obvious: I didn't have the class.
For clarity I'll copy/paste my onBootstrap() entirely here:
public function onBootstrap(MvcEvent $evt)
{
    $eventManager = $evt->getApplication()->getEventManager();
    $sharedEventManager = $eventManager->getSharedManager();

    // Make sure layout is not rendered for regular pages
    $sharedEventManager->attach('Zend\Mvc\Controller\AbstractController',
        MvcEvent::EVENT_DISPATCH,
        function (MvcEvent $event) {
            $dispatchResult = $event->getResult();
            if ($dispatchResult instanceof ViewModel) {
                $dispatchResult->setTerminal(true);
            }
        }, -99
    );

    // Make sure layout is not rendered for HTTP/404 pages
    $eventManager->attach(MvcEvent::EVENT_DISPATCH_ERROR,
        function (MvcEvent $event) {
            $dispatchResult = $event->getResult();
            if ($dispatchResult instanceof ViewModel) {
                $dispatchResult->setTerminal(true);
            }
        }, -99
    );
}
Now both events are handled properly, without attempting to find a layout. To test my code, I added a layout/layout.phtml to the project and commented out my code above; it did render the file contents. Then I removed the comments and deleted the layout/ directory completely. My code still ran on both occasions: when the action can be found and when it cannot. Actually I also have a CLI-interface to the app, but that won't render the layout anyway.
This is yet another example of the complexity of ZF2. Any trivial task turns out to be a huge pain in the butt. I don't want to start ranting about using PhpRenderer in a CLI-app, that's a completely different complaint.
Triggering Adobe Flash Player update manually
Wednesday, February 12. 2014
No matter how much I think about it, it simply does not make any sense to me. Why on earth isn't there a button to manually update Adobe Flash Player? What good does it do to download the installer every single time you want it updated? All the parts are already there in your computer, but there is no reasonable way of telling it to:
Go! Update! Now!
With the help of an excellent tool, Windows Sysinternals Process Explorer, I snooped out the location and parameters of the update application.
On a 64-bit Windows
It is highly likely that your browser is 32-bit. You need to be some sort of hacker (like me) not to have a 32-bit browser. So the assumption is that this applies to you.
All the good parts are in C:\Windows\SysWOW64\Macromed\Flash\
On a 32-bit Windows
If your PC is old, then you'll have this. (Or, the alternate case: you are a hacker running a 64-bit browser.)
All the good stuff is in C:\Windows\System32\Macromed\Flash\
Triggering the update
NOTE:
The version number of the application will change on each update. What I demonstrate here was valid at the time of writing this, but I assure you, the exact name of the application will be something else next month.
The location of the files is ...
For all other browsers than Internet Explorer:
FlashUtil32_12_0_0_43_Plugin.exe -update plugin
For Internet Explorer:
FlashUtil32_12_0_0_44_ActiveX.exe -update activex
Running that command as a regular user will trigger the same process that would be triggered during a Windows login. Since I log in rarely, the update almost never triggers for me. I simply put the computer to sleep, wake it up and unlock the screen, which again does not trigger the version check.
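Because the version number is baked into the file name, I cannot give a copy/paste-safe command. A one-liner like this (a sketch in cmd.exe syntax, using the 64-bit Windows path and the non-IE plugin from above) picks up whichever version happens to be installed:

cd /d C:\Windows\SysWOW64\Macromed\Flash
for %f in (FlashUtil*_Plugin.exe) do @"%f" -update plugin

If you put it into a .cmd file, double the percent signs (%%f).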
This isn't the only stupid thing Adobe does. They don't take your security seriously. Shame on them!
Supermicro SuperServer 5015A-EHF-D525
Tuesday, February 11. 2014
I needed a new rack-server to fit in a standard 19" telco-rack. Your average rack-mounted server is 60 to 100 cm long. A rack-server may be really thin, 1U (4,5 cm) or less, but really long. Then again, your average telco-rack is not deep enough for a long server: they may be 40 to 50 cm deep, and not all of them have brackets for mounting the back. My requirement of mounting a server into a telco-rack hugely limits the options to choose from. One final requirement I had was that the server must have IPMI 2.0, because that would guarantee KVM over IP.
But after a longish search I found one. The bad news was that it was a Supermicro, a SuperServer 5015A-EHF-D525 to be exact. Supermicro's track record of failing IPMI-security is well known, see an earlier blog post about it. Anyway, I found a vendor in the UK who would order one. They didn't have them in stock; such small rack-servers are quite inexpensive, but they don't sell like hotcakes. Who would be crazy enough to want one? I am, so I put an order in with Boston Ltd. It didn't take them many weeks and I got my server delivered.
Here are the pics from front and back:
In the front there are a couple of LEDs and the power and reset buttons. In the back there are a couple of USB 2.0 ports and two RJ-45 connectors for 100 Mbit/s Ethernet. It is possible to run IPMI on a shared Ethernet-port or a dedicated one. In my setup I didn't want the extra cable and went for the shared setup. The port announces itself with two different MAC-addresses and effectively gets two different DHCP-addresses: one for IPMI and one for the operating system.
This is what it looks inside:
Notice how the Intel SSD -drive is not attached properly yet at the time of taking the pic. The motherboard is really small. The only fan is in the PSU, no other ones are required as the Intel Atom CPU does not produce too much heat and the GPU isn't much of a rocket. The tiny box is almost empty, even with a 2.5" SSD in it.
In the BIOS I found that there is a lot of watchdogging to do; there are two watchdogs:
I don't know what the BIOS watchdog is good for as it keeps triggering if turned on. For the IPMI's watchdog there is a Linux-daemon.
Anyway, except for the small size and the 32-bit Atom CPU, there is nothing special in it. Ten years ago such a server would have been a state-of-the-art box, but by today's standards it's just a small low-power unit.
On the positive side, IPMI's web console has a setting for IP Access Control. I tested it and it seems to work. It closes all the ports for everybody else. Since the raw rules are not available for inspection, there may still be a couple of network blocks which are allowed. These days you cannot help being suspicious.
This is a special box. If you happen to need a silent low-power server, then go with a 5015A-EHF-D525.
Installing OpenSuse 13.1 into a MacBook
Monday, February 10. 2014
OpenSuse 13.1 was released in November 2013. During the Christmas holidays I started a project of upgrading my previous installation.
Since I'm running on a MacBook 1,1 it was obvious that I was looking for trouble. Previously I had rEFIt running just to get a GRUB 2 prompt. This time I decided to start from a clean slate. Literally. I ran
dd if=/dev/zero of=/dev/sda bs=1M count=10
to wipe the first 10 MiB of the drive and make sure that it definitely had no trace of any of my previous settings. Since rEFIt was abandoned years ago, I went for the replacement project rEFInd. I approached the author Mr. Roderick W. Smith and he was very helpful, but no matter what I did, I could not get rEFInd running on my very old 32-bit Mac. So, I had two options left: go back to abandonware or see what would happen without a boot manager.
On the first try I failed in the installer's settings-dialog by trying to out-smart OpenSuse's logic; the completed installation didn't boot. On the 2nd try I simply went with the flow. As Mr. Smith instructed me, I didn't touch the most critical thing: MBR is not the way to go on a Mac! Here are my settings:
And guess what, it worked! OpenSuse 13.1 installer has enough logic to create a bootable Linux-installation to a completely blank hard drive. Nice!
The installer was pretty smart. The Wi-Fi network was configured properly and worked out-of-the-box. The function keys work: screen brightness, volume, etc. work as is. So do the typical trouble-makers: sleep (to RAM) / hibernate (to disk), battery info, sound and what not. There were only two minor issues: iSight does not work without the Apple proprietary firmware, and the keyboard's Apple-keys don't do anything usable.
To get the iSight camera working, see the ift-extract tool at Apple Built-in iSight Firmware Tools for Linux. It can dig the guts out of the Mac OS X iSight-driver and equip your Linux with a functioning camera. The keyboard is a trivial fix. Like previously, I just keyfuzz'ed the keys into something useful. See the OpenSuse 12.3 installation blog entry for details.
There is one thing you may want to check if you enable SSHd, like I always do on all servers. By default /etc/sysconfig/SuSEfirewall2.d/services/sshd defines TCP/22 to be open. That is the general idea, but apparently there is so much SSHd bombing going on that I always tar pit my installations. For some strange reason the Suse engineers chose not to allow that in a specific service definition file; it has to go into the classic /etc/sysconfig/SuSEfirewall2 file, in the setting FW_SERVICES_ACCEPT_EXT="0/0,tcp,22,,hitcount=3,blockseconds=60,recentname=ssh"
I urge every one of you to rename services/sshd into something else and add the above line. This makes bombing your SSH-port so much more difficult. And it does not affect your own login performance, unless you choose to bomb it yourself.
You may want to check OpenSuse's hardware compatibility list for details about Apple Laptops. The HCL has info about what works and what doesn't.
In general the OpenSuse folks did a very good job with this one. There was a real improvement in ease of installation. Thanks to Roderick W. Smith for his help during my installation and thanks to Novell for giving this great distro away for free!
Huawei E3276 reviewed
Sunday, February 9. 2014
I got my hands on a Huawei E3276. It is a 4G LTE USB-stick. The unit I tested appears to be the model s-150. According to 4GLTEmall.com it has the following FDD frequencies available: 800 MHz, 900 MHz, 1800 MHz, 2100 MHz and 2600 MHz. The newer 4G TDD mode is not available.
It looks like this:
On the side there is a slot for a MicroSD-card and a standard CRC9-connector, which Huawei uses for all of its USB-products. See this vid (with cheesy music) of an external antenna connected to a Huawei. The pics:
You open the stick by sliding the white cover to the left. Then you can insert a mini SIM-card (in my pic there is a micro-SIM to mini-SIM adapter sleeve):
In general it resembles your average Huawei 3G-stick a lot; the only difference is that it is a little bit wider. I still remember how big a Huawei E220 felt when the 2nd gen 3G-sticks (E160/E169/etc.) came. As you would expect, when the USB-stick is inserted, it appears as a CD-drive with the classic Huawei Mobile Partner setup on the "disc". After you install the drivers and the software, there is pretty much no way of telling that you are running on 4G instead of 3G. Mobile Partner has features to lock it into LTE-mode and it indicates the connection types of 3G WCDMA, 3G DC and LTE properly.
This is how it looks on the side of a Lenovo Ideapad S10:
Generally I was pretty pleased with the stick. It is clearly an evolution of a 3G stick; the feeling is that it is a 3G on steroids. The antenna is very sensitive and my testing revealed that it keeps the connection open even with zero bars of reception. However, it is currently quite expensive. In Finland you can get one from a telco for 99,- € with a contract. The unlocked ones are around 150,- €.
It worked on Windows 7 without problems, but as a Linux-man I had to try and see if it works there too. Unfortunately there are issues with that. Linux recognizes the stick as a Huawei E352 3G-stick, does the modeswitch from storage mode to modem mode properly and a /dev/ttyUSB0 appears, but it does not connect to The Net. A short minicom session indicated that the modem is there and is receiving commands:
atz
OK
ati
Manufacturer: huawei
Model: E3276
Revision: 21.192.05.00.00
IMEI: 863781010000000
+GCAP: +CGSM,+DS,+ES
OK
It seems to have a new Type Allocation Code of 86378101 for Huawei. Not all databases have that yet.
The Linux chatter in The Net is that the 4g-connect-linux script should be able to make a connection. I tried it with varying parameters, but somehow the new wwan0-interface didn't have a proper device attached to it and all I got was ioctl SIOCGIFHWADDR: No such device errors. Search for "cdc_ncm e3276" to get a number of posts about that.
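If you want to poke at the same problem, the basic checks are nothing E3276-specific (a sketch, standard Linux tools only):

# Which USB ID did the stick end up with after the modeswitch?
lsusb | grep -i huawei
# Did the cdc_ncm driver claim the device at all?
dmesg | grep -i -e cdc_ncm -e huawei
# Does wwan0 exist and does it have a MAC address?
ip link show wwan0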
I also did some ping-testing of 3G Dual-Carrier versus 4G LTE. The bandwidth is really no contest: LTE wins that one hands down, as 3G DC has a theoretical cap of 42 Mbit/s, and that's pretty much the point where LTE starts! My measurements also confirmed that. A 50 Mbit/s connection limit imposed by the telco measures 69 Mbit/s in reality. Not bad, huh!
Anyway, a simple ping-test says:
- Forced 3G DC: Minimum = 31ms, Maximum = 1217ms, Average = 66ms
- Forced LTE: Minimum = 24ms, Maximum = 460ms, Average = 35ms (Winner!)
The fluctuation in a mobile connection is there. When something happens in the air, there will be delays and retransmissions. But when conditions are near optimal, it is difficult on LTE to differentiate a mobile connection from a wired one. That's the reason I'm so thrilled about LTE availability. 3G is ok for surfing the web, but LTE is the first mobile technology that really performs!
Final words:
The Huawei E3276 is definitely a keeper. Now that the Finnish telcos have an agreement about 800 MHz 4G usage, they are investing heavily in their LTE-networks. Even pre-paid data has 4G on some telcos. The stick does not replace my B593, which is meant for non-mobile usage, but for a laptop I'll definitely keep using this one.
Led Lenser K2 vs. MagLite Solitaire LED
Saturday, February 8. 2014
A while ago a friend sent me a link to the Jamie and Adam Tested YouTube-channel. I'm a fan of Mythbusters, so he knew that I'd love their stuff. One of the videos they have there is Inside Adam Savage's Cave: Hacking a Flashlight for Adam's EDC. So, I felt that I should blog about flashlights too.
Last year my old and trustworthy MagLite Solitaire broke down after serving me well for 18 years, and I had to get a replacement. My old Solitaire became un-fixable due to some sort of stress in the inside plastic parts. They broke down into a number of new pieces that didn't fit anymore. Apparently a key chain with a number of keys in it causes stress to a flashlight's guts.
In the above video Adam is hacking his JETBeam. Being a Leatherman man, I went for a Led Lenser (apparently they are owned by the same company), model K2 to be specific. However, it turned out to be a mistake. The LED is bright, it really is, and the flashlight is really tiny, but its aluminum body is not built to be hung on a key chain and stuffed into a pocket over and over again. It broke after 8 months of "usage". Actually I didn't use the lamp that much, but ... It broke. Aw, come on! My previous lamp lasted for 18 years!
Here is a pic of the broken Led Lenser K2 (the short one) next to my new flashlight:
Thankfully my favorite flashlight company is back! I don't know what MagLite did for 15 years or so, but they certainly lost the market leader position by not releasing any new products for a very, very long time. So... after failing with the Led Lenser I went back to MagLite. Their new LED-products are really good and I got one of their new releases, a Solitaire LED. I'm hoping it lasts a minimum of 18 years!
Advanced mod_rewrite: FastCGI Ruby on Rails /w HTTPS
Friday, February 7. 2014
mod_rewrite comes in handy on a number of occasions, but when the rewrite deviates from the most trivial things, understanding how exactly the rules are processed is very, very difficult. The documentation is adequate, but the information is spread around a number of configuration directives, and it is a challenging task to put it all together.
RewriteRule Order of processing
Apache has following levels of configuration from top to bottom:
- Server level
- Virtual host level
- Directory / Location level
- Filesystem level (.htaccess)
Typically configuration directives have effect from bottom to top: a lower-level directive overrides any upper-level directive. This is also the case with mod_rewrite. A RewriteRule in a .htaccess file is processed first, and any rules on upper layers in reverse order of the levels. See the documentation of the RewriteOptions Directive, it clearly says: "Rules inherited from the parent scope are applied after rules specified in the child scope". The rules on the same level are executed from top to bottom in a file. You can think of all the Include-directives as combining one large configuration file, so the order can be determined quite easily.
However, this order of processing rather surprisingly contradicts the most effective order of execution. The technical details documentation of mod_rewrite says:
Unbelievably mod_rewrite provides URL manipulations in per-directory context, i.e., within .htaccess files, although these are reached a very long time after the URLs have been translated to filenames. It has to be this way because .htaccess files live in the filesystem, so processing has already reached this stage. In other words: According to the API phases at this time it is too late for any URL manipulations.
This results in a looping approach for any .htaccess rewrite rules. The documentation of RewriteRule Directive PT|passthrough says:
The use of the [PT] flag causes the result of the RewriteRule to be passed back through URL mapping, so that location-based mappings, such as Alias, Redirect, or ScriptAlias, for example, might have a chance to take effect.
and
The PT flag implies the L flag: rewriting will be stopped in order to pass the request to the next phase of processing.
Note that the PT flag is implied in per-directory contexts such as <Directory> sections or in .htaccess files.
What that means:
- L-flag does not stop anything, it especially does not stop RewriteRule processing in .htaccess file.
- All RewriteRules, yes all of them, are being matched over and over again in a .htaccess file. That will result in a forever loop if they keep matching. RewriteCond should be used to stop that.
- RewriteRule with R-flag pointing to the same directory will just make another loop. R-flag can be used to exit looping by redirecting to some other directory.
- When not in .htaccess-context, L-flag and looping does not happen.
So, the moral of all this is that doing any rewriting at the .htaccess level performs really badly and will cause unexpected results in the form of looping.
Case study: Ruby on Rails -application
There are following requirements:
- The application is using Ruby on Rails
- Interface for Ruby is mod_fcgid to implement FastCGI
- All non-HTTPS requests should be redirected to HTTPS for security reasons
- There is one exception for that rule, a legacy entry point for status updates must not be forced to HTTPS
- The legacy entry point is using Basic HTTP authentication. It does not work with FastCGI very well.
That does not sound like much, but in practice it is.
Implementation 1 - failure
To get a Ruby on Rails application running via FastCGI, there are plenty of examples and other information available. Something like this in .htaccess will do the trick:
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^(.*)$ /dispatch.fcgi/$1 [QSA]
The dispatch.fcgi comes with the RoR-application, and mod_rewrite is only needed to make the Front Controller pattern required by the application framework function properly.
To get the FastCGI (via mod_fcgid) working a simple AddHandler fastcgi-script .fcgi will do the trick.
With these, the application does work. Then there is the HTTPS-part. The hosting setup allows editing parts of the virtual host template, so I added my own section of configuration; the rest of the file cannot be changed:
<VirtualHost _default_:80 >
  <IfModule mod_rewrite.c>
    RewriteEngine On
    RewriteCond %{HTTP_HOST} ^www.my.service$ [NC]
    RewriteRule ^(.*)$ http://my.service$1 [L,R=301]
  </IfModule>

  RewriteCond %{HTTPS} !=on
  RewriteCond %{REQUEST_URI} !^/status/update
  RewriteRule ^(.*)$ https://%{HTTP_HOST}$1 [R=301,QSA,L]
</VirtualHost>
The .htaccess file was taken from the RoR-application:
# Rule 1:
# Empty request
RewriteRule ^$ index.html [QSA]

# Rule 2:
# Append .html to the end.
RewriteRule ^([^.]+)$ $1.html [QSA]

# Rule 3:
# All non-files are processed by Ruby-on-Rails
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^(.*)$ /dispatch.fcgi/$1 [QSA]
It failed, mainly because the HTTPS rewriting was done too late. There was a lot of repetition in the replaced URLs, and the HTTPS-redirect was the last thing done after /dispatch.fcgi/, so the result looked rather funny and not even close to what I was hoping for.
Implementation 2 - success
After the failure I started really studying how the rewrite-mechanism works.
The first thing I did was drop the HTTPS redirect out of the virtual host configuration and into the not-so-well-performing .htaccess level. The next thing was to get rid of the looping dispatch.fcgi/dispatch.fcgi/dispatch.fcgi addition. During testing I also noticed that I hadn't accounted for the Basic authentication in any way.
The resulting .htaccess file is here:
# Rule 0:
# All requests should be HTTPS-encrypted,
# except: message reading and internal RoR-processing
RewriteCond %{HTTPS} !=on
RewriteCond %{REQUEST_URI} !^/status/update
RewriteCond %{REQUEST_URI} !^/dispatch.fcgi
RewriteRule ^(.*)$ https://%{HTTP_HOST}/$1 [R=301,QSA,skip=4]

# Rule 0.1
# Make sure that any HTTP Basic authorization is transferred to FastCGI env
RewriteRule .* - [E=HTTP_AUTHORIZATION:%{HTTP:Authorization}]

# Rule 1:
# Empty request
RewriteRule ^$ index.html [QSA,skip=1]

# Rule 2:
# Append .html to the end, but don't allow this thing to loop multiple times.
RewriteCond %{REQUEST_URI} !\.html$
RewriteRule ^([^.]+)$ $1.html [QSA]

# Rule 3:
# All non-files are processed by Ruby-on-Rails
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_URI} !/dispatch.fcgi
RewriteRule ^(.*)$ /dispatch.fcgi/$1 [QSA]
Now it fulfills all my requirements, and it works!
Testing the thing
Just to develop the thing and make sure all the RewriteRules worked as expected and didn't interfere with each other in a bad way, I had to take a test-driven approach to it. I created a set of "unit" tests in the form of manually executed wget-requests. There was no automation in it, just simple eyeballing of the results. My tests were:
- Test the index-page, must redirect to HTTPS:
- wget http://my.service/
- Test the index-page, no redirects, must display the page:
- wget https://my.service/
- Test the legacy entry point, must not redirect to HTTPS:
- curl --user test:password http://my.service/status/update
- Test an inner page, must redirect to HTTPS of the same page:
- Test an inner page, no redirects, must return the page:
Those tests cover all the functionality defined in the above .htaccess-file.
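To re-run them as one batch, the same checks wrap into a trivial script; a sketch, where /somepage is just a placeholder for whatever inner page your application has:

#!/bin/sh
# These two must answer with a 301 redirect to https://
wget -S --max-redirect=0 http://my.service/ 2>&1 | grep -e 'HTTP/' -e 'Location:'
wget -S --max-redirect=0 http://my.service/somepage 2>&1 | grep -e 'HTTP/' -e 'Location:'

# These two must answer with a 200 and the page itself
wget -S -O /dev/null https://my.service/ 2>&1 | grep 'HTTP/'
wget -S -O /dev/null https://my.service/somepage 2>&1 | grep 'HTTP/'

# The legacy entry point must answer over plain HTTP, without a redirect
curl -sS -o /dev/null -w '%{http_code}\n' --user test:password http://my.service/status/update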
Logging rewrites
The process of getting all this together would have been impossible without rewrite logging. The caveat is that logging must be defined at the virtual host level. This is what I did:
RewriteLogLevel 3
RewriteLog logs/rewrite.log
Reading a level 3 logfile is very tedious. The rows are extremely long and all the good parts are at the end. Here is a single line of log split into something humans can read:
1.2.3.219 - -
[06/Feb/2014:14:56:28 +0200]
[my.service/sid#7f433cdb9210]
[rid#7f433d588c28/initial] (3)
[perdir /var/www/my.service/public/]
add path info postfix:
/var/www/my.service/public/status/update.html ->
/var/www/my.service/public/status/update.html/update
It simply reads that a .htaccess file was being processed and it contained a rule which was applied. The log file clearly shows the order of the rules being executed. However, most of the regular expressions are '^(.*)$', so it is impossible to distinguish the rules from each other simply by reading the log file.
Final words
This is an advanced topic. Most sysadmins and developers don't have to meet complexity of this magnitude. If you do, I'm hoping this helps. It took me quite a while to put all those rules together.
Huawei B593: Forcing 4G LTE mode
Thursday, February 6. 2014
First I'd like to apologize. At least twice I've said that it is impossible to force the B593 to stay out of 3G-mode and keep it on 4G LTE. That is not true. It is an incorrect statement by me and I'm sorry that I didn't investigate the facts before making such statements.
Here is a (slightly photoshopped) screenshot of my own device:
There actually is such an option in General Settings --> Internet Settings --> Network mode. And you can select LTE Only and it will work as expected.
For device hackers, the /var/curcfg.xml will have the setting:
<?xml version="1.0" ?>
<InternetGatewayDeviceConfig>
  <InternetGatewayDevice>
    <WANDevice NumberOfInstances="3">
      <WANDeviceInstance InstanceID="2">
        <WANConnectionDevice NumberOfInstances="1">
          <WANConnectionDeviceInstance InstanceID="1">
            <WANIPConnection NumberOfInstances="2">
              <WANIPConnectionInstance InstanceID="1"
                X_NetworkPriority="LTE Only"
Valid options for X_NetworkPriority are "AUTO", "WCDMA Only" and "LTE Only".
After changing the setting my connection has been more stable than ever (on the Danish 3's firmware). There have been occasions where my connection dropped to 2.5G, see the blog post about it, but after forcing the LTE-only mode things have been the most robust ever.
SplashID wasted my entire password database
Wednesday, February 5. 2014
I've been using SplashID as my password solution. See my earlier post about that. Today I tried to log in to the application to retrieve a password, but it turned out my user account had been changed into null. Well... that's not reassuring.
After the initial shock I filed a support ticket with them, but I wasn't expecting any miracles; in my bookkeeping the database was already lost. The next thing I did was check my trustworthy(?) Acronis True Image backups. I have them running on a daily rotation, and this turned out to be the first time I actually needed them for a real situation.
They hid the "Restore files and directories" -option well. My laptop is configured to run backups for the entire disk, so the default recover-option is to restore the entire disk. In this case that seems a bit overkill. But in the gear-icon, there actually is such an option. After discovering the option (it took me a while reading the helps), the recover was user friendly and intuitive enough. I chose to restore yesterday's backup to the original location. The recover went fine, but SplashID database was flawed on that point. I simply restored two days old backup and that seemed to be an intact one.
Luckily I don't recall any additions or changes to my passwords during the last two days. It looks like I walked away from this incident without harm.
Update 7th Feb 2014:
I got a reply to my support ticket. What SplashData is saying is that the password database was lost due to a bug (actually they didn't use that word, but they cannot fool me). The bug has been fixed in a later version of SplashID. Luckily I had a backup to restore from. IMHO the software should have better notifications about new versions.
Managing PostgreSQL 8.x permissions to limit application user's access
Wednesday, February 5. 2014
I was working with a legacy project on a PostgreSQL 8 installation. A typical software developer simply does not care about DBA work enough to think more than once about the permissions setup. The thinking is that for the purpose of writing lines of working code which execute really nice SQL-queries, a user with lots of power up its sleeve is a good thing. This is something I bump into a lot. It would be a nice eye-opener if every coder had to investigate a server which has been cracked into once or twice early in their programming career. I'm sure that would improve both the quality of code and security thinking.
Anyway, the logic of ignoring security is ok for a development box, given the scenario that it is pretty much inaccessible outside the development team. When going to production, things always get more complicated. I have witnessed production boxes running applications that have been configured to access the DB with admin permissions. That happens in an environment where any decent programmer/DBA can spot a number of other ignored things. Thinking about security is far above both the pay grade and the skill envelope your regular coder possesses.
In an attempt to do things the-right-way(tm), it is a really good idea to create a specific user for accessing the DB. An even better idea is to limit the permissions so that the application user cannot run the classic "; DROP TABLE users; -- ", because it lacks the permission to drop tables. We still remember Exploits of a Mom, right?
Image courtesy of xkcd.com.
Back to reality... I was on a production PostgreSQL and evaluated the situation. The database was owned by postgres, the schema public was owned by postgres, but all the tables, sequences and views were owned by the application user. So any exploit would allow the application user to drop all tables. Not cool, huh!
To solve this, three things are needed: first, the owner of the entire schema must be postgres. Second, the application user only needs enough permissions for CRUD-operations, nothing more. And third, the schema must not allow users to create new items in it. By default everybody can create new tables and sequences, but if somebody really pops your box and can run anything on your DB, being able to create new items (besides temporary tables) is not a good thing.
On a PostgreSQL 8 a bit of trickery is needed. Version 9.0 introduced the "GRANT ... ALL TABLES IN SCHEMA", but I didn't have that at my disposal. To get around the entire thing I created two SQL-queries which were crafted to output SQL-queries. I could simply copy/paste the output and run it in a pgAdmin III query-window. Nice!
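For comparison, on 9.0 or later the bulk grants would be one-liners per object type; a sketch, using the same my_group role as in the queries below:

GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO my_group;
GRANT USAGE, SELECT ON ALL SEQUENCES IN SCHEMA public TO my_group;
GRANT EXECUTE ON ALL FUNCTIONS IN SCHEMA public TO my_group;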
The first query to gather all tables and sequences and change the owner to postgres:
SELECT 'ALTER TABLE ' || table_schema || '.' || table_name ||' OWNER TO postgres;'
FROM information_schema.tables
WHERE
table_type = 'BASE TABLE' and
table_schema NOT IN ('pg_catalog', 'information_schema')
UNION
SELECT 'ALTER SEQUENCE ' || sequence_schema || '.' || sequence_name ||' OWNER TO postgres;'
FROM information_schema.sequences
WHERE
sequence_schema NOT IN ('pg_catalog', 'information_schema')
It will output something like this:
ALTER TABLE public.phones OWNER TO postgres;
ALTER SEQUENCE public.user_id_seq OWNER TO postgres;
I ran those, and the owner was changed.
NOTE: that effectively locked the application user out of the DB completely.
So it was time to restore access. This is the query to gather information about all tables, views, sequences and functions:
SELECT 'GRANT ALL ON ' || table_schema || '.' || table_name ||' TO my_group;'
FROM information_schema.tables
WHERE
table_type = 'BASE TABLE' and
table_schema NOT IN ('pg_catalog', 'information_schema')
UNION
SELECT 'GRANT ALL ON ' || table_schema || '.' || table_name ||' TO my_group;'
FROM information_schema.views
WHERE
table_schema NOT IN ('pg_catalog', 'information_schema')
UNION
SELECT 'GRANT ALL ON SEQUENCE ' || sequence_schema || '.' || sequence_name ||' TO my_group;'
FROM information_schema.sequences
WHERE
sequence_schema NOT IN ('pg_catalog', 'information_schema')
UNION
SELECT 'GRANT ALL ON FUNCTION ' || nspname || '.' || proname || '(' || pg_get_function_arguments(p.oid) || ') TO my_group;'
FROM pg_catalog.pg_proc p
INNER JOIN pg_catalog.pg_namespace n ON pronamespace = n.oid
WHERE
nspname = 'public'
It will output something like this:
GRANT ALL ON public.phones TO my_group;
GRANT ALL ON SEQUENCE public.user_id_seq TO my_group;
NOTE: you need to find/replace my_group with something that fits your needs.
Now the application was again running smoothly, but with reduced permissions in effect. The problem with all this is that TRUNCATE (or DELETE FROM -tablename-) still works. To get the maximum out of the enhanced security, some classification of data would be needed, but the client wasn't ready to do that (yet).
The third thing is to limit schema permissions so that only usage is allowed for the general public:
REVOKE ALL ON SCHEMA public FROM public;
GRANT USAGE ON SCHEMA public TO public;
Now only postgres can create new things there.
All there is left to do at this point is to test the application. There should be DB-access errors if something went wrong.
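A quick sanity check of the new setup can also be done straight in SQL; a sketch, using the table from the example output above:

SELECT has_table_privilege('my_group', 'public.phones', 'SELECT')   AS can_select,
       has_table_privilege('my_group', 'public.phones', 'TRUNCATE') AS can_truncate;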