Managing PostgreSQL 8.x permissions to limit application user's access
Wednesday, February 5. 2014
I was working on a legacy project with a PostgreSQL 8 installation. A typical software developer simply does not care about DBA work enough to think more than once about the permissions setup. The thinking is that for the purpose of writing working code which executes really nice SQL queries, a user with lots of power up its sleeve is a good thing. This is something I bump into a lot. It would be a nice eye-opener if every coder had to investigate a server that has been cracked into once or twice early in their programming career. I'm sure that would improve both the quality of code and security thinking.
Anyway, the logic of ignoring security is OK for a development box, given that it is pretty much inaccessible outside the development team. When going to production, things always get more complicated. I have witnessed production boxes running applications configured to access the DB with admin permissions. That happens in environments where any decent programmer/DBA can spot a number of other ignored things. Too often, thinking about security is both above the pay grade and outside the skill envelope of your regular coder.
In an attempt to do things the-right-way(tm), it is a really good idea to create a specific user for accessing the DB. An even better idea is to limit the permissions so that the application user cannot run the classic "; DROP TABLE users; -- ", because it lacks the permission to drop tables. We still remember Exploits of a Mom, right?
Image courtesy of xkcd.com.
Back to reality... I was on a production PostgreSQL and evaluated the situation. The database was owned by postgres and the schema public was owned by postgres, but all the tables, sequences and views were owned by the application user. So any exploit would have allowed the application user to drop all tables. Not cool, huh!
To solve this, three things are needed: first, the owner of everything in the schema must be postgres. Second, the application user needs only enough permissions for CRUD operations, nothing more. And third, the schema must not allow users to create new items in it. By default everybody can create new tables and sequences, but if somebody really pops your box and can run anything on your DB, letting them create new items (besides temporary tables) is not a good thing.
On PostgreSQL 8 some trickery is needed. Version 9.0 introduced "GRANT ... ON ALL TABLES IN SCHEMA", but I didn't have that at my disposal. To get around this I created two SQL queries which were crafted to output SQL queries. I could simply copy/paste the output and run it in a pgAdmin III query window. Nice!
The first query gathers all tables and sequences and generates the statements to change their owner to postgres:
SELECT 'ALTER TABLE ' || table_schema || '.' || table_name || ' OWNER TO postgres;'
FROM information_schema.tables
WHERE table_type = 'BASE TABLE'
  AND table_schema NOT IN ('pg_catalog', 'information_schema')
UNION
SELECT 'ALTER SEQUENCE ' || sequence_schema || '.' || sequence_name || ' OWNER TO postgres;'
FROM information_schema.sequences
WHERE sequence_schema NOT IN ('pg_catalog', 'information_schema')
It will output something like this:
ALTER TABLE public.phones OWNER TO postgres;
ALTER SEQUENCE public.user_id_seq OWNER TO postgres;
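By the way, the copy/paste step is optional: the generator output can also be piped straight back into the server with psql. A sketch of my own (assuming the query above is saved as make_owners.sql and the database is called mydb; -A -t strips the headers and alignment so only the generated SQL remains):
psql -U postgres -d mydb -A -t -f make_owners.sql | psql -U postgres -d mydb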
I ran those, and the owner was changed.
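To double-check the result, ownership can be verified from pg_tables (a quick check of my own, not part of the original routine):
SELECT schemaname, tablename, tableowner
FROM pg_tables
WHERE schemaname = 'public';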
NOTE: changing the owner effectively locked the application user out of the DB completely.
So it was time to restore access. This query gathers information about all tables, views, sequences and functions and generates the grants:
SELECT 'GRANT ALL ON ' || table_schema || '.' || table_name || ' TO my_group;'
FROM information_schema.tables
WHERE table_type = 'BASE TABLE'
  AND table_schema NOT IN ('pg_catalog', 'information_schema')
UNION
SELECT 'GRANT ALL ON ' || table_schema || '.' || table_name || ' TO my_group;'
FROM information_schema.views
WHERE table_schema NOT IN ('pg_catalog', 'information_schema')
UNION
SELECT 'GRANT ALL ON SEQUENCE ' || sequence_schema || '.' || sequence_name || ' TO my_group;'
FROM information_schema.sequences
WHERE sequence_schema NOT IN ('pg_catalog', 'information_schema')
UNION
SELECT 'GRANT ALL ON FUNCTION ' || nspname || '.' || proname || '(' || pg_get_function_arguments(p.oid) || ') TO my_group;'
FROM pg_catalog.pg_proc p
INNER JOIN pg_catalog.pg_namespace n ON p.pronamespace = n.oid
WHERE nspname = 'public'
It will output something like this:
GRANT ALL ON public.phones TO my_group;
GRANT ALL ON SEQUENCE public.user_id_seq TO my_group;
NOTE: you need to find/replace my_group with something that fits your needs.
Now the application was again running smoothly, but with reduced permissions in effect. The problem with all this is that TRUNCATE (or DELETE FROM the tables) still works. To get the maximum out of the enhanced security, some classification of the data would be needed. But the client wasn't ready to do that (yet).
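Instead of GRANT ALL, the generator above could emit an explicit privilege list, which at least keeps TRUNCATE (a separately grantable privilege since PostgreSQL 8.4) away from the application user. A sketch of what the generated statements would look like in that case; DELETE naturally still works, since the application needs it:
GRANT SELECT, INSERT, UPDATE, DELETE ON public.phones TO my_group;
GRANT USAGE, SELECT, UPDATE ON SEQUENCE public.user_id_seq TO my_group;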
The third thing is to limit schema permissions so that only usage is allowed for the general public:
REVOKE ALL ON SCHEMA public FROM public;
GRANT USAGE ON SCHEMA public TO public;
Now only postgres can create new things there.
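A quick sanity check of my own: connecting as the application user and trying to create a table should now be refused.
-- connected as the application user
CREATE TABLE hack_attempt (id integer);
-- ERROR:  permission denied for schema public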
All there is to do at this point is to test the application. If something went wrong, there will be DB access errors.
Zend Framework 2: Touching headLink() twice on layout template
Friday, January 17. 2014
This one was one of the tricky ones: my CSS inclusion was doubled for a very strange reason. My layout template has:
{$this->headLink()
->prependStylesheet('/css/style.css')
->prependStylesheet('/css/jQuery/jquery.mobile.css')}
{$this->headLink([
'rel' => 'shortcut icon',
'type' => 'image/vnd.microsoft.icon',
'href' => '/images/favicon.ico'
])}
That would be pretty standard for any web application: link a couple of CSS definition files and declare the URL of the site's favicon. However, on ZF2, doing things the way my code above does makes things go bad. Rather surprisingly, the HTML gets rendered as:
<link href="/css/jQuery/jquery.mobile.css" media="screen" rel="stylesheet" type="text/css">
<link href="/css/style.css" media="screen" rel="stylesheet" type="text/css">
<link href="/css/jQuery/jquery.mobile.css" media="screen" rel="stylesheet" type="text/css">
<link href="/css/style.css" media="screen" rel="stylesheet" type="text/css">
<link href="/images/favicon.ico" rel="shortcut icon" type="image/vnd.microsoft.icon">
Having the CSS linked twice doesn't actually break anything, but it makes CSS debugging a bit weird. A lot of the declarations appear twice in the list, and the browser has to determine which ones are effective and which ones are ignored in any particular case.
To find out what's going on, I swapped my template to contain:
{$this->headLink([
'rel' => 'shortcut icon',
'type' => 'image/vnd.microsoft.icon',
'href' => '/images/favicon.ico'
])}
{$this->headLink()
->prependStylesheet('/css/style.css')
->prependStylesheet('/css/jQuery/jquery.mobile.css')}
Whatta ... hell!? Now everything works as expected: first the favicon link and then the CSS links, without any unnecessary doubling.
A nice long morning of debugging the ZF2 view code revealed a solution:
{$this->headLink()
->prependStylesheet('/css/style.css')
->prependStylesheet('/css/jQuery/jquery.mobile.css')}
{$this->headLink()
->deleteContainer()}
{$this->headLink([
'rel' => 'shortcut icon',
'type' => 'image/vnd.microsoft.icon',
'href' => '/images/favicon.ico'
])}
Now everything renders nicely. No doubles, and everything in the order I wanted. The key was to erase Zend\View\Helper\HeadLink's container after doing the stylesheets. The method actually lives in the class Zend\View\Helper\Placeholder\Container\AbstractStandalone. Apparently headLink's container only accumulates, and any subsequent call simply adds to the existing storage. The mistake is to print the contents of the container in the middle. The final solution is not to touch headLink() twice:
{$this->headLink([
'rel' => 'shortcut icon',
'type' => 'image/vnd.microsoft.icon',
'href' => '/images/favicon.ico'
])
->prependStylesheet("/css/style.css")
->prependStylesheet("/css/jQuery/jquery.mobile.css")}
Now it works much better! The rendered HTML has the items in the appropriate order:
- /css/jQuery/jquery.mobile.css
- /css/style.css
- /images/favicon.ico
This was yet again one of the funny things that have changed since ZF1. I would definitely consider it a bug, but I don't want to bother sending Zend a report about it. They'll yet again pull a Microsoft and declare it a feature.
Zend Framework 2: preDispatch(), returning properly without executing action
Thursday, January 16. 2014
Getting ZF2 to do preDispatch() and postDispatch() like ZF1 had is widely known and documented. In your controller, add this:
protected function attachDefaultListeners()
{
    parent::attachDefaultListeners();
    $event_mgr = $this->getEventManager();
    $event_mgr->attach('dispatch', array($this, 'preDispatch'), 100);
    $event_mgr->attach('dispatch', array($this, 'postDispatch'), -100);
}
Two simple listeners are attached with proper priorities to trigger before and after the action.
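For reference, the listener methods themselves are just controller methods receiving the MVC event. A minimal sketch of mine (the bodies are placeholders):
// In the same controller; the type hint is Zend\Mvc\MvcEvent.
public function preDispatch(MvcEvent $event)
{
    // Attached with priority 100, so this runs before the action.
}

public function postDispatch(MvcEvent $event)
{
    // Attached with priority -100, so this runs after the action.
}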
However, going somewhere else before the action is executed adds some complexity, as one can expect. In preDispatch() you can do one of the two commonly suggested things. A redirect:
// Do an HTTP/302 redirect
return $this->redirect()->toRoute(
    'application',
    array('controller' => 'index', 'action' => 'index')
);
My issue here is that it literally does an HTTP/302 redirect in the browser. Another problem is that it still executes the action it was targeted to: it renders the view, runs all the listeners, and does all the plugins and helpers as it started to. It just redirects after all that. I don't want my user to get a redirect, nor to run all the bells and whistles including the action. Why can't I simply return something else instead, like ZF1 could be programmed to do? At the top of my list: executing an action from another controller.
So, the other suggested option is to simply call it quits right in the middle of preDispatch():
$url = $event->getRouter()
    ->assemble(
        array('action' => 'index'),
        array('name' => 'frontend')
    );
$response = $event->getResponse();
$response->getHeaders()->addHeaderLine('Location', $url);
$response->setStatusCode(302);
$response->sendHeaders();
exit();
That's pretty much the same as the previous one, but uglier. exit()!! Really? In Zend Framework?! I'd rather keep the wheels rolling and the machine turning like it normally would, until it has done all the dirty deeds it wants to do. Poking around The Net reveals that nobody is really offering anything else. Apparently everybody is simply copy/pasting from the same sites I found.
This is what I offer: discard the current operation, start a new one and return that! A much better alternative.
Example 1, return JSON-data:
// Requires: use Zend\Http\Response;
//           use Zend\View\Model\JsonModel;
$event->stopPropagation(true);
// Skip executing the action requested. Return this instead.
$result = new JsonModel(array(
    'success'       => false,
    'loginrequired' => true
));
$result->setTerminal(true);
$event->setResponse(new Response());
$event->setViewModel($result);
The key is in the setResponse()-call.
Example 2, call another action:
// Requires: use Zend\Http\Response;
$event->stopPropagation(true);
// Skip executing the action requested.
// Execute AnotherController::errorAction() instead.
$event->setResponse(new Response());
$result = $this->forward()->dispatch('Another', array(
    'action' => 'error'
));
$result->setTerminal(true);
$event->setViewModel($result);
Hope this helps somebody else trying to do a ZF1 to ZF2 transition. In the end, there is only one thing similar between them: both of their names have Zend Framework in them.
git and HTTPS (fatal: HTTP request failed)
Friday, January 10. 2014
Two facts first about git:
- A number of sites tell you to use git:// or ssh:// instead of https://. Apparently there is some unnecessary complexity when piggy-backing over HTTP Secure.
- I personally don't like git due to its complexity. It's like requiring someone to be an experienced mechanic before granting them a driver's license. You can drive a car without exact technical knowledge of its inner workings, but that seems to be the only way to go with git.
So, I chose to run my own repo on my own box and do it over HTTPS. Since HTTPS is a 2nd-class protocol in the git world, many simple things are unnecessarily difficult.
My initial attempt was to do a simple clone from my existing repo:
git clone https://me@my.server/my/Project
Well, that doesn't end well. There is this fully explanatory fatal: HTTP request failed -error. Adding --verbose does not help. Then I found out that git uses curl as its HTTPS transport client, and that there is a very helpful environment variable for diagnosing the problem:
export GIT_CURL_VERBOSE=1
git clone https://me@my.server/my/Project
That way I got the required debug information about the Certificate Authority certificates being used. It didn't use my own CA's file at all.
The next fix was to tweak the configuration:
git config --global http.sslverify false
It made my clone work! That, however, is not the way I do computer security. I need my certificates verified. From the git-config(1) man page I found the required piece of information. Adding the CA root path of my Linux distro makes the entire thing work:
git config --global http.sslverify true
git config --global http.sslCAPath /etc/pki/tls/certs
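As a side note (my addition, not part of the original fix): if your distro ships a single CA bundle file instead of a hashed certificate directory, the http.sslCAInfo setting does the same job. On CentOS that would be:
git config --global http.sslCAInfo /etc/pki/tls/certs/ca-bundle.crt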
Finally, I found a good page about all this: http://stackoverflow.com/questions/3777075/ssl-certificate-rejected-trying-to-access-github-over-https-behind-firewall/4454754 - it seems to contain all of this information.
Unfortunately too late! But wouldn't it be great if git emitted the proper error message about "Peer's certificate issuer is not recognized"? That was the original problem to begin with. Also, why don't the CentOS people configure their curl library to behave like OpenSSL does?
Barcode weirdness: Exactly the same, but different
Tuesday, December 17. 2013
I was transferring an existing production web application to a new server. There was a slight gap in operating system versions, and I ended up refactoring a lot of the code to compensate for the newer libraries in use. One of the issues I bumped into and spent a while fixing was barcode printing. It was a pretty standard Code 39 barcode for the shipping manifest.
The old code was using a commercial, proprietary TrueType font file. I don't know when or where it was purchased by my client, but it was in production use in the old system. It looks like such fonts sell for $99. Anyway, the new library had barcode printing in it, so I chose to use that.
Mistake. It does somewhat work. It prints a perfectly valid barcode. Example:
The issue is that it does not read. I had a couple of readers to test with, and even the really expensive laser one didn't register the code very well. So I had to re-write the printing code to simulate the old functionality. The idea is to read the TTF font, convert it into an internal form and output a ready-to-go file into a data directory, so that the conversion is needed only once. After that the library can read the font and produce nice-looking graphics, including barcodes. Here is an example of the TTF style:
The difference is amazing! Any reader reads it easily. Both barcodes contain the same data, but they look so different. Wow!
Perhaps one day a barcode wizard will explain the difference to me.
Why cloud platforms exist - Benchmarking Windows Azure
Tuesday, October 8. 2013
I got permission to publish a grayed-out version of a project I was contracted to do this summer. Since the customer paid big bucks for it, you're not going to see all the details. I'm sorry to act like a greedy idiot, but you'll have to hire me to do something similar to see your own results.
The subject of the project is something personally intriguing to me: how much better does a cloud-native application perform compared to a traditional LAMP setup? I chose Windows Azure as the cloud platform, since I know it best.
The Setup
There was a pretty regular web application performing a couple of specific tasks. Exactly the same sample data was populated into Azure SQL for the IaaS test and into Azure Table Storage for the PaaS test. People who complain about using Azure SQL can imagine a faster setup on a virtual machine and expect the thing to perform faster.
To simulate a real web application, a memory cache was used: Memcache for IaaS and Azure Cache for PaaS. In both cases the memory cache pushes the performance of the application further, as there is less need for expensive I/O.
Results
In the Excel charts, the number of simulated users is on the horizontal axis. Two vertical axes are used for the different items.
The following items can be read from the charts:
- Absolute number of pages served at a given measurement point (right axis)
- Absolute number of pages returned with erroneous output (right axis)
- Percentage of HTTP errors: a status code which we interpret as an error was returned (left axis)
- Percentage of total errors: HTTP errors + requests which did not return a status code at all (left axis)
- Number of successful pages returned per second (left axis)
Results: IaaS
I took a ready-made CentOS Linux image bundled with an Nginx/PHP-FPM pair, lured it to work under Azure and connected it to the pre-populated data in Azure SQL. Here are the test runs with two and three medium instances.
Adding a machine to the service absolutely helps. With two instances the application chokes completely at the end of the test load. The added machine also makes the application perform much faster; there is a clear improvement in page load speed.
Results: PaaS
Exactly the same functionality was implemented with .Net / C#.
Here are the results:
Astonishing! Page load speed is so much higher at similar user loads, and no errors could be produced. I pushed the envelope with 40 times the users, but couldn't be sure whether the limit I hit was my test setup (which I definitely saturated) or Azure's capacity fluctuating under heavy load. The test with a small role was also very satisfactory; it beats the crap out of running two medium instances on IaaS!
Conclusion
I have to state the obvious: the PaaS application performs much better. I just couldn't believe that it was impossible to get an exact measurement of the point where the application chokes on PaaS.
Why Azure PaaS billing cannot be stopped? - revisit
Monday, October 7. 2013
In my earlier entry about Azure PaaS billing, I was complaining about how to stop the billing.
This time I managed to do it. The solution was simple: delete the deployments, but leave the cloud service intact. Then Azure stops reserving any (stopped) compute units for the cloud service. Like this:
Here is the proof:
Zero billing. Nice! 
Why Azure PaaS billing cannot be stopped?
Tuesday, October 1. 2013
In Windows Azure, stopping an IaaS virtual machine stops the billing; there is no need to delete the stopped instance. When you stop a PaaS cloud service, the following happens:
Based on billing:
This is really true. On the 26th and 27th I had a cloud service running on Azure, but then I stopped it. On the 28th and 29th there is billing for a service that has been stopped, and about which I got the warning. I don't know why one core is missing from the billing on the 30th. A discount, perhaps?
My bottom line is:
Why? What possible reason could there be for a PaaS cloud service having to be deleted in order to stop billing? Come on, Microsoft! Equal rules for both kinds of cloud services!
Migrating data from SQL into Windows Azure Table Storage
Monday, September 16. 2013
The error messages when an Azure Table Storage insert fails are far from descriptive.
This is the complete list of supported datatypes (or Property Types as they call them):
- Binary: An array of bytes up to 64 KB in size.
- Bool: A Boolean value.
- DateTime: A 64-bit value expressed as UTC time. The supported range of values is 1/1/1601 to 12/31/9999.
- Double: A 64-bit floating point value.
- GUID: A 128-bit globally unique identifier.
- Int: A 32-bit integer.
- Int64: A 64-bit integer.
- String: A UTF-16-encoded value. String values can be up to 64 KB in size.
Really. Nothing more. You just have to get along with that one! 
The list is taken from Windows Azure Table Storage and Windows Azure SQL Database - Compared and Contrasted.
Things you fail to notice:
- The .Net DateTime structure has a range of 00:00:00 (midnight), January 1, 0001 Anno Domini (Common Era) through 11:59:59 P.M., December 31, 9999 A.D. (C.E.) in the Gregorian calendar. Not from January 1, 1601 AD.
- That shouldn't be an issue, but my app had problems and had recorded dates into the year 201. This was a really nice way of finding that out.
- In integers, there are no unsigned versions.
- In decimal numbers, there is no decimal, a 128-bit floating point number. You have to settle for Double, an IEC 60559:1989 (IEEE 754) compliant version.
- There is no reasonable way of storing money-type data, which needs an exact number and no floating point conversions.
- The string really is UTF-16, the two-byte version. It stores up to 32768 characters.
- Which is not much compared to TEXT or varchar(max), which range from 2 GiB to anything you have.
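To make the constraints concrete, here is a sketch of what an entity class looks like when you play by these rules. This is my own illustration, not from the compared article; it uses the TableEntity base class from the Azure Storage .Net SDK, and the property names are made up:
using System;
using Microsoft.WindowsAzure.Storage.Table;

public class OrderEntity : TableEntity
{
    // DateTime must stay within the 1601-01-01 .. 9999-12-31 window.
    public DateTime CreatedUtc { get; set; }

    // No decimal type: store money as Int64 cents to keep it exact.
    public long PriceCents { get; set; }

    // Double is the only floating point type available.
    public double WeightKg { get; set; }

    // UTF-16, up to 64 KB, i.e. about 32768 characters.
    public string Notes { get; set; }
}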
Hopefully this list helps somebody. I spent a good while finding all of this out.
Using PHP, Zend Framework, PDO and FreeTDS in Windows Azure
Wednesday, September 4. 2013
Earlier I wrote about IPv6-connectivity with MS SQL server into Linux / PHP with FreeTDS.
This time my quest with FreeTDS continued: I put together the most minimal possible CentOS 6.4 Linux with just enough parts to produce an Nginx / PHP-FPM / Windows Azure SQL Database -based web application. The acronym would be not LAMP but NPFWASD. No idea how to pronounce "npf-wasd", though.
I packaged a Hyper-V -based Linux .vhd into an Azure virtual machine IaaS image and created a couple of load-balanced HTTP ports for it. The problem was to lure PHP's PDO into connecting to Azure SQL via FreeTDS dblib. I spent a good while banging my head and kicking it before it stopped resisting and started to obey my commands.
Everything would have gone much better if only I had had the proper version of FreeTDS installed on the Linux box. When I realized how hyper-important the TDS protocol version is with Azure SQL, I also realized that my FreeTDS version was not what it was supposed to be. My own package would have been the correct one (see the earlier post). My tsql -C says:
Compile-time settings (established with the "configure" script)
Version: freetds v0.92.dev.20130721
freetds.conf directory: /etc
MS db-lib source compatibility: yes
Sybase binary compatibility: yes
Thread safety: yes
iconv library: yes
TDS version: 7.1
iODBC: no
unixodbc: yes
SSPI "trusted" logins: no
Kerberos: yes
The default TDS version of 7.1 is really, really important there. With that I can do:
tsql -H -my-designated-instance-in-Azure-.database.windows.net \
-p 1433 \
-U -the-application-SQL-user-without-admin-rights- \
-D -my-own-database-in-the-SQL-box-
It simply works: it displays the prompt and everything behaves as it should. In my Zend Framework application configuration I say:
resources.db.adapter = "Pdo_Mssql"
resources.db.params.host = "-my-designated-instance-in-Azure-.database.windows.net"
resources.db.params.dbname = "-my-own-database-in-the-SQL-box-"
resources.db.params.username = "-the-application-SQL-user-without-admin-rights-"
resources.db.params.password = "-oh-the-top-secret-password-"
resources.db.params.version = "7.1"
resources.db.params.charset = "utf8"
resources.db.params.pdoType = "dblib"
No issues there. Everything works.
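If your FreeTDS build happens to default to an older TDS version, it can also be pinned per server in freetds.conf (a sketch; the server alias azuresql is made up):
[azuresql]
        host = -my-designated-instance-in-Azure-.database.windows.net
        port = 1433
        tds version = 7.1
With that in place, tsql -S azuresql picks up the pinned version.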
I received a couple of comments from other people when I announced that I would attempt such a feat. It appears that most people run their own SQL instances of various kinds for performance reasons. The Azure SQL service is definitely not the fastest there is. But what if you're not in a hurry? The service is there: easily available, cheap and functional, even from Linux/PHP.
What programming languages to learn?
Monday, August 26. 2013
This is a classic question which I get to answer a lot. N00bs already know the answer, but somebody outside the IT business might ask something like that. It is also quite a popular question among young people trying to figure out whether programming would be for them.
Anyway, here are 5 Programming Languages Everyone Should Know, from two people who have actually created some of the most popular languages in use today.
Nobody should call themselves a professional if they knew only one language.
- Bjarne Stroustrup
Larry Wall
See his interview: http://youtu.be/LR8fQiskYII
His list:
- JavaScript
- Java
- Haskell
- C
- Perl
Perl is not a surprise in his list. He created it in the 80s. 
Bjarne Stroustrup
See his interview: http://youtu.be/NvWTnIoQZj4
His list:
- C++
- Java
- Python
- JavaScript
- C
- C#
Again, seeing C++ on his list is not a big surprise: he authored the language in the 80s. The funny thing is that he mentions 6 languages.
Linus Torvalds
This two-year-old interview keeps popping up. In this video http://youtu.be/Aa55RKWZxxI Mr. Linux mentions one programming language not to use.
Then again, this person is well known for his more-than-colorful opinions on various issues. But anyway, his work on the Linux kernel and the Git version control system is well known, and he is a fan of C.
me
Being a blog author, I have to express an opinion of my own. Solely copy/pasting the opinions of three very skilled persons would be too cheap. So, here goes:
- C
- Pretty much all languages created after 1970 owe something to C; it is imperative to know it.
- JavaScript
- When doing any kind of web stuff, this is the language used in practically 100% of the cases. All browsers run it, and it is the de facto client-side language today.
- C#
- A very versatile compiled language by Microsoft, with a lot of influence from C, C++, Java, PHP, Perl, etc.; the list goes on. It is mainly used with .Net to create server-side stuff.
- PHP
- IMHO the most important web-server language there is. It is wildly popular and shares similarities with C, JavaScript, Perl, Visual Basic, etc.
In addition to learning programming languages, I encourage everybody to also learn the following widely popular frameworks:
- Microsoft .Net
- Zend Framework
My reasoning behind this is that if you understand how they work, you're pretty well covered, and moving on to Python/Django or Ruby on Rails becomes a much easier task. I know these are web frameworks and people program a lot of other stuff besides the web, but sticking to the topic of what to learn, these are the first ones to try. There are many other frameworks, especially in PHP land, but they don't have the same essential position as the framework made by the people who created the PHP language. In Microsoft-land there are no other significant frameworks to learn. Anyway, both are properly documented and a lot of information can be found about them.
Exploring Dijit.Editor of Dojo toolkit
Sunday, August 25. 2013
My favorite JavaScript library, Dojo, has a very nice HTML editor. During a project I even enhanced it a bit. Anyway, the Dojo/Dijit documentation is not the best in the world, so I'll demonstrate the three operating modes that are available. All of them have the same functionality, but they differ in how they appear visually to the person doing the HTML editing.
Classic fixed size
This is the vanilla operating mode. From the beginning of time, an HTML <TEXTAREA> has been like this (without any formatting, of course): a fixed-size block container for multi-line text editing which scrolls on overflow.
Example:
HTML for declarative instantiation:
<div id="terms-Editor" data-dojo-type="dijit.Editor"
height="150px"
data-dojo-props="extraPlugins:['insertanchor','viewsource']">
<p>Lorem ipsum dolor sit amet, consectetur adipiscing elit.</p>
</div>
There is really nothing special here; I'm just using two extra plugins, insertanchor and viewsource, to add two new icons to the editor toolbar for the end user. I found out that the plugin names really need to be all lower-case for them to load properly. The class names and filenames are in CamelCase, but writing them like that makes the loading fail.
The obvious thing is that the editor is 150 pixels high. I didn't set the width, but since the editor is a simple div, any size can be set.
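The same editor can also be created programmatically. A sketch of my own in Dojo 1.9 AMD style; only viewsource is wired up here, and its plugin module is required explicitly so that the lower-case name gets registered (the same pattern should apply to the other plugins):
require(["dijit/Editor", "dijit/_editor/plugins/ViewSource", "dojo/domReady!"],
function(Editor){
    // Replaces the node with id "terms-Editor" with an editor instance.
    var editor = new Editor({
        height: "150px",
        extraPlugins: ["viewsource"]
    }, "terms-Editor");
    editor.startup();
});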
Auto-expanding
This is a plugin expansion of the previous mode. The only real difference is that this type of editor never overflows. It keeps auto-expanding to whatever size is needed to display the entire text at once. During testing I found out that the auto-resize code does not work on all browsers. There seems to be a discrepancy of exactly one line on, for example, Chrome. The bug manifests itself when you try to edit the last line: pretty much nothing of it is visible. I didn't fix this bug, as I concluded that I won't be using this mode at all.
HTML for declarative instantiation:
<div id="terms-Editor" data-dojo-type="dijit.Editor"
height="" minHeight="100px"
data-dojo-props="extraPlugins:['alwaysshowtoolbar','insertanchor','viewsource']">
<p>Lorem ipsum dolor sit amet, consectetur adipiscing elit.</p>
</div>
There are three things to note here:
- The auto-expansion is achieved by a plugin called alwaysshowtoolbar. It does not work in the current Dojo version, 1.9.1; I had to fix it. See the patch at the end of this post.
- It is absolutely critical to set height="". Forget that, and the alwaysshowtoolbar plugin does not work.
- It is advisable to set a minimum height; in my example I'm using 100 pixels. The editor will be really slim if there is no text, and this sets the size to something visually appealing.
Manually resizable
This is how a <TEXTAREA> behaves in many modern browsers. When using the plugin statusbar, you get a handle for resizing the block. During testing I found out that it is a bad idea to allow the user to make the editor wider, so I enhanced the class with an additional parameter which gets passed to the plugin's constructor to limit the ResizeHandle functionality.
Example:
HTML for declarative instantiation:
<div id="terms-Editor" data-dojo-type="dijit.Editor"
height="200px"
data-dojo-props="extraPlugins:[{name:'statusbar',resizeAxis:'y'},'insertanchor','viewsource']">
<p>Lorem ipsum dolor sit amet, consectetur adipiscing elit.</p>
</div>
Note that specifying resizeAxis won't work in stock Dojo; it needs my enhancement. If you really want the code, I can e-mail it to you; it is long enough that I won't post it here. The height is 200 pixels initially, but the user can resize the editor to any size.
Hope this helps to clarify the complexity of this fine editor. It would also be nice if a filed bug report got processed in the Dojo project. Their discussion group on Google Groups is effectively closed, too: the group is set to moderate all posts, but nobody is doing the moderation, so new people cannot post. There is nothing left to do but complain in a blog.
Appendix 1: AlwaysShowToolbar.js patch to fix the plugin loading
--- dojo/dijit/_editor/plugins/AlwaysShowToolbar.js.orig 2013-07-19 13:21:17.000000000 +0300
+++ dojo/dijit/_editor/plugins/AlwaysShowToolbar.js 2013-08-02 17:31:44.384216198 +0300
@@ -13,7 +13,7 @@
// module:
// dijit/_editor/plugins/AlwaysShowToolbar
- return declare("dijit._editor.plugins.AlwaysShowToolbar", _Plugin, {
+ var AlwaysShowToolbar = declare("dijit._editor.plugins.AlwaysShowToolbar", _Plugin, {
// summary:
// This plugin is required for Editors in auto-expand mode.
// It handles the auto-expansion as the user adds/deletes text,
@@ -198,4 +198,11 @@
}
});
+ // Register this plugin.
+ // For back-compat accept "alwaysshowtoolbar" (all lowercase) too, remove in 2.0
+ _Plugin.registry["alwaysShowToolbar"] = _Plugin.registry["alwaysshowtoolbar"] = function(args){
+ return new AlwaysShowToolbar();
+ };
+
+ return AlwaysShowToolbar;
});
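To apply the patch: the paths in the diff are relative to the directory containing dojo/, so something like this should do it (the patch file name is of course whatever you saved it as):
cd /path/to/the/directory/above/dojo
patch -p0 < AlwaysShowToolbar.patch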
How not to behave as a member of software project
Friday, August 2. 2013
This is about how NOT to behave; writing about the opposite, how to behave, is way beyond me. I don't even know that myself. However, during my years in various companies and a multitude of software projects, I've met a lot of people. Some work far more effectively as members of a software project than others (best-of-the-best, if borrowing a quote from the movie Men in Black is allowed), and then there are the worst-of-the-worst. Today I share a story of one such project member.
There is this project where I've been working as a contractor for almost 5 years now. There were some changes in the organization and I thought now would be about the time for me to do something else for a while. I discussed this with management, gave my notice, and they started looking for a new contractor, whom I promised to train before leaving.
Everything was fine until for my sins they gave me one (if borrowing a quote from the movie Apocalypse Now is allowed).
In the beginning it was pretty standard operations. My manager said that there would be this new guy and he needed access to source code and ticketing. Pretty soon the new guy contacted me and asked for the credentials. I told him to hang tight so that I'd create a personal LDAP account for him. I created the account, added the group memberships needed for a software-developer profile and handed the credentials out to him. Nothing fancy there. Nothing that you wouldn't expect to see or do when arriving at a new assignment.
The next day he said something like "his code isn't that wonderful" to a colleague of mine. Naturally my colleague pretty soon told me what had happened. We've been working on the project together for a while, and it was a pretty normal reaction from him to tell me that the new guy was dissing my code. I confronted the new guy and said: "Come on! We're supposed to be working together, why would you dis my work there?" He surely knew how to make a first impression.
A couple of days passed and my colleague came to me again: "Did you see that he posted his LDAP-account username and password to a public bulletin board?" There is very little to do in such a situation except OMG!! and WTF!! Why would anybody do anything like that? This is yet again one thing way beyond me. Sharing your personal credentials with other people is grounds for termination of employment!
I don't know what's going to happen next. I informed my manager that I absolutely positively won't be working with this guy. He's obviously not qualified for this job and will most likely do more harm than good for the company. However, I've been getting a lot of visitors to my LinkedIn profile lately from his connections. Looks like I've made a lot of new fans for my Fan Club. Perhaps the confidentiality agreement doesn't apply to all of us? It is generally a bad idea to blabber about a company's internal things with your contacts.
To recap:
- Don't criticize your colleagues' work behind their backs.
- If you absolutely need to criticize somebody's work, do it to his/her face.
- Don't share passwords with anybody; they are meant to be kept secret.
- If you know for sure that a password can be shared, or you have permission to do so, then it's ok. When in doubt, don't.
- Don't blabber internal company or customers' issues to your friends.
- If you must do that, make sure you won't be caught doing so. When you do get caught and fired, remember I told you so.
- Take responsibility for everything you say and do. Really, that means just about everything.
- This is much easier when you say and do things people would expect somebody to say and do. If you go beyond the socially acceptable envelope, be prepared to take some heat for it.
- Then again, if your code looks like shit and works like shit, some people will call it shit. If you cannot quantify the results of your own work, then you're in shit. It is very unlikely that your work is the best there is. If you intentionally write code like shit and people call it shit, don't be surprised.
Sybase SQL and Microsoft SQL connectivity from Linux with FreeTDS library using IPv6
Monday, July 22. 2013
Microsoft SQL Server is a fork of Sybase SQL Server. This is because of the companies' co-operation in their early stages, during the end of the 80s and beginning of the 90s. For that reason the client protocol for accessing both servers is precisely the same: TDS. There is an excellent open-source library, FreeTDS, for accessing these SQL servers from Linux. According to me and a number of other sources on the Net, this library can also access Windows Azure SQL Server.
During my own projects I was building a Linux image for Azure. My development boxes are spread around geographically, and in this case the simplest solution was to open up the firewall to allow incoming IPv6 TCP/1433 requests.
My tests with this setup failed. IPv6 access was ok, the firewall was ok, a socket would open without problems, but my application could not reach my development SQL box. A bit of tcpdumping revealed that my Hyper-V-hosted Linux box attempted to reach my SQL box via IPv4. What?! What?! What?!
A quick browse into FreeTDS-code revealed that it had zero IPv6-related lines of code. According to Porting IPv4 applications to IPv6, there should be usage of struct sockaddr_in6 and/or struct in6_addr. In the latest stable version of FreeTDS there is none.
After a lot of Googling I found a reference on the FreeTDS developers' mailing list that in January 2013 Mr. Peter Deacon had started working on IPv6 support. Naturally, this was good news to me. Another message in the ML, from February 2013, said that the IPv6 support was working nicely. Yet another good thing.
Now all I had to do was find the FreeTDS source code. I found somebody's Subversion copy of it, but with Google, no avail: the IPv6 patch was nowhere to be found, nor the actual source code. The mailing list itself seems to be having some sort of technical difficulties, and my attempts to ask for further information went nowhere. I had pretty much abandoned all hope when Mr. Frediano Ziglio was kind enough to inform me that the IPv6 support is in the latest Git version of FreeTDS.
FreeTDS source code can be found from Gitorious at http://gitorious.org/freetds/freetds
I can confirm that the current Git version does work with IPv6. However, PHP's PDO and Perl's DBI, for example, do not support entering IPv6 addresses into the connect string. With an FQDN I could confirm from Wireshark that everything ran over IPv6, but all my attempts at entering native IPv6 addresses into connect strings failed on both libraries and on FreeTDS's CLI tool tsql.
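As a side note, instead of giving the host with -H on every invocation, you can define a server alias in freetds.conf. A sketch (the alias name is my own; the tds version value may need adjusting for your server):
[myownserver]
        host = myownserver.here
        port = 1433
        tds version = 7.2
After that, tsql -S myownserver -U sa connects via the alias.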
Anyway, here is what I did to test the thing. First I confirmed that there is basic connectivity:
tsql -H myownserver.here -p 1433 -U sa
Password:
locale is "en_US.UTF-8"
locale charset is "UTF-8"
using default charset "UTF-8"
1> sp_help MyCoolTable
2> go
1> quit
Then I took a simple example from the Perl Monks site and modified it to work (the original code was quite crappy):
#!/usr/bin/perl -wT --
# vim: tabstop=4 shiftwidth=4 softtabstop=4 expandtab:

use DBI;
use Data::Dumper; # For debugging
use strict;
use utf8;

my $dsn = 'DBI:Sybase:server=myownserver.here;database=MyCoolDatabase';
my $dbh = DBI->connect($dsn, "sa", 'lemmein!') or
    die "unable to connect to server. Error: $DBI::errstr";
my $query = "SELECT * FROM MyCoolTable";
my $sth = $dbh->prepare($query) or
    die "prepare failed. Error: $DBI::errstr";
$sth->execute() or
    die "unable to execute query $query. Error: $DBI::errstr";

my $rows = 0;
while (my @first = $sth->fetchrow_array) {
    ++$rows;
    print "Row: $rows\n";
    foreach my $field (@first) {
        print "field: $field\n";
    }
}
print "$rows rows returned by query\n";
Also I did some more complex testing with PHP PDO and had no issues. I even made sure from my firewall settings that I could not accidentally access the SQL Server via IPv4. It just works perfectly!
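For reference, the PHP side was along these lines. This is a minimal sketch, not my exact test code; it assumes the pdo_dblib extension is built against the patched FreeTDS:
<?php
// Connect through FreeTDS with the pdo_dblib driver.
// Host, database and credentials match the tsql example above.
$dbh = new PDO('dblib:host=myownserver.here;dbname=MyCoolDatabase', 'sa', 'lemmein!');
foreach ($dbh->query('SELECT * FROM MyCoolTable') as $row) {
    print_r($row);
}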
If you need my src.rpm or pre-compiled packages, just drop a comment.
Parallels Plesk Panel 11 RPC API - reading DNS records
Tuesday, July 9. 2013
Getting Parallels Plesk Panel to do something without admin interaction is not tricky. My favorite method of remote-controlling Plesk is via its RPC API. I am a co-author of the Perl implementation, API::Plesk, which is available on CPAN.
All XML-RPC requests should be directed to your Plesk server at the URL
https://-your-plesk-box-here-:8443/enterprise/control/agent.php
Raw XML
First we'll need to get the internal site ID of a domain. A request to get all the subscriptions looks like this:
<?xml version="1.0" encoding="UTF-8"?>
<packet version="1.6.3.5">
  <webspace>
    <get>
      <filter/>
      <dataset>
        <gen_info/>
      </dataset>
    </get>
  </webspace>
</packet>
Note: It would have been possible to filter for a specific subscription by domain name, but in this case we just want a list of them all.
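If you want to fire the request by hand, the packet can be POSTed with curl, for example. A sketch (webspace-get.xml is my name for a file containing the request above; instead of the login/password headers you can also authenticate with a KEY header carrying a secret key, as the Perl code below does):
curl -k \
     -H "Content-Type: text/xml" \
     -H "HTTP_AUTH_LOGIN: admin" \
     -H "HTTP_AUTH_PASSWD: -your-admin-password-here-" \
     --data @webspace-get.xml \
     https://-your-plesk-box-here-:8443/enterprise/control/agent.php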
The response to it will contain the domain names and their IDs:
<?xml version="1.0" encoding="UTF-8"?>
<packet version="1.6.3.5">
  <webspace>
    <get>
      <result>
        <status>ok</status>
        <filter-id>1</filter-id>
        <id>1</id>
        <data>
          <gen_info>
            <name>www.testdomain.org</name>
          </gen_info>
        </data>
      </result>
    </get>
  </webspace>
</packet>
The response packet contains the internal ID and the name. We'll be using the internal ID of 1 to get all the DNS records of the zone:
<?xml version="1.0" encoding="UTF-8"?>
<packet version="1.6.3.5">
  <dns>
    <get_rec>
      <filter>
        <site-id>1</site-id>
      </filter>
    </get_rec>
  </dns>
</packet>
A response packet will look like this:
<?xml version="1.0" encoding="UTF-8"?>
<packet version="1.6.3.5">
  <dns>
    <get_rec>
      <result>
        <status>ok</status>
        <id>111</id>
        <data>
          <site-id>1</site-id>
          <type>CNAME</type>
          <host>www.testdomain.org.</host>
          <value>testdomain.org.</value>
          <opt/>
        </data>
      </result>
    </get_rec>
  </dns>
</packet>
There seems to be no other way of picking a specific record; a filter by type/name would be welcome. Any further operations are done with the DNS record's ID, in this case 111.
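For example, deleting the record should be a matter of a del_rec request filtered by that ID. A sketch from memory; double-check it against the RPC API documentation of your Plesk version:
<?xml version="1.0" encoding="UTF-8"?>
<packet version="1.6.3.5">
  <dns>
    <del_rec>
      <filter>
        <id>111</id>
      </filter>
    </del_rec>
  </dns>
</packet>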
Perl-code
With a software library, the access is much easier. The same requests would be something like this in Perl:
use API::Plesk;
use Data::Dumper;

my $plesk_client = API::Plesk->new('api_version' => '1.6.3.5',
    'secret_key' => $plesk_api_key,
    'url' => 'https://-your-plesk-box-here-:8443/enterprise/control/agent.php',
    'debug' => 0);
my $res = $plesk_client->webspace->get();
die "Subscriptions->get() failed!\n" . $res->error . "\n" if (!$res->is_success);

my @domains = @{$res->results()};
my $cnt = $#domains + 1;
for (my $idx = 0; $idx < $cnt; ++$idx) {
    my $domainId = $domains[$idx]{"id"};
    $domainId += 0; # toInt
    my $res = $plesk_client->dns->get('site-id' => $domainId);
    die "DNS->get() failed!\n" . $res->error . "\n" if (!$res->is_success);
    my %dns = %{ @{$res->results()}[0] };
    print Dumper(\%dns);
}
That is pretty much it.
Update (2nd Nov 2013)
Getting all of the domains requires a two-step process (the order does not matter): 1) get all the subscriptions (the "main" domains, kind of) and 2) get the other domains under the subscriptions.
In my Perl-code I do it like this:
# NOTE: This is from the above code.
# 1st round: get all the subscriptions.
# There we have the "main" domains.
$res = $plesk_client->webspace->get();
die "Subscriptions->get() failed!\n" . $res->error . "\n" if (!$res->is_success);

# NOTE: New one.
# 2nd round: get all the sites.
# There we have the "non-main" domains.
$res = $plesk_client->site->get();
die "Sites->get() failed!\n" . $res->error . "\n" if (!$res->is_success);
@domains = @{$res->results()};
In my case the $res hash is fed into an ExtractDomains() function to pick out the details I need. If only the name is required, then no further processing is necessary.
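My ExtractDomains() isn't published anywhere, but as a purely hypothetical sketch it does something along these lines; the field layout matches the XML responses shown above, but verify it against your own dumps:
# Hypothetical sketch only; the real ExtractDomains() differs.
# Picks (id, name) pairs out of an API::Plesk result object.
sub ExtractDomains {
    my ($res) = @_;
    my @out;
    foreach my $item (@{ $res->results() }) {
        push @out, {
            id   => $item->{id} + 0,
            name => $item->{data}{gen_info}{name},
        };
    }
    return @out;
}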