Migrating into Samba 4 - part 2
Wednesday, June 12. 2013
In my previous post about Samba 4, I got stuck on printers.
I had a weird warning in my log:
../source3/param/loadparm.c:3121(lp_do_parameter)
Ignoring unknown parameter "printer admin"
printer admin = root
The fix is to drop:
printer admin = root
from smb.conf. Instead, the privilege needs to be granted at run time, like this:
smbpasswd -a root
net rpc rights grant root SePrintOperatorPrivilege
First a password is required for the root user; I have one in LDAP, but for some reason a local password is required too. After that, the printer administration privilege is granted separately. That pretty much moves privilege handling out of text files entirely.
The other issue was sharing the printer to Windows, and it had an easy fix: CUPS had renamed the printer, so it was no longer available under its old name. I deleted the existing printer from Windows and added it again. A basic CUPS / Samba setup with cupsaddsmb does the trick.
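For reference, the relevant smb.conf bits in a basic CUPS / Samba printing setup look something like this; treat it as a sketch, since paths and share names vary by distribution:
printing = cups
printcap name = cups
load printers = yes

[printers]
comment = All printers
path = /var/spool/samba
printable = yes

[print$]
comment = Printer driver download area
path = /var/lib/samba/drivers
write list = root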
Syncro Soft <oXygen/> XML Editor - Avoid! Avoid! Avoid!
Tuesday, June 11. 2013
When editing XML, I always use a suitable editor for the job. Recently I've been using the oXygen XML editor. It has all the features I need, I like it, and naturally I bought a license. On the minus side, it is Java software, and lately I've been disliking Java very much.
A while ago they released a new version, 15.0. They duly informed me about it and suggested checking my upgrade availability. They have a handy reminder form for looking up what you last purchased from them.
There is one thing they fail to mention. If you purchase today, don't want to pay an extra $100 for a software upgrade service, and they release a new version tomorrow, you won't be eligible for a free upgrade. That's how they perceive you, a paying customer: a stupid lamb who failed to pay for a service they don't give any details about.
There is a huge number of software companies operating on a different basis. First you purchase their software. At that point they (typically) give you a 12-month upgrade period free of charge. Then they ask if you like their product enough to start paying for a service. You can agree or decline. If you agree, you'll be hoping they release often enough to get your money's worth. Alternatively, you can choose to purchase updates whenever you feel like it. The software company respects you and operates on an honest basis.
I'll be taking my business elsewhere. Any recommendations for an XML editor?
Samba 4 ldaps:// server functionality
Monday, June 10. 2013
My Fedora 19 project continues... This time, an attempt to get Samba working. Fedora upgraded Samba to version 4, and obviously my version 3 smb.conf had issues. See my earlier article about getting Samba to use LDAP as the user database backend.
The obvious problem was that it didn't work.
A log entry from the failure:
../source3/lib/smbldap.c:575(smbldap_start_tls)
Failed to issue the StartTLS instruction: Connect error
../source3/passdb/pdb_ldap.c:6531(pdb_ldapsam_init_common)
pdb_init_ldapsam: WARNING: Could not get domain info, nor add one to the domain. We cannot work reliably without it.
../source3/passdb/pdb_interface.c:177(make_pdb_method_name)
pdb backend ldapsam:ldap://my.server did not correctly init (error was NT_STATUS_CANT_ACCESS_DOMAIN_INFO)
I confirmed the existing settings:
passdb backend = ldapsam:ldap://my.server
ldap ssl = start tls
After a good while of reading manual pages, I attempted a fix:
passdb backend = ldapsam:ldaps://my.server
ldap ssl = off
The change yielded an improvement:
../source3/lib/smbldap.c:998(smbldap_connect_system)
failed to bind to server ldaps://my.server with dn="uid=root,ou=People,dc=my,dc=domain" Error: Can't contact LDAP server
TLS error -8179:Peer's Certificate issuer is not recognized.
../source3/passdb/pdb_ldap.c:6531(pdb_ldapsam_init_common)
pdb_init_ldapsam: WARNING: Could not get domain info, nor add one to the domain. We cannot work reliably without it.
../source3/passdb/pdb_interface.c:177(make_pdb_method_name)
pdb backend ldapsam:ldaps://my.server did not correctly init (error was NT_STATUS_CANT_ACCESS_DOMAIN_INFO)
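For what it's worth, the "issuer is not recognized" complaint comes from the OpenLDAP client library that Samba links against, and that library reads its TLS trust settings from /etc/openldap/ldap.conf. So that is the first place to check; with the cacerts directory this setup uses, the line would be:
TLS_CACERTDIR /etc/openldap/cacerts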
This, however, was an easy fix: a simple SELinux issue. To my amazement, the SELinux context does not change on a local Unix-socket request. When Samba makes the request to get user information, the LDAP daemon's certificate store directory needs to have the proper SELinux type. OpenLDAP itself does not make such checks and works fine regardless.
semanage fcontext -a -t cert_t /etc/openldap/cacerts
restorecon -R -v /etc/openldap/cacerts
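To verify that the relabeling took effect, ls can show the context (assuming SELinux-aware coreutils):
ls -dZ /etc/openldap/cacerts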
Also, to allow access to home directories:
setsebool -P samba_enable_home_dirs 1
After all this, I was happy to have my Samba shares working again. CUPS printing still doesn't work, but I'll fix that some day.
Migrating access control into Apache 2.4
Thursday, June 6. 2013
Fedora 19 ships with Apache 2.4. After the install completed, it very soon became obvious that my previous Apache 2.2 setup didn't work without changes. At the time I just took a bigger hammer and kept banging until it started. Needless to say, it didn't have all my virtual hosts and services.
Now I spent a while getting all the necessary things working. The most common issue I had was:
authz_core:error AH01630: client denied by server configuration
This pretty weird error is caused by the fact that, by default, all access is denied, and the classic:
Order allow,deny
Allow from all
... does not actually do anything on 2.4; instead, it needs to be written as:
Require all granted
The change in configuration actually makes the setup much clearer, but the obvious problem is that it is not compatible with previous versions. In conf.modules.d/00-base.conf there is a directive to load the compatibility module:
LoadModule access_compat_module modules/mod_access_compat.so
I commented the line out and started running my Apache with the new-style setup only.
My second biggest issue was with the services I run that allow access from my own LAN but require LDAP authentication when the traffic does not originate from my LAN. An Apache 2.2 example would be:
Order Deny,Allow
Deny from all
Allow from my.lan
Allow from 2001:1234:5678::/64
AuthType Basic
AuthName www.my.lan
AuthBasicProvider ldap
AuthLDAPURL "ldap://server:389/ou=People,dc=example,dc=com?uid?sub?(objectClass=*)"
Require valid-user
Satisfy Any
The solution is very simple: just list the requirements. By default, Apache 2.4 treats bare Require directives as alternatives (an implicit <RequireAny>), so satisfying any one of them grants access:
Require host my.lan
Require ip 2001:1234:5678::/64
AuthType Basic
AuthName www.my.lan
AuthBasicProvider ldap
AuthLDAPURL "ldap://server:389/ou=People,dc=example,dc=com?uid?sub?(objectClass=*)"
Require valid-user
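The same rules can also be written out with an explicit authorization container, which is equivalent to the list above but reads more clearly once the requirements pile up:
<RequireAny>
    Require host my.lan
    Require ip 2001:1234:5678::/64
    Require valid-user
</RequireAny>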
Otherwise my migration was pretty smooth.
Proftpd setup on Fedora 19
Wednesday, June 5. 2013
I needed to transfer some really big files (10+ GiB) between a couple of servers. Since I was not in a hurry, I decided to go with FTP. It performs very well for large file transfers; see the File Transfer Protocol Performance Study for EUMETSAT Meteorological Data Distribution by students of the Institute of Mathematics and Computer Science, University of Latvia. Only rsync could do better, and that is when the target is to transfer a large number of files instead of a small number of large files. In reality, in all the time it took me to set up the servers, I'd have finished transferring my files with any protocol. Really, any. But since I'm expecting to need this setup again later, I went ahead with it.
Anyway, I chose to go with an FTP daemon, and since my Fedora 19 beta has ProFTPD in its built-in repo, I went with that. Initial problems:
- It didn't work well with IPv6 (read: not even the simplest things you'd expect to work). It took me 3 hours to figure this out. Fix: I used IPv4 addresses instead of FQDNs, and everything started to work.
- No anonymous FTP access
- No anonymous FTP uploads
IPv6 issues
There is plenty of code like this:
pr_log_debug(DEBUG0,
"Unable to handle PASV for IPv6 address '%s', rejecting command",
pr_netaddr_get_ipstr(session.c->local_addr));
pr_response_add_err(R_501, "%s: %s", cmd->argv[0], strerror(xerrno));
So after I found out about that, I just stopped using IPv6 with it.
Anonymous FTP access
FTP isn't a secure protocol. It was by 1980s standards, but on the 2013 Internet... well, no. So my best bet was not to rely on its security at all! I'll place my security in the firewall, use anonymous logins, and reduce the attack surface by shutting the daemon down when I'm not expecting to need it.
Getting anonymous FTP to work turned out to be semi-tricky. The daemon needs a -DANONYMOUS_FTP define in its sysconfig file. I also needed to redirect the anonymous FTP root directory to a dedicated partition instead of the out-of-the-box /var/ftp, and my enforcing SELinux didn't like that. I had an appropriate owner and group setup for the directories and files, but it turned out that my mount directory had a weird SELinux type; after I went with the pretty generic var_t as the directory type, the anonymous user was able to log in. The next thing was to make sure the actual FTP content had the SELinux type public_content_t so that access to directories and files would work. The final thing was to convince SELinux to allow the use of the appropriate TCP ports for the FTP daemon:
setsebool -P ftpd_connect_all_unreserved=1
After that I had a somewhat working anonymous FTP box.
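Spelled out, the pieces described above look roughly like this; the /srv/ftp root is my assumption here, standing in for the dedicated partition:
# /etc/sysconfig/proftpd
PROFTPD_OPTIONS="-DANONYMOUS_FTP"

# Label the relocated FTP content so the daemon may serve it
semanage fcontext -a -t public_content_t "/srv/ftp(/.*)?"
restorecon -R -v /srv/ftp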
Passive FTP from Windows
I didn't have any fancy software for testing the client side. All I had was the trustworthy ftp.exe in a Windows PowerShell. However, it operates only in active FTP mode, which in my opinion is completely obsolete: it simply does not work at all with a proper firewall setup, so I'd better use passive FTP mode. Well then... how to lure the ancient FTP client into using it? It wasn't obvious or easy, but luckily somebody else had the same problem and solved it. The solution is to use the QUOTE command to send the PASV command required by the FTP protocol, like this:
Connected to 192.168.0.1.
220 FTP Server ready.
User (192.168.0.1:(none)): ftp
331 Anonymous login ok, send your complete email address as your password
Password:
230-
*** Welcome to this anonymous ftp server! ***
You are user 1 out of a maximum of 10 authorized anonymous logins.
The current time here is Wed Jun 05 15:25:31 2013.
If you experience any problems here, contact : root@localhost
230 Anonymous login ok, restrictions apply.
ftp> quote pasv
227 Entering Passive Mode (192,168,0,1,236,242).
That made passive mode work ok.
Allowing anonymous uploads
The last bit was the trickiest; it was almost impossible to give ProFTPD what it wanted. To my surprise it wasn't about directory permissions or SELinux. It was simply a configuration issue with <Limit> acting stupidly. Out of the box, the config file apparently allows anonymous uploads. The problem is that it doesn't work. There is a line like this:
<Directory uploads/*>
But it should be like this:
<Directory /uploads>
I don't know why it must be like that, but after tinkering with it for a very long time, that turned out to be the key to my success. I also changed <Limit READ> into <Limit LIST READ> to prevent directory listings of the uploads directory.
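For illustration, the uploads block ended up looking roughly like this; a sketch along the lines of the stock anonymous config, not a verbatim copy:
<Directory /uploads>
  <Limit LIST READ>
    DenyAll
  </Limit>
  <Limit STOR>
    AllowAll
  </Limit>
</Directory>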
That concluded my setup. Just opening the suitable IP addresses and TCP ports 20 and 21 made my files fly.
Steam games installing DirectX on every start
Thursday, May 30. 2013
Steam has this reputation of games re-installing 3rd-party prerequisites every single time you launch them. Stupid, huh? Find discussion about that on Steam's forums.
The official Steam explanation is that there are so many versions of the D3DX library. Well, in my mind that doesn't really answer the question of why there is a re-install on every game launch; you'd think the library needs to be installed only once. Luckily somebody found a fix for it. The trick is to edit or delete the .VDF file, which is pretty much the install script for a game.
Here is an example 34870_install.vdf:
"InstallScript"
{
"Run Process"
{
"DirectX"
{
"process 1" "%INSTALLDIR%\\Redist\\dxwebsetup.exe"
"Description" "Microsoft DirectX Redist Package"
"NoCleanUp" "1"
}
}
"Run Process"
{
"VCRedist"
{
"process 1" "%INSTALLDIR%\\Redist\\vcredist_x86_2010_sp1.exe"
"Description" "Microsoft VC Redist Package"
"NoCleanUp" "1"
}
}
}
It pretty much states that both the DirectX and Visual C++ redistributable packages are to be installed every single time the game launches. A simple rename/delete of the file fixes the problem: the game starts fine and won't do anything stupid.
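For example, from a Windows command prompt; the steamapps path is an assumption here, so locate the file first:
cd "C:\Program Files (x86)\Steam\steamapps"
ren 34870_install.vdf 34870_install.vdf.disabled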
The issue has been reported by multiple people over the past 3 years. Shame on you, Steam!
Parallels Plesk Panel 11 hangs on backup
Monday, May 27. 2013
For some unknown reason a customer's backup job hung. A ps axuww looked like this after a failed kill 3150 attempt:
3150 ? Z 0:00 [mysqldump] <defunct>
There was no way of getting rid of it other than a reboot. That didn't help, though: the backup job automatically restarted and did the same thing again.
All of the customer's services were disabled, because the "Suspend domain until backup task is completed" setting was enabled. At that point I was really screwed: the backup job had turned undead and all of the customer's services were out of action. Nice going! The message given by the Parallels control panel was: "The subscription is temporarily suspended until the backup or restore process is complete".
The Parallels forums had an article about unsuspending a domain. I did the same thing for a subscription; apparently at the time that was written, in Plesk 9, there were no subscriptions yet. So I ran this:
/usr/local/psa/bin/subscription -u customer's domain name here.com -status enabled
It helped! It un-suspended the services. The backup is not working yet, but the services are running again. I need to investigate the backup issue further and will get back to it.
Playstation 3 failing to load updates
Friday, May 24. 2013
Something really weird happened: my Playstation 3 refused to load updates. I retried at least 50 times, but it failed to load a measly 24 MiB update from Sony.
A quick Google search reveals that plenty of people are suffering from this problem.
In my case the fix was to change the wired connection to wireless. I have no idea what the difference is, but for some reason it did the trick.
Getting updates onto the PS3 is rather complex, as observed from the network traffic. An update is an HTTP request, and the download is split into four parallel loads. Example:
GET /tppkg/np/NPEB00874/NPEB00874_T8/0bbab4e7b137739f/EP4350-NPEB00874_00-NETFLIXRIGEL01EU-A0104-V0100-PE.pkg?product=0085&downloadId=be00b7cc&q=2058b9eb8ab5f2492012c6c5b5a73320d1bde7f004d5cb6734fa2ebf322b971e&serverIpAddr=87.248.207.254&downloadType=fm HTTP/1.1
Host: b0.ww.np.dl.playstation.net
Range: bytes=6126400-12252799
I don't understand why they do that; the Range header shows each connection fetching its own slice of the file. Obviously there is some sort of bug in the 4.41 firmware.
Update 4th July 2013:
The problem still exists in the 4.46 firmware. I had major trouble updating, since my PS3 didn't get 100% of the firmware. The error message reads: "An error occurred during the download operation. A connection to the server cannot be established (80710723)".
Linux 3.9 SCSI working with Hyper-V
Thursday, May 23. 2013
Earlier I wrote about how I found out the hard way that modern Linuxes don't work well with Hyper-V. I also wrote about Microsoft taking the initiative to fix this.
I said in my earlier post that Linux 3.9 had fixes for the storvsc driver, and finally I had time to confirm that. It works perfectly! On my Arch Linux installation, all I had to do was run pacman -Syu, and I got the Linux 3.9.3 kernel, which after a reboot works very well with my Windows 8 Hyper-V. No more of the previously described block-device hang messages.
Mac OS X encrypted Timemachine USB-drive
Wednesday, May 22. 2013
I got a new iMac and wanted to use a USB drive as its Time Machine backup destination. When it comes to easily movable backup drives, my policy is to encrypt them: if a drive is lost, stolen, or misplaced, other people don't get access to my data.
When OS X sees an empty USB drive and no Time Machine is set up yet, it asks if you'd like to use the newly found drive as the Time Machine backup destination, which is nice and user-friendly. During Time Machine setup there is a checkbox, "Encrypt drive". I checked it, but whatwhatwhat?! It refused to work. Error message:
There was an error preparing the Time Machine backup disk "TimeMachine" and an additional explanation: The given file system is not supported on Core Storage.
Well... that's strange. I had no idea what Core Storage is (it turns out to be Apple's logical volume manager, on which the encryption runs), or why an Apple-formatted HFS+ partition would not be a valid (encrypted) backup destination.
Here is what I did. My drive was already formatted when I got it:
$ diskutil list disk3
/dev/disk3
#: TYPE NAME SIZE IDENTIFIER
0: GUID_partition_scheme *1.0 TB disk3
1: EFI 209.7 MB disk3s1
2: Microsoft Basic Data WD Backup 1000.0 GB disk3s2
That is a 1 TB Western Digital drive with an NTFS partition on it. So I had to delete the partition and create a more suitable one:
$ diskutil partitionDisk disk3 1 GPT HFS+ TimeMachine 1T
Started partitioning on disk3
Unmounting disk
Creating the partition map
Waiting for the disks to reappear
Formatting disk3s2 as Mac OS Extended with name TimeMachine
Initialized /dev/rdisk3s2 as a 931 GB HFS Plus volume
Mounting disk
Finished partitioning on disk3
/dev/disk3
#: TYPE NAME SIZE IDENTIFIER
0: GUID_partition_scheme *1.0 TB disk3
1: EFI 209.7 MB disk3s1
2: Apple_HFS TimeMachine 999.9 GB disk3s2
This is the partition setup that refuses to work.
After a couple of hours of Googling I found the solution:
$ diskutil enableJournal disk3s2
Journaling has been enabled for volume TimeMachine on disk3s2
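In hindsight, asking for a journaled filesystem up front would have avoided the detour, since diskutil accepts JHFS+ (journaled HFS+) as the format name:
$ diskutil partitionDisk disk3 1 GPT JHFS+ TimeMachine 1T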
I seem to have misplaced the links already, sorry. Anyway, the problem was that I hadn't enabled journaling on the partition. I can understand why that is a requirement for Time Machine: a journaled partition adds robustness to write operations. The real questions are:
- Why does creating an HFS+ partition not enable journaling automatically? For comparison, on Linux mkfs for ext2 leaves you without a journal, while ext3 and ext4 create one by default.
- Why doesn't Time Machine state the actual reason? Saying that Core Storage is not supported is a bit vague!
Mac OS X terminal UTF-8 over SSH
Tuesday, May 21. 2013
Something weird happens with OS X Terminal's locale settings. Whenever I open an SSH connection to one of my Linux boxes, it refuses to properly set up a UTF-8 locale.
The session goes something like this. Checking the locale settings in the OS X Terminal:
$ locale
LANG=
LC_COLLATE="C"
LC_CTYPE="UTF-8"
LC_MESSAGES="C"
LC_MONETARY="C"
LC_NUMERIC="C"
LC_TIME="C"
LC_ALL=
Opening an SSH connection and checking the locale settings on the Linux end:
~> locale
locale: Cannot set LC_CTYPE to default locale: No such file or directory
locale: Cannot set LC_ALL to default locale: No such file or directory
LANG=en_US.UTF-8
LC_CTYPE=UTF-8
LC_NUMERIC="en_US.UTF-8"
LC_TIME="en_US.UTF-8"
LC_COLLATE="en_US.UTF-8"
LC_MONETARY="en_US.UTF-8"
LC_MESSAGES="en_US.UTF-8"
LC_PAPER="en_US.UTF-8"
LC_NAME="en_US.UTF-8"
LC_ADDRESS="en_US.UTF-8"
LC_TELEPHONE="en_US.UTF-8"
LC_MEASUREMENT="en_US.UTF-8"
LC_IDENTIFICATION="en_US.UTF-8"
LC_ALL=
Aw, come on! Something went wrong.
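Here is what happens: Terminal exports LC_CTYPE="UTF-8", the ssh client forwards it (the stock ssh_config typically contains SendEnv LANG LC_*), and Linux has no bare "UTF-8" locale, so setting it fails on the remote end. An alternative fix is therefore to stop the Linux end from accepting the variable; a sketch for /etc/ssh/sshd_config:
# Comment out (or narrow) the distribution's AcceptEnv entries
# so forwarded locale variables are ignored:
#AcceptEnv LANG LC_*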
The fix on the OS X end is to not set the environment variables at all. In Terminal's preferences there is a setting, "Set locale environment variables on startup", which needs to be unchecked; it is checked out of the box. Then it yields:
$ locale
LANG=
LC_COLLATE="C"
LC_CTYPE="C"
LC_MESSAGES="C"
LC_MONETARY="C"
LC_NUMERIC="C"
LC_TIME="C"
LC_ALL=
LC_CTYPE is no longer set. Over an SSH connection to Linux, it now yields:
~> locale
LANG=en_US.UTF-8
LC_CTYPE="en_US.UTF-8"
LC_NUMERIC="en_US.UTF-8"
LC_TIME="en_US.UTF-8"
LC_COLLATE="en_US.UTF-8"
LC_MONETARY="en_US.UTF-8"
LC_MESSAGES="en_US.UTF-8"
LC_PAPER="en_US.UTF-8"
LC_NAME="en_US.UTF-8"
LC_ADDRESS="en_US.UTF-8"
LC_TELEPHONE="en_US.UTF-8"
LC_MEASUREMENT="en_US.UTF-8"
LC_IDENTIFICATION="en_US.UTF-8"
LC_ALL=
Now there are no error messages. The next thing to do is to find out who is responsible for this and whose job it is to fix it.
Google doing the same thing as Skype
Friday, May 17. 2013
The H-Security blog wrote about Skype reading all the messages you type: they had seen weird traffic on their website after posting its URL into Skype. Read all about it in their blog entry "Skype with care – Microsoft is reading everything you write". There is also Ed Bott's article about how the H-Security guys got it wrong the first time, meaning that they don't check your links.
Anyway, this was absolutely something I had to check out. A perfect candidate for this is our development lab test server. In the testing phase it is on the public Internet so that all parties can test our upcoming version. There are 0 links to the site, but for some reason Google has indexed a number of pages from there. The information I got is from http://www.wholinks2me.com/
What's strange here is that I changed the domain names a couple of weeks ago, to make it clearer which version we're testing. Also, knowing our people, I'm pretty sure nobody publishes the test-server links on any of their websites; I know I don't. So the question arises: where did Google get the new address from?
When I changed the domain, I did send an e-mail about it. Yep, you got it right: our team is using Google Mail. For the purpose of full disclosure: the site also carries Google AdSense advertisements and the Google Translate tool.
I investigated 10 days' worth of web-server logs. In them there are 18 distinct IP addresses from which the server was accessed. A short investigation of all 18 addresses revealed that 8 of them are easily explained: they are home and office IP addresses of our team. What was strange was that 5 of them were from Amazon EC2 and Google networks. I have no reasonable explanation for why they are accessing my site, so it is very easy to come up with a couple of conspiracy theories when something like this happens.
Upcoming: Proper Hyper-V guest support for Linux
Thursday, May 16. 2013
Earlier I wrote about Linux 3.8 SCSI failing to operate under Hyper-V. Finally Microsoft has done something about it. After all, they are the biggest contributor to the Linux kernel when it comes to running Linux as a Hyper-V guest.
There is a patch set fixing a number of issues, for example: "Update the storage protocol to the win8 level." This means they have seriously investigated the problems and fixed them. The patch set also contains some new features; those, however, are directed towards datacenters, not nerds like me running a Linux box in the corner.
Great work by Mr. Srinivasan of Microsoft! Thank you for that.
Dojo 1.8 / 1.9 on Zend Framework 1
Wednesday, May 15. 2013
I'm a big Dojo fan. Its loading system makes the front end really fast, and Dojo integrates well with Zend Framework.
ZF 1 is being phased out, but I haven't found the time to migrate to version 2 yet. Meanwhile Dojo / Dijit / Dojox keep getting updates, but those are not being incorporated into ZF 1.
Here is my Zend Framework 1 patch to make the Dijit components AMD-loading compatible. It makes the Zend Framework Dijit modules use slash notation in paths. Especially in Dojo 1.9, using dots will yield errors like:
mixin #1 is not a callable constructor.
or
base class is not a callable constructor.
The errors vary depending on what you're calling; pretty much, your JavaScript ceases to execute. The problem comes from the fact that Dijit does not function exactly the same way it did before 1.9.
Failing example:
<div data-dojo-type="dijit.MenuSeparator"></div>
Working example:
<div data-dojo-type="dijit/MenuSeparator"></div>
The difference is minimal, but it makes everything tick again.
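The patch itself boils down to one idea. A sketch of it in PHP (not the actual diff): wherever Zend_Dojo writes a module id into data-dojo-type, convert the dot notation into slashes:
// e.g. "dijit.MenuSeparator" -> "dijit/MenuSeparator"
$dojoType = str_replace('.', '/', $dojoType);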
Serendipity commenting with proxy
Tuesday, May 14. 2013
It seems that out of the box, Serendipity does not support the X-Forwarded-For header. This means that with any proxy in between, the original client information is lost.
Here is my suggested patch to fix the issue:
--- serendipity/include/functions_comments.inc.php.orig 2013-01-25 14:10:03.058973150 +0200
+++ serendipity/include/functions_comments.inc.php 2013-05-14 11:34:35.302389894 +0300
@@ -782,7 +782,13 @@
$title = serendipity_db_escape_string(isset($commentInfo['title']) ? $commentInfo['title'] : '');
$comments = $commentInfo['comment'];
- $ip = serendipity_db_escape_string(isset($commentInfo['ip']) ? $commentInfo['ip'] : $_SERVER['REMOTE_ADDR']);
+ $ip = serendipity_db_escape_string(isset($commentInfo['ip']) ?
+ $commentInfo['ip'] :
+ (
+ isset($_SERVER['HTTP_X_FORWARDED_FOR']) ?
+ $_SERVER['HTTP_X_FORWARDED_FOR'] :
+ $_SERVER['REMOTE_ADDR']
+ ));
$commentsFixed = serendipity_db_escape_string($commentInfo['comment']);
$name = serendipity_db_escape_string($commentInfo['name']);
$url = serendipity_db_escape_string($commentInfo['url']);
This works on 1.6.2 and 1.7.0. Keep in mind that X-Forwarded-For is client-supplied, so the value is only trustworthy when a proxy you control sets it.