Steam games installing DirectX on every start
Thursday, May 30. 2013
Steam has a reputation for games re-installing third-party prerequisites every single time you launch them. Stupid, huh? There is plenty of discussion about it on Steam's forums.
The official Steam explanation is that there are so many versions of the D3DX library. Well, in my mind that doesn't really answer the question of why there is a re-install on every game launch; you'd think the library needs to be installed only once. Luckily somebody figured out a fix: the trick is to edit or delete the .VDF file, which is essentially the install script for a game.
Here is an example 34870_install.vdf:
"InstallScript"
{
"Run Process"
{
"DirectX"
{
"process 1" "%INSTALLDIR%\\Redist\\dxwebsetup.exe"
"Description" "Microsoft DirectX Redist Package"
"NoCleanUp" "1"
}
}
"Run Process"
{
"VCRedist"
{
"process 1" "%INSTALLDIR%\\Redist\\vcredist_x86_2010_sp1.exe"
"Description" "Microsoft VC Redist Package"
"NoCleanUp" "1"
}
}
}
It pretty much states that both the DirectX and Visual C++ redistributable packages are to be installed every single time the game launches. Simply renaming or deleting the file fixes the problem: the game starts without issues and won't do anything stupid.
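The rename-instead-of-delete workaround can be sketched like this. The paths here are hypothetical (a real Steam library usually lives somewhere like ~/Steam/steamapps or ~/.steam/steam/steamapps); the sketch runs in a scratch directory so nothing real is touched:

```shell
# Hypothetical paths: substitute your real steamapps directory and AppID.
# Demonstrated in a scratch directory so nothing real is touched.
STEAMAPPS=$(mktemp -d)
APPID=34870
touch "$STEAMAPPS/${APPID}_install.vdf"   # stand-in for the real install script
# Rename rather than delete, so the change is easy to undo if an update breaks:
mv "$STEAMAPPS/${APPID}_install.vdf" "$STEAMAPPS/${APPID}_install.vdf.disabled"
ls "$STEAMAPPS"
```

With the .vdf renamed, Steam has no install script to execute, so no prerequisite re-install happens on launch.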
The issue has been reported by multiple people for the past three years. Shame on you, Steam!
Parallels Plesk Panel 11 hangs on backup
Monday, May 27. 2013
For some unknown reason a customer's backup job hung. A ps axuww looked like this after a failed kill 3150 attempt:
3150 ? Z 0:00 [mysqldump] <defunct>
There was no way of getting rid of it other than a reboot. Even that didn't help, though: the backup job automatically restarted and did the same thing again.
All of the customer's services were disabled, because the Suspend domain until backup task is completed setting was enabled. At that point I was really screwed: the backup job had turned undead and all of the customer's services were out of action. Nice going! The message given by the Parallels Panel control panel was: "The subscription is temporarily suspended until the backup or restore process is complete".
The Parallels forums had an article about unsuspending a domain; I did the same thing for a subscription. Apparently, at the time that article was written for Plesk 9, subscriptions did not exist yet. So I ran this:
/usr/local/psa/bin/subscription -u customer's domain name here.com -status enabled
It helped! It un-suspended the services. The backup is still not working, but the services are running again. I need to investigate the backup issue further and will get back to it.
Playstation 3 failing to load updates
Friday, May 24. 2013
Something really weird happened: my Playstation 3 refused to load updates. I retried at least 50 times, but it failed to download a measly 24 MiB update from Sony.
A quick Google search reveals that plenty of people are suffering from this problem.
In my case the fix was to change the wired connection to wireless. I have no idea what the difference is, but for some reason it did the trick.
Getting updates onto the PS3 is rather complex, as observed from the network traffic. An update is an HTTP request, and the download is split into four parallel range requests. Example:
GET /tppkg/np/NPEB00874/NPEB00874_T8/0bbab4e7b137739f/EP4350-NPEB00874_00-NETFLIXRIGEL01EU-A0104-V0100-PE.pkg?product=0085&downloadId=be00b7cc&q=2058b9eb8ab5f2492012c6c5b5a73320d1bde7f004d5cb6734fa2ebf322b971e&serverIpAddr=87.248.207.254&downloadType=fm HTTP/1.1
Host: b0.ww.np.dl.playstation.net
Range: bytes=6126400-12252799
I don't understand why they do that. Obviously there is some sort of bug in the 4.41 firmware.
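For illustration, here is roughly how a 24 MiB payload would split into 4 parallel byte ranges in the style of the Range header above. The exact chunking scheme Sony uses is an assumption; this is just the arithmetic:

```shell
# Sketch: split a 24 MiB download into 4 byte ranges (chunking is assumed).
TOTAL=$((24 * 1024 * 1024))        # 25165824 bytes
PARTS=4
CHUNK=$(( TOTAL / PARTS ))         # 6291456 bytes per part
i=0
while [ $i -lt $PARTS ]; do
    START=$(( i * CHUNK ))
    END=$(( START + CHUNK - 1 ))
    # Last part runs to the end of the file:
    [ $i -eq $(( PARTS - 1 )) ] && END=$(( TOTAL - 1 ))
    echo "Range: bytes=${START}-${END}"
    i=$(( i + 1 ))
done
```

Interestingly, the 6126400-byte chunk in the captured header suggests the real payload was a bit smaller than the advertised 24 MiB.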
Update 4th July 2013:
Problem still exists in 4.46 firmware. I had major trouble updating, since my PS3 didn't get 100% of the firmware. The error message reads: "An error occurred during the download operation. A connection to the server cannot be established (80710723)".
Linux 3.9 SCSI working with Hyper-V
Thursday, May 23. 2013
Earlier I wrote about finding out the hard way that modern Linuxes don't work well with Hyper-V, and about Microsoft taking the initiative to fix this.
I said in my earlier post that Linux 3.9 had fixes for the storvsc driver, and finally I had time to confirm that. It works perfectly! In my Arch Linux installation all I had to do was run pacman -Syu to get the Linux 3.9.3 kernel, which, after a reboot, works very well with my Windows 8 Hyper-V. None of the previously described block device hang messages.
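A quick way to check whether a box is already running a kernel with the 3.9 storvsc fixes is to compare versions with sort -V (the 3.9 threshold comes from this post; the comparison helper itself is just a generic sketch):

```shell
# Check that the running kernel is at least 3.9 (where the storvsc fixes landed).
REQUIRED=3.9
RUNNING=$(uname -r | cut -d- -f1)
# sort -V orders version strings; if REQUIRED sorts first, RUNNING is newer or equal.
OLDEST=$(printf '%s\n%s\n' "$REQUIRED" "$RUNNING" | sort -V | head -n 1)
if [ "$OLDEST" = "$REQUIRED" ]; then
    echo "kernel $RUNNING has the storvsc fixes"
else
    echo "kernel $RUNNING predates $REQUIRED, expect Hyper-V storage trouble"
fi
```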
Mac OS X encrypted Timemachine USB-drive
Wednesday, May 22. 2013
I got a new iMac and wanted to use a USB drive as its Time Machine backup destination. When it comes to easily movable backup drives, my policy is to encrypt them: if a drive is lost/stolen/misplaced, other people don't get to access my data.
When OS X sees an empty USB drive and no Time Machine is set up yet, it asks if you'd like to use the newly found drive as the Time Machine backup destination, which is nice and user-friendly. During Time Machine setup there is a checkbox, "Encrypt drive". I checked it, but whatwhatwhat?! It refused to work. Error message:
There was an error preparing the Time Machine backup disk "TimeMachine" and an additional explanation: The given file system is not supported on Core Storage.
Well... that's strange. I have no idea what Core Storage is, or why an Apple-formatted HFS+ partition would not be a valid (encrypted) backup destination.
Here is what I did. My drive was already formatted when I got it:
$ diskutil list disk3
/dev/disk3
#: TYPE NAME SIZE IDENTIFIER
0: GUID_partition_scheme *1.0 TB disk3
1: EFI 209.7 MB disk3s1
2: Microsoft Basic Data WD Backup 1000.0 GB disk3s2
It is a 1 TB Western Digital drive with an NTFS partition on it, so I had to delete the partition and create a more suitable one:
$ diskutil partitionDisk disk3 1 GPT HFS+ TimeMachine 1T
Started partitioning on disk3
Unmounting disk
Creating the partition map
Waiting for the disks to reappear
Formatting disk3s2 as Mac OS Extended with name TimeMachine
Initialized /dev/rdisk3s2 as a 931 GB HFS Plus volume
Mounting disk
Finished partitioning on disk3
/dev/disk3
#: TYPE NAME SIZE IDENTIFIER
0: GUID_partition_scheme *1.0 TB disk3
1: EFI 209.7 MB disk3s1
2: Apple_HFS TimeMachine 999.9 GB disk3s2
This is the partition setup that refuses to work.
After a couple of hours of googling I found the solution:
$ diskutil enableJournal disk3s2
Journaling has been enabled for volume TimeMachine on disk3s2
I seem to have misplaced the links already, sorry. Anyway, the problem was that I hadn't enabled journaling on the partition. I can understand why that is a requirement for Time Machine: a journaled partition adds robustness to write operations. The real questions are:
- Why does creating an HFS+ partition not enable journaling automatically? For comparison, on Linux a plain ext2 filesystem has no journal, while ext3 and ext4 are journaled by default.
- Why doesn't Time Machine state the real reason? Saying that Core Storage is not supported is a bit vague!
Mac OS X terminal UTF-8 over SSH
Tuesday, May 21. 2013
Something weird happens with the OS X Terminal locale settings. Whenever I open an SSH connection to one of my Linux boxes, it refuses to properly set up a UTF-8 locale.
The session goes something like this. Checking the locale settings in the OS X terminal:
$ locale
LANG=
LC_COLLATE="C"
LC_CTYPE="UTF-8"
LC_MESSAGES="C"
LC_MONETARY="C"
LC_NUMERIC="C"
LC_TIME="C"
LC_ALL=
Opening an SSH connection and checking the locale settings on the Linux end:
~> locale
locale: Cannot set LC_CTYPE to default locale: No such file or directory
locale: Cannot set LC_ALL to default locale: No such file or directory
LANG=en_US.UTF-8
LC_CTYPE=UTF-8
LC_NUMERIC="en_US.UTF-8"
LC_TIME="en_US.UTF-8"
LC_COLLATE="en_US.UTF-8"
LC_MONETARY="en_US.UTF-8"
LC_MESSAGES="en_US.UTF-8"
LC_PAPER="en_US.UTF-8"
LC_NAME="en_US.UTF-8"
LC_ADDRESS="en_US.UTF-8"
LC_TELEPHONE="en_US.UTF-8"
LC_MEASUREMENT="en_US.UTF-8"
LC_IDENTIFICATION="en_US.UTF-8"
LC_ALL=
Aow, come on! Something went wrong. The reason is that ssh forwards the LC_* variables to the server (SendEnv in ssh_config, AcceptEnv in sshd_config), and the bare LC_CTYPE=UTF-8 that OS X sends is not a valid locale name on Linux.
The fix on the OS X end is to not set the environment variables at all. In the Terminal settings, there is:
The setting Set locale environment variables on startup needs to be UNset; it is checked out of the box. With it unset, locale yields:
$ locale
LANG=
LC_COLLATE="C"
LC_CTYPE="C"
LC_MESSAGES="C"
LC_MONETARY="C"
LC_NUMERIC="C"
LC_TIME="C"
LC_ALL=
Now LC_CTYPE is not set. Over an SSH connection to Linux, locale yields:
~> locale
LANG=en_US.UTF-8
LC_CTYPE="en_US.UTF-8"
LC_NUMERIC="en_US.UTF-8"
LC_TIME="en_US.UTF-8"
LC_COLLATE="en_US.UTF-8"
LC_MONETARY="en_US.UTF-8"
LC_MESSAGES="en_US.UTF-8"
LC_PAPER="en_US.UTF-8"
LC_NAME="en_US.UTF-8"
LC_ADDRESS="en_US.UTF-8"
LC_TELEPHONE="en_US.UTF-8"
LC_MEASUREMENT="en_US.UTF-8"
LC_IDENTIFICATION="en_US.UTF-8"
LC_ALL=
Now there are no error messages. The next thing to do is to try to find whoever is responsible, whose job it should be to fix this.
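An alternative server-side workaround, assuming the Linux box actually has en_US.UTF-8 installed: map the bare charmap name that OS X sends into a full locale name. Something like this could live in ~/.bashrc on the server (the first line only simulates the incoming environment for demonstration):

```shell
LC_CTYPE=UTF-8        # simulate what the OS X Terminal sends over SSH
# "UTF-8" alone is a charmap, not a valid Linux locale name; rewrite it
# into a full locale the server actually has (an assumption here).
case "$LC_CTYPE" in
    UTF-8) LC_CTYPE=en_US.UTF-8 ;;
esac
export LC_CTYPE
echo "LC_CTYPE is now $LC_CTYPE"
```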
Google doing the same thing as Skype
Friday, May 17. 2013
The H-Security blog wrote about Skype reading all the messages you type: they had seen weird traffic on their website after posting its URL into Skype. Read all about it in their blog entry "Skype with care – Microsoft is reading everything you write". There is also Ed Bott's article arguing that the H-Security guys got it wrong the first time, and that Microsoft is merely checking the links you send rather than reading everything.
Anyway, this was absolutely something I had to check out. A perfect candidate for this is our development lab test server. During the testing phase it is on the public Internet so that all parties can test our upcoming version. There are zero links to the site, but for some reason Google has indexed a number of pages from it. That information comes from http://www.wholinks2me.com/
What's strange here is that I changed the domain names a couple of weeks ago, to make it more exact which version we are testing. Also, knowing our people, I'm pretty sure that nobody publishes the test-server links on any of their websites; I know I don't. So the question arises: where did Google get the new address from?
When I changed the domain, I did send an e-mail about it. Yep, you got it right: our team is using Google Mail. For the purpose of full disclosure: the site also carries Google AdSense advertisements and the Google Translate tool.
I investigated 10 days' worth of web-server logs, which showed 18 distinct IP addresses from which the server was accessed. A short investigation revealed that 8 of them are easily explained: they are home and office IP addresses of our team. What was strange is that 5 of them were from Amazon EC2 and Google networks. I have no reasonable explanation for why they are accessing my site, so it is very easy to come up with a couple of conspiracy theories when something like this happens.
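The log review itself is straightforward; roughly like this, for combined-format access logs (the sample lines below are made up for illustration):

```shell
# List distinct client IPs from an access log (field 1 in combined format).
# The sample data is fabricated; point $LOG at the real log file instead.
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
203.0.113.7 - - [17/May/2013:10:00:01 +0300] "GET / HTTP/1.1" 200 512
198.51.100.23 - - [17/May/2013:10:00:05 +0300] "GET /beta HTTP/1.1" 200 1024
203.0.113.7 - - [17/May/2013:10:01:44 +0300] "GET /index HTTP/1.1" 200 2048
EOF
awk '{ print $1 }' "$LOG" | sort -u
```

Each unique address can then be fed to whois or reverse DNS to see whose network it belongs to.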
Upcoming: Proper Hyper-V guest support for Linux
Thursday, May 16. 2013
Earlier I wrote about Linux 3.8 SCSI failing to operate under Hyper-V. Finally Microsoft has done something about it. After all, they are the biggest contributor to the Linux kernel when it comes to running it as a Hyper-V guest.
There is a patch set fixing a number of issues, for example: "Update the storage protocol to the win8 level." This means that they have seriously investigated the problems and fixed them. The patch set also contains some new features; those, however, are directed towards datacenters, not nerds like me running a Linux box in the corner.
Great work from Mr. Srinivasan of Microsoft! Thank you for that.
Dojo 1.8 / 1.9 on Zend Framework 1
Wednesday, May 15. 2013
I'm a big Dojo fan. Its loading system makes it really fast on the front end, and Dojo integrates well with Zend Framework.
ZF 1 is being phased out, but I haven't found the time to migrate to version 2 yet. Meanwhile Dojo / Dijit / Dojox keep getting updates, but those are not being carried over into ZF 1.
Here is my Zend Framework 1 patch to make Dijit components compatible with AMD loading. It makes the Zend Framework Dijit modules use slash notation in paths. Especially in Dojo 1.9, using dots will yield errors like:
mixin #1 is not a callable constructor.
or
base class is not a callable constructor.
The errors vary depending on what you're calling; pretty much, your JavaScript ceases to execute. The problem comes from the fact that Dijit no longer functions exactly the way it did before 1.9.
Failing example:
<div data-dojo-type="dijit.MenuSeparator"></div>
Working example:
<div data-dojo-type="dijit/MenuSeparator"></div>
The difference is minimal, but it makes everything tick again.
Serendipity commenting with proxy
Tuesday, May 14. 2013
It seems that out of the box, Serendipity does not support the X-Forwarded-For header. This means that any proxy in between loses the original client information.
Here is my suggested patch to fix the issue:
--- serendipity/include/functions_comments.inc.php.orig 2013-01-25 14:10:03.058973150 +0200
+++ serendipity/include/functions_comments.inc.php 2013-05-14 11:34:35.302389894 +0300
@@ -782,7 +782,13 @@
$title = serendipity_db_escape_string(isset($commentInfo['title']) ? $commentInfo['title'] : '');
$comments = $commentInfo['comment'];
- $ip = serendipity_db_escape_string(isset($commentInfo['ip']) ? $commentInfo['ip'] : $_SERVER['REMOTE_ADDR']);
+ $ip = serendipity_db_escape_string(isset($commentInfo['ip']) ?
+ $commentInfo['ip'] :
+ (
+ isset($_SERVER['HTTP_X_FORWARDED_FOR']) ?
+ $_SERVER['HTTP_X_FORWARDED_FOR'] :
+ $_SERVER['REMOTE_ADDR']
+ ));
$commentsFixed = serendipity_db_escape_string($commentInfo['comment']);
$name = serendipity_db_escape_string($commentInfo['name']);
$url = serendipity_db_escape_string($commentInfo['url']);
This works on 1.6.2 and 1.7.0.
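One caveat worth noting: X-Forwarded-For can carry a comma-separated chain ("client, proxy1, proxy2"), and the patch stores the raw header value. If you only want the original client address, you would keep the first entry; sketched here in shell with a made-up example chain:

```shell
# X-Forwarded-For may list every hop; the original client comes first.
XFF="203.0.113.7, 10.0.0.1, 10.0.0.2"   # made-up example chain
CLIENT=${XFF%%,*}                        # drop everything after the first comma
echo "$CLIENT"
```

Also remember the header is client-supplied, so it is only trustworthy when set by your own proxy.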
PDF creator MSCOMCTL.OCX fail
Tuesday, May 7. 2013
My tool of choice for creating PDFs is PDFCreator from pdfforge.org. The price is right (free), all the functionality I need is there, operation is robust, and the source code is available.
However, something weird occasionally happens after updating to a new version: printing fails with an error stating that MSCOMCTL.OCX has failed. The failure is strange, as it is simply a matter of a lost ActiveX component registration. The fix is equally simple; run the following commands as administrator:
cd C:\Windows\SysWOW64
regsvr32 MSCOMCTL.OCX
regsvr32 MSCOMCT2.OCX
Then operations continue as expected.
Note that SysWOW64 is the 32-bit directory on 64-bit Windows.
Limiting Time Machine backup size on NAS
Monday, May 6. 2013
Earlier I wrote about getting a Linux-based NAS-box to host Apple Time Machine backups.
I was reviewing my settings on the Mac and noticed the text in the Time Machine settings:
Well... I have lots of disk space there, but not all of it can be taken by my Mac's backups, so something needed to be done.
I mounted the AFP share and confirmed the maximum size of my sparsebundle:
hdiutil imageinfo Mac\ mini.sparsebundle/
It said (unsurprisingly):
Size Information:
Total Bytes: 8855484092416
That is 8+ TiB!! Oh my! There is no way I can afford that much space for Apple backups. Luckily, there is a way to reduce the size:
hdiutil resize -size 500g -shrinkonly Mac\ mini.sparsebundle/
Now checking the size will yield something reasonable (500 GiB):
Size Information:
Total Bytes: 537214885888
That is something I can easily accept.
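The two hdiutil figures can be sanity-checked with plain shell arithmetic (dividing bytes by 1024 three times gives GiB):

```shell
# Convert the reported Total Bytes values into GiB.
BEFORE=8855484092416
AFTER=537214885888
echo "before: $(( BEFORE / 1024 / 1024 / 1024 )) GiB"   # roughly 8 TiB
echo "after:  $(( AFTER / 1024 / 1024 / 1024 )) GiB"    # the 500 GiB cap
```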
The information came from:
- http://untoro.wordpress.com/2011/06/07/how-to-limit-space-used-by-time-machine/
- http://nikhilhaas.com/blog/limit-size-of-time-machine-backup-on-wd-my-book-live/
I also ran this to limit the size; I have no idea whether it actually does anything:
defaults write /Library/Preferences/com.apple.TimeMachine MaxSize 405600
Perhaps somebody can comment on this.
Acronis True Image 2013 restore failing: NTFS as raw partition
Sunday, May 5. 2013
This is a follow up for my earlier post about Acronis experiences.
So, I decided to get a new motherboard/CPU/memory triplet. I did what I usually do, except this time I was using Acronis, and this time my computer had a fake RAID-1 drive on an ICH10R chip. At the time I didn't think any of this mattered; I just went forward with the hardware upgrade. The new components booted the first time without any real issues. I re-configured the existing drives into the RAID-1 and booted into Windows for the first time.
My boot drive is an SSD, and I have a large RAID-1 array for files. I got into Windows and logged in; plenty of grief as expected. Lots of new devices found, RAID missing, etc., no real surprises there. I re-partitioned the RAID-1 drive and started Acronis. The idea was to restore my previous data and then go forward with configuring the new parts. To my surprise, Acronis announced that it could not restore onto my D: drive. Initially I didn't think much of it, since Acronis said that it needed to reboot and the restore would continue. Sounded good to me.
After the reboot I was in a Linux environment with Acronis doing pretty much everything automatically. The only controls were an option to abort the restore and two checkboxes: shut down after restore, or reboot after restore. There was 800 GiB of data and the estimate was 7 hours, so I checked the shutdown box and left.
The next day I came back to see what had happened. The PC was shut down, so most likely the restore had succeeded and powered off on completion. I booted into Windows only to find that there was no D: drive. The partition was gone too. WTF?!
Over the following days I repeated the process three more times with varying restore parameters. No results: Windows saw my RAID-1 mirror as a raw partition every time. No files, no reasonable results. It would be fair to say that I was pissed. What a piece of crap backup software! The entire value of a backup lies in the possibility of restoring it, and this "fine" product didn't deliver. Unbelievable!
Next I got a Fedora Linux 18 Live DVD to see how my drives were visible there. Then the idea hit me! An out-of-the-box Linux sees a fake-RAID ICH10R mirror as two physical drives, not as a single RAID-1 volume. Sure, the volume can be configured in Linux too, but out of the box it isn't. What are the chances that Acronis' restore Linux does the same? I'd bet serious money on it.
So, apparently with ICH10R there is an offset on the drive when it is configured as a RAID drive. Many real RAID controllers don't do that: a RAID mirror is simply two physical drives mirrored, and you can take either one of them and it will work as a single (JBOD) drive.
Acronis' knowledge base recommended doing the restore in Windows (sorry, I lost the article already). So the next thing I did was purchase the True Image 2013 Plus Pack. I got the Windows 7 AIK, installed it, installed the newly purchased Plus Pack, and started to create a Windows restore image. The image maker did something for a while and then: "Error occurred while creating the image". Nothing more. No details. No log file. No nothing. A completely useless piece of crap!
So I spent around 9 days trying to get my data back. I had purchased even more crappy software from Acronis and had no joy from my new parts.
I went back to square one. Why doesn't the partition restore start in Windows? Why does it say that it needs to reboot into Linux? WTF?! I investigated further and realized that whenever I create a D: drive, the operating system locks it right after it becomes ready. That must be the reason Acronis wants the reboot.
SOLUTION:
I created the RAID-1 array and quick-formatted it as NTFS, but assigned it the letter H: instead. Now Windows didn't want to lock it. Apparently my yet-to-be-restored data had something the OS wanted: the data wasn't available, but the OS happily locked the D: drive while hoping for it to appear. I did the restore to the H: drive. This time it took 10 hours; apparently the Linux restore is much faster (or they both would have taken 10 hours, I don't know).
After the restore completed, I just re-arranged the drive letters so that my successfully restored RAID-1 mirror got the D: letter again. A reboot and everything was fine! Uuu-jeah! Oh happy, oh joy!
PS. If anyone is interested, there are 3DMark 2011 results from my box publicly available.