Upgrading to SplashID password vault version 7 and the security of using one
Wednesday, August 7. 2013
My weapon of choice for storing passwords has been SplashID for ages. I cannot even remember which version was the first one I installed on my Nokia running the S60 operating system.
Yesterday SplashData went to version 7. I had been wondering what they were up to, since there had been no minor updates for a year or so. A couple of weeks ago they announced that version 7 was coming out and told everybody to hang tight and wait for it to be released.
For an existing user like me, the news was pretty good: it was a free upgrade! Yes, I could get new stuff without paying anything for it. To tell you the truth, I was mighty sceptical. Their track record of getting things right is not the best one there is. Back in the day, Elcomsoft did a study on password management software, and SplashID wasn't on the winning side there. It is generally a very bad idea to store everybody's passwords using the hard-coded password "g.;59?^/0n1X*{OQlRwy". Sure, it is very difficult to guess, but what happens when you don't have to guess? All you have to do is get somebody's password vault, and it pops open with that password using Blowfish. The end result is an unencrypted SQLite database.
The people at SplashData did listen. They fixed the issues pointed out by those nice white-hat guys. That is a very good display of morals from the company's side. Even today, there are so many corporations that simply laugh at security researchers. A recent example: the Japanese toilet manufacturer Lixil ignored Trustwave's findings, so there was nothing left to do but go public with them.
Fortunately my scepticism (read: pessimism) didn't pan out. I upgraded the Desktop edition on my Windows box, and it activated using my existing credentials. The iPhone edition upgraded automatically from the App Store and also activated after the Desktop edition was activated. My iPad edition didn't upgrade automatically, so I used the normal iTunes sync and clicked there to install the already existing app onto my iPad as well. In version 6 there were separate versions for iPhone and iPad; now there is only a single app, which is a much better way to go from the user's perspective. So, no glitches there. Everything works, WiFi sync works.
Now they offer an option to sync to the cloud. To my mind, that's insanity! It is the popular way of today, but I'd never do that. Ever. Inquisitive minds are already questioning Google and their mobile platform Android about password storage. It is no different with KeePass or Firefox Sync, to mention a few of those password storage providers: you'd be handing them all of your most valuable items, your passwords, to store and use. When it comes to Google and Android, they don't encrypt your data; with Apple's iOS you at least have the option of doing so.
My final words to people at SplashData: Keep up the good work!
Ran out of luck (and fuel) in Hill Climb Racing
Monday, July 29. 2013
The game has been reviewed as a major time suck. Don't get me wrong, the game doesn't suck, it just sucks my (and 70 million other players') time.
I was playing on my iPad in the Desert stage and managed to land on the front wheel with the motocross bike. The trouble was that it was perfectly balanced. It wouldn't fall to either side, it just stuck there no matter what I did. Eventually I just ran out of fuel. See the pic!
Darn! No new record for that attempt.
RTMPDump core dump
Monday, July 15. 2013
Occasionally I like to stea.... erhm.... listen to stuff from the Finnish Broadcasting Company's internet site. Most of the items they have there have some sort of limit, for example 7 days or 30 days, after which the show is off the air for good. The technology they're using is streaming FLV, or RTMP, making it relatively easy to steal... ermhm... borrow.
On my Fedora 19, my favorite utility for doing the ste... well... stream-redirection is YLE-DL. It is a Python wrapper for the classic RTMPDump utility. The Python thingie is very much needed, as the parameters required for RTMPDump can be, for example:
--playpath=mp3:areena/fi/52/52cf454f9e444f498352f65cfe7ba11d \
--swfUrl=http://areena.yle.fi/static/player/1.2.8/flowplayer/flowplayer.commercial-3.2.7-encrypted.swf \
"--app=ondemand?_fcs_vhost=cp157366.edgefcs.net&auth=eb.bEdRbFdlamabdgckdRcxdvaGbxdvbZbWbscycAdcdKbncbcpbIdSdgdLbidebVbx-br5atj-c0-vga-oxmCrAxmpvkzsqrzxlzp-kblXn9lakal5nTlfn2k6kRnZ&aifp=6&slist=areena/fi/52/52cf454f9e444f498352f65cfe7ba11d" \
--pageUrl=http://areena.yle.fi/radio/1943628 \
--rtmp=rtmpe://cp157366.edgefcs.net/ondemand \
"--tcUrl=rtmpe://217.212.252.204/ondemand?_fcs_vhost=cp157366.edgefcs.net&auth=eb.bEdRbFdlamabdgckdRcxdvaGbxdvbZbWbscycAdcdKbncbcpbIdSdgdLbidebVbx-br5atj-c0-vga-oxmCrAxmpvkzsqrzxlzp-kblXn9lakal5nTlfn2k6kRnZ&aifp=6&slist=areena/fi/52/52cf454f9e444f498352f65cfe7ba11d" \
-o "Parasta ennen! pe klo 20.00-2013-07-12.flv"
This nice script does all that for me. Except it doesn't work. It just says "Segmentation fault (core dumped)". Which is rather bad.
After a very short period of GDBing, I realized that a function API had changed drastically. In this case it is the Diffie-Hellman function in the GnuTLS library, which RTMPDump can use if so chosen. The API changed, for example, in the gnutls_calc_dh_secret() function:
Old:
bigint_t
gnutls_calc_dh_secret (bigint_t ret_x, bigint_t g, bigint_t prime)
New:
int
gnutls_calc_dh_secret (bigint_t ret_y, bigint_t * ret_x, bigint_t g, bigint_t prime,
unsigned int q_bits)
Since that function is not in the GnuTLS public API, the RTMPDump developers had to declare it in their own code to make it work. That is because gcrypt and GnuTLS use multi-precision integers for the public-key math; it is much easier to multiply two huge numbers with a suitable library. The declarations should of course stay in sync with the library, but... having void pointers makes it compile, it just doesn't make it run. Without core dumps, that is.
The solution is not to use GnuTLS, which seems to be the default. I chose to use OpenSSL-functions instead. That builds and runs.
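For reference, the crypto backend is selected when RTMPDump is built from source; something along these lines should pick OpenSSL instead of GnuTLS (a sketch only, the exact make variable depends on the RTMPDump version):
# in the rtmpdump source tree: rebuild with OpenSSL as the crypto backend
make clean
make CRYPTO=OPENSSL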
Tip:
To get a core dump in Fedora 19, the automatic bug reporting tool ABRT needs to be tweaked. By default it gets all the cores, not you. See /proc/sys/kernel/core_pattern, which out of the box has the value of |/usr/libexec/abrt-hook-ccpp %s %c %p %u %g %t e
So it is a very good idea to edit the value of the DumpLocation directive in /etc/abrt/abrt.conf. I put it like this and created the directory:
MaxCrashReportsSize = 0
DumpLocation = /var/spool/abrt
Then I could get a directory like /var/spool/abrt/ccpp-2013-07-15-17\:30\:42-7463/ containing a coredump-file with the exact point of the crash.
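From there the crash point can be inspected with GDB; a minimal example, assuming the crashing binary was the system rtmpdump:
# open the ABRT-collected core against the binary and print a backtrace
gdb /usr/bin/rtmpdump /var/spool/abrt/ccpp-2013-07-15-17:30:42-7463/coredump
(gdb) bt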
Adding dynamic IP-address support into Parallels Plesk Panel 11 DNS
Wednesday, July 10. 2013
I've been a long-time user of Parallels Plesk Panel. It really is the best product for maintaining a hosting server. However, it lacks support for domain owners to automatically update their dynamically changing IP addresses into the panel's DNS zone.
Since I really needed such a service, I built one from parts. The central piece is DHIS client/server -software. I don't know why they stopped distributing the server-component source code, but I still have it. The license in the software is very liberal, so it can be used by anybody for anything.
By default, DHIS only supports DNS TSIG updates. That is not a good fit for Plesk, since the master database for the DNS records is in MySQL. Also, getting BIND to refresh a zone really requires root access. Nobody wants to run their daemons as root, so I did two things: I added the possibility of doing updates with an external script, and wrote such a Perl script to interface with Plesk's RPC API to update the DNS records.
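To give an idea of the kind of request the script ends up sending, here is a rough curl equivalent of a Plesk XML API call; the packet layout below is written from memory and varies between Plesk versions, so treat it as a sketch only:
# push an A-record via Plesk's XML API; host, credentials and record values are placeholders
curl -k -X POST https://plesk.example.com:8443/enterprise/control/agent.php \
     -H "HTTP_AUTH_LOGIN: admin" \
     -H "HTTP_AUTH_PASSWD: secret" \
     -H "Content-Type: text/xml" \
     -d '<packet><dns><add_rec><site-id>1</site-id><type>A</type><host>dyn</host><value>192.0.2.10</value></add_rec></dns></packet>'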
My work is distributed here: http://opensource.hqcodeshop.com/dhis/
As the next step, I'll be investigating how to get some support for this into Plesk's GUI. It really would be nice to allow a client to generate their own QRC authentication keys instead of me doing it manually.
Doing secure dynamic DNS updates with BIND
Monday, July 1. 2013
ISC BIND is the most popular DNS server on the entire Internet. Most hostmasters never need to allow DNS clients to change records, but there are cases where it can be handy.
Security-wise, it would be very, very stupid to allow just anybody to update records. Luckily there doesn't seem to be a script-kiddie-friendly tool for doing that (or at least I haven't found one yet). Most servers simply don't allow dynamic updates, and those that do don't allow it for all zones. One of the simplest approaches is to allow updating a zone only from a specific subnet or from hand-picked IP addresses. That way most users are excluded with a simple mechanism, even though any motivated cracker will surely bend whatever rules exist.
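As an illustration, an IP-based restriction in named.conf could look roughly like this (addresses and zone name made up):
zone "my.zone" {
    type master;
    file "my.zone.db";
    // only this LAN and one hand-picked address may send dynamic updates
    allow-update { 192.168.0.0/24; 203.0.113.10; };
};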
To add security and allow updates only for those who are actually permitted, a smart move is to go with TSIG. It is described in RFC 2845, Secret Key Transaction Authentication for DNS (TSIG), and is supported by many DNS servers, including BIND. Getting it running is poorly documented; the best description I found is in Jeff Garzik's blog, in an article titled "nsupdate: Painless Dynamic DNS".
The basic steps are pretty much the following:
- Generate an update key
- This will include executing a command like:
dnssec-keygen -a hmac-md5 -b 128 -n HOST my.dns.update.key.
- Inform the BIND server about the key
- This will include changing the raw key-file into BIND format, like:
key "my-key-name" {
algorithm hmac-md5;
secret "somethingcompletelybullshithere==";
};
- Allow a zone to be updateable by anybody knowing the key
- This can be accomplished with the allow-update configuration directive (see the combined named.conf fragment after this list).
- Go update!
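Putting the key definition and the zone permission together, the relevant named.conf fragment would look roughly like this (zone name and file are examples, the key block is the one generated above):
key "my-key-name" {
    algorithm hmac-md5;
    secret "somethingcompletelybullshithere==";
};

zone "my.zone" {
    type master;
    file "my.zone.db";
    allow-update { key "my-key-name"; };
};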
A test run for checking out if your setup succeeded would be:
# nsupdate -k my.dns.update.key
update delete a.record.my.zone. A
update add a.record.my.zone. 3600 A 192.168.0.198
show
send
If the server's message log says something like "client 192.168.0.1#12790: request has invalid signature: TSIG dhis: tsig verify failure (BADKEY)", then your key setup failed. Either the server doesn't recognize your client's key, or the client failed to provide a valid key.
If the server's message log says something like "client 192.168.0.1#39782: update 'my.zone/IN' denied", then the DNS zone to be updated does not allow dynamic updates. Add something like allow-update { key "my-key-name"; }; into your zone configuration.
A successful update will show something like this in your logs:
named[25415]: client 192.168.0.1#64975: signer "my-key-name" approved
named[25415]: client 192.168.0.1#64975: updating zone 'my.zone/IN': deleting rrset at 'a.record.my.zone' A
named[25415]: client 192.168.0.1#64975: updating zone 'my.zone/IN': adding an RR at 'a.record.my.zone' A
(Note: the named PID and client port-numbers are just copy/pasted from my log. They will differ in your case.)
Solving Microsoft Secure Download Manager issues
Friday, June 28. 2013
Ok. SDM is not the best piece of software ever written. It fails for everybody on everything. There is a lot of grievance in an MSDN discussion thread.
Problem 1:
The first issue I encountered was that it didn't install. It said "The System Administrator Has Set Policies to Prevent This Installation". That is not a standard Windows error message, and I gather it is something the lovely programmers made the app say when failing.
Solution 1:
Run it with Administrator privileges. The installation package is distributed as an MSI package, and Windows Explorer doesn't offer the "Run as Administrator" option for it. I started PowerShell as administrator and executed the installer from there. That fixed it.
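For what it's worth, the same thing boils down to something like this from an elevated prompt (the MSI file name below is just a placeholder for whatever the download is called):
# run in a PowerShell window started with "Run as Administrator"
msiexec /i .\SecureDownloadManager.msi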
Problem 2:
Download fails to start on "Active Scripting must be enabled". Well, I have that enabled.
Solution 2:
Make sure the following domains are in the Trusted sites list. (See: Internet Options -> Security)
- http://e5.onthehub.com
- http://static.onthehub.com
- http://ajax.googleapis.com
Then the goddamn thing starts to download.
I'd like to second OtaconHC's opinion from the MSDN-thread: The SDM is a failure by design.
Losing OpenLDAP DB for a BDB0060 PANIC
Tuesday, June 18. 2013
My Fedora 19 got an update for KVM. It was no biggie; nothing really happened at the time.
Then one of the virtual guests got a new kernel (RHSA-2013:0911-1). I rebooted the guest and BANG! My KVM hung the entire machine. I have Magic SysRq enabled, but nothing. The box was completely hung.
The "funny" part happened after I forced a reboot from the button. The box wouldn't boot! My LDAP was corrupted. All I got was a "BDB0060 PANIC: fatal region error detected; run recovery" -message.
There is the /usr/bin/db_recover tool, but it just said FUBAR. That wasn't the actual phrase, but surely you get the meaning. Then what? I was lucky enough to have a three-week-old slapcat dump of my entire LDAP. That was plenty of luck for me. But the moral of the story is that I need to start dumping the LDAP regularly, or change the back-end into something more recoverable.
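Until I decide on the back-end, a nightly slapcat from cron is cheap insurance. A minimal sketch, assuming stock Fedora paths:
#!/bin/bash
# /etc/cron.daily/ldap-backup: dump the whole directory into a dated LDIF
/usr/sbin/slapcat -l /var/backups/ldap-$(date +%F).ldif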
Migrating access control into Apache 2.4
Thursday, June 6. 2013
Fedora 19 ships with Apache 2.4. After the install completed, it very soon became obvious that my previous Apache 2.2 setup didn't work without changes. At the time I just took a bigger hammer and kept banging until it started. Needless to say, it didn't have all my virtual hosts and services.
Now I spent a while getting all the necessary things working. The most common issue I had was:
authz_core:error AH01630: client denied by server configuration
This pretty weird error is caused by the fact that by default all access is denied, and the classic:
Order allow,deny
Allow from all
... does not actually do anything on 2.4, instead it needs to be written as:
Require all granted
The change in the configuration actually makes the setup much clearer, but the obvious problem is that it is not compatible with the previous versions. In the conf.modules.d/00-base.conf there is a directive to load the compat-module:
LoadModule access_compat_module modules/mod_access_compat.so
I commented that line out and started running my Apache with the new-style setup only.
My second biggest issue was with the services that allow access from my own LAN but require LDAP authentication when the traffic does not originate from my LAN. An Apache 2.2 example would be:
Order Deny,Allow
Deny from all
Allow from my.lan
Allow from 2001:1234:5678::/64
AuthType Basic
AuthName www.my.lan
AuthBasicProvider ldap
AuthLDAPURL "ldap://server:389/ou=People,dc=example,dc=com?uid?sub?(objectClass=*)"
Require valid-user
Satisfy Any
The solution is very simple: just list the requirements, and 2.4 knows what you mean, since multiple Require directives at the same level are treated as alternatives (an implicit <RequireAny>):
Require host my.lan
Require ip 2001:1234:5678::/64
AuthType Basic
AuthName www.my.lan
AuthBasicProvider ldap
AuthLDAPURL "ldap://server:389/ou=People,dc=example,dc=com?uid?sub?(objectClass=*)"
Require valid-user
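If you prefer to spell out the OR-logic instead of relying on the implicit container, the same rules can, to my understanding, be wrapped explicitly; a sketch based on the snippet above:
<RequireAny>
    Require host my.lan
    Require ip 2001:1234:5678::/64
    Require valid-user
</RequireAny>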
Otherwise my migration was pretty smooth.
Steam games installing DirectX on every start
Thursday, May 30. 2013
Steam has this reputation of games re-installing 3rd-party prerequisites every single time you launch a game. Stupid, huh? There is discussion about that on Steam's forums.
The official Steam explanation is that there are so many versions of the D3DX library. Well, in my mind that really doesn't answer the question of why there is a re-install on every game launch. You'd think that the library needs to be installed only once. Luckily somebody found a fix for it. The trick is to edit or delete the .VDF file, which is pretty much the install script for a game.
Here is an example, 34870_install.vdf:
"InstallScript"
{
"Run Process"
{
"DirectX"
{
"process 1" "%INSTALLDIR%\\Redist\\dxwebsetup.exe"
"Description" "Microsoft DirectX Redist Package"
"NoCleanUp" "1"
}
}
"Run Process"
{
"VCRedist"
{
"process 1" "%INSTALLDIR%\\Redist\\vcredist_x86_2010_sp1.exe"
"Description" "Microsoft VC Redist Package"
"NoCleanUp" "1"
}
}
}
It pretty much states that both the DirectX and Visual C++ redistributable packages are installed every single time the game launches. A simple rename or delete of the file will fix the problem. The game will start without problems and won't do anything stupid.
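For example, something along these lines from an elevated command prompt should do it (the path is a guess based on a default Steam install, adjust to yours):
rem rename the install script so Steam stops re-running the redist installers
cd /d "C:\Program Files (x86)\Steam\steamapps\common\<game folder>"
ren 34870_install.vdf 34870_install.vdf.bak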
The issue has been reported by multiple people for the past 3 years. Shame on you Steam!
Parallels Plesk Panel 11 hangs on backup
Monday, May 27. 2013
For some unknown reason a customer's backup job hung. A ps axuww looked like this after a failed kill 3150 attempt:
3150 ? Z 0:00 [mysqldump] <defunct>
There was no way of getting rid of it other than a reboot. That didn't help, though. It automatically re-started the backup job and did the same thing again.
All of the customer's services were disabled, since the Suspend domain until backup task is completed setting was enabled. At that point I was really screwed. The backup job had turned undead and all of the customer's services were out of action. Nice going! The message given by the Parallels Panel in the control panel was: "The subscription is temporarily suspended until the backup or restore process is complete".
The Parallels forums had an article about unsuspending a domain. I did the same thing for a subscription; apparently the article was written for Plesk 9, which did not have subscriptions yet. So I ran this:
/usr/local/psa/bin/subscription -u customer's domain name here.com -status enabled
It helped! It un-suspended the services. The backup is still not working, but the services were running again. I need to investigate the backup issue further and will get back to it.
PDF creator MSCOMCTL.OCX fail
Tuesday, May 7. 2013
My choice for creating PDFs is PDFCreator from pdfforge.org. The price is right (free), all the functionality I need is there, operation is robust and the source code is available.
However, something weird occasionally happens after an update to a new version. Printing fails with an error stating that MSCOMCTL.OCX has failed. The failure is strange, as it is simply about an ActiveX component registration that has been lost. The fix is also simple: run the following commands as administrator:
cd C:\Windows\SysWOW64
regsvr32 MSCOMCTL.OCX
regsvr32 MSCOMCT2.OCX
Then operations continue as expected.
Note that the directory is 32-bit on 64-bit Windows.
Acronis True Image 2013 restore failing: NTFS as raw partition
Sunday, May 5. 2013
This is a follow up for my earlier post about Acronis experiences.
So, I decided to get a new motherboard/CPU/memory triplet. I did what I usually do, except this time I was using Acronis, and this time my computer had a fake RAID-1 drive on an ICH10R chip. At the time I didn't think any of this mattered; I just went forward with the hardware upgrade. The new components booted the first time without any real issues. I re-configured the existing drives into the RAID-1 and booted into Windows for the first time.
My boot drive is an SSD, and I have a large RAID-1 array for files. I got into Windows, logged in, plenty of grievance as expected: lots of new devices found, RAID missing, etc., no real surprises there. I re-partitioned the RAID-1 drive and started Acronis. The idea was to restore my previous data and go forward with configuring the new parts. To my surprise, Acronis announced that it cannot restore onto my D: drive. Initially I didn't think much of it, since Acronis said that it needs to reboot and the restore will continue. Sounds good to me.
After the reboot I was in a Linux environment with Acronis pretty much automatically doing everything. The only choices were to abort the restore and two checkboxes: shut down after the restore, or reboot after the restore. There was 800 GiB of data and the estimate was 7 hours. I checked the shutdown box and left.
The next day I came back to see what had happened. The PC was shut down. Most likely the restore had succeeded and shut down after completion. I booted into Windows to see that there was no D: drive. The partition was gone too. WTF?!
During the following days I repeated the process three more times with varying restore parameters. No results. Windows saw my RAID-1 mirror as a raw partition each time. No files, no reasonable results. It would be fair to say that I was pissed. What a piece of crap backup software! The value of a backup is in the possibility of restoring it. This "fine" product didn't deliver. Unbelievable!
Next I got a Fedora Linux 18 Live DVD to see how my drives were visible there. Then the idea hit me! In an out-of-the-box Linux, a fake RAID ICH10R mirror is seen as two physical drives, not as a single RAID-1 volume. Sure, the volume can be configured in Linux too, but out of the box it didn't do that. What are the chances that Acronis' restore Linux does the same? I'd bet serious money on it.
So, apparently with ICH10R there is an offset on the drive if it is configured as a RAID drive. Many real RAID controllers don't do that: a RAID mirror is two physical drives mirrored, and you can take either one of them and it will work as a JBOD.
Acronis' knowledge base recommended doing the restore in Windows (sorry, I lost the article already). So, the next thing I did was to purchase the True Image 2013 Plus Pack. I got the Windows 7 AIK, installed it, installed the newly purchased Plus Pack and started to create a Windows restore image. The image maker did something for a while and then: "Error occurred while creating the image". Nothing more. No details. No log file. No nothing. Completely useless piece of crap!
So, I spent around 9 days trying to get my data back. I had purchased even more crappy software from Acronis and had no joy from my new parts.
I went back to square one. Why doesn't the partition restore start in Windows? Why does it say that it needs to reboot and go to Linux? WTF?! I investigated more and realized that whenever I create a D: drive, it gets locked by the operating system right after it is ready. That must be the reason Acronis wants a reboot.
SOLUTION:
I created the RAID-1 array, quick-formatted it as NTFS, but assigned it as the H: drive. Now Windows didn't want to lock it. Apparently my yet-to-be-restored data had something the OS wanted; the data wasn't available, but the OS happily locked the drive while hoping for the data to appear. I did a restore to the H: drive. This time it took 10 hours. Apparently the Linux restore is much faster (or they both took 10 hours, I don't know).
After the restore completed, I just re-arranged the drive letters so that my successfully restored RAID-1 mirror got the D: letter again. A reboot and everything was fine! Uuu-jeah! Oh happy, oh joy!
PS. If anyone is interested, there are 3DMark 11 results from my box publicly available.
Recipe: Trac via Nginx-front
Tuesday, April 23. 2013
I'm a fan of the Trac wiki / issue tracker. It has the right price (free) combined with all the features I need in software development. Since all my Linux development is done in a RHEL / CentOS environment, getting Trac to run requires tweaking. Also, after our production server hit the 10k-connection limit and we had to change to Nginx, I don't have any Apache daemons running. Given that constraint, I definitely needed some tweaking of my own.
Software needed:
- Nginx, get my RPM from http://opensource.hqcodeshop.com/CentOS/6%20x86_64/Nginx/
- uWSGI, get my RPM from http://opensource.hqcodeshop.com/CentOS/6%20x86_64/uWSGI/
- Trac, get my RPM from http://opensource.hqcodeshop.com/Trac/
Setup:
Trac is built with Python, but Python is typically installed anyway. uWSGI is the glue between Nginx and a Python app. My uWSGI package should run out of the box. It defaults to looking for Python apps in the directory /var/www/uwsgi/, so make sure to create the Trac parent file trac_env_parent.py in it:
# -*- coding: utf-8 -*-
# file: trac_env_parent.wsgi
import sys
sys.stdout = sys.stderr
import os
os.environ['TRAC_ENV_PARENT_DIR'] = '/var/www/uwsgi/trac'
os.environ['PYTHON_EGG_CACHE'] = '/var/www/uwsgi/.egg-cache'
import trac.web.main
application = trac.web.main.dispatch_request
Also, it is a good idea to make sure that the uwsgi user can write into the .egg-cache directory. Permissions should be:
drwxr-xr-x. 2 uwsgi uwsgi 4096 Jan 8 2012 .egg-cache
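For the record, the uWSGI side boils down to an ini roughly like the following; the option names are standard uWSGI, but the values are my assumptions chosen to match the Nginx fragment below:
[uwsgi]
; listen where the Nginx uwsgi_pass directive below points
socket = 127.0.0.1:9001
plugins = python
; serve the Trac dispatcher created above
chdir = /var/www/uwsgi
wsgi-file = /var/www/uwsgi/trac_env_parent.py
; run as the uwsgi user so the .egg-cache stays writable
uid = uwsgi
gid = uwsgi
processes = 2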
Then bind Nginx to the uWSGI app. In my case I defined a virtual host for that. A fragment of nginx.conf:
server {
listen [::]:80;
server_name my.trac.own.com;
location / {
include uwsgi_params;
uwsgi_pass 127.0.0.1:9001;
}
}
The file /etc/nginx/uwsgi_params is something out of a default Nginx source. I didn't change anything in it.
After that, it's only a matter of getting Trac properly configured with your DB backend and filesystem.
Experiences on running Acronis True Image 2013
Monday, April 8. 2013
This is a follow up on Windows Backups -article I wrote earlier.
I've been running Acronis on a daily basis on a couple of Windows machines to take backups onto a NAS box. I have been mostly happy with the software, but... Here is my grievance list:
- Consolidation: Completely worthless! A number of users, according to Acronis' forums, are having problems with it. The recommended approach is not to use it. See: 36241: Consolidation Process Hanging. There was an occasion when I waited 30+ hours for the consolidation to complete. It did, but come on! 30 hours to copy 80 GiB of data! In the best-case scenario it merely does really weird things, aborting for no obvious reason and leaving temp files lying around.
- Laptop backup: Works ok when the destination NAS box is reachable, completely worthless when it's not. There is NO way to cancel the job besides holding down the power button (my laptop does not have a reset). I spent roughly an hour trying to cancel / kill / nuke / stop a backup job which was failing for the obvious reason that the destination drive was not there to be written to. It shouldn't be too hard to automatically abort a backup job when the network connection is not there. This is the worst issue I've been experiencing.
- NAS drive not found: This happens occasionally on the laptop, but not on the desktop PC. The problem is a weird one. Every possible test I do from True Image or Windows Explorer indicates that everything is there and accessible, but the job does not see it for some unknown reason. This can be circumvented with a cancel-job / start-job pair. Sometimes a previous, already existing backup needs to be pointed out to ATI manually. I don't know why it loses track of a file that has existed ever since ATI wrote it.
- Support period: Pretty short for a new user, a couple of weeks or so. I got my product installed and running, but when there were real issues (see above), the support was already gone.
I've been looking for suitable competing products, but apparently there are none available. The lack of a better backup solution leaves me running Acronis TI. It is pretty close to a good product, but apparently the developers cannot be bothered with these kinds of easily fixable minor issues.
Transferring MySQL Enterprise Monitor Service Manager to a new server
Wednesday, March 20. 2013
MySQL Enterprise Monitor is a really good tool for seeing what's going on in the DB. Since I'd like to give my DB box all the resources possible, I'm running the Service Manager part on another server.
Sometimes there is a need to upgrade servers. This time moving the other roles out of the way was a piece of cake, except for the Service Manager. In the entire Internet, there is no spot-on information about how to do it in detail. The only really relevant information I could find is B.5. Backing up MySQL Enterprise Service Manager in the MySQL documentation. In the doc they manage to describe how to back things up, but not really how to restore anything.
I did the operation in the following steps:
- Fresh installation of Service Manager on the new server
- I chose not to configure anything and ended the installation there
- Backup of the data as described in the doc:
mysqldump --single-transaction -uservice_manager -p -P13306 -h127.0.0.1 mem > mem.dump
- Restore of the data into the new server using a command like:
/opt/mysql/enterprise/monitor/mysql/bin/mysql -u service_manager -p -P13306 -h 127.0.0.1 mem < mem.dump
- On the DB server, the Enterprise Monitor Agent needs to be reconfigured to send its information to the new Service Manager
- Edit the file /opt/mysql/enterprise/agent/etc/mysql-monitor-agent.ini
- Confirm the value of agent-mgmt-hostname
- Confirm the value of aggr-mem-baseurl
- After these changes, a login to the newly set up Service Manager showed the DB as fully functional
Hope this helps somebody.