Health-C-P scam: Danish Consumer Center officials gave a slap on the wrist
Friday, February 28. 2014
Last summer Facebook, the source of all nasty things, ran ads for Trimatol, Corexin and Life Detox products by a really evil Danish company called Health-C-P. See my blog post about it.
Everybody knows that government officials work really hard and do what they can, but it so happens that no matter what they do, their actions are really slow. Today we had news from the Finnish Consumer Center that the Nordic countries joined their efforts and got the Danish government to extend a harsh warning. The news about that is in Finnish only, sorry. The original warning about their misguided marketing tactics can be found here in English.
There was no punishment from the Danish government. They just instructed Health-C-P to correct their marketing. Total bullshit!
DNS testing tools
Thursday, February 27. 2014
A couple of times I've mentioned the Qualys SSL Server Test in this blog. It is a really good (currently free) tool for evaluating your TLS/SSL server's security. Typically people run it to test their web servers, but it is in no way limited to that. Anyway, it is a really good tool for any sysadmin.
Last week I was bailing out a customer while Namecheap's FreeDNS was under a huge DDoS. The irony of DDoSing a DNS service was that it was partly done with badly configured DNS servers; actually a number of UDP-based protocols were used. See US-CERT's info about that.
In general, the weak link in today's Internet is DNS. People don't recognize its importance to a business. I personally would like to see DNSSEC gaining more popularity to prevent cache poisoning; perhaps I'll blog about that later. But to fight those huge DDoS attacks there is really no other way than to use the services of a provider with really big guns. The second thing any sysadmin can do is to make sure they're not part of the problem. See my earlier post about that.
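The "not part of the problem" part mostly means not running an open resolver. A sketch of the relevant BIND named.conf options (the network blocks here are placeholders for your own):

```
options {
    // answer recursive queries only for your own networks,
    // so strangers cannot use you for amplification
    recursion yes;
    allow-recursion { 127.0.0.1; 192.168.0.0/24; };
    // a purely authoritative server can go further and set: recursion no;
};
```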
Last week I found a DNS tool equal to Qualys' one. It's called DNS Inspect and you can use it (currently) freely at http://www.dnsinspect.com/. It is a really good tool and gives the inspected domain an American school grade from A to F (mostly unknown to us European people). I warmly recommend all admins check their domain and DNS setup with it.
Both of the tools I mentioned are really good. Good to the point I'd pay money to use them. Thanks to both companies for offering them freely.
Debugging PHP applications with PhpStorm
Friday, February 21. 2014
Anybody who has reached a certain level of skill in software development will eventually start thinking about how to make the job easier or be more productive during the day. With computers and programming, the tools you're using are a good starting point.
I've written code with a number of text editors (including vim and Notepad), some of them really bad and some more suited to the task at hand. IMHO one of the best editors for a software developer is Microsoft's Visual Studio. Since many of my projects involve PHP, it is not the optimal editor for those. For years I used Notepad++, but then I got a recommendation for JetBrains PhpStorm. Since then it has become my weapon of choice for PHP.
For example, debugging an application is a very simple operation. PhpStorm has all the client stuff built into it; the only thing needed is the Xdebug extension on the server end. In my case the web server is on Linux and I typically work on Windows 7, so the debugging is done over my LAN. There is a very good article on the PhpStorm authors' web site named Zero-configuration Web Application Debugging with Xdebug and PhpStorm. I don't think the zero-configuration part is true, but it is almost zero anyway.
On the server end I have PHP-FPM running; it has
php_admin_flag[xdebug.remote_enable] = on
php_admin_value[xdebug.remote_host] = my_machine_name
php_admin_value[xdebug.remote_port] = 9000
in the worker configuration. That enables the server side to initiate a DBGp connection to the given client (PhpStorm) and start sending data there. The reference for all the configuration directives can be found in Xdebug's docs.
On the PhpStorm end, all you have to do is enable the listener:
Also, I had to drill a hole in my Windows 7 firewall to allow my web server to connect to TCP/9000. If you're running a single-machine setup, that won't be necessary.
Any settings for the setup can be found in the PhpStorm project:
But I think they're OK as defaults; at least I didn't change anything there. If your debugger does not work, you either fumbled the settings or your listener is not enabled. On Windows, run this in a command prompt to confirm:
PS J:\> netstat -ano
Active Connections
Proto Local Address Foreign Address State PID
TCP 0.0.0.0:9000 0.0.0.0:0 LISTENING ?
It will return a long list of things, but one of the entries must be similar to the above.
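Instead of eyeballing the whole listing, you can filter it. A sketch in grep syntax (on Windows you would pipe `netstat -ano` into `findstr :9000` instead):

```shell
# filter a netstat listing for the Xdebug listener on TCP/9000;
# feed it the output of `netstat -ano` (Windows) or `netstat -tln` (Linux)
find_xdebug_listener() {
  grep 'LISTEN' | grep ':9000'
}
```

If it prints nothing, the listener is not running.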
Choosing between debugging and not debugging can also be made really easy. There are a number of ways of starting a debug session, but for Firefox users there is an excellent add-on, The easiest Xdebug. It adds an icon to your add-on bar:
If the bug is green, it will send a special cookie to the server during a page load request:
Cookie: XDEBUG_SESSION=netbeans-xdebug
That will initiate the debugging session for you and PhpStorm will stop on a breakpoint if one is set.
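The same cookie can be sent from any HTTP client, not just the browser add-on. A small sketch (the URL in the usage comment is a placeholder for your own dev server):

```shell
# build the Xdebug trigger cookie header; defaults to the IDE key
# the add-on sends, pass another key as the first argument if needed
xdebug_cookie() {
  echo "Cookie: XDEBUG_SESSION=${1:-netbeans-xdebug}"
}

# usage (placeholder URL):
#   curl -s -H "$(xdebug_cookie)" http://my.dev.server/index.php
```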
Switching to PhpStorm vastly improved my productivity. I would imagine it will do that for anybody.
GeoTrust service sucks!
Thursday, February 20. 2014
I was renewing an SSL certificate for a customer. They had been using GeoTrust earlier, so I went there. It was the first and most likely the last time I do business with them.
The order process was pretty similar to the competition's: a CSR was submitted and all the necessary information was given. After submission there was a thank-you page, and they sent an automated e-mail saying the order was pending and would be processed within 5 to 7 business days.
After that, nothing. After waiting 12 days, of which 8 were business days, my patience ran out. I contacted them and requested that the process be expedited. Yet another business day passed, and then they called me after 7 in the evening and informed me that there was a mistake in the information I had given them. I corrected the info, got an automated e-mail about it and settled in for yet another wait.
Two more days later they sent an e-mail saying the verification call to the customer had failed. Their mail had the number in it, and it was obvious they had assumed the customer was located in the USA, even though the information stated Finland as the country. I got an e-mail about that too.
When they finally issued the new certificate, I was surprised: against all odds, they had managed to verify the customer in less than three weeks. The real surprise was that at that point their e-mail replies started pouring in. The lag in their e-mail processing was huge; the last of the replies came 6 days after the certificate was issued. It's beyond stupid that none of the communication I had with them was actually recorded in the purchase process. Apparently all of it went to a distant support site, which has nothing to do with any of their other actions or operations.
Is it just me, or does GeoTrust's way of doing business suck?
CentOS 6 PHP 5.4 and 5.5 yum repository
Wednesday, February 19. 2014
I maintain RPM packages for PHP 5.4 and 5.5; see my earlier post about them.
As any sysadmin would expect, there was too much trouble running the updates by hand. Since the CentOS 6 native way is yum repositories, I created one.
Kick things off by installing the repo definition:
yum install \
http://opensource.hqcodeshop.com/CentOS/6%20x86_64/Parallels%20Plesk%20Panel/plesk-php-repo-1.0-1.el6.noarch.rpm
After that, a simple yum install command:
yum install plesk-php55
... will yield something like this:
/opt/php5.5/usr/bin/php -v
PHP 5.5.9 (cli) (built: Feb 9 2014 22:04:05)
Copyright (c) 1997-2014 The PHP Group
Zend Engine v2.5.0, Copyright (c) 1998-2014 Zend Technologies
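For the curious, the repo RPM essentially drops a definition like this under /etc/yum.repos.d/. This is a sketch, not the literal file: the file name, section name and baseurl below are my guesses, the authoritative contents come from the package itself:

```
# /etc/yum.repos.d/plesk-php.repo -- illustrative only
[plesk-php]
name=PHP 5.4/5.5 for CentOS 6 / Parallels Plesk Panel
baseurl=http://opensource.hqcodeshop.com/CentOS/6%20x86_64/Parallels%20Plesk%20Panel/
enabled=1
gpgcheck=0
```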
I'll keep compiling new versions to keep my own box in shape.
Zend Framework 2: Abusing ZeDb to manage multiple connections
Tuesday, February 18. 2014
My favorite way of approaching the DAL in ZF2 is ZeDb. It uses a lot of existing functionality but adds a nice layer of its own stuff, and that stuff is really good. However, ZeDb has one single flaw: it does not support multiple database connections. The typical thinking is: who on earth would want that? I do. I want other weird things too, but in an everyday real application you simply need more connections than one.
I came up with a simple solution by abusing ZF2's ServiceManager. Typically you gain access to a model via something like this:
$manager = $this->getServiceLocator()->get('ZeDbManager');
$model = $manager->get('Application\Entity\Prices');
To get that working you need to do the config in module.config.php: declare ZeDb's DatabaseManagerFactory, override Zend\Db\Adapter\Adapter with AdapterFactory and finally declare your DB configuration and list the models. It's all in the ZeDb docs.
My idea exploits all three of those. I instantiate multiple DatabaseManagers via the ServiceManager, and every model selects an existing DB adapter for itself. To get that working, there is a separate configuration for each of the connections.
Example module configuration:
return array(
    'service_manager' => array(
        'factories' => array(
            'ZeDbManager' => 'ZeDb\Service\DatabaseManagerFactory',
            'Zend\Db\Adapter\Adapter' => 'ZeDb\Service\AdapterFactory',
        )
    ),
    'zedb_db_in' => array(
        'adapter' => array(
            'driver' => 'pdo_pgsql',
        ),
        'models' => array(
            'Application\Model\Int\Products' => array(
                'tableName' => 'products',
                'entityClass' => 'Application\Entity\Int\Products',
            ),
        ),
    ),
    'zedb_db_out' => array(
        'adapter' => array(
            'driver' => 'pdo_mysql',
        ),
        'models' => array(
            'Application\Model\Internal\Customers' => array(
                'tableName' => 'customers',
                'entityClass' => 'Application\Entity\Internal\Customers',
            ),
        ),
    ),
);
Here I declare the obligatory parts and two separate configurations named zedb_db_in and zedb_db_out.
I have to prime both the DatabaseManagers and the Adapters in the Module's onBootstrap(). After the objects exist, they are made accessible with:
public function getServiceConfig()
{
    return array(
        'factories' => array(
            'zedb_db_in' => function (ServiceLocatorInterface $sm) {
                return Module::getInDbManager();
            },
            'ZeDbAdapter_in' => function (ServiceLocatorInterface $sm) {
                return Module::getInDbAdapter();
            },
        ),
    );
}
The model has to do some heavy lifting to get rid of the default Adapter:
class Products extends ZeDb\Model
{
    public function __construct(Adapter $adapter, $options = null)
    {
        // ignore the default adapter and pick our own from the ServiceManager
        $adapter = $this->getDatabaseManager()
            ->getServiceLocator()
            ->get('ZeDbAdapter_in');
        parent::__construct($adapter, $options);
    }

    public function getDatabaseManager()
    {
        return Module::getInDbManager();
    }
}
Now the setup is ready. In an action you can simply do:
$inDbManager = $this->getServiceLocator()->get('zedb_db_in');
$productsModel = $inDbManager->get('Application\Entity\Int\Products');
$products = $productsModel->SearchProductByCode('ABC123', null);
There is no limit on how many DB connections you can have. Note that the ZeDb module bootstrap will still initialize the default manager and adapter, but the idea is not to use them at all. Anyway, I'll stick with this until something better comes along.
It's alive! Printer ran out of paper
Monday, February 17. 2014
This one really made me laugh. The entire situation was like something out of T2: "Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m."
My new printer ran out of paper and sent me an e-mail about it.
Subject: [CLP-410 Series]Paper Empty
System Error: M1-5112 Tray 1 is empty. User intervention required to add paper to Tray 1. Printing can continue if the required media is available from other trays, all other machine services (if installed) are unaffected.
The e-mail had all the information about the printer: its IP address, serial number and printed-page counts. If I had 17 of those, it would be possible to tell which one is in trouble.
Looks like they just might be taking over after all. I tried looking for my printer's Instagram, Facebook and Twitter accounts, but could not find them.
Parallels Plesk Panel: Bug - DNS zone twice in DB
Sunday, February 16. 2014
Earlier I had trouble with a disabled DNS zone not staying disabled. I'm running version 11.5.30 Update #32.
The problem bugged me and I kept investigating. To debug, I enabled the DNS zone and actually transferred it with AXFR to an external server. There I realized that the SOA record was a bit strange. Further trials revealed that in the NS records there was always an extra one. With that in mind, I went directly to the database to see what I had stored there for the zone.
To access the MySQL database named psa, I had to get the password for it. See KB article ID 170 [How to] How can I access MySQL databases in Plesk? for details. The database schema is not documented, but it has become familiar to me during all the years I've been sysadmining Plesk Panels. To get the ID of the DNS zone I did:
SELECT *
FROM dns_zone
WHERE name = '-the-zone-';
And what do you know! There were two IDs for the given name. That is a big no-no. It's like having two heads: a freak of nature. It cannot happen. It is so illegal that there aren't even laws about it. To fix it, I renamed the one with the smaller ID:
UPDATE dns_zone
SET name = '-the-zone-_obsoleted', displayName = '-the-zone-_obsoleted'
WHERE id = -the-smaller-ID-;
After that a manual refresh of the BIND records from the DB:
/usr/local/psa/admin/bin/dnsmng --update -the-zone-
And confirmation from the raw BIND-file:
less /var/named/chroot/var/-the-zone-
Now everything was in order. I'm hoping that will help keep the zone disabled. To me it is now obvious why that happened: the database had become badly skewed.
Updating Metasploit with Packet Storm exploits
Saturday, February 15. 2014
For any computer security geek (like I am), Metasploit is the de facto tool to use. What especially makes Metasploit so good is that there are a number of places where you can get brand-new exploits to run with it. It is obvious that Packet Storm is the #1 place for your new dose of exploits. They release exploits in ready-made packages for Metasploit on a monthly basis. To help people catch up in their installations, there are also yearly packages combining a year's monthly updates into a single file.
Due to the popularity of Metasploit and the quality of Packet Storm's exploits, there are a number of instructions for how to do the update. I googled "update metasploit with packet storm exploits" and got a ton of garbage. Really?! A couple dozen YouTube videos showing how to do it. Videos?! Really!! Apparently all of those videos are created by script kiddies for script kiddies. It is soooo easy to copy/paste information from a YouTube video, or... then again, NOT. No matter which search phrases I used, I simply could not find a sensible way of doing the updates. When you don't find the required information on the web, it must be one of these two: 1) you're wrong or 2) everybody else is wrong. My typical choice is 2), ask anybody who knows me.
The single page with a real idea I found was Mr. ljy396's blog post "How to update exploits from packetstorm website". I was looking at his scripts for collecting and installing the exploits, and thought that there should be a single easy-to-use script for doing all of this.
When I started writing my own script, the thought came to me again: "This is so wrong! There must be a real way of doing the updates, but I just didn't find it. It is obvious that I'm writing an inferior duplicate for this." Anyway, that logic has never stopped me so far. Here are the results.
The script operates in three modes:
- Do monthly updates:
  updateMetasploitFromPacketStormExploits.pl --year=2014
- Download and install a specific package:
  updateMetasploitFromPacketStormExploits.pl \
    --package_url=http://packetstorm.interhost.co.il/1401-exploits/1401-exploits.tgz
- Install an existing package:
  updateMetasploitFromPacketStormExploits.pl --package_file=/tmp/1401-exploits.tgz
A Metasploit user has the "secret" $HOME/.msf4/ directory. My script adds an updates/ directory there for keeping track of which monthly updates it has already installed. As Metasploit requires, all of the installed exploits go to modules/. My script renames the .rb.txt files in the packages properly during installation. NOTE: I'm not touching the Metasploit install directory; the new updates really go to the per-user directory and are not available system-wide.
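The rename step is the mechanical core of the whole thing. A sketch of just that part (the function name and layout are illustrative, this is not the actual script):

```shell
# unpack a Packet Storm exploit archive and rename *.rb.txt -> *.rb so
# Metasploit will load the modules; results go under the destination
# directory (default: the per-user ~/.msf4/modules)
install_packetstorm_pkg() {
  pkg="$1"
  dest="${2:-$HOME/.msf4/modules}"
  tmp=$(mktemp -d)
  tar -xzf "$pkg" -C "$tmp"
  # Packet Storm ships Metasploit modules with a .txt suffix appended
  find "$tmp" -name '*.rb.txt' | while read -r f; do
    mv "$f" "${f%.txt}"
  done
  mkdir -p "$dest"
  cp -R "$tmp"/. "$dest"/
  rm -rf "$tmp"
}
```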
Any bugs, comments and suggestions are welcome. Please drop a comment.
AMI BIOS iKVM (Asus Z8PE-D12) failing to open console redirection
Friday, February 14. 2014
I was trying to see the remote console of one of my servers:
It said: "Could not create token for this session. Max session limit might have reached". This was yet another reminder that the Asus iKVM isn't the sharpest tool in the box: it generally responds slowly and sometimes it does weird things. Thanks to Java, it wasn't easy to get the console application running at all, but after sorting out a number of Java issues it spat out an error message with this XML content:
<jnlp spec="1.0+" codebase="http://192.168.1.11/Java">
<information>
<title>JViewer</title>
<vendor>American Megatrends, Inc.</vendor>
<description kind="one-line">JViewer Console Redirection Application</description>
<description kind="tooltip">JViewer Console Redirection Application</description>
<description kind="short"> JViewer enables a user to view the video display of managed
server via KVM. It also enables the user to redirect his local keyboard, mouse for
managing the server remotely. </description>
</information>...
<resources>
<j2se version="1.5+"/>
<jar href="release/JViewer.jar"/>
</resources>
<application-desc>
<argument>192.168.1.11</argument>
<argument>7578</argument>
<argument>
<html>
<head>
<title>Document Error: Page not found</title>
</head>
<body>
<h2>Access Error: Page not found</h2>
<p>Could not create token for this session. Max session limit might have reached
</p>
</body>
</html>
</argument>
</application-desc>
</jnlp>
I found somebody with the same issue on Asus' discussion forums, in the thread [Problem]iKVM Remote Control Console w/ Java exception error. There was a suggestion to unplug the power cord. In my case the server is a couple hundred kilometers away in a secure data center where I certainly do not have any access. Sure, I could write to the support and ask them to do that, but... I was pretty sure there had to be a way to reset the BMC remotely.
Then I remembered something I did a while ago with ipmitool in a Linux shell. Running:
# ipmitool bmc help
MC Commands:
reset <warm|cold>
guid
info
There is a reset command in the list. Google led me to an article "Restarting an IBM BMC without restarting the server itself". I wasn't sure what would happen; my first guess was that it would do the same thing as typing "google" into a Google search: the entire world would implode into a singularity. So it is very important not to do that. But I really needed the remote console, so I was willing to try:
# ipmitool bmc reset cold
Sent cold reset command to MC
Nothing serious happened immediately. The Linux box was responsive and the world didn't come to an end. After a minute or so, the ipmitool bmc info command yielded a response! The IPMI was back on-line. A logon to the web console:
Yep. I had my console back.
I sure hope that all remote console manufacturers will stop using Java for their apps. It is such a bitch to get a Java application working after 1.7.0_51 was released. See "Java 7u51 will not accept JNLP with self-signed certificate?" for other people complaining about that. Anyway, all ended well and I didn't have to drive to the server to do some maintenance.
Zend Framework 2: Disable layout
Thursday, February 13. 2014
This topic pops up every once in a while. By default, every rendered page implicitly renders a layout too. This is especially bad for the JSON, XML or binary responses your application may generate. So far the best explanation and some helpful insights are in Mr. Chris Schreiber's blog scribles.com, in the article Disabling Layout in Zend Framework 2.
I was working on a web application which was a machine-to-machine app and not meant for humans at all, so a layout is completely unnecessary. As Chris instructs, I went directly to the module class and copy/pasted his code. It failed. He has some sort of typo in the code:
$sharedEventManager->attach(
'Zend\Mvc\Controller\AbstractController',
function(\Zend\Mvc\MvcEvent $event) {
When I looked at the SharedEventManagerInterface PHP code, it says:
/**
* Attach a listener to an event
*
* @param string|array $id Identifier(s) for event emitting
* @param string $event
* @param callable $callback PHP Callback
* @param int $priority Priority at which listener should
* @return void
*/
public function attach($id, $event, $callback, $priority = 1);
There clearly are 3 obligatory parameters; Chris' code passes only the identifier and the callback (plus the optional priority), so the event name is missing. This is the fix:
$sharedEventManager->attach(
    'Zend\Mvc\Controller\AbstractController',
    MvcEvent::EVENT_DISPATCH,
    function(\Zend\Mvc\MvcEvent $event) {
Now it works! However, as my application was in its early stages, I was missing the default controller with the class name IndexController. Adding the code into onBootstrap() didn't help: none of the callback's code was run during event dispatch. More debugging revealed that my code never triggered on MvcEvent::EVENT_DISPATCH; it triggered MvcEvent::EVENT_DISPATCH_ERROR instead. The reason is obvious: I didn't have the class.
For clarity, I'll copy/paste my onBootstrap() here in its entirety:
public function onBootstrap(MvcEvent $evt)
{
    $eventManager = $evt->getApplication()->getEventManager();
    $sharedEventManager = $eventManager->getSharedManager();

    // Make sure layout is not rendered for regular pages
    $sharedEventManager->attach('Zend\Mvc\Controller\AbstractController',
        MvcEvent::EVENT_DISPATCH,
        function (MvcEvent $event) {
            $dispatchResult = $event->getResult();
            if ($dispatchResult instanceof ViewModel) {
                $dispatchResult->setTerminal(true);
            }
        }, -99
    );

    // Make sure layout is not rendered for HTTP/404 pages
    $eventManager->attach(MvcEvent::EVENT_DISPATCH_ERROR,
        function (MvcEvent $event) {
            $dispatchResult = $event->getResult();
            if ($dispatchResult instanceof ViewModel) {
                $dispatchResult->setTerminal(true);
            }
        }, -99
    );
}
Now both events are handled properly, without attempting to find a layout. To test my code, I added a layout/layout.phtml to the project and commented out the code above: it did render the file contents. Then I removed the comments and deleted the layout/ directory completely. My code still ran on both occasions: when the action can be found and when it cannot. Actually I also have a CLI interface to the app, but that won't render the layout anyway.
This is yet another example of the complexity of ZF2: any trivial task turns out to be a huge pain in the butt. I don't want to start ranting about using PhpRenderer in a CLI app; that's a completely different complaint.
Triggering Adobe Flash Player update manually
Wednesday, February 12. 2014
No matter how much I think about it, it simply does not make any sense to me. Why on earth isn't there a button to manually update Adobe Flash Player? What good does it do to download the installer every single time you want it updated? All the parts are already there on your computer, but there is no reasonable way of telling it to:
Go! Update! Now!
With the help of an excellent tool, Windows Sysinternals Process Explorer, I snooped out the location and parameters of the update application.
On a 64-bit Windows
It is highly likely that your browser is 32-bit; you need to be some sort of hacker (like me) not to have a 32-bit browser. So the assumption is that this applies to you.
All the good parts are in C:\Windows\SysWOW64\Macromed\Flash\
On a 32-bit Windows
If your PC is old, you'll have this. (Or, in the alternate case: you are a hacker running a 64-bit browser.)
All the good stuff is in C:\Windows\System32\Macromed\Flash\
Triggering the update
NOTE:
The version number in the application's filename will change with each update. What I demonstrate here was valid at the time of writing, but I assure you, the exact name of the application will be something else next month.
The files live in the Macromed\Flash directory mentioned above.
For all browsers other than Internet Explorer:
FlashUtil32_12_0_0_43_Plugin.exe -update plugin
For Internet Explorer:
FlashUtil32_12_0_0_44_ActiveX.exe -update activex
Running that command as a regular user will trigger the same process that would be triggered during a Windows login. Since I log in rarely, the update almost never triggers for me: I simply put the computer to sleep, then wake it up and unlock the screen, which does not trigger the version check.
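Since the versioned filename keeps changing, a wrapper can simply glob for whichever updater happens to be installed. A sketch in Unix shell syntax (e.g. under Cygwin; the function name and the directory argument are illustrative):

```shell
# run whichever versioned Flash plugin updater exists in the given
# directory (e.g. /cygdrive/c/Windows/SysWOW64/Macromed/Flash)
run_flash_update() {
  dir="$1"
  for exe in "$dir"/FlashUtil32_*_Plugin.exe; do
    # the unexpanded glob comes back when nothing matches, so check existence
    [ -e "$exe" ] && "$exe" -update plugin
  done
}
```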
This isn't the only stupid thing Adobe does. They don't take your security seriously. Shame on them!
Supermicro SuperServer 5015A-EHF-D525
Tuesday, February 11. 2014
I needed a new rack server to fit in a standard 19" telco rack. Your average rack-mounted server is 60 to 100 cm long: it may be really thin, 1U (4.5 cm) or less, but really long. Then again, your average telco rack is not deep enough for a long server; they may be 40 to 50 cm deep and not all of them have brackets for mounting the back. My requirement of mounting a server into a telco rack hugely limited the options to choose from. One final requirement I had was that the server must have IPMI 2.0, because that would guarantee KVM-over-IP.
But after a longish search I found one. The bad news was that it was a Supermicro, a SuperServer 5015A-EHF-D525 to be exact. Supermicro's track record of failing IPMI security is well known; see an earlier blog post about it. Anyway, I found a vendor in the UK who would order one. They didn't have them in stock: such small rack servers are quite inexpensive but don't sell like hotcakes. Who would be crazy enough to want one? I am, so I put in an order with Boston Ltd. It didn't take them many weeks and I got my server delivered.
Here are the pics from front and back:
On the front there are a couple of LEDs and the power and reset buttons. On the back there are a couple of USB 2.0 ports and two RJ-45s for 100 Mbit/s Ethernet. It is possible to run IPMI on a shared Ethernet port or a dedicated one. In my setup I didn't want the extra cable and went for the shared setup. The port announces itself with two different MAC addresses and effectively gets two different DHCP addresses: one for IPMI and one for the operating system.
This is what it looks inside:
Notice that the Intel SSD drive was not yet attached properly at the time the pic was taken. The motherboard is really small. The only fan is in the PSU; no others are required, as the Intel Atom CPU does not produce much heat and the GPU isn't much of a rocket. The tiny box is almost empty, even with a 2.5" SSD in it.
In the BIOS I found that there is a lot of watchdogging to do; there are two of them:
I don't know what the BIOS watchdog is good for, as it keeps triggering if turned on. For the IPMI watchdog there is a Linux daemon.
Anyway, apart from the small size and the 32-bit Atom CPU there is nothing special in it. 10 years ago such a server would have been a state-of-the-art box, but by today's standards it's just a small low-power unit.
On the positive side, the IPMI web console has a setting for IP access control. I tested it and it seems to work: it closes all the ports for everybody else. Since the raw rules are not available for inspection, there may still be a couple of network blocks which are allowed. These days you cannot help being suspicious.
This is a special box. If you happen to need a silent low-power server, go with the 5015A-EHF-D525.
Installing OpenSuse 13.1 into a MacBook
Monday, February 10. 2014
OpenSuse 13.1 was released in November 2013. During the Christmas holidays I started a project of upgrading my previous installation.
Since I'm running on a MacBook 1,1, it was obvious that I was looking for trouble. Previously I had rEFIt running just to get a GRUB 2 prompt. This time I decided to start from a clean slate. Literally. I ran
dd if=/dev/zero of=/dev/sda bs=1M count=10
to wipe the first 10 MiB of the drive and make sure it definitely had no trace of any of my previous settings. Since rEFIt was abandoned years ago, I went for the replacement project rEFInd. I approached the author, Mr. Roderick W. Smith, and he was very helpful, but no matter what I did, I could not get rEFInd running on my very old 32-bit Mac. So I had two options left: go back to abandonware or see what would happen without a boot manager.
I failed on the installer's settings dialog by trying to outsmart OpenSuse's logic; my completed installation didn't boot. On the 2nd try I simply went with the flow. As Mr. Smith instructed me, I didn't touch the most critical thing: MBR is not the way to go on a Mac! Here are my settings:
And guess what, it worked! OpenSuse 13.1 installer has enough logic to create a bootable Linux-installation to a completely blank hard drive. Nice!
The installer was pretty smart. The Wi-Fi network was configured properly and worked out of the box. The Apple keys work: screen brightness, volume, etc. work as is. So do the typical troublemakers: sleep (to RAM) / hibernate (to disk), battery info, sound, and what not. There were only two minor issues: iSight does not work without the Apple proprietary firmware, and the keyboard's Apple keys don't do anything usable.
To get the iSight camera working, see the ift-extract tool at Apple Built-in iSight Firmware Tools for Linux. It can dig the guts out of the Mac OS X iSight driver and equip your Linux with a functioning camera. The keyboard issue is a trivial one: like previously, I just keyfuzz'ed the keys into something useful. See the OpenSuse 12.3 installation blog entry for details.
There is one more thing you may want to check if you enable SSHd, like I always do on all servers. By default, /etc/sysconfig/SuSEfirewall2.d/services/sshd defines TCP/22 to be open. That is the general idea, but apparently there is so much SSHd bombing going on that I always tar pit my installations. For some strange reason the Suse engineers chose not to allow that in a specific service definition file; it has to go into the classic /etc/sysconfig/SuSEfirewall2 file:
FW_SERVICES_ACCEPT_EXT="0/0,tcp,22,,hitcount=3,blockseconds=60,recentname=ssh"
I urge every one of you to rename services/sshd into something else and add the above line. It makes bombing your SSH port so much more difficult, and it does not affect your own login performance, unless you choose to bomb it yourself.
You may want to check OpenSuse's hardware compatibility list for details about Apple Laptops. The HCL has info about what works and what doesn't.
In general the OpenSuse folks did a very good job with this one. There was a real improvement in ease of installation. Thanks to Roderick W. Smith for his help during my installation, and thanks to Novell for giving this great distro away for free!
Huawei E3276 reviewed
Sunday, February 9. 2014
I got my hands on a Huawei E3276. It is a 4G LTE USB stick. The model I tested appears to be an s-150. According to 4GLTEmall.com it has the following FDD frequencies available: 800, 900, 1800, 2100 and 2600 MHz. The newer TDD 4G modulation is not available.
It looks like this:
On the side there is a slot for a MicroSD card and the standard CRC9 connector Huawei uses for all of its USB products. See this vid (with cheesy music) of an external antenna connected to a Huawei. The pics:
You open the stick by sliding the white cover to the left. Then you can insert a mini-SIM card (in my pic there is a micro-SIM in a mini-SIM adapter sleeve):
In general it resembles your average Huawei 3G stick a lot; the only difference is that it is a little bit wider. I still remember how big a Huawei E220 felt when the 2nd-gen 3G sticks (E160/E169/etc.) came out. As you would expect, when the USB stick is inserted it appears as a CD drive with the classic Huawei Mobile Partner setup on the "disc". After you install the drivers and the software, there is pretty much no way of telling that you are not running on 3G but on 4G. Mobile Partner has features to lock the stick into LTE mode and it indicates the connection type (3G WCDMA, 3G DC or LTE) properly.
This is how it looks on the side of a Lenovo IdeaPad S10:
Generally I was pretty pleased with the stick. It is clearly an evolutionary version of a 3G stick; the feeling is that it is a 3G on steroids. The antenna is very sensitive, and my testing revealed that it keeps the connection open even with zero bars of reception. However, it is quite expensive at the moment. In Finland you can get one from a telco for 99 € with a contract; the unlocked ones are around 150 €.
It worked on Windows 7 without problems, but as a Linux man I had to try and see if it works there too. Unfortunately there are issues with that. Linux recognizes the stick as a Huawei E352 3G stick, does the mode switch from storage mode to modem mode properly, and a /dev/ttyUSB0 appears, but it does not connect to the Net. A short minicom session indicated that the modem is there and receiving commands:
atz
OK
ati
Manufacturer: huawei
Model: E3276
Revision: 21.192.05.00.00
IMEI: 863781010000000
+GCAP: +CGSM,+DS,+ES
OK
It seems to have a new Type Allocation Code of 86378101 for Huawei; not all databases have that yet.
The Linux chatter on the Net is that the 4g-connect-linux script should be able to make a connection. I tried it with varying parameters, but somehow the new wwan0 interface didn't have a proper device attached to it and all I got was ioctl SIOCGIFHWADDR: No such device errors. Search for "cdc_ncm e3276" to get a number of posts about that.
I also did some ping testing of 3G Dual-Carrier versus 4G LTE. On bandwidth there is really no contest: LTE wins hands down, as 3G DC has a theoretical cap of 42 Mbit/s, which is pretty much the point where LTE starts! My measurements also confirmed that: a 50 Mbit/s connection limit imposed by the telco measured 69 Mbit/s in reality. Not bad, huh?
Anyway, a simple ping-test says:
- Forced 3G DC: Minimum = 31ms, Maximum = 1217ms, Average = 66ms
- Forced LTE: Minimum = 24ms, Maximum = 460ms, Average = 35ms (Winner!)
The fluctuation in a mobile connection is there: when something happens in the air, there will be delays and retransmissions. But when conditions are near optimal, on LTE it is difficult to tell a mobile connection from a wired one. That's the reason I'm so thrilled about LTE availability. 3G is OK for surfing the web, but LTE is the first mobile technology that really performs!
Final words:
The Huawei E3276 is definitely a keeper. Now that the Finnish telcos have an agreement about 800 MHz 4G usage, they are investing heavily in their LTE networks. Even pre-paid data has 4G on some telcos. The stick does not replace my B593, which is meant for non-mobile usage, but for a laptop I'll definitely keep using this one.