Wheezy domU + pygrub

I’m using Xen 4.1 / Linux 3.5 on a recent AMD (FX class) box for Audeo internal VMs. I decided to use wheezy on a few of these VMs for various reasons.
I normally use xen-create-image to create new VMs, but when I tried to start a wheezy VM created in this manner, Xen complained of the following:

Error: Boot loader didn't return any data!

You’ll also see the following in xend.log:

[2012-09-13 16:25:04 1506] ERROR (XendDomainInfo:108) Domain construction failed
Traceback (most recent call last):
  File "/usr/lib/xen-4.1/bin/../lib/python/xen/xend/XendDomainInfo.py", line 106, in create
  File "/usr/lib/xen-4.1/bin/../lib/python/xen/xend/XendDomainInfo.py", line 474, in start
    XendTask.log_progress(31, 60, self._initDomain)
  File "/usr/lib/xen-4.1/bin/../lib/python/xen/xend/XendTask.py", line 209, in log_progress
    retval = func(*args, **kwds)
  File "/usr/lib/xen-4.1/bin/../lib/python/xen/xend/XendDomainInfo.py", line 2838, in _initDomain
  File "/usr/lib/xen-4.1/bin/../lib/python/xen/xend/XendDomainInfo.py", line 3285, in _configureBootloader
    bootloader_args, kernel, ramdisk, args)
  File "/usr/lib/xen-4.1/bin/../lib/python/xen/xend/XendBootloader.py", line 215, in bootloader
    raise VmError, msg
VmError: Boot loader didn't return any data!

As it turns out, a default wheezy install from xen-create-image leaves /boot empty and installs no grub at all; pygrub needs at least /boot/grub/menu.lst to exist.

The easiest fix is to chroot into the (mounted) disk image and install grub, then adjust your xen-create-image configuration so future images get it automatically.
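A sketch of the manual fix, assuming the domU's root filesystem is a disk image at /path/to/domu.img mounted at /mnt/domu (both paths are placeholders for your setup). I'm using grub-legacy here because its update-grub writes menu.lst; grub-pc may also work, since newer pygrub can read grub.cfg, but menu.lst is the safe bet.

```shell
# Mount the domU root filesystem plus the pseudo-filesystems
# the package tools expect inside the chroot.
mount -o loop /path/to/domu.img /mnt/domu
mount --bind /dev  /mnt/domu/dev
mount -t proc proc /mnt/domu/proc

# Install a kernel and grub-legacy; on wheezy, grub-legacy's
# update-grub generates /boot/grub/menu.lst, which is what
# pygrub looks for.
chroot /mnt/domu apt-get update
chroot /mnt/domu apt-get install -y linux-image-amd64 grub-legacy
chroot /mnt/domu update-grub

umount /mnt/domu/proc /mnt/domu/dev /mnt/domu
```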

Mountain Lion upgrade issues

(I’ve expanded this to include something else I’ve noticed)

First, for anyone who is upgrading to 10.8 – I had this issue when upgrading my MBP. Once you’ve finished downloading and run the installer for the first time, your Mac will reboot to install the update. I was getting a persistent kernel panic immediately upon reboot. The solution to this (for me) was to zap the PRAM – boot the machine and hold down Command-Option-P-R. After that the installer continued as normal.

Next, I noticed some issues with iCal and creating calendar events. I use Zimbra for calendaring and contact syncing between all my various devices (I’m loth to use iCloud for anything at all and Zimbra mostly Just Works with iOS and anything else I encounter).

Apparently iCal in Mountain Lion changed just enough to break Zimbra: it now defaults event notifications to “None”, and Zimbra doesn’t support this. The bug was recently fixed, so updated versions should appear soon. In the meantime, you can work around the problem by going into Preferences->Alerts, selecting the account for the Zimbra server, and setting actual defaults instead of “None” for all the alert settings. After that, things work as normal.

Capistrano and the shell used for run

A short blog post in lieu of a longer one, since I’m so busy. However I noticed this recently and figured someone else might be wondering.

The workflow for my biggest client involves using Jenkins as a coordinator for builds & deploys. This often involves checking out something from git, running tests and whatever, then using Capistrano to deploy it (we generally use Capistrano even if the project isn’t Ruby, unless it comes with something else).

While working on a python project recently, using virtualenv, I noticed that I couldn’t use source in a run command:

* executing `app:pip'
  * executing "cd /mnt/web_stage/sites/someclient.com/web-app/releases/20120430160812 && source someclient/bin/activate && pip install -r requirements/requirements.txt"
    servers: ["localhost"]
    [localhost] executing command
*** [err :: localhost] sh: source: not found


It turns out that, by default, Capistrano won’t send the command itself directly, as in:

ssh foo@bar 'cap production deploy'

but rather attempts to make any potential shell differences irrelevant, and uses:

ssh foo@bar "sh -c 'cap production deploy'"

…opting for the lowest common denominator, sh. Which is fine, but not helpful for what I was trying to do (install stuff in a virtualenv environment, for which source is useful).
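That sh error, incidentally, is because source is a bash-ism; the portable POSIX spelling of the same builtin is a lone dot. A quick demonstration with a throwaway file (not the real virtualenv script):

```shell
# Write a trivial "activate"-style script, then load it under plain sh.
printf 'FOO=42\n' > /tmp/activate_demo

# `source /tmp/activate_demo` fails under sh, but `.` works:
sh -c '. /tmp/activate_demo && echo "$FOO"'   # prints 42
```

So replacing source with . in the run command would probably have sidestepped the problem without touching :shell at all (virtualenv's activate script is mostly POSIX-clean), but changing the shell is the more general fix.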

You can change this by setting :shell to be either false (to execute commands directly) or a path to a shell you prefer.

Interestingly, setting this is not intuitive. Trying to set this as you might expect:

set :shell, false

does absolutely nothing.

After looking through the gem source, I found that this works:

default_run_options[:shell] = false

That allowed me to source the virtualenv activation script, and everything worked as expected.

Also, please, don’t give me any jibba jabba about how I’m not using Fabric. We decided on a standard and we use the standard.

Apparently someone reads this blog. My apologies.

Huh. Somebody linked to my post on hacker news, which resulted in some interesting comments. Some of them are fairly insulting, but, hey, whatevs.

Mostly, I feel I should clarify: I tend to write a certain way on this blog. The post in question was deliberately bombastic; it was reflecting a certain frustration I often feel with Ubuntu in general (yes, I should have more appropriately titled the post, shut up). A lot of people mentioned “why didn’t I do x, or y, or z” or “He didn’t know X, y, or even Z” and my point was more, “Why doesn’t cat > /etc/motd just work?” – followed by a lot of other questions. I wouldn’t say that it was written out of nerd rage: it was more written deliberately in an inflammatory style. Yes, obviously, looking at /etc/motd would have shown me it was a symlink, and yes I could have just removed it: you’re missing the damn point. What’s the point in telling a rabbit hole story if you just fix the problem and forget about the underlying issues that cause it to begin with?

Migrating from Windows DNS to BIND

I don’t often (read: ever) post about Windows, but I thought this might come in handy for a few people.

I’ve been working for a new client recently, helping out with their infrastructure. One thing the CTO really wants to do is to use BIND as a DNS server, instead of Windows. The infrastructure side of the house sees Windows as a necessary evil to keep users happy: the less reliance on it, the better.

As you can probably tell by reading my blog or by knowing anything about me, I have no issue with this position whatsoever.

Anyways, this used to be a particularly easy task: the last time I had anything to do with Windows (Server 2003, no, really) the zones were stored on disk, in a BIND-compatible format.

In the latest versions, the zones appear to be stored in Active Directory, and there are some hurdles you need to go through to export them to a usable format (the Export action in Server Manager does, quite frankly, less than diddly).

Instead, run dnscmd on a domain controller:

dnscmd <domain controller> /ZoneExport <name of domain> <filename>

For example:

dnscmd some-domain-controller.ad.yourdomain.biz /ZoneExport ad.yourdomain.biz ad.yourdomain.biz.txt

Like all Windows tools, dnscmd has its own particular brand of brain damage: <filename> is not a full path, but literally just a filename that will be saved in <windows root>/<system dir>/dns. If you try to put something like C:\temp\something.txt you’ll be rewarded with nonsense like ..temp instead.

On most systems you can find the files dnscmd produces in C:\Windows\System32\dns .

The result is a BIND-compatible zone export, which you can use as you like. The SRV records for an AD domain are the most important part: Windows replication and other functionality will break without them.
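Wiring the export into BIND is then the usual zone stanza. A sketch for named.conf.local, assuming you copied the export to /etc/bind/zones/ (the path is an assumption; adjust to taste):

```
zone "ad.yourdomain.biz" {
    type master;
    file "/etc/bind/zones/ad.yourdomain.biz.txt";
};
```

It’s worth running named-checkzone ad.yourdomain.biz /etc/bind/zones/ad.yourdomain.biz.txt first; dnscmd’s output is BIND-compatible, but validate it before pointing clients at it.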

Too bad I still have to merge two forests and rename a domain. That should be FUN CITY.

Just hit yes.