Archive for category Linux

DisplayLink and x2x brings back Zaphod mode

Ever since work issued me a Lenovo T61 and I installed Fedora on it, I have lamented the loss of something that X aficionados referred to as “Zaphod mode”.  By gluing together a few different software and hardware components I managed to get close to the old Zaphod mode days — but first, some background…

Usually when you set up a multi-monitor installation you get a single desktop that spans all the screens.  This is great when you use a single virtual desktop, but on Linux multiple desktops are the norm.  When I started using multiple screens in Linux, I loved the extra screen real estate, but the fact that switching virtual desktops caused *all* the windows on all the screens to switch really bugged me.  I wanted the ability to have something — like an email program, or a web browser — stay on one screen while I switched between desktop views on the other screen.  Or better still, the ability for both screens to have virtual desktops that were independent of each other.

Enter “Zaphod mode”, named for Zaphod Beeblebrox from The Hitchhiker’s Guide to the Galaxy by Douglas Adams.  Beeblebrox, who was President of the Galaxy before he stole the Starship Heart of Gold, had two heads that were independent of each other.  In X server terms, multiple display devices are often referred to as “heads”.  So you can probably deduce that “Zaphod mode” refers to an operating mode of the X server where the multiple “heads” or display devices function as different displays.

Go back far enough and you get to a point where that was the standard mode of operation of X.  The X extension “Xinerama” was developed to provide the merging of different X displays into a single screen.  NVidia also had a hardware/firmware-based equivalent called TwinView, where multiple heads on an NVidia card (and sometimes even heads on different cards) could be joined.  These extensions were not without their problems, however: it was common for windows and dialog boxes to get confused about which display to appear on.  You would almost always see dialog boxes that were meant to display in the middle of the screen being split across the two physical displays.  Also, there was the multiple-desktop “inconvenience” of not being able to switch the desktops independently.

Zaphod mode fixed these problems.  Because the screens were separate, windows and dialog boxes always appeared in the centre of the physical screen.  You could leave a web browser on one screen while you switched between an e-mail client, an IRC client, and an SSH session in the other.  It wasn’t all beer-and-skittles though, since in Zaphod mode it was not possible to move an application from one screen to the other.  Plus, some applications like Firefox could not have windows running on both screens (the second one to start could not access the user profile).

Zaphod mode largely “went away” during the transition from XFree86 to Xorg.  The servers dropped support for multiple separate displays in the one server, and only gradually added it back in (the Intel driver was one of the last to do so, and possibly still hasn’t).  Since laptops were the only place I still used multiple screens, and the laptops I used all had Intel integrated graphics, I had to do without Zaphod mode.

Today, I hardly use dual monitors at all.  I used to have a desktop system with a 21″ CRT flanked by 17″ LCDs on either side, but that all got replaced by a single 24″ LCD.  At work we don’t have assigned desks, so setting up a screen to plug the laptop into isn’t going to happen.  I guess I learned to live without Zaphod mode by just going back to a single screen.  I still remember my Zaphod-powered dual-screen days fondly though, and with almost every update to Xorg I would scan the feature list looking for something like “Support for configuration of multiple independent displays (Zaphod mode)”.

A while back I bought a DisplayLink USB to DVI adapter.  I didn’t really know what to do with it at the time, but recently I dug it out and tried setting it up.  Googling for “DisplayLink Fedora” sent me to a couple of very helpful pages and it didn’t take long to get the “green screen of life” that indicates that the DisplayLink driver was active.  It was when I was looking at how to make it work as an actual desktop — part of the process involves setting up a real xorg.conf (that’s right, something about the DisplayLink X server means it can’t be configured by the Xorg auto configuration magic) — that I realised I could do something wonderful.  Instead of making a config file that contained both my standard display and the DisplayLink device (and probably causing havoc for the 90% of the time I boot without an additional screen) I would create a config file with *just* the DisplayLink device and start it as a second server.  Run a different window manager in there, and I would have two independent desktops — Zaphod mode!
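From memory, the second-server config only needs a handful of sections.  This is a sketch rather than my real file: the driver name and framebuffer path (I’m assuming the DisplayLink device shows up as /dev/fb1 and that the fbdev driver can drive it) will vary by setup, so check Xorg.1.log for the truth.

```
# Sketch of a minimal xorg.conf for a DisplayLink-only X server.
# Assumption: the DisplayLink framebuffer is /dev/fb1.
Section "Device"
    Identifier "DisplayLinkDevice"
    Driver     "fbdev"
    Option     "fbdev" "/dev/fb1"
EndSection

Section "Screen"
    Identifier "DisplayLinkScreen"
    Device     "DisplayLinkDevice"
EndSection

Section "ServerLayout"
    Identifier "DisplayLinkLayout"
    Screen     "DisplayLinkScreen"
EndSection
```

With that saved as (say) /etc/X11/xorg-displaylink.conf, the second server can be started alongside the first with something like `xinit /usr/bin/startxfce4 -- :1 -config xorg-displaylink.conf`.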

I did a couple of little experiments just starting an xterm in the second X, and it worked fine (the more alert of you will realise that I’m taking a bit of artistic license with the word “fine” here, and know that three little letters in the title of this post are a clue to what wasn’t yet working…) with the desktop and the xterm appearing in the second monitor.  I installed XFCE, and configured it to start as the window manager of the second X server, which also worked well.

Something was missing though: there was no mouse input to the second screen.  In Zaphod mode, even though the two screens were separate X displays they were managed by the same server.  This meant that the input devices were shared between the two displays.  In this configuration, I was careful to exclude any mouse and keyboard devices from my second display config to avoid any conflicts.  So how was I to get input device data into the second server?  A second display is not much good if you can’t click and type on the applications that run on it…

I remembered an old program called x2x that could transfer mouse and keyboard events to a different X server when you moved the mouse to the edge of your display (and, inexplicably, I forgot all about a much younger program called Synergy that can do the same thing).  Since x2x isn’t packaged for Fedora I found the source, built it, and started it up…  and it worked first time!  When I moved the mouse to the edge of the screen, it appeared on the other screen!  I could start apps and type into them exactly as I wanted.
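The x2x invocation itself is a one-liner — something like the following, assuming the second server is on display :1 and the DisplayLink screen physically sits to the right of the laptop panel:

```
# Push mouse/keyboard events to display :1 when the pointer
# crosses the right-hand ("east") edge of display :0.
x2x -from :0 -to :1 -east
```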

It wasn’t perfect, however.  I found that when I returned the mouse to the primary screen, the second screen was still getting keyboard events.  I figured this would be particularly inconvenient when, for example, I was entering user and password details into an app on the primary screen while an editor or terminal program had focus on the second screen…  I checked the Xorg.1.log file, and found that even though I had not specified a “keyboard” input device Xorg was automatically defining one for me.  I turned off the udev options, but it still happened.  My initial enthusiasm was starting to fade.

What fixed it was to manually define a “dummy” keyboard device.  There must be some logic in Xorg that refuses to allow a configuration with no configured keyboard (which makes sense), so in this rather unusual case where I don’t want a keyboard I have to define one but give it a dummy device definition.  Defining the dummy keyboard stopped Xorg from defining its automatic one, and everything worked as expected!  Even screensavers work more-or-less as designed (although I haven’t actually spent much time in front of the setup yet so haven’t had to unlock the screen that often).
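The dummy definition is little more than an InputDevice section pointing at the “void” driver (from the xf86-input-void package).  A sketch, since I don’t have the file in front of me:

```
# Sketch: a keyboard that discards everything, just to stop Xorg
# auto-configuring a real one on the second server.
Section "InputDevice"
    Identifier "DummyKeyboard"
    Driver     "void"
    Option     "CoreKeyboard"
EndSection

Section "ServerFlags"
    # belt-and-braces: also disable udev device hotplugging
    Option "AutoAddDevices" "false"
EndSection
```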

I’m away from the computer in question right now, otherwise I would post configs and command lines (and even a pic of the end result).  I’ll update this post with the details — leave a comment if you think I need to hurry up!  🙂



Oracle Database 11gR2 on Linux on System z

Earlier this year (30 March, to be precise) Oracle announced that Oracle Database 11gR2 was available as a fully-supported product for Linux on IBM System z.  A while before that they had announced E-Business Suite as available for Linux on System z, but at the time the database behind it had to be 10g.  Shortly after 30 March, they followed up the 11gR2 announcement with a statement of support for the Oracle 11gR2 database on Linux on System z as a backend for E-Business Suite — the complete, up-to-date Oracle stack was now available on Linux on System z!

In April this year I attended the zSeries Special Interest Group miniconf[1], part of the greater Independent Oracle Users Group (IOUG) event COLLABORATE 11.  I was amazed to discover that there are actually Oracle employees whose job it is to work on IBM technologies — just like there are IBM employees dedicated to selling and supporting the Oracle stack.  Never have I seen (close-up) a better example of the term “coopetition”.

Since my return from the zSeries SIG and IOUG, I’ve become the local Oracle expert.  However, I’ve had no more training than the two days of workshops run at the conference!  The workshops were excellent (held at the Epcot Center at Walt Disney World, no less!) but they could not an expert make.  So I’ve been trying to build some systems and teach myself more about running Oracle.  I thought I’d gotten off to a good start too — I’d installed a standalone system, then went on to build a two-node RAC.  I communicated my success to one of my sales colleagues:

“I’ve got a two-node RAC setup running on the z9 in Brisbane!”

“Great!  Good work,” he said.  “So the two nodes are running in different LPARs, so we can demonstrate high-availability?”

” . . . ”

In my haste I’d built both virtual machines in the same LPAR.  Whoops.  (I’ve fixed that now, by the way.  The two RAC nodes are in different LPARs and seem to be performing better for it.)

Over the coming weeks, I’ll write up some of the things that have caught me out.  I still don’t really know how all this stuff works, but I’m getting better!



[1] Miniconf is a term I picked up from — the zSeries SIG didn’t advertise its event as a miniconf, but since it’s a convenient name for a “conference-in-a-conference” I’m using the term here.





What a difference a working resolver makes

The next phase in tidying up my user authentication environment in the lab was to enable SSL/TLS on the z/VM LDAP server I use for my Linux authentication (I’ll discuss the process on the DeveloperWorks blog, and put a link here).  Apart from being the right way to do things, LDAP authentication appears to require SSL or TLS in Fedora 15.

After I got the Fedora system working, I thought it would be a good idea to have other systems in the complex using SSL/TLS also.  The process was moderately painless on a SLES 10 system, but on the first SLES 11 system I tried, YaST froze while saving the changes.  I (foolishly) rebooted the image, and it hung during boot.  Not fun.

After a couple of attempts to fix up what I thought were the obvious problems (each attempt involving logging off the guest, connecting its disk to another guest, mounting the filesystem, making a change, unmounting and disconnecting, and re-IPLing) with no success, I went into /etc/nsswitch.conf and turned off LDAP for everything I could find.  This finally allowed the guest to complete its boot — but I had no LDAP now.  I did a test using ldapsearch, which reported it couldn’t reach the LDAP server.  I tried to ping the LDAP server by address, which worked.  I tried to look up the hostname of the LDAP server, and name resolution failed with the traditional “no servers could be reached” message.  This was odd, as I knew I’d corrected the DNS setting after it had pointed to the wrong server earlier…  I could ping the DNS server by address, and name resolution worked fine from another system.

I thought it might have been a configuration problem — I had earlier had trouble with systems not being able to do recursive DNS lookups through my DNS server.  I went to YaST to configure the DNS Server, and it told me that I had to install the package “bind”.  WHAT?!?!?  How did the BIND package get uninstalled from the system…

Unless…  It’s the wrong system…

I checked /etc/resolv.conf on a working system and sure enough I had the IP address wrong.  I was pointing at a server that was NOT my DNS server.  Presumably the inability to resolve the name of the LDAP server I was trying to reach is what made the first attempt to enable TLS for LDAP fail in YaST, and whatever preload magic SLES uses to enable LDAP authentication got broken by the failure.  Setting the right DNS and re-running the LDAP Client module in YaST not only got LDAP authentication working but got me a bootable system again.
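For the record, the checks that would have found this in two minutes (addresses and hostnames here are placeholders, not my real lab):

```
cat /etc/resolv.conf                 # is the nameserver line actually right?
ping -c 1 192.168.0.53               # can we reach that address at all?
dig @192.168.0.53 ldap.example.com   # does that server answer queries?
getent hosts ldap.example.com        # what does the system resolver think?
```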

A simple fix in the end, but I’d forgotten the power of the resolver to cause untold and unpredictable havoc.  Now, pardon me while I lie in wait for the YaST-haters who will no doubt come out and sledge me…  🙂


Another round of Gentoo fun

A little while back I did an “emerge system” on my VPS and didn’t think much more about it.  First time back to the box today to emerge something else, and was greeted with this:

>>> Unpacking source...
>>> Unpacking traceroute-2.0.15.tar.gz to /var/tmp/portage/net-analyzer/traceroute-2.0.15/work
touch: setting times of `/var/tmp/portage/net-analyzer/traceroute-2.0.15/.unpacked': No such file or directory

…and the emerge error output.  Took me a little while to get the answer, but it was (of course) caused by a new version of something that came in with the system update.  This bug comment had the crude hack I needed to get back working again, but longer-term I obviously need to fix the mismatch between the version of linux-headers and the kernel version my VPS is using (it’s Xen on RHEL5).
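The mismatch itself is easy enough to spot — compare the running kernel with what portage has installed (output shapes vary; this is just the idea):

```
uname -r                                   # kernel the Xen VPS is actually running
emerge --pretend --verbose sys-kernel/linux-headers   # headers version portage wants
```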


Nagios service check for IAX

I’ve been using Nagios for ages to monitor the Crossed Wires campus network, but it’s fallen into a little disrepair.  Nothing worse than your monitoring needing monitoring…  so I set about tidying it up: network topology changes, removal of old kit, and fixes to service checks that were no longer working correctly.

One of the problems I needed to fix was the service check for IAX connections into my Asterisk box.  The script (the standard one from the Nagios Plugins package) was set up correctly, but it would fail with a “Got no reply” message.

I started doing traces and “iax2 debug” in Asterisk, but got nowhere — Asterisk was rejecting the packet from the check script.  Finally I decided to JFGI, and eventually I found this page with the explanation and the fix.  Basically, sometime in the 1.6 stream Asterisk toughened up security on the control message the Nagios service check used to use.  Thankfully, at the same time a new control message specifically designed for availability checking was implemented, and the fix is to update the script to use the new control message.  Easy!

BTW, while on Nagios, I got burned by the so-called “vconfig patch” which broke the check_ping script.  I’ve had to mask version 1.4.14-r2 and above of the nagios-plugins package until the issue is fixed.


Sharing an OSA port in Layer 2 mode

I posted on my developerWorks blog about an experience I had sharing an OSA port in Layer 2 mode.  Thrilling stuff.  What’s more thrilling is the context of where I had my OSA-port-sharing experience: my large-scale Linux on System z cloning experiment.  One of these days I’ll get around to writing that up.


Asterisk and a Patton SmartNode

It’s been ages since I did an update on the main network machine here, and I bit the bullet over the weekend.  250+ packages emerged with surprisingly little trouble, and all that was left to do was build the updated kernel and reboot.

I usually end up with something that doesn’t restart after the reboot, usually because of a kernel module that needs to be rebuilt against the new kernel (because I forget to re-emerge the package before the reboot, oops).  This time the culprit was Asterisk (the phone system), which I also often have trouble with after an update due to a couple of codec modules external to the Asterisk build.  This time, however, the problem turned out to be the Asterisk CAPI channel driver failing.

Thinking it was the usual didn’t-rebuild-the-module problem, I went looking for the package I had to rebuild… only to find it was masked.  It turns out the driver for the ISDN card in the box, a FritzCard PCI, is no longer maintained and doesn’t build on modern kernels, which has resulted in the Gentoo folks hard-masking the entire set of AVM’s out-of-tree drivers.

Help was at hand in the form of a Patton SmartNode 4552 ISDN VoIP router I’d bought months ago to replace the Fritz card.  Even though there isn’t much information around about how to configure the SmartNode for Asterisk, I managed to get the setup working in only a couple of hours.  I even managed to get the outgoing routing for the work line set up right!

Eventually I’ll get something posted here that goes into a bit more detail about the configuration.  Let me know in a comment if you need to hurry me up! 🙂


ppc Linux on the PowerMac G5

With Apple’s abandonment of PPC as of Snow Leopard, I began wondering what to do with the old PowerMac.  It’s annoying that so (comparatively) recent a piece of equipment should be given up by its manufacturer, but that’s a rant for another day.  Yes, we can still run Leopard until it goes out of support, but with S and me both on MacBook Pros running the current OS I know that we would both become frustrated with a widening functionality gap between the systems.

I had always resisted running Linux on the PowerMac, thinking that the last thing I needed was yet another Linux box in the house.  I had tried a couple of times, but it was in the early days of support for the liquid cooling system in the dual-2.5GHz model and those attempts failed dismally.  I figured that by now those issues would be resolved and I would have a much better time.

I assumed that Yellow Dog was still the ‘benchmark’ PPC Linux distro, so I went to their site. I saw a lot of data there about PS3 and Cell; it seems that YDL is transitioning to the cluster and/or research market by focussing on Cell.

The next thing I discovered is the lack of distributions that have a PPC version, even as a secondary platform.  My old standby Gentoo still supports PPC, as does Fedora (I think: I saw a reference to downloading a PPC install disc, but didn’t follow it), but every other major distro has dropped it — openSUSE, for example, dropped it as of their very latest release (their download page still has a picture of a disc labelled “ppc”, but no such download exists, oops).  I guess that since the major producer of desktop PPC systems stopped making them, the distros saw their potential install base disappear.  Unfortunately for those distros, I can see the reverse happening: now that Apple has fully left PPC behind, plenty of folks like me who have moderately recent G4 and G5 hardware and still want to run a current OS will come to Linux looking for an alternative…  I guess time will tell who is right on this one.

So I went to install Gentoo, and to cut a long story short I had exactly the same problem as before: a critical temperature condition leading to emergency system power-off.  I found that if I capped the CPU speed to 2GHz I could stay up long enough to get things built, but then the system refused to boot because it couldn’t find the root filesystem.  Probably something to do with yaboot, SATA drives and OpenFirmware.  So again I’m putting it aside.

My next plan was to treat it as a file server. Surely a BSD would support my G5 hardware: after all, Mac OS X is BSD at heart… Well, no. FreeBSD has no support for SATA on ppc, OpenBSD specifically mentioned liquid-cooled G5s as having no support, and I don’t think I saw any ppc support on NetBSD more recent than G3 [1].

This is one of the things that annoys me about the computer industry: that somehow it’s okay to so completely disregard your older releases. What if the automotive industry worked that way?

So I may yet try Fedora, or give the game away for another year or so and see what the situation looks like then.

[1] I may have mixed up a couple of these details.

Edit: Gentoo’s yaboot has managed to make it so that I can’t boot Mac OS X on the machine any more.  Oh dear.


Network virtualisation

I’ve been doing a lot of mucking around with KVM with libvirt (I keep promising an update here, don’t I).  In my desktop virtualisation requirements I had a need for presenting VLAN traffic to guests: simple enough, and I’ve done it before.  You can do what I usually do, and configure all your VLANs against the physical interface then create a bridge for each VLAN you want to present to a guest.  The guest then attaches to the bridge appropriate to the VLAN it wants access to, with no need to configure 8021q.
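As a sketch (interface names and the VLAN ID are examples), that usual setup looks something like:

```
# Terminate VLAN 100 on the physical NIC, then bridge the untagged side.
vconfig add eth0 100        # or: ip link add link eth0 name eth0.100 type vlan id 100
brctl addbr br100
brctl addif br100 eth0.100
ip link set eth0.100 up
ip link set br100 up
# a VLAN-unaware guest wanting VLAN 100 traffic attaches to br100
```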

(The other method of combining VLAN-tagging and bridging is to bridge the physical interface first, then create VLANs on the bridge.  I couldn’t work out how to get VLAN-unaware guests attached to this kind of setup, and it didn’t work for me even to give IP access to the host using a br0.100 for example.  Still, it must work for someone as it’s written about a lot…)

I realised that from particular virtual machines I needed to get access to the VLAN tags — I needed VLAN-awareness.  Now I knew up-front that the way I could do this was to just throw another NIC into the machine and either dedicate it to the virtual guest or set up a bridge with VLAN tags intact.  I really wanted to exhaust all possible avenues to solve the problem without throwing hardware around (as I’ve been doing a bit of that recently, I have to admit).

First, I tried to use standard Linux bridges as a solution, but discovered that an interface can’t belong to more than one bridge at a time, which put paid to my plan to have one or more VLAN-untagging bridges and a VLAN-tagged bridge.  I figured it could be done with bridges, but I envisaged a stacked mess of bridge-to-tap-to-bridge-to-tap-to-guest connections and decided that wasn’t the way to go.

Next I checked out VDE, which I had first seen a couple of years ago — but something gave me the impression that VDE either wasn’t really going to give me anything more than bridging would, or was not flexible enough to do what I needed.  I like the distributed aspect of VDE (the D in the name) but I’d rarely use that capability so it wasn’t a big drawcard.  I widened my search, and found two interesting projects — one that eventually became my solution, and another that I think is quite incredible in its scope and capability.

First, the amazing one: ns-3, “a great network simulator for research and education”.  As the name suggests, it simulates networks.  It is completely programmable (in fact your network “scripts” are actually C++ code using the product’s libraries and functions) and can be used to accurately model the behaviour of a real network when faced with network traffic.  The project states that ns-3 models of real networks have produced libpcap traces that are almost indistinguishable from the traces of the real networks being modelled…  I’ll take their word for that, but when you get to configure the propagation delay between nodes in your simulated network it seems to me it’s pretty thorough.  Although the way that I found ns-3 was via a forum posting from someone who claimed to have used it to solve a similar situation to mine, and ns-3 does provide a way to “bridge” between the simulated network and real networks, the simulation aspect of ns-3 seems to be more complexity than I’m looking for in this instance.  It does look like a fascinating tool however, and one I’ll definitely be keeping at least half-an-eye on.

To my eventual solution, then: Open vSwitch.  Designed with exactly my scenario in mind — network connection for virtualisation — it has at least two functions that make it ideal for me:

  • a Linux-bridging compatibility mode, allowing the brctl command to still function
  • IEEE 802.1Q VLAN support (natively, at that)

The Open vSwitch capability can be built as a kernel module (there’s a second module that supports the brctl compatibility mode), or very recent versions have the ability to be run in user-space (with a corresponding performance drop).

On the surface, configuring an OvS bridge does seem to result in something that looks exactly like a brctl bridge (especially if you use brctl and the OvS bridging compatibility feature to configure it), but its native support for VLANs is where it really comes into its own for me.  In summary, for each “real” bridge you configure in OvS, you can configure a “fake” bridge that passes through packets for a single VLAN from the real bridge (the “parent” bridge).  This is exactly what I needed!
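The real/fake bridge pair is set up in a couple of ovs-vsctl commands — bridge names and the VLAN ID here are examples:

```
ovs-vsctl add-br ovsbr0               # the "real" bridge
ovs-vsctl add-port ovsbr0 eth0        # trunk port carrying all the tagged VLANs
ovs-vsctl add-br vlan100 ovsbr0 100   # "fake" bridge: an untagged view of VLAN 100
```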

For the guest interfaces that needed full VLAN-awareness, I simply provided the name of my OvS bridge as the name of the bridge for libvirt to connect the guest to — OvS bridge-compatibility mode took care of the brctl commands issued in the background by libvirt.  The VLAN-unaware guest interfaces presented a bit of a challenge — the OvS “fake” bridge does not present itself like a Linux bridge, so it doesn’t work with libvirt’s bridge interface support.  This ended up being moderately easy to overcome as well, thanks to libvirt’s ability to set up an interface configured by an arbitrary script — I hacked the supplied /etc/qemu-ifup script and made a version that adds the tap interface created by libvirt to the OvS fake bridge.
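My hacked script boiled down to something like this (the bridge name is an example, and $1 is the tap interface name libvirt passes in):

```
#!/bin/sh
# Variant of /etc/qemu-ifup: attach the tap device to an OvS
# fake bridge instead of a Linux bridge.
BRIDGE=vlan100
ip link set "$1" up
ovs-vsctl add-port "$BRIDGE" "$1"
```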

The only thing I might want from this now is an ability for an OvS bridge to have visibility over a subset of the VLANs presented on the physical NIC.  The OvS website talks about extensive filtering capability though, so I’ve little doubt that the capability is there and I’m just yet to find it.  From a functionality aspect, OvS is packed to the gills with support for various open management protocols, including something called OpenFlow that I’d never heard of before (but I hope that some certain folks in upstate New York have!) but is apparently an open standard that enables secure centralised management of switches.

Detail of exactly how I pulled this all together will come in a page on this site; I’ll make a bunch of pages that describe all the mucky details of my KVM adventures and update this post with a link, so stay tuned!


LDAP groups in Postfix

For a long time I’ve been managing virtual e-mail addresses (the ones you create when you sign up to a web service, so that you know where your spam is originating) using Postfix’s LDAP alias capability.  At the time I was still putting every bit of configuration I could into LDAP — particularly if it was user-id related — and I’ve never had a need to change what was working really well.

N’s school recently decided to distribute the weekly school newsletter via e-mail, and allowed for one e-mail address per family.  Not wanting the overhead of having either S or me receive it and then forward it to the other, I thought it would be neat to have a single common address that, when items arrived, distributed the mail to multiple boxes.  Of course I took the stupid path of providing the school with a yet-to-be-created e-mail address, foolishly trusting my ability to set the system up before they tried to send anything to it…  but in the end it was not so foolish after all, as unbeknownst to me I already had everything I needed to achieve my objective.

Unfortunately the first thing I did was assume that I needed mailing list software.  I installed Mailman, and started to read up on the process to get it working.  I did this on my yet-to-be-commissioned KVM-hosted mail server (a blog post for another day), and started trying to diagnose why mail wasn’t getting delivered.  I had set up Postfix on this mail server to point to my existing LDAP to test, and thought that there was a problem there (but also started to work out if there was a way to use the LDAP server to manage the Mailman aliases).  I re-found the Postfix LDAP HOWTO, and stumbled over the section entitled “Example: expanding LDAP groups”.  Et voilà: multidrop incoming mail without the need for a mailing list manager!
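The shape of the configuration, going by the HOWTO (the server, base and domain names here are examples, not my real setup): point virtual_alias_maps at an LDAP table whose query matches the alias address and returns the real mailbox of every matching entry — multiple matches mean multiple deliveries.

```
# main.cf
virtual_alias_maps = ldap:/etc/postfix/ldap-aliases.cf

# /etc/postfix/ldap-aliases.cf
server_host = ldap.example.com
search_base = ou=People,dc=example,dc=com
query_filter = (mailLocalAddress=%s)
result_attribute = mail
```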

I had always assumed that e-mail aliases were a one-to-one mapping of alias address to real destination.  Not the case: an alias can have multiple destinations.  It doesn’t just apply to LDAP alias support, either: as per the “aliases” man page you can do

name: value1, value2, ...

In my LDAP situation, all I need to do is list the alias in the “mailLocalAddress” attribute of whichever users need to receive mail for that alias.  Done!
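In LDIF terms (the DNs and addresses are made up), giving the shared alias to each family member is just:

```
dn: uid=alice,ou=People,dc=example,dc=com
changetype: modify
add: mailLocalAddress
mailLocalAddress: newsletter@example.com

dn: uid=bob,ou=People,dc=example,dc=com
changetype: modify
add: mailLocalAddress
mailLocalAddress: newsletter@example.com
```

Feed that to ldapmodify and both users start receiving mail sent to the alias.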

I may have to keep Mailman, however.  Shortly after this success, I wondered how cool it would be to have the notification SMS messages for voicemail received at home, which currently go only to S, come to me as well.  I’m using a hosted email-to-SMS gateway service for this, so the “alias” would have to expand to multiple external e-mail addresses.  I’m not sure if you can alias mail addresses that are not in your domain…  I’ll have to try and see — might be easier than subscribing to a Mailman list via SMS-to-email!  🙂
