Archive for category Networks

What a difference a working resolver makes

The next phase in tidying up my user authentication environment in the lab was to enable SSL/TLS on the z/VM LDAP server I use for my Linux authentication (I’ll discuss the process on the developerWorks blog, and put a link here).  Apart from being the right way to do things, LDAP authentication appears to require SSL or TLS in Fedora 15.

After I got the Fedora system working, I thought it would be a good idea to have other systems in the complex using SSL/TLS also.  The process was moderately painless on a SLES 10 system, but on the first SLES 11 system I went to, YaST froze while saving the changes.  I (foolishly) rebooted the image, and it hung during boot.  Not fun.

After a couple of attempts to fix up what I thought were the obvious problems (each attempt involving logging off the guest, connecting its disk to another guest, mounting the filesystem, making a change, unmounting and disconnecting, and re-IPLing) with no success, I went into /etc/nsswitch.conf and turned off LDAP for everything I could find.  This finally allowed the guest to complete its boot — but I had no LDAP now.  I did a test using ldapsearch, which reported it couldn’t reach the LDAP server.  I tried to ping the LDAP server by address, which worked.  I tried to look up the hostname of the LDAP server, and name resolution failed with the traditional “no servers could be reached” message.  This was odd, as I knew I’d fixed that setting earlier when I found it pointing to the wrong DNS server…  I could ping the DNS server by address, and another system resolved names through it fine.
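For the record, the troubleshooting boiled down to something like the following (the address and hostname here are placeholders, not my real ones):

    # the LDAP server answered by address...
    ping -c 3 192.0.2.10
    # ...but resolving its name failed ("no servers could be reached")
    dig ldap.example.lan
    # which pointed the finger squarely at the resolver configuration
    cat /etc/resolv.conf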

I thought it might have been a configuration problem — I had earlier had trouble with systems not being able to do recursive DNS lookups through my DNS server.  I went to YaST to configure the DNS Server, and it told me that I had to install the package “bind”.  WHAT?!?!?  How did the BIND package get uninstalled from the system…

Unless…  It’s the wrong system…

I checked /etc/resolv.conf on a working system and sure enough I had the IP address wrong.  I was pointing at a server that was NOT my DNS server.  Presumably the inability to resolve the name of the LDAP server I was trying to reach is what made the first attempt to enable TLS for LDAP fail in YaST, and whatever preload magic SLES uses to enable LDAP authentication got broken by the failure.  Setting the right DNS and re-running the LDAP Client module in YaST not only got LDAP authentication working but got me a bootable system again.

A simple fix in the end, but I’d forgotten the power of the resolver to cause untold and unpredictable havoc.  Now, pardon me while I lie in wait for the YaST-haters who will no doubt come out and sledge me…  🙂


Another IPv6 instalment (subtitled: Watch Your Tech Library Currency!)

I made a somewhat cryptic tweet a little while ago about how I spent a crazy-long period of time researching what I believed was the next big thing in DNS resolution for IPv6 (or so my 2002 edition of “IPv6 Essentials” told me).  I could not work out why I saw nothing about A6 records in any of the excellent Hurricane Electric IPv6 material, or in any other documentation I came across.

The answer should have been obvious: DNS A6 records (and the corresponding DNAME records) never caught on.  RFC 3363 recommended that the RFC that defined A6 and DNAME (RFC 2874) be moved to Experimental status.  If I hadn’t been using an old edition of the IPv6 book, I might never have known A6 existed, and wouldn’t have wasted any time.
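For anyone else coming to this fresh: AAAA (RFC 3596) is the record type that won out, and it’s a plain one-record, one-address lookup.  The hostname below is just an example:

    # the lookup that actually gets used for IPv6
    dig AAAA www.example.com
    # A6 (RFC 2874), by contrast, let an address be assembled from chained
    # partial records -- clever, but RFC 3363 demoted it to Experimental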

In my previous post on IPv6 I theorised that we are in the early-adoption phase of IPv6 where things aren’t quite baked, and yet now I’ve picked up a 9 year old text on the topic and acted all surprised when it got something wrong.  It was a bit stupid of me; had I bought a book about IPv4 in 1976, might it have been similarly out of date in 1985?

As always though I’m richer for the experience!  Or so I thought…  Like many, I’m becoming increasingly time-poor.  When I bought a book on IPv6 some years ago I thought I was making an investment, but it turned out that my investment actually lost for me in several ways:

  1. The book took up physical space in my bookshelf for all that time I wasn’t using it
  2. I didn’t actually use the information at the time I acquired it
  3. The time I could have got value from it was wasted by it idly sitting on the shelf
  4. Once I did try to use it, it actually cost me time rather than saved time

I came to think about the other books on my shelf.  It’s pretty easy to recognise that a book that proclaims to be up-to-date because it “Now covers Red Hat 5.2!” will be anything but.  Also, from the preface of a Perl programming book that says “this was written about Perl 5.8, but it should apply to 5.10 as well” I’ll be forewarned that things will be fairly applicable to 5.12 but maybe not to Perl 6 when it’s out.

Technology usually has a somewhat abbreviated lifespan, and the corresponding documentation has a correspondingly short one…  Here, however, is an example of a technology that will have a far greater lifespan (we hope) than much of the documentation that currently exists around it.  I emphasise “currently exists”, because it won’t always be that way: IPv4 was pretty well-baked by the time I had anything to do with it, so I could have bought a book on IPv4 with next to no concern that it was going to lead me astray (indeed, I bought W. Richard Stevens’ TCP/IP programming texts during the 1990s, and still use them to this day).  I keep forgetting that I’m on a completely different point of the IPv6 adoption curve, and the “experts” are learning along with me.

So, a new tech library plan then:

  • Reduce dependence on physical books (okay, this one is already a work-in-progress for me) — they don’t come with you on your travels as easily, and (more important in this context) they’re harder to keep up to date.
  • Before regarding the book on the shelf as authoritative, check its publication date.  If it’s more than three years old, depending on the subject matter it might be out of date.  Check if there’s a new edition available, and consider updating.  If there’s no new edition, check for recent reviews (Amazon, etc).  Someone who just bought it last month might have posted an opinion on its currency.
  • If you have to buy a paper book, don’t buy a book on any technology that is a moving target.  On the same shelf as my copy of “IPv6 Essentials” there is a book entitled “Practical VoIP Using VOCAL”.  I never even installed VOCAL, and I’m sure many current VoIP practitioners never heard of it.  (Side note: I think it’s strange that I bought that book, and a Cisco one, but still to this day have never owned a book on Asterisk.  Maybe I have some kind of inability to pick the right nascent-technology book to buy.)
  • Use bookmarking technology more! I have a Delicious account, and I went through a phase of bookmarking everything there.  I realise now that, if I was a bit more disciplined, I could actually use it (or a system like it, depending on what Yahoo! does to it) as my own personal index to the biggest tech library in existence: the Internet.

That first point is harder than it sounds (especially for someone like me who has a couple of books on his shelf with his name on the cover).  My Rich Stevens books are littered with sticky-note bookmarks for when I flick to-and-fro between different programming examples.  Electronic readers are still not there when it comes to the “handy-hints-I-keep-on-my-lap-while-coding” aspect of book ownership.

I have a Sony Reader which I purchased with the intent of making it my mobile tech library.  It’s just not that great for tech documents though, since it doesn’t render diagrams and illustrations well (it also isn’t ideal for PDFs, especially in A4 ratio).  This may change as publishers of tech docs start releasing more titles in e-reader formats like ePub.  The iPad is working much better for tech library tasks; I’m using an app called GoodReader which renders PDFs (especially Redbooks!) quite well and has good browsing and syncing capability as well.

More on these topics later, I’m sure!

Update: I omitted another option in my “tech library plan” — since IPv6 Essentials is an O’Reilly book, I could have registered with their site to get offers on updating to new editions.  Had I done so, the events of this post might not have happened!  Now that I’ve registered my books with O’Reilly, I’m getting offers of 40% off new paper editions and 50% off e-book editions.  Also, in line with my reduce-paper-book-dependence policy, I can “upgrade” any of the titles I own in paper to e-book for US$4.99.  If you haven’t already, I encourage anyone who has O’Reilly books that they rely on as part of their tech library to register them at members.oreilly.com.  (This is an unsolicited endorsement from a happy customer, nothing more!)


IPv6: SSDM?

Two of the four keynotes at LCA 2011 referenced the depletion of the IPv4 address space (and I reckon if I looked back through the other two I could find some reference in them as well).  I think there’s a good chance Geoff Huston was lobbying his APNIC colleagues to lodge the “final request” (for the two /8s that triggered the allocation of the remaining five, officially exhausting the IANA pool) a week earlier than they did, as it would have made the message of his LCA keynote a bit stronger.  Not that it was a soft message: we went from Vint Cerf the day before, who said “I’m the guy who said that a 32-bit address would be enough, so, sorry ’bout that”, to Geoff Huston saying “Vint Cerf is a professional optimist.  I’m not.”.  But I digress…

I did a bit of playing with IPv6 over the years, but it was too early and too broken when I did (by “too broken” I refer to the immaturity of dual-stack implementations and the lack of anything actually reachable on the IPv6 net).  However, with the bell of IPv4 exhaustion tolling, I had another go.

Freenet6, which now goes alternatively by gogonet or gogo6, was my first port of call.  I had looked at Gogo6 most recently, and still had an account.  It was just a matter of deciding whether or not I needed to make a new account (hint: I did) and reconfiguring the gw6c process on my router box.  Easy-as, I had a tunnel — better still, my IPv6-capable systems on the LAN also had connectivity thanks to radvd.  From Firefox (and Safari, and Chrome) on the Mac I could score 10/10 on both tests at http://test-ipv6.com!
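In case anyone is curious, radvd’s part in that is tiny: it just advertises the tunnel’s /64 onto the LAN.  A minimal sketch of /etc/radvd.conf, with an example interface name and the documentation prefix standing in for my real allocation:

    interface eth1
    {
        AdvSendAdvert on;
        prefix 2001:db8:1234:1::/64
        {
            AdvOnLink on;
            AdvAutonomous on;
        };
    };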

My joy was short-lived, however.  gw6c was proving to be about as stable as a one-legged tripod, and on top of that Gogo6 had changed the address range they allocated me.  That wouldn’t have been too bad, except that all my IPv6-capable systems still had the old address and were trying to use it — looks like IPv6 auto-configuration doesn’t un-configure an address that’s no longer valid (at least not by default).  I started to look for possible alternatives.
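(If you hit the same thing, the stale address can at least be cleared by hand with iproute2; the interface and address below are examples:

    # see what the interface has autoconfigured
    ip -6 addr show dev eth0
    # drop the no-longer-valid prefix manually
    ip -6 addr del 2001:db8:aaaa:1:21c:42ff:fe00:1/64 dev eth0

Not a fix for the underlying flakiness, of course.)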

Like many who’ve looked at IPv6 I had come across Hurricane Electric — in the countdown to IPv4 exhaustion I used their iOS app “ByeBye v4”.  They offer free v6-over-v4 tunnelling, and the configuration in Gentoo is very simple.  I also get a static allocation of an IPv6 address range that I can see in the web interface.  The only downside I can see is that I had to nominate which of their locations I wanted to terminate my tunnel at; they have no presence in Australia, the geographically-nearest location being Singapore.  I went for Los Angeles, thinking that would probably be closest network-wise.  The performance has been quite good, and it has been quite reliable (although I do need to set up some kind of monitoring over the link, since everything that can talk IPv6 is now doing so).
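For reference, the tunnel setup amounts to the usual 6in4 incantation (Hurricane Electric’s site generates these commands for you); the addresses here are placeholders rather than my actual allocation:

    # 6in4 tunnel: HE's server address as remote, my own as local
    ip tunnel add he-ipv6 mode sit remote 203.0.113.1 local 198.51.100.2 ttl 255
    ip link set he-ipv6 up
    # my end of the tunnel's /64, then default-route everything over it
    ip addr add 2001:db8:1f0a::2/64 dev he-ipv6
    ip route add ::/0 dev he-ipv6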

In typical style, after I’d set up a stable tunnel and got everything working, I decided to learn more about what I’d done.  What is IPv6 anyways?  Is there substance to the anecdotes flying around that are saying that “every blade of grass on the planet can have an IPv6 address” and similar?  Well, a 128-bit address provides for an enormous range of addresses.  The ZFS guys are on the same track — ZFS uses 128-bit counters for blocks and inodes, and there have been ridiculous statements made about how much data could theoretically be stored in a filesystem that uses 128-bit block counters.  To quote the Hitchhiker’s Guide to the Galaxy:

Space is big. Really big. You just won’t believe how vastly, hugely, mind-bogglingly big it is. I mean, you may think it’s a long way down the road to the chemist’s, but that’s just peanuts to space.

The Guide, The Hitchhiker’s Guide To The Galaxy, Douglas Adams, Pan Books 1979

Substitute IPv6 (or ZFS) for space.  To try and put into context just how big the IPv6 address range is, let’s use an example: the smallest common subnetwork.

When IPv4 was first developed, there were three address classes, named, somewhat unimaginatively, A, B, and C.  Class A was all the networks from 1.x.x.x to 127.x.x.x, and each had about 16 million addresses.  Class B was all the networks from 128.0.x.x to 191.255.x.x, each network with 65 534 usable addresses.  Class C went from 192.0.0.x to 223.255.255.x, and each had 254 usable addresses.  Other areas, such as 0.x.x.x and the networks after 224.x.x.x, were reserved.  So, in the early days, the smallest network of hosts you could have was a network of 254 hosts.

After a while IP introduced something called Classless Inter-Domain Routing (CIDR), which eliminated the fixed boundaries of the classes and made it possible to “subnet” or “supernet” networks — divide or combine them to make networks that were just the right size for the number of hosts (and, with careful planning, could be grown or shrunk as plans changed).  With CIDR, since the size of the network was now variable, addresses had to be written with the subnet mask — a format known as “CIDR notation” came into use, where an address has the number of network bits written after it, like this: 192.168.1.42/24.

Fast-forward to today, with IPv6…  IPv4’s CIDR notation is used in IPv6 (mostly because the masks are so huge).  In IPv6, the smallest network that can be allocated is what is called a “/64”.  This means that out of the total 128-bit address range, 64 bits represent what network the address belongs to.  Let’s think about that for a second.  There are 32 bits in an IPv4 address — that means that the entire IPv4 Internet would fit in an IPv6 network with a /96 mask (128-32=96).  But the default smallest IPv6 subnet is /64 — the size of the existing IPv4 Internet squared!

Wait a second though, it gets better…  When I got my account with Gogo6, they offered me up to a /56 mask — that’s a range that covers 256 /64s, or 256 Internet-squareds!  Better still, the Hurricane Electric tunnel-broker account gave me one /64 and one /48: sixty-five thousand networks, each the size of the IPv4 Internet squared!  And how much did I pay for any of these allocations?  Nothing!
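The arithmetic is simple enough to sanity-check in a shell, since each extra bit of prefix length doubles the number of networks:

    # a /48 contains 2^(64-48) /64 networks
    echo $((2 ** (64 - 48)))    # 65536
    # and a /56, like the Gogo6 offer, contains 2^(64-56) of them
    echo $((2 ** (64 - 56)))    # 256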

I can’t help but think that folks are repeating similar mistakes from the early days of IPv4.  A seemingly limitless address range (Vint said that 32 bits would be enough, right?) was given away in vast chunks.  In the early days of IPv4 we had networks with two or three hosts on them using up a Class C because of the limitations of addressing — in IPv6 we have LANs of maybe no more than a hundred or so machines taking up an entire /64 because of the way we designed auto-configuration.  IPv6 implementations now will be characterised not by how well their dual-stack implementations work, or how much more secure transactions have become thanks to the elimination of NAT, but by how much of the addressable range they are wasting.  So, is IPv6 just Same Sh*t, Different Millennium?

Like the early days of IPv4 though, things will surely change as IPv6 matures.  I guess I’m just hoping that the folks in charge are thinking about it, and not just high on the amount of space they have to play with now.  Because one day all those blades of grass will want their IP addresses, and the Internet had better be ready.

Update 16 May 2011: I just listened to Episode 297 of the Security Now program…  Steve Gibson relates some of his experience getting IPv6 allocation from his upstream providers (he says he got a /48).  In describing how much address space that is, he made the same point (about the “wasteful” allocation of IPv6).  At about 44:51, he starts talking about the current “sky is falling” attitude regarding IPv4, and states “you’d think, maybe they’d learn the lesson, and be a little more parsimonious with these IPs…”.  He goes on to give the impression that the 128-bit range of IPv6 is so big that there’s just no need to worry about it.  I hope you’re right, Steve!


Nagios service check for IAX

I’ve been using Nagios for ages to monitor the Crossed Wires campus network, but it’s fallen into a little disrepair.  Nothing worse than your monitoring needing monitoring…  so I set about tidying it up: network topology changes, removal of old kit, and some fixes to service checks that were no longer working correctly.

One of the problems I needed to fix was the service check for IAX connections into my Asterisk box.  The script (the standard check_asterisk.pl from the Nagios Plugins package) was set up correctly, but it would fail with a “Got no reply” message.

I started doing traces and “iax2 debug” in Asterisk, but got nowhere — Asterisk was rejecting the packet from the check script.  Finally I decided to JFGI, and eventually I found this page with the explanation and the fix.  Basically, sometime in the 1.6 stream Asterisk toughened up security on the control message the Nagios service check used to use.  Thankfully, at the same time a new control message specifically designed for availability checking was implemented, and the fix is to update the script to use the new control message.  Easy!
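For completeness, the Nagios side of it is just ordinary object config.  The plugin invocation below is illustrative only (check the usage text of your copy of check_asterisk.pl for its exact arguments); IAX2’s default port of 4569 is the one solid fact here:

    define command {
        command_name    check_iax2
        # illustrative arguments -- see the plugin's own usage text
        command_line    $USER1$/check_asterisk.pl $HOSTADDRESS$ 4569
    }

    define service {
        use                  generic-service
        host_name            pbx
        service_description  IAX2
        check_command        check_iax2
    }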

BTW, while on Nagios, I got burned by the so-called “vconfig patch” which broke the check_ping script.  I’ve had to mask version 1.4.14-r2 and above of the nagios-plugins package until the issue is fixed.


Sharing an OSA port in Layer 2 mode

I posted on my developerWorks blog about an experience I had sharing an OSA port in Layer 2 mode.  Thrilling stuff.  What’s more thrilling is the context of where I had my OSA-port-sharing experience: my large-scale Linux on System z cloning experiment.  One of these days I’ll get around to writing that up.


Network virtualisation

I’ve been doing a lot of mucking around with KVM and libvirt (I keep promising an update here, don’t I?).  In my desktop virtualisation requirements I had a need for presenting VLAN traffic to guests: simple enough, and I’ve done it before.  You can do what I usually do, and configure all your VLANs against the physical interface, then create a bridge for each VLAN you want to present to a guest.  The guest then attaches to the bridge appropriate to the VLAN it wants access to, with no need to configure 8021q in the guest.
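That setup looks something like this (old-school vconfig/brctl commands, with example interface and bridge names):

    # define VLAN 100 on the physical NIC, giving eth0.100
    vconfig add eth0 100
    ip link set eth0.100 up
    # then a dedicated bridge carrying only that (now untagged) traffic
    brctl addbr br100
    brctl addif br100 eth0.100
    ip link set br100 up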

(The other method of combining VLAN-tagging and bridging is to bridge the physical interface first, then create VLANs on the bridge.  I couldn’t work out how to get VLAN-unaware guests attached to this kind of setup, and it didn’t work for me even to give IP access to the host using a br0.100 for example.  Still, it must work for someone as it’s written about a lot…)

I realised that from particular virtual machines I needed to get access to the VLAN tags — I needed VLAN-awareness.  Now I knew up-front that the way I could do this was to just throw another NIC into the machine and either dedicate it to the virtual guest or set up a bridge with VLAN tags intact.  I really wanted to exhaust all possible avenues to solve the problem without throwing hardware around (as I’ve been doing a bit of that recently, I have to admit).

First, I tried to use standard Linux bridges as a solution, but discovered that an interface can’t belong to more than one bridge at a time, which put paid to my plan to have one or more VLAN-untagging bridges and a VLAN-tagged bridge.  I figured it could be done with bridges, but I envisaged a stacked mess of bridge-to-tap-to-bridge-to-tap-to-guest connections and decided that wasn’t the way to go.

Next I checked out VDE, which I had first seen a couple of years ago — but something gave me the impression that VDE either wasn’t really going to give me anything more than bridging would, or was not flexible enough to do what I needed.  I like the distributed aspect of VDE (the D in the name) but I’d rarely use that capability so it wasn’t a big drawcard.  I widened my search, and found two interesting projects — one that eventually became my solution, and another that I think is quite incredible in its scope and capability.

First, the amazing one: ns-3, “a great network simulator for research and education”.  As the name suggests, it simulates networks.  It is completely programmable (in fact your network “scripts” are actually C++ code using the product’s libraries and functions) and can be used to accurately model the behaviour of a real network when faced with network traffic.  The project states that ns-3 models of real networks have produced libpcap traces that are almost indistinguishable from the traces of the real networks being modelled…  I’ll take their word for that, but when you get to configure the propagation delay between nodes in your simulated network it seems to me it’s pretty thorough.  Although the way that I found ns-3 was via a forum posting from someone who claimed to have used it to solve a similar situation to mine, and ns-3 does provide a way to “bridge” between the simulated network and real networks, the simulation aspect of ns-3 seems to be more complexity than I’m looking for in this instance.  It does look like a fascinating tool however, and one I’ll definitely be keeping at least half-an-eye on.

To my eventual solution, then: Open vSwitch.  Designed with exactly my scenario in mind (network connection for virtualisation), it has at least two functions that make it ideal for me:

  • a Linux-bridging compatibility mode, allowing the brctl command to still function
  • IEEE 802.1Q VLAN support (implemented rather innovatively, at that)

The Open vSwitch capability can be built as a kernel module (there’s a second module that supports the brctl compatibility mode), or very recent versions have the ability to be run in user-space (with a corresponding performance drop).

On the surface, configuring an OvS bridge does seem to result in something that looks exactly like a brctl bridge (especially if you use brctl and the OvS bridging compatibility feature to configure it), but its native support for VLANs is where it really comes into its own for me.  In summary, for each “real” bridge you configure in OvS, you can configure a “fake” bridge that passes through packets for a single VLAN from the real bridge (the “parent” bridge).  This is exactly what I needed!
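The commands are pleasingly brief; a sketch with example names:

    # the "real" bridge, attached to the physical NIC, carries all tags
    ovs-vsctl add-br ovsbr0
    ovs-vsctl add-port ovsbr0 eth0
    # a "fake" bridge off the parent presents just VLAN 100, untagged
    ovs-vsctl add-br vlan100 ovsbr0 100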

For the guest interfaces that needed full VLAN-awareness, I simply provided the name of my OvS bridge as the name of the bridge for libvirt to connect the guest to; OvS bridge-compatibility mode took care of the brctl commands issued in the background by libvirt.  The VLAN-unaware guest interfaces presented a bit of a challenge: the OvS “fake” bridge does not present itself like a Linux bridge, so it doesn’t work with libvirt’s bridge interface support.  This ended up being moderately easy to overcome as well, thanks to libvirt’s ability to set up an interface configured by an arbitrary script: I hacked the supplied /etc/qemu-ifup script and made a version that adds the tap interface created by libvirt to the OvS fake bridge.
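The guts of my hacked script amount to no more than this (a sketch, using the example fake-bridge name from above; libvirt passes the tap device name as the first argument):

    #!/bin/sh
    # bring the new tap up and attach it to the OvS fake bridge
    ip link set "$1" up
    ovs-vsctl add-port vlan100 "$1"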

The only thing I might want from this now is the ability for an OvS bridge to have visibility over a subset of the VLANs presented on the physical NIC.  The OvS website talks about extensive filtering capability though, so I’ve little doubt that the capability is there and I’m just yet to find it.  On the functionality front, OvS is packed to the gills with support for various open management protocols, including something called OpenFlow, which I’d never heard of before (but I hope that some certain folks in upstate New York have!) and which is apparently an open standard that enables secure centralised management of switches.

Details of exactly how I pulled this all together will come in a page on this site; I’ll make a bunch of pages describing all the mucky details of my KVM adventures and update this post with a link, so stay tuned!


LDAP-backed DNS and DHCP…?

I’m having a bit of an infrastructure redesign here at the Crossed Wires campus.  Each time I have an outage (the last one was caused by a power failure) I learn a little more about the holes in my current setup and what I can do better.

I’m implementing a router box on an old low(-ish)-power PC that will be backed up by a virtual machine on my main virt-box.  I’ve already done most of the preparation for using keepalived to implement VRRP, and a colleague has given me some pointers on using the Linux-HA tools like Heartbeat and DRBD to make services like e-mail and Samba redundant.

I’ve had a soft spot for LDAP for ages; I’ve always thought that putting as much backend data into LDAP as you can would be a really good way to get failover and redundancy.  Instead of having to deal with every single server’s different way of doing replication and failover, just bung everything into LDAP and get that replicating.  Sounds good in theory, but in a nutshell it’s not working out that way for the two least-celebrated but most important components of my (arguably any) network: DNS and DHCP.

There are a number of LDAP-backed DNS projects out there.  If I’m willing to go to the bleeding edge with BIND on my Gentoo build I can get access to the two most talked-about ones (bind9-sdb-ldap and the BIND DLZ LDAP driver), and other solutions like PowerDNS and ldapdns are available.  But none of them offer integration with DHCP, and I’m currently using dhcpd’s “interim DDNS update method” to make sure that hostnames are seen in my DNS when a lease is granted (okay, there’s a Perl daemon that goes with bind9-sdb-ldap, but it seems like a sort-of clunky afterthought).
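For anyone unfamiliar with it, the interim method needs only a small fragment of dhcpd.conf; the domain, key name, and secret below are placeholders:

    ddns-update-style interim;
    ddns-domainname "example.lan.";

    key ddns-key {
        algorithm HMAC-MD5.SIG-ALG.REG.INT;
        secret base64-placeholder==;    # generate a real one with dnssec-keygen
    };

    zone example.lan. {
        primary 127.0.0.1;    # the DNS server to send updates to
        key ddns-key;
    }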

Speaking of DHCP, LDAP backends for it are virtually non-existent.  The only LDAP-enablement I’ve found for ISC DHCP involves putting the config file into LDAP, not the leases…  I actually used that for a few days a while ago and pulled it out because it was more work to do it that way (and brought no benefit in failover).

It seems to me it would be a project ripe for the picking: take an integrated DNS/DHCP server like dnsmasq and make it write into LDAP instead of to a file.  If I had more free time I’d probably have a go at it, except for the fact that no-one really seems to be that interested in storing DNS and DHCP in LDAP: that it hasn’t been done says to me that there’s no demand for it, and it’d end up being a big waste of time and effort.

Over to you, lazyweb…  Is this a yawning chasm of unfulfilled networking dreams, or a case of me trying to make something more complex than it needs to be?  After all, the rest of the world gets by with DNS master-slave and DHCP failover, they should be good enough for me too, right?  😉


Ubuntu 8.04 Wireless Weirdness

Over the last fortnight I finally got the wriggle-on to upgrade all my (K)Ubuntu systems to Hardy Heron. Various issues occurred with each of them, but overall the entire exercise went smoothly (my wife’s little old Fujitsu Lifebook was probably smoothest of the lot). I had one rather vexing issue however, on my old (I’m tempted to say “ancient”) Vaio laptop.

The onboard wireless on this thing is an ipw2100, hence only 802.11b, and I had a PCMCIA 802.11g NIC lying around (actually it came from the Lifebook, liberated from there after I bought it a Mini-PCI 802.11g card on eBay). On Gutsy, I used the hardware kill-switch to disable the onboard adapter to make double-sure that it wouldn’t try and drag the network down to 11Mbps.

This laptop was the last machine I upgraded to Hardy, and I was playing with KDE 4 on it so I was looking forward to seeing how much KDE4-ness made it into Hardy. While the upgrade was taking place the wi-fi connection dropped out, but I didn’t think anything of it since Ubuntu upgrades try and restart the new versions of things and I figured NetworkManager had fallen and couldn’t get up. After the reboot, however, KNetworkManager (still the KDE3 version, don’t get me started there) could find no networks — could find no adapters, in fact.

I logged back into KDE3 and poked. Still no wireless (as if the desktop would make a difference, but I had to make *some* start on pruning the fault tree). The Hardware Drivers Manager was reporting that the Atheros driver was active (for the PCMCIA card), and an unplug-plug cycle generated all kinds of good kernel messages.

On a whim, I flicked the hardware kill-switch for the onboard wifi[1]. Almost instantly, KNetworkManager prompted me to unlock my wallet — it had found my network and wanted the WPA passphrase. I provided it, and got a connection: via the PCMCIA NIC.

“That’s odd”, I thought, and flicked the switch. A few seconds passed, and the link dropped. Flicked the switch on, link came back. Flicked the switch off again: this time a few minutes went past, but again the link failed. Tried it several times again, and the same thing happened. The state of the kill-switch for the onboard NIC was influencing the other NIC too!

It seems that this is altered behaviour in NetworkManager, applying the state of the hardware switch to all wi-fi adapters. If it annoys me significantly I’d like to think I’ll trawl changelogs, or even better lodge something on Launchpad… more likely though I’ll forget all about it having found a kludgy workaround.

I’ve now added ipw2100 to the module blacklist and things work okay (presumably because the state of the onboard switch can’t be reported any more). I’ll also have a think about whether a few dollars for another g-capable Mini-PCI NIC will be throwing good money after bad, as this laptop really is quite long-in-the-tooth.
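(On Hardy the blacklist lives in /etc/modprobe.d/blacklist, so the workaround is a one-liner:

    echo "blacklist ipw2100" | sudo tee -a /etc/modprobe.d/blacklist

followed by a reboot, or an rmmod ipw2100 to take effect immediately.)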

Oh yes, that’s right… KDE 4. Next time perhaps. 🙂

[1] I can’t think why I did this. I knew that I’d disabled 802.11b in my access point, to make triple-sure an 802.11b device wouldn’t slow my network down… The onboard 802.11b NIC would never successfully get a connection.


Zeroshell redux

I wrote about Zeroshell, and how I thought it was pretty great. I still do, but it hasn’t taken centre-stage in my network configuration like I thought it would. I’ve had to tone down my raves about some of its integrated features as well.

The fact that it hasn’t taken centre-stage is possibly as much to do with VMware’s bogus clock-drift problems as anything, as I haven’t dedicated hardware to my Zeroshell instance yet (I could keep it running virtual, but some of the things I want to do with it will make more sense if it’s a separate machine). VMware Server takes another barb for its handling of VLAN tagging (but to be fair that might just be the way the Linux 8021q module works). It seems that if you have any VLAN definitions on a network card, VMware won’t get to see any VLAN tags on that NIC. You can get a guest attached to a bridged interface to see the real VLAN tags, but only if Linux has not got any VLAN awareness on that NIC.

Alright, so enough ragging on VMware. I have Zeroshell attached to the networks it needs and all is fine. Except that I can’t actually change anything! The web interface that I spoke so highly of originally is actually very restricted in some areas. One of these is in the RADIUS server, and it bit me badly when I decided I’d use Zeroshell’s RADIUS server to authenticate access to the Web interface of my Linksys switch. Turns out that the Linksys firmware expects a particular attribute to appear in the response from the RADIUS server.

The fact that Linksys don’t document this anywhere is not Zeroshell’s fault, but the lack of any interface for updating the records beyond what Zeroshell uses for its own applications is a bit of an issue. It means that instead of a Zeroshell box potentially becoming the hub of administration functions, it is in danger of becoming just another little vertical application server that doesn’t integrate.

Having said that, the backend for most (all?) authentication data is LDAP, so a tool like phpLDAPadmin might be usable to extend the base records. But, arguably, I shouldn’t have to do that! It is still beta software though, so improvements and enhancements will be made.

The other area where it’s a bit lacking is monitoring/graphing. Okay sure, I’d probably integrate Zeroshell into the rest of my Cacti setup, but it would be nice if Zeroshell did like other router distros and had a pre-built statistics/graphing page.

Zeroshell is still my pick (I revisited pfSense and fixed the problem updating, but to me it doesn’t have enough function to justify running its own hardware), but it’s just not quite the bees-knees it was when I first saw it.


Zeroshell: network services distro

I love it when, almost by chance, I find something new. I decided yesterday to look at FLOSS-based router distributions. I’ve been using IPCop for a while, as an easy way to create a VPN to another location. Unfortunately, IPCop failed my latest requirement: 802.1Q VLAN support. So I went surfing and found an absolute ripper in Zeroshell, but I didn’t find it straight away…

First I found pfSense, a FreeBSD-based distro that seemed to fit the bill; indeed, the very first question the Live-CD asked me on bootup was “do you want to use VLANs?”. It also promised a very extensive set of additional packages that extend its capability into areas like file/print, WWW proxying, and a host of other features. However, even though it has a very nice web-based configuration facility, due to what looks like a problem on their web site I was unable to even look at what packages are available. Since some of the basic function I would like is provided by these packages, I’ve had to move on, but pfSense gets an honourable mention because of its easy installation and excellent configuration interface.

I looked again at Smoothwall, but soon remembered why I discounted it at the time I chose IPCop. For me, the function I think I’d need sits a bit too close to the limits of what the “community” (read, “free”) version provides. Astaro would go in this category too, except that I was too dense to be able to even find much clear information about the level of function you get in their community version. So no recommendation on either of these, as I’ve never used either — I do work with a fellow who happily uses Smoothwall though.

Then, I came across Zeroshell. The lead developer describes it as “a small Linux distribution for servers and embedded devices aimed at providing the main network services a LAN requires”. And does it ever! It’s a veritable Aladdin’s cave of features and functions. It certainly does everything I was looking for, from VLAN tagging through QoS to VPNs, from an SPI firewall to multi-zone DNS and multi-subnet DHCP servers, but also has Certificate Management (using a self-signed CA certificate or one you import), a RADIUS server, WiFi access-point capability with multiple SSID and VLAN mapping, captive portal or “normal” HTTP proxying, 802.1d bridging, clients for Dynamic DNS, a Kerberos 5 server, plus a raft of other capabilities. Zeroshell (named because the author wanted to provide a system that was extremely flexible and powerful yet did not require users to access a shell prompt) is remarkably feature-rich, and yet the download for the ISO image is only around 100MB (a bit beefier than pfSense, admittedly, which weighed in at around 60MB).

There are a couple of downsides, however. Until very recently, installing to a hard disk was not supported. The distro is designed to boot from a CD only, but can use an installed hard disk (if available) for what it calls “databases”, where configuration and other data is kept. With the latest release, however, the developers have created a “1GB USB drive” download (the download itself isn’t 1GB), which is designed to be copied to a USB pendrive or hard disk.

The other downside (and it’s not fair to say that, as will become clear) is the web interface. Not because it’s ugly or not functional: it is neither of those. It’s clean and well laid out, and fairly consistent. It’s very technical, however. Where other distros tackle the “SOHO divide” by hiding details such as protocol numbers or port ranges, Zeroshell uncovers all this stuff in its gory detail. So it’s great for someone like me, who looks at the interfaces on other systems and pines for the knobs I can’t fiddle with, but it’s not for newcomers.

It looks to be a fairly new project (current release is 1.0beta9), but the forums look good and there does seem to be a bit of activity around it. I’m running Zeroshell in a VMware guest at the moment while I kick the tyres (the VMware download is also available from the project’s mirrors), but I reckon this one will be a keeper!
