

Two of the four keynotes at LCA 2011 referenced the depletion of the IPv4 address space (and I reckon if I looked back through the other two I could find some reference in them as well).  I think there’s a good chance Geoff Huston was lobbying his APNIC colleagues to lodge the “final request” (for the two /8s that triggered the final allocation of the remaining five, officially exhausting IANA’s pool) a week earlier than they did, as it would have made the message of his LCA keynote a bit stronger.  Not that it was a soft message: we went from Vint Cerf the day before, who said “I’m the guy who said that a 32-bit address would be enough, so, sorry ’bout that”, to Geoff Huston saying “Vint Cerf is a professional optimist.  I’m not.”  But I digress…

I did a bit of playing with IPv6 over the years, but it was too early and too broken when I did (by “too broken” I refer to the immaturity of dual-stack implementations and the lack of anything actually reachable on the IPv6 net).  However, with the bell of IPv4 exhaustion tolling, I had another go.

Freenet6, which now goes by gogonet or gogo6, was my first port of call.  I had looked at Gogo6 most recently, and still had an account.  It was just a matter of deciding whether or not I needed to make a new account (hint: I did) and reconfiguring the gw6c process on my router box.  Easy as that, I had a tunnel — better still, my IPv6-capable systems on the LAN also had connectivity thanks to radvd.  From Firefox (and Safari, and Chrome) on the Mac I could score 10/10 on both of the IPv6 test sites!
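
For the curious, the radvd side of that is only a few lines.  A minimal sketch, assuming the LAN interface is eth0 and using a documentation prefix in place of my real allocation:

```
# /etc/radvd.conf -- minimal sketch; eth0 and the prefix are placeholders
interface eth0
    AdvSendAdvert on;
    prefix 2001:db8:1234:1::/64
        AdvOnLink on;
        AdvAutonomous on;
```

With that running, hosts on the LAN pick up an address in the advertised /64 and a default route via auto-configuration, no per-host setup required.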

My joy was short-lived, however.  gw6c was proving to be about as stable as a one-legged tripod, and on top of that Gogo6 had changed the address range they allocated me.  That wouldn’t have been too bad, except that all my IPv6-capable systems still had the old address and were trying to use it — it seems IPv6 auto-configuration doesn’t un-configure an address that’s no longer valid (at least not by default).  I started to look for possible alternatives.

Like many who’ve looked at IPv6 I had come across Hurricane Electric — in the countdown to IPv4 exhaustion I used their iOS app “ByeBye v4”.  They offer free v6-over-v4 tunneling, and the configuration in Gentoo is very simple.  I also get a static allocation of an IPv6 address range that I can see in the web interface.  The only downside I can see is that I had to nominate which of their locations I wanted to terminate my tunnel at; they have no presence in Australia, the geographically-nearest location being Singapore.  I went for Los Angeles, thinking that would probably be closest network-wise.  The performance has been quite good, and it has been quite reliable (although I do need to set up some kind of monitoring over the link, since everything that can talk IPv6 is now doing so).
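
To give an idea of how simple the Gentoo side is, here’s a sketch of the netifrc-style tunnel config.  The addresses below are documentation placeholders, not real details — the actual endpoint and prefix come from the tunnel-broker’s tunnel details page:

```
# /etc/conf.d/net -- 6in4 tunnel sketch (placeholder addresses; substitute
# the server/client IPv4 endpoints and IPv6 prefix from the broker)
iptunnel_sit1="mode sit remote local ttl 255"
config_sit1="2001:db8:1f0a:cafe::2/64"
routes_sit1="default via 2001:db8:1f0a:cafe::1"
```

Then it’s just a matter of symlinking /etc/init.d/net.sit1 to net.lo and adding it to the default runlevel.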

In typical style, after I’d set up a stable tunnel and got everything working, I decided to learn more about what I’d done.  What is IPv6 anyways?  Is there substance to the anecdotes flying around that are saying that “every blade of grass on the planet can have an IPv6 address” and similar?  Well, a 128-bit address provides for an enormous range of addresses.  The ZFS guys are on the same track — ZFS uses 128-bit counters for blocks and inodes, and there have been ridiculous statements made about how much data could theoretically be stored in a filesystem that uses 128-bit block counters.  To quote the Hitchhiker’s Guide to the Galaxy:

Space is big. Really big. You just won’t believe how vastly, hugely, mind-bogglingly big it is. I mean, you may think it’s a long way down the road to the chemist’s, but that’s just peanuts to space.

The Guide, The Hitchhiker’s Guide To The Galaxy, Douglas Adams, Pan Books 1979

Substitute IPv6 (or ZFS) for space.  To try and put into context just how big the IPv6 address range is, let’s use an example: the smallest common subnetwork.

When IPv4 was first developed, there were three address classes, named, somewhat unimaginatively, A, B and C.  Class A was all the networks from 1.x.x.x to 127.x.x.x, and each had about 16 million addresses.  Class B was all the networks from 128.0.x.x to 191.255.x.x, each network with 65,534 usable addresses.  Class C went from 192.0.0.x to 223.255.255.x, and each had 254 usable addresses.  Other areas, such as 0.x.x.x and the networks from 224.x.x.x up, were reserved.  So, in the early days, the smallest network of hosts you could have was a network of 254 hosts.  Later, something called Classless Inter-Domain Routing (CIDR) was introduced, which eliminated the fixed boundaries of the classes and made it possible to “subnet” or “supernet” networks — divide or combine the networks to make networks that were just the right size for the number of hosts in the network (and, with careful planning, could be grown or shrunk as plans changed).  With CIDR, since the size of the network was now variable, addresses had to be written with the subnet mask — a format known as “CIDR notation” came into use, where an address would have the number of network-prefix bits written after it, like this: 192.0.2.0/24 (a 24-bit prefix, the same size as an old Class C).
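
Python’s ipaddress module makes a handy calculator for playing with CIDR — a quick sketch (the addresses are just documentation examples):

```python
import ipaddress

# A /24 is the CIDR equivalent of an old Class C: 254 usable hosts.
net = ipaddress.ip_network("192.0.2.0/24")
print(net.num_addresses)          # 256 total (network + broadcast + 254 hosts)

# Subnetting: split the /24 into four /26s, each right-sized for ~60 hosts.
subnets = list(net.subnets(new_prefix=26))
print([str(s) for s in subnets])  # ['192.0.2.0/26', '192.0.2.64/26', ...]

# Supernetting: combine two adjacent /24s into a single /23.
a = ipaddress.ip_network("192.0.2.0/24")
b = ipaddress.ip_network("192.0.3.0/24")
print(list(ipaddress.collapse_addresses([a, b])))  # one IPv4Network('192.0.2.0/23')
```

The same module (and the same notation) handles IPv6 networks, which is rather the point of carrying CIDR forward.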

Fast-forward to today, with IPv6…  IPv4’s CIDR notation carries over to IPv6 (not least because the masks are so huge).  In IPv6, the smallest network normally allocated is what is called a “/64”.  This means that out of the total 128-bit address range, 64 bits identify which network the address belongs to.  Let’s think about that for a second.  There are 32 bits in an IPv4 address — that means the entire IPv4 Internet would fit in an IPv6 network with a /96 mask (128-32=96).  But the default smallest IPv6 subnet is a /64 — the size of the existing IPv4 Internet squared!
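
The arithmetic is easy enough to check — a quick back-of-envelope sketch in Python:

```python
# Back-of-envelope arithmetic for the claims above.
ipv4_internet = 2 ** 32           # every possible IPv4 address

# The whole IPv4 Internet fits in the host part of an IPv6 /96:
assert 128 - 32 == 96

# A single /64 has 2^64 host addresses -- the IPv4 Internet squared:
hosts_per_64 = 2 ** 64
assert hosts_per_64 == ipv4_internet ** 2

print(f"{hosts_per_64:,}")        # 18,446,744,073,709,551,616 addresses in ONE /64
```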

Wait a second though, it gets better…  When I got my account with Gogo6, they offered me up to a /56 — that’s a range that covers 256 /64s, or 256 Internet-squareds!  Better still, the Hurricane Electric tunnel-broker account gave me one /64 and one /48 — sixty-five thousand networks (2^16 = 65,536 /64s), each the size of the IPv4 Internet squared!  And how much did I pay for any of these allocations?  Nothing!

I can’t help but think that folks are repeating similar mistakes from the early days of IPv4.  A seemingly limitless address range (Vint said that 32 bits would be enough, right?) was given away in vast chunks.  In the early days of IPv4 we had networks with two or three hosts on them using up a Class C because of the limitations of addressing — in IPv6 we have LANs of maybe no more than a hundred or so machines taking up an entire /64 because of the way we designed auto-configuration.  IPv6 implementations now will be characterised not by how well their dual-stack implementations work, or how much more secure transactions have become thanks to the elimination of NAT, but by how much of the addressable range they are wasting.  So, is IPv6 just Same Sh*t, Different Millennium?

Like the early days of IPv4 though, things will surely change as IPv6 matures.  I guess I’m just hoping that the folks in charge are thinking about it, and not just high on the amount of space they have to play with now.  Because one day all those blades of grass will want their IP addresses, and the Internet had better be ready.

Update 16 May 2011: I just listened to Episode 297 of the Security Now program…  Steve Gibson relates some of his experience getting IPv6 allocation from his upstream providers (he says he got a /48).  In describing how much address space that is, he made the same point (about the “wasteful” allocation of IPv6).  At about 44:51, he starts talking about the current “sky is falling” attitude regarding IPv4, and states “you’d think, maybe they’d learn the lesson, and be a little more parsimonious with these IPs…”.  He goes on to give the impression that the 128-bit range of IPv6 is so big that there’s just no need to worry about it.  I hope you’re right, Steve!


LDAP-backed DNS and DHCP…?

I’m having a bit of an infrastructure redesign here at the Crossed Wires campus.  Each time I have an outage (the last one was caused by a power failure) I learn a little more about the holes in my current setup and what I can do better.

I’m implementing a router box on an old low(-ish)-power PC that will be backed up by a virtual machine on my main virt-box.  I’ve already done most of the preparation for using keepalived to implement VRRP, and a colleague has given me some pointers on using the Linux-HA tools like Heartbeat and DRBD to make services like e-mail and Samba redundant.
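
As a sketch, the keepalived side of the VRRP setup looks something like this — the interface, router-id, priorities and virtual address are all placeholders for my boxes, not a drop-in config:

```
# /etc/keepalived/keepalived.conf -- sketch; values are placeholders
vrrp_instance LAN_GW {
    state MASTER              # the VM backup would say BACKUP
    interface eth0
    virtual_router_id 51
    priority 150              # the backup uses a lower priority, e.g. 100
    advert_int 1
    virtual_ipaddress {       # the shared gateway address the LAN points at
    }
}
```

The LAN clients only ever know about the virtual address; whichever box currently holds MASTER answers for it, so a failure of the physical router just means the VM takes over the gateway IP.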

I’ve had a soft spot for LDAP for ages; I’ve always thought that putting as much backend data into LDAP as you can would be a really good way to get failover and redundancy.  Instead of having to deal with every single server’s different way of doing replication and failover, just bung everything into LDAP and get that replicating.  Sounds good in theory, but in a nutshell it’s not working out that way for the two least-celebrated but most important components of my (arguably any) network: DNS and DHCP.

There are a number of LDAP-backed DNS projects out there.  If I’m willing to go to the bleeding edge with BIND on my Gentoo build I can get access to the two most talked-about ones (bind9-sdb-ldap and the BIND DLZ LDAP driver), and other solutions like PowerDNS and ldapdns are available.  But none of them offer integration with DHCP, and I’m currently using dhcpd’s “interim DDNS update method” to make sure that hostnames are seen in my DNS when a lease is granted (okay, there’s a Perl daemon that goes with bind9-sdb-ldap, but it seems like a sort-of clunky afterthought).

Speaking of DHCP, LDAP backends for it are virtually non-existent.  The only LDAP enablement I’ve found for ISC DHCP involves putting the config file into LDAP, not the leases…  I used that for a few days a while ago and pulled it out because it was actually more work to do it that way (and for no benefit in failover).

It seems to me it would be a project ripe for the picking: take an integrated DNS/DHCP server like dnsmasq and make it write into LDAP instead of to a file.  If I had more free time I’d probably have a go at it, except for the fact that no-one really seems to be that interested in storing DNS and DHCP in LDAP: that it hasn’t been done says to me that there’s no demand for it, and it’d end up being a big waste of time and effort.

Over to you, lazyweb…  Is this a yawning chasm of unfulfilled networking dreams, or a case of me trying to make something more complex than it needs to be?  After all, the rest of the world gets by with DNS master-slave and DHCP failover, they should be good enough for me too, right?  😉


Zeroshell redux

I wrote about Zeroshell, and how I thought it was pretty great. I still do, but it hasn’t taken centre-stage in my network configuration like I thought it would. I’ve had to tone down my raves about some of its integrated features as well.

The fact that it hasn’t taken centre-stage is possibly as much to do with VMware’s bogus clock-drift problems as anything, as I haven’t dedicated hardware to my Zeroshell instance yet (I could keep it running virtual, but some of the things I want to do with it will make more sense if it’s a separate machine). VMware Server takes another barb for its handling of VLAN tagging (though to be fair that might be how the Linux 8021q module works). It seems that if you have any VLAN definitions on a network card, VMware won’t see any VLAN tags on that NIC. You can get a guest attached to a bridged interface to see the real VLAN tags, but only if Linux has no VLAN awareness on that NIC.

Alright, so enough ragging on VMware. I have Zeroshell attached to the networks it needs and all is fine. Except that I can’t actually change anything! The web interface that I spoke so highly of originally is actually very restricted in some areas. One of these is in the RADIUS server, and it bit me badly when I decided I’d use Zeroshell’s RADIUS server to authenticate access to the Web interface of my Linksys switch. Turns out that the Linksys firmware expects a particular attribute to appear in the response from the RADIUS server.

The fact that Linksys don’t document this anywhere is not Zeroshell’s fault, but the fact that there is no interface allowing me to update the records beyond what Zeroshell uses for its own applications is a bit of an issue.  It means that instead of a Zeroshell box potentially becoming the hub of administration functions, it is in danger of becoming just another little vertical application server that doesn’t integrate.

Having said that, the backend for most (all?) authentication data is LDAP so a tool like PHPLDAPAdmin might be usable to extend the base records. But, arguably, I shouldn’t have to do that! It is still beta software though, so improvements and enhancements will be made.

The other area that it’s a bit lacking in is monitoring/graphing. Okay sure, I’d probably integrate Zeroshell into the rest of my Cacti setup, but it would be nice if Zeroshell did like other router distros and had a pre-built statistics/graphing page.

Zeroshell is still my pick (I revisited pfSense and fixed the problem updating, but to me it doesn’t have enough function to justify running its own hardware), but it’s just not quite the bees-knees it was when I first saw it.


Zeroshell: network services distro

I love it when, almost by chance, I find something new. I decided yesterday to look at FLOSS-based router distributions. I’ve been using IPCop for a while, as an easy way to create a VPN to another location. Unfortunately, IPCop failed my latest requirement: 802.1Q VLAN support. So I went surfing and found an absolute ripper in Zeroshell, but I didn’t find it straight away…

First I found pfSense, a FreeBSD-based distro that seemed to fit the bill–indeed the very first question the Live-CD asked me on bootup was “do you want to use VLANs?”. It also promised a very extensive set of additional packages that extend its capability into areas like file/print, WWW proxying, and a host of other features. However, even though it has a very nice web-based configuration facility, due to what looks like a problem on their web site I was unable to even look at what packages are available. Since some of the basic function I would like is provided by these packages, I’ve had to move on–but pfSense gets an honourable mention because of its easy installation and excellent configuration interface.

I looked again at Smoothwall, but soon remembered why I discounted it at the time I chose IPCop. For me, the level of function I think I’d use is a bit too close to the threshold of function in the “community” (read, “free”) version. Astaro would go in this category too, except that I was too dense to be able to even find much clear information about the level of function you get in their community version. So no recommendation on either of these, as I’ve never used either–I do work with a fellow who happily uses Smoothwall though.

Then, I came across Zeroshell. The lead developer describes it as “a small Linux distribution for servers and embedded devices aimed at providing the main network services a LAN requires”. And does it ever! It’s a veritable Aladdin’s Cave of features and functions. It certainly does everything I was looking for, from VLAN tagging through QoS to VPNs, from an SPI firewall to multi-zone DNS and multi-subnet DHCP servers, but also has Certificate Management (using a self-signed CA certificate or one you import), a RADIUS server, WiFi access-point capability with multiple SSID and VLAN mapping, captive portal or “normal” HTTP proxying, 802.1d bridging, clients for Dynamic DNS, a Kerberos 5 server, plus a raft of other capabilities. Zeroshell–named because the author wanted to provide a system that was extremely flexible and powerful yet did not require users to access a shell prompt–is remarkably feature rich, and yet the download for the ISO image is only around 100MB (a bit beefier than pfSense, admittedly, which weighed in at around 60MB).

There are a couple of downsides, however. Until very recently, installing to a hard disk was not supported. The distro is designed to boot from a CD only, but can use an installed hard disk (if available) for what it calls “databases”, where configuration and other data is kept. With the latest release, however, the developers have created a “1GB USB drive” download (the download itself isn’t 1GB), which is designed to be copied to a USB pendrive or hard disk.

The other downside (and it’s not fair to say that, as will become clear) is the web interface. Not because it’s ugly or not functional: it is neither of those. It’s clean and well laid out, and fairly consistent. It’s very technical, however. Where other distros tackle the “SOHO divide” by hiding details such as protocol numbers or port ranges, Zeroshell uncovers all this stuff in its gory detail. So it’s great for someone like me, who looks at the interfaces on other systems and pines for the knobs I can’t fiddle with, but it’s not for newcomers.

It looks to be a fairly new project (current release is 1.0beta9), but the forums look good and there does seem to be a bit of activity around it. I’m running Zeroshell in a VMware guest at the moment while I kick the tyres–the VMware download is also available from the project’s mirrors–but I reckon this one will be a keeper!
