Archive for category Hardware

Simultaneous Multi-Threading at McDonalds

Keeping on the analogy theme…  This time, it’s an explanation of Simultaneous Multi-Threading (SMT).  SMT was introduced to the z Systems architecture with the z13, and many technical specialists (myself included) have struggled with the standard explanations of how SMT is meant to improve overall system performance.  Here’s yet another attempt at an explanation!  Some folks might be a bit affronted at the “compare and contrast” of z Systems and a fast food drive-through, but it’s just an analogy…

So in Brisbane just about every McDonalds has a drive-through.  They used to have a single lane, with the menu boards and a speaker for the operator inside the restaurant to take your order.  As the customer, once you placed your order you then would drive forward to the “first window” where the cashier would take payment, then you’d drive to the “next window” to receive your order and proceed away.  Apologies to anyone offended by me feeling the need to explain how a drive-through works, but I don’t know how they work in your part of the world so I’m just covering how they work in mine.

Many of these drive-throughs have been redeveloped to add a second lane and order taking station — complete with a second set of menu boards.  They didn’t duplicate anything else in the process though: same payment and collection windows, even in most cases a single cashier taking orders alternately from both stations.


A dual-lane McDonalds drive thru, AKA CPUs 0x00 and 0x01

Why did McDonalds do this?  Without duplicating anything else in the whole chain, what benefit does adding a queue provide?  If two cars arrive at the stations at the same time there’s going to be contention for the cashier.  They then have contention to enter the single lane going past the windows.  Not only that, the restaurant had to give up physical space to install the second station — perhaps they lost a few parking spaces, or a garden.

I had passed through this kind of drive-through a few times, and never clearly saw the benefit.  Sometimes I’d drive up to the station with the shortest queue, only to be stuck behind someone who seemed to be ordering one of everything from the menu…  Other times I’d pull up to an empty station, next to the only other car in the system (at the other station), but because the car at the occupied station was already placing their order I still had to wait the same amount of time as I would have in a single-lane system.

Then I finally realised.  The multiple queues aren’t for me as a customer — they’re for the restaurant.  Specifically, they’re for the food production stations in back-of-house.  To understand why it makes sense to have two lanes, it’s critical to realise that the drive-through is not just the speakers and the lane and the windows, it’s the method by which instructions are given to the many and various individuals that make the food and package the orders.  Each of those individuals has their own role and contribution to the overall task of serving the customer; from the grillers to the fryers to the wrappers to the packers (sorry, I’ll bet McDonalds has formal names for each of the team member roles but I don’t know them).

Having multiple order stations means that the orders get to the burger makers and packers faster, making them more efficient and improving their throughput.  The beverage orders go to the little automatic drink-pouring machine instantly, so that everyone’s Cokes and Fantas are ready and waiting sooner.  One car wants a Chicken McWrap, the next just wants a McFlurry?  No contention there, those orders can be getting made at the same time.

Maybe you’re asking “so what does this have to do with SMT?”  Well, the order stations are our threads.  The cashiers and the packers are the fetch-and-store units, the parts of the processor that fetch instructions from memory and store the results back.  The cashier’s terminal is the instruction decode unit.  The food preparers in the back-of-house, they are the processor execution units; the integer units, the DFP and BFP, the SIMD unit, the CPACF, and more — that’s where the real work is done.  To a large extent all of those execution units operate independently, just like our McD food preparers.  SMT, like our two drive-through lanes, makes sure that all those execution units are as busy as possible.  One thread issues an integer add instruction, the other thread is doing a crypto hash using the CPACF?  They can be happening simultaneously.

We’ve been saying all along that SMT will likely decrease the perceived speed of an individual unit of work, but overall more work will get done across all units of work.  When I’ve been in a two-lane drive-through and placed my order, and then had to wait while I merged with the cars coming from the other lane, I have to agree that it seemed like the merging delayed me.  However, if that had been a single-lane drive-through, chances are I would have been in a longer queue of cars before even reaching the order station, and that metric isn’t even measured by the queue management built into McDonalds’ terminals.  Likewise, on a busy system without SMT, it’s difficult to say how long instructions are getting queued in the operating system scheduler before even making it to the processor to be dispatched.  Basically, I’m saying that we may see OS scheduler queuing reduce, and therefore improved “performance” at the OS level over and above the actual benefit of improved processor throughput, even if our SMT ratio doesn’t get anywhere near the impossible 2:1.

If ten cars line up at the two windows and they all want a Big Mac and a Hot Apple Pie then there’s probably not going to be much gain there.  Today’s McDonalds menu is quite diverse though, which means the chances of orders having “non-intersecting overlaps” are greatly improved.  On z Systems, ensuring a variety of workloads and transaction types would help to ensure a diversity in the instruction stream that would give SMT a good opportunity to yield benefit.  This means mixing applications as much as possible, and using Large Memory and CPU Pooling support in z/VM 6.3 to bring lots of different workloads into heterogeneous LPARs.

I’ll bet that McDonalds worked out that simply adding an extra entry lane meant they could move more food items in a given time — and McDonalds’ business is to sell food on a massive scale.  In the same way, the goal of z Systems has never been to make one single workload run as fast as possible, but to make the hundreds or thousands of workloads across an enterprise run at highest efficiency.

Analogy can be found anywhere

This post may come across as self-serving, semi-advertorial, promotional, or just plain crappy (or all of the above).  I don’t apologise, it’s my blog and I’ll write what I want to.  However, because it’s the Internet and it’s almost guaranteed that someone reading this will think I should have warned them… consider yourself warned, fair reader.

My recent post about experiencing things for the last time started me off on a somewhat interesting train of thought.  There I was, sitting on an aircraft that was being retired, which must happen fairly often around the world–after all we don’t see too many 707s or TriStars in the skies any more.  Qantas used to have a lot of 767s, and I picked up the inflight magazine to see the numbers today.

As at September 2014, Qantas had 6 Boeing 767s in their fleet (down from 13 at 30 June 2014, further down from 20 as at July 2013, according to the Qantas Data Book 2014).  Then I looked at the total fleet size: just over 200 aircraft in total (again, looking at the Data Book 2014, 203 as at 30 June 2014).  The numbers started wandering around in my head, and soon put me in mind of another piece of hardware requiring large investment, and just as close to my heart as jet aircraft — mainframe computers.

I started to do some research into the numbers I looked at in the flight magazine.  According to the registration data available from CASA, there is only one 767 in Australia (a 767-381F freighter) not registered to Qantas.  Therefore, during 2014, Qantas operated all but one of the dozen-odd Boeing 767 aircraft in Australia.  Thousands of people every day, travelling on an aircraft of which there were only a dozen working examples in the country–in fact, by the time I had my last 767 flight, I wonder how many of the September Six were left?  Maybe VH-OGO was the last in service by then…?

Okay, you might say, the B767 doesn’t count as it’s old and Qantas was retiring them.  Righto, point taken.  Let’s look at what is the mainstay of domestic inter-capital air travel in Australia then–the B737.  Qantas lists 70 as at June 30 (57 owned and 13 leased) while Virgin Australia shows 74.  CASA lists some freighters and a half-dozen registered to “Nauru Air Corporation”, but let’s stick to QF and VA (apart from a couple of B787s, Jetstar’s fleet is all Airbus and much smaller than Qantas or Virgin).  The most widely-used commercial jet aircraft in the country, and there’s only 140-ish of them?  So what, you might say: they’re jet planes, of course there won’t be many.

The numbers continue: again as at 30 June 2014 the total number of Boeing 747s and Airbus A380s and A330s in the Qantas fleet was 36 aircraft, and by now some of the B747s have been retired.  Think about that for a moment: Qantas is able to service all of its international routes, including covering maintenance intervals, using less than forty aircraft?  It’s not like Qantas has a small network… yes they extend their reach through alliances and codeshare just like all airlines do, but Qantas services Los Angeles direct from Brisbane, Sydney, and Melbourne, daily (you’d have to think that’s at least six planes by itself) as well as daily flights into cities across Asia and the few routes into Europe that haven’t been taken over by Emirates.  Three-dozen planes seems light…

A popular criticism of mainframes (once you get past the “old, room-sized, punch card” nonsense) is that there aren’t many of them.  Apparently if it was such a good system everyone would use it, and the fact that not many companies do is proof that it isn’t.  Also, apparently it’s risky to use a system that comparatively few other businesses use.

Imagine for a moment if airlines around the world started subscribing to the same kind of thinking that seems to have taken hold in IT:

Operations Manager: It’s too risky for us to use these large, expensive aircraft.  We don’t have enough of them to justify training pilots to operate them, and it costs a fortune when we have to service one.  Plus, did you know each one costs $100million?

C-suite: The last OM said these aircraft are the best fit for our operations, that we get value in return for the cost.  Are you saying there’s an alternative?

OM: You bet!  Did you know we can buy hundreds of light aircraft for what it costs to buy one jet?

C-suite: Really?  Sounds complicated…

OM: No way!  It’s simple, light aircraft are much less complicated to operate and maintain, and it’s much cheaper and easier to get pilots that know how to fly them.

C-suite: I’ve seen a light aircraft, they’re… small.  Won’t we need more of them to carry the load of our jets?

OM:  Maybe… ah but it won’t be that bad: how often are we running those big jets half-empty anyway?

C-suite: Hmm…  I assume you’ve done some projections?

OM: Yes, the acquisition cost of a fleet of light aircraft is a fraction of that of a fleet of jets!

C-suite: Acquisition cost…  I seem to recall that we should be worried about more than cost of acquisition…

OM:  Did I mention the acquisition cost of a fleet of light aircraft is a fraction of that of a fleet of jets?

C-suite: I guess that was all!  Okay, sounds like a great plan!

It seems ludicrous, and would never happen in real life.  Outside aviation, imagine a similar scenario with a transport company replacing B-doubles with postie bikes, or an energy company replacing wired electricity distribution with boxes of AA batteries sent to homes.  For some reason though it’s not far-fetched in IT, and yet over the years conversations like that have happened in too many companies.

There aren’t many Boeing 737s in Australia, but that isn’t stopping Qantas and Virgin (and airlines around the world) from using equipment that is fit for purpose.  Why should mainframes be different?

DisplayLink and x2x bring back Zaphod mode

Ever since work issued me a Lenovo T61 and I installed Fedora on it, I have lamented the loss of something that X aficionados referred to as “Zaphod mode”.  By gluing together a few different software and hardware components I managed to get close to the old Zaphod mode days — but first some background…

Usually when you set up a multi-monitor installation you get a single desktop that spans all the screens.  This is fine if you only ever use one virtual desktop, but on Linux multiple virtual desktops are the norm.  When I started using multiple screens in Linux, I loved the extra screen real estate but the fact that switching virtual desktops caused *all* the windows on all the screens to switch really bugged me.  I wanted the ability to have something — like an email program, or a web browser — stay on one screen while I switched between desktop views on the other screen.  Or better still, the ability for both screens to have virtual desktops that were independent of each other.

Enter “Zaphod mode”, named for Zaphod Beeblebrox from The Hitchhiker’s Guide to the Galaxy by Douglas Adams.  Beeblebrox, who was President of the Galaxy before he stole the Starship Heart Of Gold, had two heads that were independent of each other.  In X server terms, multiple display devices are often referred to as “heads”.  So you can probably deduce that “Zaphod mode” refers to an operating mode of the X server where the multiple “heads” or display devices function as different displays.

Go back far enough and you get to a point where that was the standard mode of operation of X.  The X extension “Xinerama” was developed to provide the merging of different X displays into a single screen.  NVidia also had a hardware/firmware based equivalent called TwinView, where multiple heads on an NVidia card (and even sometimes heads on different cards) could be joined.  These extensions were not without their problems, however: it was common for windows and dialog boxes to get confused about which display to appear on.  You would almost always see dialog boxes that were meant to display in the middle of the screen split across the two physical displays.  Also, there was the multiple desktop “inconvenience” of not being able to switch the desktops independently.

Zaphod mode fixed these problems.  Because the screens were separate, windows and dialog boxes always appeared in the centre of the physical screen.  You could leave a web browser on one screen while you switched between an e-mail client, an IRC client, and an SSH session in the other.  It wasn’t all beer-and-skittles though, since in Zaphod mode it was not possible to move an application from one screen to the other.  Plus, some applications like Firefox could not have windows running on both screens (the second one to start could not access the user profile).

Zaphod mode largely “went away” during the transition from XFree86 to Xorg.  The servers dropped support for multiple separate displays in the one server, and only gradually added it back in (the Intel driver being one of the last to do so, if it ever has).  Since laptops were the only place I still used multiple screens, and the laptops I used all had Intel integrated graphics, I had to do without Zaphod mode.

Today, I hardly use dual monitors at all.  I used to have a desktop system with a 21″ CRT flanked by 17″ LCDs on either side, but that all got replaced by a single 24″ LCD.  At work we don’t have assigned desks, so setting up a screen to plug the laptop into isn’t going to happen.  I guess I learned to live without Zaphod mode by just going back to a single screen.  I still remember my Zaphod-powered dual-screen days fondly though, and with almost every update to Xorg I would scan the feature list looking for something like “Support for configuration of multiple independent displays (Zaphod mode)”.

A while back I bought a DisplayLink USB to DVI adapter.  I didn’t really know what to do with it at the time, but recently I dug it out and tried setting it up.  Googling for “DisplayLink Fedora” sent me to a couple of very helpful pages and it didn’t take long to get the “green screen of life” that indicates that the DisplayLink driver was active.  It was when I was looking at how to make it work as an actual desktop — part of the process involves setting up a real xorg.conf (that’s right, something about the DisplayLink X server means it can’t be configured by the Xorg auto configuration magic) — that I realised I could do something wonderful.  Instead of making a config file that contained both my standard display and the DisplayLink device (and probably cause havoc for the 90% of times I boot without an additional screen) I would create a config file with *just* the DisplayLink device and start it as a second server.  Run a different window manager in there, and I would have two independent desktops — Zaphod mode!
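The config for the second server ends up being tiny.  Here’s a sketch from memory rather than the real thing — it assumes the DisplayLink framebuffer shows up as /dev/fb1 and that the displaylink X driver is installed, and the identifiers and file name are just what I’d call them:

    # /etc/X11/xorg-displaylink.conf -- second X server, DisplayLink device only
    Section "Device"
        Identifier  "DisplayLinkDevice"
        Driver      "displaylink"        # or "fbdev", depending on which driver you ended up with
        Option      "fbdev" "/dev/fb1"   # the framebuffer showing the green screen of life
    EndSection

    Section "Screen"
        Identifier  "DisplayLinkScreen"
        Device      "DisplayLinkDevice"
    EndSection

    Section "ServerLayout"
        Identifier  "DisplayLinkLayout"
        Screen      "DisplayLinkScreen"
    EndSection

Starting it as a second server is then something along the lines of xinit /usr/bin/xterm -- :1 -config xorg-displaylink.conf (display :1, since the normal desktop owns :0).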

I did a couple of little experiments just starting an xterm in the second X, and it worked fine (the more alert of you will realise that I’m taking a bit of artistic license with the word “fine” here, and know that three little letters in the title of this post are a clue to what wasn’t yet working…) with the desktop and the xterm appearing in the second monitor.  I installed XFCE, and configured it to start as the window manager of the second X server, which also worked well.

Something was missing though: there was no mouse input to the second screen.  In Zaphod mode, even though the two screens were separate X displays they were managed by the same server.  This meant that the input devices were shared between the two displays.  In this configuration, I was careful to exclude any mouse and keyboard devices from my second display config to avoid any conflicts.  So how was I to get input device data into the second server?  A second display is not much good if you can’t click and type on the applications that run on it…

I remembered an old program called x2x that could transfer the mouse and keyboard events to a different X server when you moved the mouse to the edge of your display (and, inexplicably, I forgot all about a much younger program called Synergy that can do the same thing).  Since x2x isn’t packaged for Fedora I found the source, built it, and started it up…  and it worked first time!  When I moved the mouse to the edge of the screen, it appeared on the other screen!  I could start apps and type into them exactly as I wanted.
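For reference, the x2x invocation is a one-liner — something like this, assuming the normal desktop is :0, the DisplayLink server is :1, and the second monitor sits to the right:

    # run from the primary display; hand mouse/keyboard events to :1
    # whenever the pointer crosses the east (right-hand) edge
    DISPLAY=:0 x2x -east -to :1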

It wasn’t perfect, however.  I found that when I returned the mouse to the primary screen, the second screen was still getting keyboard events.  I figured this would be particularly inconvenient when, for example, I was entering user and password details into an app on the primary screen while an editor or terminal program had focus on the second screen…  I checked the Xorg.1.log file, and found that even though I had not specified a “keyboard” input device Xorg was automatically defining one for me.  I turned off the udev options, but it still happened.  My initial enthusiasm was starting to fade.

What fixed it was to manually define a “dummy” keyboard device.  There must be some logic in Xorg that refuses to allow a configuration with no configured keyboard (which makes sense), so in this rather unusual case where I don’t want a keyboard I have to define one but give it a dummy device definition.  Defining the dummy keyboard stopped Xorg from defining its automatic one, and everything worked as expected!  Even screensavers work more-or-less as designed (although I haven’t actually spent much time in front of the setup yet so haven’t had to unlock the screen that often).
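The dummy definition is nothing fancy — here’s the shape of it, again a sketch rather than a cut-and-paste (it uses the void input driver, which your distro may package as xorg-x11-drv-void or similar; pointing the kbd driver at a dummy device would do equally well):

    # added to the DisplayLink server's config: declare a keyboard so Xorg
    # stops auto-adding a real one, but use a driver that never sends events
    Section "InputDevice"
        Identifier  "DummyKeyboard"
        Driver      "void"
    EndSection

    # the ServerLayout from before, with the dummy keyboard attached
    Section "ServerLayout"
        Identifier  "DisplayLinkLayout"
        Screen      "DisplayLinkScreen"
        InputDevice "DummyKeyboard" "CoreKeyboard"
    EndSection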

I’m away from the computer in question right now, so the snippets above are from memory rather than cut-and-pasted.  I’ll update this post with the actual configs and command lines (and even a pic of the end result) — leave a comment if you think I need to hurry up!  🙂



Another IPv6 instalment (subtitled: Watch Your Tech Library Currency!)

I made a somewhat cryptic tweet a little while ago about how I spent a crazy-long period of time researching what was, I believed, the next-big-thing in DNS resolution for IPv6 (or so my 2002 edition of “IPv6 Essentials” told me).  I could not work out why I saw nothing about A6 records in any of the excellent Hurricane Electric IPv6 material or in any other documentation I came across.

The answer should have been obvious: DNS A6 records (and the corresponding DNAME records) never caught on.  RFC 3363 recommended that the RFC that defined A6 and DNAME (RFC 2874) be moved back into Experimental status.  If I hadn’t been using an old edition of the IPv6 book, I might never even have known A6 existed, and wouldn’t have wasted any time.
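For anyone else coming to this fresh: the record type that did catch on is the plain AAAA record, which is a straight name-to-address mapping just like A is for IPv4.  A rough zone-file illustration (documentation prefix and example names only):

    ; what everyone actually uses: one AAAA record, one complete address
    host.example.com.     IN  AAAA  2001:db8:1234::42

    ; what RFC 2874's A6 chaining would have looked like (never caught on):
    ; the host supplies the low 64 bits plus a referral to whoever owns the prefix
    host.example.com.     IN  A6  64  ::1234:5678:9abc:def0  subnet1.example.com.
    subnet1.example.com.  IN  A6  0   2001:db8:1111::

The A6 scheme was meant to make renumbering easy — change the prefix record and every host follows — but the extra lookups and complexity were judged not worth it, hence the retreat to Experimental.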

In my previous post on IPv6 I theorised that we are in the early-adoption phase of IPv6 where things aren’t quite baked, and yet now I’ve picked up a 9 year old text on the topic and acted all surprised when it got something wrong.  It was a bit stupid of me; had I bought a book about IPv4 in 1976, might it have been similarly out of date in 1985?

As always though I’m richer for the experience!  Or so I thought…  Like many, I’m becoming increasingly time-poor.  When I bought a book on IPv6 some years ago I thought I was making an investment, but it turned out that my investment actually lost for me in several ways:

  1. The book took up physical space in my bookshelf for all that time I wasn’t using it
  2. I didn’t actually use the information at the time I acquired it
  3. The time I could have got value from it was wasted by it idly sitting on the shelf
  4. Once I did try to use it, it actually cost me time rather than saved time

I came to think about the other books on my shelf.  It’s pretty easy to recognise that a book that proclaims to be up-to-date because it “Now covers Red Hat 5.2!” will be anything but.  Also, from the preface of a Perl programming book that says “this was written about Perl 5.8, but it should apply to 5.10 as well” I’ll be forewarned that things will be fairly applicable to 5.12 but maybe not to Perl 6 when it’s out.

Technology usually has a somewhat abbreviated lifespan, so the corresponding documentation has a correspondingly short lifespan…  Here, however, is an example of a technology that will have a far greater lifespan (we hope) than much of the documentation that currently exists around it.  I emphasise “currently exists”, because it won’t always be that way: IPv4 was pretty well-baked by the time I had anything to do with it, so I could have bought a book on IPv4 with next to no concern that it was going to lead me astray (indeed, I bought W. Richard Stevens’ TCP/IP programming texts during the 1990s, and still use them to this day).  I keep forgetting that I’m on a completely different point of the IPv6 adoption curve, and the “experts” are learning along with me.

So, a new tech library plan then:

  • Reduce dependence on physical books (okay, this one is already a work-in-progress for me) — they don’t come with you on your travels as easily, and (more important in this context) they’re harder to keep up to date.
  • Before regarding the book on the shelf as authoritative, check its publication date.  If it’s more than three years old, depending on the subject matter it might be out of date.  Check if there’s a new edition available, and consider updating.  If there’s no new edition, check for recent reviews (Amazon, etc).  Someone who just bought it last month might have posted an opinion on its currency.
  • If you have to buy a paper book, don’t buy a book on any technology that is a moving target.  On the same shelf as my copy of “IPv6 Essentials” there is a book entitled “Practical VoIP Using VOCAL”.  I never even installed VOCAL, and I’m sure many current VoIP practitioners never heard of it.  (Side note: I think it’s strange that I bought that book, and a Cisco one, but still to this day have never owned a book on Asterisk.  Maybe I have some kind of inability to pick the right nascent-technology book to buy.)
  • Use bookmarking technology more! I have a Delicious account, and I went through a phase of bookmarking everything there.  I realise now that, if I was a bit more disciplined, I could actually use it (or a system like it, depending on what Yahoo! does to it) as my own personal index to the biggest tech library in existence: the Internet.

That first point is harder than it sounds (especially for someone like me who has a couple of books on his shelf with his name on the cover).  My Rich Stevens books are littered with sticky-note bookmarks for when I flick to-and-fro between different programming examples.  Electronic readers are still not there when it comes to the “handy-hints-I-keep-on-my-lap-while-coding” aspect of book ownership.

I have a Sony Reader which I purchased with the intent of making it my mobile tech library.  It’s just not that great for tech documents though, since it doesn’t render diagrams and illustrations well (it also isn’t ideal for PDFs, especially in A4 ratio).  This may change as publishers of tech docs start releasing more titles on e-reader formats like ePub.  The iPad is working much better for tech library tasks; I’m using an app called GoodReader which renders PDFs (especially RedBooks!) quite well and has good browsing and syncing capability as well.

More on these topics later, I’m sure!

Update: I omitted another option in my “tech library plan” — since IPv6 Essentials is an O’Reilly book, I could have registered with their site to get offers on updating to new editions.  Had I done so, the events of this post might not have happened!  Now that I’ve registered my books with O’Reilly, I’m getting offers of 40% off new paper editions and 50% off e-book editions.  Also, in line with my reduce-paper-book-dependence policy, I can “upgrade” any of the titles I own in paper to e-book for US$4.99.  If you haven’t already, I encourage anyone who has O’Reilly books that they rely on as part of their tech library to register them at members.oreilly.com.  (This is an unsolicited endorsement from a happy customer, nothing more!)


Sharing an OSA port in Layer 2 mode

I posted on my developerWorks blog about an experience I had sharing an OSA port in Layer 2 mode.  Thrilling stuff.  What’s more thrilling is the context of where I had my OSA-port-sharing experience: my large-scale Linux on System z cloning experiment.  One of these days I’ll get around to writing that up.


Asterisk and a Patton SmartNode

It’s been ages since I did an update on the main network machine here, and I bit the bullet over the weekend. 250+ packages emerged with surprisingly little trouble, and all that was left to do was build the updated kernel and reboot.
I usually end up with something that doesn’t restart after the reboot, generally because of a kernel module that needs to be rebuilt against the new kernel (because I forget to remerge the package before the reboot, oops). This time the culprit was Asterisk (the phone system), which I also often have trouble with after an update due to a couple of codec modules external to the Asterisk build. On this occasion however the problem ended up being the Asterisk CAPI channel driver failing.
Thinking it was the usual didn’t-rebuild-the-module problem, I went looking for the package I had to rebuild… only to find it was masked. Turns out the driver for the ISDN card in the box, a FritzCard PCI, is no longer maintained and doesn’t build on modern kernels, which has resulted in the Gentoo folks hard-masking the entire set of AVM’s out-of-tree drivers.
Help was at hand in the form of a Patton SmartNode 4552 ISDN VoIP router I’d bought months ago to replace the Fritz card. Even though there isn’t much information around about how to configure the SmartNode for Asterisk, I managed to get the setup working in only a couple of hours. I even managed to get the outgoing routing for the work line set up right!
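For the impatient, the Asterisk side of it is nothing exotic — to Asterisk the SmartNode is just another SIP device, so it’s roughly a SIP peer plus a dialplan rule to push PSTN-bound calls at it. This is a sketch only: the address, context names and dial pattern are placeholders, not my actual config.

    ; sip.conf -- the SmartNode is just another SIP peer on the LAN
    [smartnode]
    type=friend
    host=192.168.1.250        ; LAN address of the SmartNode (placeholder)
    context=from-isdn         ; incoming calls from the ISDN line land here
    insecure=port,invite
    dtmfmode=rfc2833
    disallow=all
    allow=alaw                ; a-law, this being Australia

    ; extensions.conf -- send outgoing calls to the ISDN line via the SmartNode
    [outbound-isdn]
    exten => _0X.,1,Dial(SIP/${EXTEN}@smartnode)
    exten => _0X.,n,Hangup()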
Eventually I’ll get something posted here that goes into a bit more detail about the configuration. Let me know in a comment if you need to hurry me up! 🙂


ppc Linux on the PowerMac G5

With Apple’s abandonment of PPC as of Snow Leopard, I began wondering what to do with the old PowerMac. It’s annoying that so (comparatively) recent a piece of equipment should be given up by its manufacturer, but that’s a rant for another day. Yes, we can still run Leopard until it goes out of support, but with S and I both on MacBook Pros with current OS I know that we would both become frustrated with a widening functionality gap between the systems.

I had always resisted running Linux on the PowerMac, thinking that the last thing I needed was yet another Linux box in the house. I had tried a couple of times, but it was in the early days of support for the liquid cooling system in the dual-2.5GHz model and those attempts failed dismally. I figured that by now those issues would be resolved and I would have a much better time.

I assumed that Yellow Dog was still the ‘benchmark’ PPC Linux distro, so I went to their site. I saw a lot of data there about PS3 and Cell; it seems that YDL is transitioning to the cluster and/or research market by focussing on Cell.

The next thing I discovered is the lack of distributions that have a PPC version, even as a secondary platform. My old standby Gentoo still supports PPC, as does Fedora (I think: I saw a reference to downloading a PPC install disk, but didn’t follow it), but every other major distro has dropped it — openSUSE, for example, with their very latest release (their download page still has a picture of a disc labelled “ppc”, but no such download exists, oops). I guess that since the major producer of desktop PPC systems stopped doing so, the distros saw their potential install base disappear. Unfortunately for those distros, I can see the reverse happening: now that Apple has fully left PPC behind, plenty of folks like me who have moderately recent G4 and G5 hardware and who still want to run a current OS will come to Linux looking for an alternative… I guess time will tell who is right on this one.

So I went to install Gentoo, and to cut a long story short I had exactly the same problem as before: critical temperature condition leading to emergency system power-off. I found that if I capped the CPU speed at 2GHz I could stay up long enough to get things built, but then the system refused to boot because it couldn’t find the root filesystem. Probably something to do with yaboot, SATA drives and OpenFirmware. So again I’m putting it aside.

My next plan was to treat it as a file server. Surely a BSD would support my G5 hardware: after all, Mac OS X is BSD at heart… Well, no. FreeBSD has no support for SATA on ppc, OpenBSD specifically mentioned liquid-cooled G5s as having no support, and I don’t think I saw any ppc support on NetBSD more recent than G3 [1].

This is one of the things that annoys me about the computer industry: that somehow it’s okay to so completely disregard your older releases. What if the automotive industry worked that way?

So I may yet try Fedora, or give the game away for another year or so and see what the situation looks like then.

[1] I may have mixed up a couple of these details.

Edit: Gentoo’s yaboot has managed to make it so that I can’t boot Mac OS X on the machine any more.  Oh dear.


Upgrading from Cisco

In case you weren’t aware, I am a VoIP nutcase.  I have an Asterisk phone system at home, and all the phones in the house are VoIP of some description (either real VoIP devices or analogue handsets through an ATA).  While I haven’t converted to VoIP as a replacement for PSTN, I have some connectivity to VoIP providers both here and overseas (and soon to be more, to help the phone-home situation while I’m overseas).

I’ve been a user of Cisco IP phones, buying 7960s and a couple of 7970s through a well-known internet site (maybe it starts with an “e”, not sure).  The phones have been excellent, and I’ve even written a few XML apps to supplement their use here.  The 7960s are getting a bit dated now, however, and I found myself contemplating buying 7971s (or even something newer, like the 7965 or 7975).  Before I committed myself further into the relationship with Cisco, though, I thought about what I was really getting out of using Cisco phones.

Like many users of second-hand Cisco gear, I only purchased the hardware.  I do occasionally succumb to a nagging feeling of being an “outlaw” (at least in the eyes of Cisco), but admittedly that feeling usually only comes when I find out that Cisco has released another new version of SIP software that I can’t get because I haven’t paid for SmartNet.  The last time I had this thought though, I had a realisation: even if I did pay for SmartNet, the only thing I’d get would be the firmware: Cisco will only support their phone software when connected to their CallManager server (yes, even the SIP firmware).  Anyone running Cisco phones against anything other than CUCM gets no support from Cisco in the event something doesn’t work–and based on the information floating around, the problems are many.

So basically I would be paying Cisco to allow me to run one of the worst SIP implementations in embedded existence, with no opportunity to report problems with it in my environment.  Hmm, let me think about that for a minute…

At around the same time, I happened across the NerdVittles site, and in particular the post where NerdUno nominated the Aastra 57i as the “World’s Best Asterisk Phone”.  I started to do some research into it, and was astounded at the level of support the manufacturer (a Canadian company which a few years ago acquired the telephony business of a little mob called Nortel) and the community provide for this phone and Asterisk.  Looking through the phone manual, I found functions that only work with Asterisk! I found a full set of integration scripts that provide XML applications, right through to automatic provisioning tools.  Possibly the best thing was that on the product page for their phones — right there on the page that describes the product — are links to current versions of firmware, documentation, XML application development guides, even a Linux-based application to encrypt the phone configuration files.  Not hidden in some obscure hard-to-find portal, or behind a registration-only support site.

I started to think of the possibilities…  I’d be able to freely modify the phone configuration (even via a HTTP interface if I so chose), without having to make trial-and-error changes to a cryptic and totally undocumented configuration file.  I’d be able to write XML apps without laborious debugging to work out why the parser was choking on XML that was perfectly okay according to the documentation but apparently tripped over an undocumented field length restriction or character encoding limitation.  I could get access to things like Visual Voicemail, BLF, integration with Asterisk functions like day/night mode and call parking.  I could keep the phones up-to-date for new functions and bug fixes.  With a click of a mouse I could get proper Australian tones!
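To give a flavour of what those XML apps look like: the phone fetches an XML object over HTTP and renders it, so a simple menu is only a few lines.  This is a made-up illustration based on the development guide (the URLs are obviously placeholders), not one of my actual apps:

    <!-- a minimal Aastra text menu: the phone displays the items, then
         GETs the URI of whichever one is selected -->
    <AastraIPPhoneTextMenu>
      <Title>Home PBX</Title>
      <MenuItem>
        <Prompt>Parked calls</Prompt>
        <URI>http://pbx.example.home/xml/parked.php</URI>
      </MenuItem>
      <MenuItem>
        <Prompt>Day/night mode</Prompt>
        <URI>http://pbx.example.home/xml/daynight.php</URI>
      </MenuItem>
    </AastraIPPhoneTextMenu>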

So, I decided to give one a try.  Finding nothing on that “e” site I went looking for a vendor locally, and found several places that would sell one to me (legitimate e-tailers, no less!  Zounds!  A VoIP phone with a warranty?  You jest!).  It took a while for my chosen vendor to source it for me, but I’ve had it now for a couple of weeks.  It’s probably going to take a while for it to live up to its full potential in my installation, but since that potential is so much greater than what I have been able to do with the Ciscos I think I’m already ahead.

More in the coming weeks as the Aastra settles in.


Classic Mac sounds on my mobile phone

We watched WALL-E the other day. A bit of trivia for Apple Mac fans (if you didn’t already know) is that WALL-E’s startup sound — heard when he’s finished his solar recharge — is that of a post-1997 Mac computer (with Steve Jobs on the board of Pixar and Disney, WALL-E was never going to make The Microsoft Sound (: ). Coincidentally, at around the same time as I saw WALL-E I was going through that modern malaise of mobile-phone-alert-tone-tedium… So, inspired by this bit of cinematic crossover coolness, I decided to get some Mac-chime action for my handset.

The first thing was obviously to get hold of the audio file. This turned out to be surprisingly easy, thanks to Google pointing me to a piece of software called MacTracker. MacTracker is actually a reference guide for Apple products (computers all the way back to the Macintosh XL, the MessagePads, printers, displays, even iPods and mice), but part of the information it holds about the computers is their startup and death chimes.

There’s no option in MacTracker to export the audio files, but by opening the app package (“Show Package Contents” in Finder) it’s possible to navigate to where the chime sound files are stored. Then from Finder, all I had to do was zap the file to the phone via Bluetooth. On the phone, opening the Bluetooth message gave me an option to save the “music” file, which I did — this adds the file to the Music Player, but importantly makes it easily selectable in the configuration of the alert tones.
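If you’d rather not click around in Finder, a quick Terminal one-liner shows what’s bundled in the app package (this assumes MacTracker lives in /Applications; the folder layout inside the package may differ between versions):

    # list any audio files inside the MacTracker app package
    find "/Applications/MacTracker.app/Contents/Resources" \
        \( -iname "*.aif*" -o -iname "*.mp3" -o -iname "*.wav" \) -print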

So now when I receive an SMS I hear the death chime of a Macintosh LC, and the startup sound of the Twentieth Anniversary Macintosh alerts me to incoming e-mail. I’m going to apply similar configuration to my desktops: on-and-off for the last ten years I’ve been using a Homer Simpson soundbite to advise incoming mail, and it’s a bit tired now…

Next task will be to replace the startup sound on my N810 with something a bit retro-Mac! 🙂


Living with an iPod touch

I held out for a long, long time. I'd even talked myself entirely out of getting one. Like they say in the classics though, "you think you've escaped, but they pull you back in". I now have a 32GB iPod touch and it's doin' alright, even though it took me nearly a week before I bothered putting any media on it!

I think what finally did it for me was the App Store. I love being able to simply go to an app on the device and easily look for software, installing what I like with no fuss. I especially like the fact that my downloads are synced with my computer, so that I don't have to keep track of all the individual items I've installed (unlike my phone; I can't think where all the sis and sisx files for different stuff I've installed might be).

My Facebook friends will know that I'm much more active there suddenly. Why? The Facebook app on the Touch — I no longer have to start up a computer or open a browser to update my status or reply to comments. I had a bit of this function with Fring's Facebook interface on my phone, but the large screen of the Touch makes things like this much more friendly.

I came very close to getting an iPhone actually — but not to use as a phone. This was after I'd realised that it's just as valuable as an Internet-connected device as an actual phone. The cost of iPhone service is still a bit prohibitive to me though, especially for an occasional-use device.

One of the things that had turned me off was the closed nature of the iTunes ecosystem (iPod, iPhone, Apple TV, iTunes). People sometimes ask me about Skype, and I say that the worst thing about it is that it Just Works. I mean, it's a closed system with no interconnections other than those provided by Skype themselves — by rights it should fail, and yet because it works (arguably) better than any other desktop VoIP product it enjoys immense success. Same goes for Apple's stuff: the iTunes ecosystem Works And Works Bloody Well.

I've been thinking for ages about sync for calendar and contacts and stuff; I've been hunting for services and software and tools for ages. I could build something myself, and indeed started to (I've looked at Google Apps, used Chandler, checked out Ovi, and played with Sync4J before it was called Funambol). I could spend time and effort coming up with something myself…

Or I could just buy an iPod.
