Archive for category Technology

Simultaneous Multi-Threading at McDonalds

Keeping on the analogy theme…  This time, it’s an explanation of Simultaneous Multi-Threading (SMT).  SMT was introduced to the z Systems architecture with the z13, and many technical specialists (myself included) have struggled with the standard explanations of how SMT is meant to improve overall system performance.  Here’s yet another attempt at an explanation!  Some folks might be a bit affronted at the “compare and contrast” of z Systems and a fast food drive-through, but it’s just an analogy…

So in Brisbane just about every McDonalds has a drive-through.  They used to have a single lane, with the menu boards and a speaker for the operator inside the restaurant to take your order.  As the customer, once you placed your order you then would drive forward to the “first window” where the cashier would take payment, then you’d drive to the “next window” to receive your order and proceed away.  Apologies to anyone offended by me feeling the need to explain how a drive-through works, but I don’t know how they work in your part of the world so I’m just covering how they work in mine.

Many of these drive-throughs have been redeveloped to add a second lane and order taking station — complete with a second set of menu boards.  They didn’t duplicate anything else in the process though: same payment and collection windows, even in most cases a single cashier taking orders alternately from both stations.

A dual-lane McDonalds drive thru, AKA CPUs 0x00 and 0x01

Why did McDonalds do this?  Without duplicating anything else in the whole chain, what benefit does adding a queue provide?  If two cars arrive at the stations at the same time there’s going to be contention for the cashier.  They then have contention to enter the single lane going past the windows.  Not only that, the restaurant had to give up physical space to install the second station — perhaps they lost a few parking spaces, or a garden.

I had passed through this kind of drive-through a few times, and never clearly saw the benefit.  Sometimes I’d drive up to the station with the shortest queue, only to be stuck behind someone who seemed to be ordering one of everything from the menu…  Other times I’d pull up to an empty station, next to the only other car in the system (at the other station), but because the car at the occupied station was already placing their order I still had to wait the same amount of time as I would have in a single-lane system.

Then I finally realised.  The multiple queues aren’t for me as a customer — they’re for the restaurant.  Specifically, they’re for the food production stations in back-of-house.  To understand why it makes sense to have two lanes, it’s critical to realise that the drive-through is not just the speakers and the lane and the windows, it’s the method by which instructions are given to the many and various individuals that make the food and package the orders.  Each of those individuals has their own role and contribution to the overall task of serving the customer: from the grillers to the fryers to the wrappers to the packers (sorry, I’ll bet McDonalds has formal names for each of the team member roles, but I don’t know them).

Having multiple order stations means that the orders get to the burger makers and packers faster, making them more efficient and improving their throughput.  The beverage orders go to the little automatic drink-pouring machine instantly, so that everyone’s Cokes and Fantas are ready and waiting sooner.  One car wants a Chicken McWrap, the next just wants a McFlurry?  No contention there, those orders can be getting made at the same time.

Maybe you’re asking “so what does this have to do with SMT?”  Well, the order stations are our threads.  The cashiers and the packers are the fetch-and-store units, the parts of the processor that fetch instructions from memory and store the results back.  The cashier’s terminal is the instruction decode unit.  The food preparers in the back-of-house are the processor execution units: the integer units, the decimal and binary floating point units (DFP and BFP), the SIMD unit, the CPACF, and more — that’s where the real work is done.  To a large extent all of those execution units operate independently, just like our McD food preparers.  SMT, like our two drive-through lanes, makes sure that all those execution units are as busy as possible.  One thread issues an integer add instruction while the other thread is doing a crypto hash using the CPACF?  They can be happening simultaneously.
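If you want to poke at the effect yourself on a Linux image running with SMT enabled, a very rough sketch is to pin two dissimilar workloads onto sibling threads of the same core and run them together.  The CPU numbers and the openssl workloads below are just illustrative assumptions, and whether the AES run really lands on the CPACF depends on how your crypto libraries are configured:

# check which logical CPUs are sibling threads of core 0
cat /sys/devices/system/cpu/cpu0/topology/thread_siblings_list

# pin an integer/arithmetic-heavy job to one thread...
taskset -c 0 openssl speed rsa2048 &
# ...and a crypto job (a CPACF candidate) to its sibling, at the same time
taskset -c 1 openssl speed -evp aes-256-cbc &
wait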

We’ve been saying all along that SMT will likely decrease the perceived speed of an individual unit of work, but overall more work will get done across all units of work.  When I’ve been in a two-lane drive-through and placed my order, and then had to wait while I merged with the cars coming from the other lane, I have to agree that it seemed like the merging delayed me.  However, if that had been a single-lane drive-through, chances are I would have been in a longer queue of cars before even reaching the order station, and that metric isn’t even measured by the queue management built into McDonalds’ terminals.  Likewise, on a busy system without SMT, it’s difficult to say how long instructions are getting queued in the operating system scheduler before even making it to the processor to be dispatched.  Basically, I’m saying that we may see OS scheduler queuing reduce, and therefore improved “performance” at the OS level over and above the actual benefit of improved processor throughput, even if our SMT ratio doesn’t get anywhere near the impossible 2:1.

If ten cars line up at the two windows and they all want a Big Mac and a Hot Apple Pie then there’s probably not going to be much gain there.  Today’s McDonalds menu is quite diverse though, which means the chances of adjacent orders not contending for the same preparation stations are greatly improved.  On z Systems, ensuring a variety of workloads and transaction types would help to ensure a diversity in the instruction stream that would give SMT a good opportunity to yield benefit.  This means mixing applications as much as possible, and using Large Memory and CPU Pooling support in z/VM 6.3 to bring lots of different workloads into heterogeneous LPARs.

I’ll bet that McDonalds worked out that simply adding an extra entry lane meant that they could move more food items in a given time — and McDonalds’ business is selling food on a massive scale.  In the same way, the goal of z Systems has never been to make one single workload run as fast as possible, but to make the hundreds or thousands of workloads across an enterprise run at the highest efficiency.

Analogy can be found anywhere

This post may come across as self-serving, semi-advertorial, promotional, or just plain crappy (or all of the above).  I don’t apologise, it’s my blog and I’ll write what I want to.  However, because it’s the Internet and it’s almost guaranteed that someone reading this will think I should have warned them… consider yourself warned, fair reader.

My recent post about experiencing things for the last time started me off on a somewhat interesting train of thought.  There I was, sitting on an aircraft that was being retired, which must happen fairly often around the world–after all we don’t see too many 707s or TriStars in the skies any more.  Qantas used to have a lot of 767s, and I picked up the inflight magazine to see the numbers today.

As at September 2014, Qantas had 6 Boeing 767s in their fleet (down from 13 at 30 June 2014, further down from 20 as at July 2013, according to the Qantas Data Book 2014).  Then I looked at the total fleet size: just over 200 aircraft in total (again, looking at the Data Book 2014, 203 as at 30 June 2014).  The numbers started wandering around in my head, and soon put me in mind of another piece of hardware requiring large investment, and just as close to my heart as jet aircraft — mainframe computers.

I started to do some research into the numbers I looked at in the flight magazine.  According to the registration data available from CASA, there is only one 767 in Australia (a 767-381F freighter) not registered to Qantas.  Therefore, during 2014, Qantas was the operator of the only dozen-odd Boeing 767 aircraft in Australia.  Thousands of people every day, travelling on an aircraft of which there were only a dozen working examples in the country — in fact, by the time I had my last 767 flight, I wonder how many of the September Six were left?  Maybe VH-OGO was the last in service by then…?

Okay, you might say, the B767 doesn’t count as it’s old and Qantas was retiring them.  Righto, point taken.  Let’s look at what is the mainstay of domestic inter-capital air travel in Australia then — the B737.  Qantas lists 70 as at June 30 (57 owned and 13 leased) while Virgin Australia shows 74.  CASA lists some freighters and a half-dozen registered to “Nauru Air Corporation”, but let’s stick to QF and VA (apart from a couple of B787s, Jetstar’s fleet is all Airbus and much smaller than Qantas’s or Virgin’s).  The most widely-used commercial jet aircraft in the country, and there’s only 140-ish of them?  So what, you might say: they’re jet planes, of course there won’t be many.

The numbers continue: again as at 30 June 2014 the total number of Boeing 747s, Airbus A380s, and A330s in the Qantas fleet was 36 aircraft, and by now some of the B747s have been retired.  Think about that for a moment: Qantas is able to service all of its international routes, including covering maintenance intervals, using fewer than forty aircraft?  It’s not like Qantas has a small network… yes, they extend their reach through alliances and codeshare just like all airlines do, but Qantas services Los Angeles direct from Brisbane, Sydney, and Melbourne, daily (you’d have to think that’s at least six planes by itself) as well as daily flights into cities across Asia and the few routes into Europe that haven’t been taken over by Emirates.  Three dozen planes seems light…

A popular criticism of mainframes (once you get past the “old, room-sized, punch card” nonsense) is that there aren’t many of them.  Apparently if it was such a good system everyone would use it, and the fact that not many companies do is proof that it isn’t.  Also, apparently it’s risky to use a system that comparatively few other businesses use.

Imagine for a moment if airlines around the world started subscribing to the same kind of thinking that seems to have taken hold in IT:

Operations Manager: It’s too risky for us to use these large, expensive aircraft.  We don’t have enough of them to justify training pilots to operate them, and it costs a fortune when we have to service one.  Plus, did you know each one costs $100 million?

C-suite: The last OM said these aircraft are the best fit for our operations, that we get value in return for the cost.  Are you saying there’s an alternative?

OM: You bet!  Did you know we can buy hundreds of light aircraft for what it costs to buy one jet?

C-suite: Really?  Sounds complicated…

OM: No way!  It’s simple, light aircraft are much less complicated to operate and maintain, and it’s much cheaper and easier to get pilots that know how to fly them.

C-suite: I’ve seen a light aircraft, they’re… small.  Won’t we need more of them to carry the load of our jets?

OM:  Maybe… ah but it won’t be that bad: how often are we running those big jets half-empty anyway?

C-suite: Hmm…  I assume you’ve done some projections?

OM: Yes, the acquisition cost of a fleet of light aircraft is a fraction of that of a fleet of jets!

C-suite: Acquisition cost…  I seem to recall that we should be worried about more than cost of acquisition…

OM:  Did I mention the acquisition cost of a fleet of light aircraft is a fraction of that of a fleet of jets?

C-suite: I guess that was all!  Okay, sounds like a great plan!

It seems ludicrous, and would never happen in real life.  Outside aviation, imagine a similar scenario with a transport company replacing B-doubles with postie bikes, or an energy company replacing wired electricity distribution with boxes of AA batteries sent to homes.  For some reason though it’s not farfetched in IT, and yet over the years conversations like that have happened in too many companies.

There aren’t many Boeing 737s in Australia, but that isn’t stopping Qantas and Virgin (and airlines around the world) from using equipment that is fit for purpose.  Why should mainframes be different?

iOS8 and OS X Yosemite

A week or so ago I succumbed to the hype (and the nagging from my devices) and installed iOS 8 on a second iPad.  As far as updates go it was smooth although the post-install setup wizard crashed before it could ask me about things like iCloud Drive, which made me wonder whether I might be due for later problems.  For the most part I was proving immune to the “this feature only works with Yosemite” bait but I knew it was probably just a matter of time…

Call it serendipity, call it fate, call it whatever you will… but yesterday I was looking at my OS X desktop and thought “y’know, I’m a bit tired of that Apple font”.  You can probably imagine my wry grin when I surfed to Apple’s OS X Yosemite preview pages to find that one of the key features of the “new design” is a very clean replacement for the old Finder font!  So that, along with the nagging of the devices… and in the spirit of “better late than never”, I decided to join the beta of OS X Yosemite.

Signing up was incredibly easy and well integrated into the App Store.  It only took a login and a couple of clicks and Yosemite was being poured into my MacBook.  I took the opportunity during the download to make sure that my Time Machine backup was up to date, and let it do its thing.  Around 20 minutes later it was finished.  One weird thing I found though was that during the installation — while the big grey X was on the screen, and the progress bar was still counting down — my other iOS devices started squawking that a MacBook had “logged on to FaceTime”.  I even heard VoiceOver alerts from the machine itself, complaining about things in my auto-start that weren’t set up correctly, despite the OS X Installation progress bar reporting 7 minutes to go!  I guess I’m used to the installer for an OS being a different environment entirely from the running system, not just a wizard running on top of a user logon.
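Incidentally, the “make sure Time Machine is current” step can be done from Terminal as well.  A small sketch (tmutil has been around since Lion, but check man tmutil on your OS X version):

# show the most recent completed backup
tmutil latestbackup
# kick off a backup now and wait for it to finish before running the installer
tmutil startbackup --block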

While I was poking around things in Yosemite, the iOS 8.0.2 update was released… and was duly applied to the old iPhone 4S and the main iPad.  I am concerned about battery life on the phone — for example the Facebook app seems to take 1% out of the battery every minute it’s running — but in honesty I was having battery issues while still on iOS 7.  I think it’s to do with the age of the device, but at this stage the best I can say is that iOS 8 doesn’t seem to be that much worse than iOS 7 for me, plus of course I get the benefit now of being able to see battery usage by app.

It hasn’t even been 24 hours in Yosemite yet, but I’m impressed.  The update to the look and feel of the OS X desktop is well overdue (although we still can only choose Blue or Graphite for Appearance?).  I really like the iOS integration features of Yosemite, but haven’t had a chance yet to see them in action.  I have to say though, at least for this Little Black Duck™, Yosemite and iOS 8 have reinvigorated my interest in the Apple ecosystem.  I mean I like the iDevices, but the “wow” of some of the Apple tech had faded for me in recent times…  If features like Handoff and the call and message integration actually work as designed, this could put Apple back into the lead position when it comes to “devices designed to work together”.


I lost my Fitbit… and found it

I have settled into a somewhat sedentary lifestyle.  My partner tries valiantly to get me involved in her personal training sessions, but I have a lot of inertia.  I know that I need to do something about being more active and increasing my fitness level, but have struggled to find a motivator.

While in Europe I succumbed to a bit of techno-craziness and bought a Fitbit One.  (The craziness wasn’t buying a Fitbit, it was where I bought it—the Apple Store in the Odysseum in Montpellier—and the resulting price I paid compared to if I’d waited and bought it at home, even from an Apple Store.)  I was enjoying the novelty of tracking activity, counting steps and calories, entering water consumption, and monitoring sleep.  I wore it almost constantly through France, in Amsterdam, and on the way back to Australia, thinking I might have finally found a way to motivate myself to exercise—that’s right: the path to a healthier life through good-old 21st century gamification!

I drove up to Brisbane a week ago for lunch with some work colleagues before picking up my kids; of course, the Fitbit was with me all the way.  The only problem was that my leather belt is too thick for the Fitbit’s clip, so I instead clipped it into the coin pocket of my jeans.  That’s not so secure, and the Fitbit slid back and forth along the rim of the pocket, but I figured the seam along the edge of the pocket was thick enough to prevent the Fitbit from coming loose.

Almost over the jet-lag from coming back from Europe, I prepared for bed that evening looking forward to wearing the Fitbit to monitor my sleep—only the Fitbit was nowhere to be found.  Not on the jeans, not anywhere visible.  I decided that my method of clipping the Fitbit into the coin pocket was not so secure after all, and it had come loose during the day.

The next day I did the usual “retrace your steps, check behind the couch, blah blah” routine but still came up blank.  During Sunday however, for some reason I decided to start up the Fitbit app on my phone… and was rewarded with a message telling me it was “Syncing”!  I looked around where I was sitting, but still couldn’t find it.  By this time I had convinced myself it really was gone, and the sync message was the app on the phone syncing with the web site.

It got the better of me again today however.  I started the app again, and again was told it was “Syncing”.  I went to the “Devices” list, and sure enough beside my One it said it had synced just then.  Knowing that it had been over a week since I had last seen it, and that the battery was good but it wouldn’t last forever, I decided to pull out all the stops to locate it.

The BTLExplorer screen as it detects my Fitbit One.

I figured there had to be an app similar to those I’d seen for scanning Wi-Fi and Bonjour but for Bluetooth, but searching for “bluetooth locator”, “bluetooth search”, and so on led to nothing helpful—there is a growing number of apps that help you search for headsets or objects to which you’ve attached a Bluetooth Low Energy (BLE) tag, but I couldn’t find anything that did a simple scan of Bluetooth devices in range.

I turned to Google at that point, and decided to search for “locate lost fitbit bluetooth”.  The second item in the results was this blog post, which turned up a free app called BTLExplorer.  I installed it, ran it, and straight away it detected my Fitbit!

What followed was an ultra-modern version of “Marco Polo” or “Hot or Cold”.  I wandered around the house watching the indicated signal strength rising and falling, trying to get closer to where it was hiding.  Eventually, I found the room where the strength was intermittently rising above -60dBm, and sure enough, under a cushion, was my Fitbit One!
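For anyone trying the same trick without an iOS device handy, any machine with a Bluetooth LE adapter can play the same game of hot-and-cold.  A rough sketch using the standard Linux tools (assumes a BlueZ 5 system; the exact output format varies):

bluetoothctl
# then, at the [bluetooth]# prompt:
scan on
# nearby BLE devices appear as they advertise, something like:
#   [CHG] Device C4:xx:xx:xx:xx:xx RSSI: -60
# wander around and watch the RSSI climb towards zero as you get closer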

Now I can resume the monitoring of my activity levels.  In addition, my fruitless searching of the Apple App Store has made me realise that the App Store app on the iPhone is pretty useless for searching for apps: turns out there are a few other apps similar to BTLExplorer, but because I didn’t search for “bluetooth scanner” or “bluetooth explorer” I didn’t find them.

So far I’m pretty impressed with the Fitbit technology, even though it’s not that much more than a fancy pedometer.  While the device is pretty cool most of the intelligence of the system is in the app and the website, which analyse and interpret the data gathered by the device itself.  It is pretty nicely integrated: the device itself gets the movement data and syncs to the phone, which you can use to do basic display of the data while entering additional data like weight measurements and food and water consumption; the phone app syncs all that data to the website which does additional analysis and provides more of the social aspects of the system.

I’ll report back on how the Fitbit and its application environment helps me with my health transformation!


On global roaming for data

Like most international travellers in the Internet age, during our recent travel through Europe I was confronted by the ridiculous situation that exists for mobile data access.  By ridiculous I mean ridiculously expensive.

Warnings from Telstra when a customer connects to a roaming network.

Look, don’t get me wrong: the technology that allows GSM/UMTS global roaming is pretty magical[1].  But it’s not exactly new!  It’s not like mobile networks are breaking new ground in how this should be done!  As I understand it, GSM was designed almost from day one to support the interconnection of networks in the manner that global roaming requires, so why are we consumers gouged so aggressively for it?

Telstra goes to great lengths to warn their customers about the high cost of data when they roam overseas.  Nice.  So let’s say I want to buy one of these International Roaming Data Packs—how much does that cost?  On Telstra’s website I find the answer: in fact I find several answers, since it would be unreasonable to expect one simple, easy-to-budget rate from a telephone company.

The cheapest data pack is A$29, which gets you the princely total of—wait for it…

20MB of data.

Wait, what…?

Twenty megabytes?!?!?

Packs range all the way up to 2GB, which costs an unbelievable A$1800.  I have a Telstra mobile broadband service which costs around A$39 per month and has a monthly allowance of 3GB — that comparison puts the roaming data rate at roughly 70 times (call it 7,000%) the price of the same data at home!
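The back-of-the-envelope arithmetic, for anyone who wants to check my working (counting 1GB as 1024MB):

echo 'scale=4; 1800/2048' | bc               # 2GB roaming pack: about A$0.88 per MB
echo 'scale=4; 39/3072' | bc                 # 3GB home mobile broadband: about A$0.013 per MB
echo 'scale=4; (1800/2048)/(39/3072)' | bc   # roughly 69-70 times the price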

The kickers though are in the fine print:

If you use all of your data allowance, we will charge you 1.5 cents per kB you use which equates to $15.36 per MB.

This is the rate for roaming data if you don’t have a data pack.  An order of magnitude again more expensive than data in a data pack!  Let’s look at the SMS they sent though: “we’ll SMS you every 20MB of data” — which means, if you don’t have a data pack, you’ll get your first SMS alert once you’ve already spent A$307.20!  The next one is the absolute best, though:

Any unused data allowance will be forfeited at the end of the 30 day period.

Are you absolutely @#$%!$ kidding?!?!?  Seriously?!?  Let’s think this one through:

  • You have concocted an astronomically exorbitant rate for data usage
  • You’ve made me pay up-front for my expected use
  • If I get the estimate wrong, I’ll either pay through the nose at the casual usage rate until I decide if I want to buy another pack OR I’ll get no compensation for the up-front money I paid for data I didn’t end up using
  • You’re still collecting on my contract monthly plan fee, which includes domestic call and data allowances I can’t possibly use because I’m overseas!

The whole situation is unbelievable to me.  Unjustifiable.  If Douglas Adams were writing Life, The Universe, And Everything today, I believe “bistromathics” would instead have been “phonemathics” (except that it doesn’t roll off the tongue as well).  I can just imagine it:

“Just as Einstein observed that space was not an absolute, but depended on the observer’s movement in time, so it was realised that numbers are not absolute, but depend on the observer’s mobile phone’s movement through roaming zones.”

It’s only a problem because mobile technology is so embedded in our lives today.  We tweet, we post pictures, we e-mail, we navigate, we live connected in ways that we simply didn’t ten, or even five, years ago.  I know this, because I did without mobile data even as recently as 2009 (when I was in the US and China for a total of seven weeks).  The ironic thing is that we are most likely to want to do these sharing activities, such as checking in on Foursquare and sharing photos, when we are travelling—and even more so when we are in new and exotic places, such as a foreign land.

I call BS on the whole international roaming data scam.  I defy anyone from a telecommunications company to explain to me why it can be three orders of magnitude more expensive to access bits in a foreign country compared to accessing those same bits from home.  It is nothing more than a money gouging exercise, and I reckon I’ve got proof:

Amazon Whispernet.

I have a Kindle, for which I paid about A$100 when the local supermarket had a 25%-off sale.  It’s the 3G and Wi-Fi version, and I take it with me most places I travel.  I’ve had that Kindle in the USA, New Zealand, France, and of course here in Australia, and in every place I’ve had Whispernet come online and been able to at least browse the Amazon store.  Now, if roaming data really did cost what mobile networks say it does, how does it make sense for Amazon to make Whispernet available internationally on my Australian Kindle?  I mean, if I browsed the Store for half an hour before buying an A$2.99 book (pretty much exactly what I did last trip, in France), the transaction would have cost more than it made!  To me, Amazon Whispernet is the proof that there is minimal cost in roaming data and that we’re being taken for a ride.

Needless to say, my phone has Data Roaming disabled.  I became a free Wi-Fi junkie—one of those pitiful souls hopping from one café to the next looking for open access points.  It wasn’t too bad when we were in Paris and Montpellier, but before leaving for the back-blocks outside Toulouse and Bordeaux I knew we’d need a better solution.  In a Geant Casino store in Montpellier I happened across a prepaid 3G Wi-Fi access box from Orange for about 45€, which included 500MB of data valid for one month.  It came in handy too, thanks to the GPS unit in the car getting confused about the location of our hotel at La Pomarede and us having to use Google Maps to find the right way.

Come on mobile networks, get with it.  Stop pissing off your customers and forcing them to do cruel and unusual things when they travel.  Just charge reasonable rates.  You’ll get more business, and guess what—happy customers.  Well, happier.

[1] By magical I refer to Clarke‘s Third Law: “any sufficiently advanced technology is indistinguishable from magic”.


DisplayLink and x2x brings back Zaphod mode

Ever since work issued me a Lenovo T61 and I installed Fedora on it, I have lamented the loss of something that X aficionados referred to as “Zaphod mode”.  By gluing together a few different software and hardware components I managed to get close to the old Zaphod mode days — but first, some background…

Usually when you set up a multi-monitor installation you get a single desktop that spans all the screens.  This is great if you only ever use one virtual desktop, but on Linux multiple virtual desktops are the norm.  When I started using multiple screens in Linux, I loved the extra screen real estate, but the fact that switching virtual desktops caused *all* the windows on all the screens to switch really bugged me.  I wanted the ability to have something — like an email program, or a web browser — stay on one screen while I switched between desktop views on the other screen.  Or better still, the ability for both screens to have virtual desktops that were independent of each other.

Enter “Zaphod mode”, named for Zaphod Beeblebrox from The Hitchhiker’s Guide to the Galaxy by Douglas Adams.  Beeblebrox, who was President of the Galaxy before he stole the Starship Heart Of Gold, had two heads that were independent of each other.  In X server terms, multiple display devices are often referred to as “heads”.  So you can probably deduce that “Zaphod mode” refers to an operating mode of the X server where the multiple “heads” or display devices function as different displays.

Go back far enough and you get to a point where that was the standard mode of operation for X.  The X extension “Xinerama” was developed to provide the merging of different X displays into a single screen.  NVidia also had a hardware/firmware-based equivalent called TwinView, where multiple heads on an NVidia card (and sometimes even heads on different cards) could be joined.  These extensions were not without their problems, however: it was common for windows and dialog boxes to get confused about which display to appear on.  You would almost always see dialog boxes that were meant to display in the middle of the screen being split across the two physical displays.  Also, there was the multiple-desktop “inconvenience” of not being able to switch the desktops independently.

Zaphod mode fixed these problems.  Because the screens were separate, windows and dialog boxes always appeared in the centre of the physical screen.  You could leave a web browser on one screen while you switched between an e-mail client, an IRC client, and an SSH session in the other.  It wasn’t all beer-and-skittles though, since in Zaphod mode it was not possible to move an application from one screen to the other.  Plus, some applications like Firefox could not have windows running on both screens (the second one to start could not access the user profile).

Zaphod mode largely “went away” during the transition from XFree86 to Xorg.  The servers dropped support for multiple separate displays in the one server, and only gradually added it back in (the Intel driver was one of the last to do so, and probably still hasn’t).  Since laptops were the only place I still used multiple screens, and the laptops I used all had Intel integrated graphics, I had to do without Zaphod mode.

Today, I hardly use dual monitors at all.  I used to have a desktop system with a 21″ CRT flanked by 17″ LCDs on either side, but that all got replaced by a single 24″ LCD.  At work we don’t have assigned desks, so setting up a screen to plug the laptop into isn’t going to happen.  I guess I learned to live without Zaphod mode by just going back to a single screen.  I still remember my Zaphod-powered dual-screen days fondly though, and with almost every update to Xorg I would scan the feature list looking for something like “Support for configuration of multiple independent displays (Zaphod mode)”.

A while back I bought a DisplayLink USB to DVI adapter.  I didn’t really know what to do with it at the time, but recently I dug it out and tried setting it up.  Googling for “DisplayLink Fedora” sent me to a couple of very helpful pages and it didn’t take long to get the “green screen of life” that indicates that the DisplayLink driver was active.  It was when I was looking at how to make it work as an actual desktop — part of the process involves setting up a real xorg.conf (that’s right, something about the DisplayLink X server means it can’t be configured by the Xorg auto configuration magic) — that I realised I could do something wonderful.  Instead of making a config file that contained both my standard display and the DisplayLink device (and probably cause havoc for the 90% of times I boot without an additional screen) I would create a config file with *just* the DisplayLink device and start it as a second server.  Run a different window manager in there, and I would have two independent desktops — Zaphod mode!
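Until I can get back to the machine and grab the real files, here is roughly what that looks like.  Treat it as an untested sketch: the driver name, framebuffer device, and file paths below are assumptions based on the old xf86-video-displaylink/udlfb combination, and may well be different on your setup.

# a minimal config describing ONLY the DisplayLink head
cat > /etc/X11/xorg.conf.displaylink <<'EOF'
Section "Device"
    Identifier  "DisplayLinkDevice"
    Driver      "displaylink"          # the old xf86-video-displaylink driver
    Option      "fbdev"  "/dev/fb1"    # the framebuffer udlfb created (the "green screen of life")
EndSection

Section "Screen"
    Identifier  "DisplayLinkScreen"
    Device      "DisplayLinkDevice"
EndSection
EOF

# start a second server on display :1 using that config, with XFCE as its session
xinit /usr/bin/startxfce4 -- :1 -config xorg.conf.displaylink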

I did a couple of little experiments just starting an xterm in the second X, and it worked fine (the more alert of you will realise that I’m taking a bit of artistic license with the word “fine” here, and know that three little letters in the title of this post are a clue to what wasn’t yet working…) with the desktop and the xterm appearing in the second monitor.  I installed XFCE, and configured it to start as the window manager of the second X server, which also worked well.

Something was missing though: there was no mouse input to the second screen.  In Zaphod mode, even though the two screens were separate X displays they were managed by the same server.  This meant that the input devices were shared between the two displays.  In this configuration, I was careful to exclude any mouse and keyboard devices from my second display config to avoid any conflicts.  So how was I to get input device data into the second server?  A second display is not much good if you can’t click and type on the applications that run on it…

I remembered an old program called x2x that could transfer the mouse and keyboard events to a different X server when you moved the mouse to the edge of your display (and, inexplicably, I forgot all about a much younger program called Synergy that can do the same thing).  Since x2x isn’t packaged for Fedora, I found the source, built it, and started it up…  and it worked first time!  When I moved the mouse to the edge of the screen, it appeared on the other screen!  I could start apps and type into them exactly as I wanted.
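The x2x invocation itself is pleasantly simple.  Something like the following, run from a terminal on the primary display, assuming the DisplayLink desktop is the server on :1 and sits physically to the right (east) of the laptop panel:

# hand mouse and keyboard events over to :1 when the pointer crosses the right-hand edge
x2x -east -to :1 &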

It wasn’t perfect, however.  I found that when I returned the mouse to the primary screen, the second screen was still getting keyboard events.  I figured this would be particularly inconvenient when, for example, I was entering user and password details into an app on the primary screen while an editor or terminal program had focus on the second screen…  I checked the Xorg.1.log file, and found that even though I had not specified a “keyboard” input device Xorg was automatically defining one for me.  I turned off the udev options, but it still happened.  My initial enthusiasm was starting to fade.

What fixed it was to manually define a “dummy” keyboard device.  There must be some logic in Xorg that refuses to allow a configuration with no configured keyboard (which makes sense), so in this rather unusual case where I don’t want a keyboard I have to define one, but give it a dummy device definition.  Defining the dummy keyboard stopped Xorg from defining its automatic one, and everything worked as expected!  Even screensavers work more-or-less as designed (although I haven’t actually spent much time in front of the setup yet, so haven’t had to unlock the screen that often).
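From memory, the dummy keyboard ended up being something along these lines in the second server’s config (again a sketch: it assumes the “void” driver from xf86-input-void is installed, and it reuses the hypothetical config file name from the sketch above):

cat >> /etc/X11/xorg.conf.displaylink <<'EOF'
Section "InputDevice"
    Identifier  "DummyKeyboard"
    Driver      "void"             # a do-nothing input driver, purely to satisfy Xorg
    Option      "CoreKeyboard"
EndSection
EOF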

I’m away from the computer in question right now, otherwise I would post configs and command lines (and even a pic of the end result).  I’ll update this post with the details — leave a comment if you think I need to hurry up!  🙂

Oracle Database 11gR2 on Linux on System z

Earlier this year (30 March, to be precise) Oracle announced that Oracle Database 11gR2 was available as a fully-supported product for Linux on IBM System z.  A while before that they had announced E-Business Suite as available for Linux on System z, but at the time the database behind it had to be 10g.  Shortly after 30 March, they followed up the 11gR2 announcement with a statement of support for the Oracle 11gR2 database on Linux on System z as a backend for E-Business Suite — the complete, up-to-date Oracle stack was now available on Linux on System z!

In April this year I attended the zSeries Special Interest Group miniconf[1], part of the greater Independent Oracle Users Group (IOUG) event COLLABORATE 11.  I was amazed to discover that there are actually Oracle employees whose job it is to work on IBM technologies — just like there are IBM employees dedicated to selling and supporting the Oracle stack.  Never have I seen (close-up) a better example of the term “coopetition”.

Since my return from the zSeries SIG and IOUG, I’ve become the local Oracle expert.  However, I’ve had no more training than the two days of workshops run at the conference!  The workshops were excellent (held at the Epcot Center at Walt Disney World, no less!) but they could not an expert make.  So I’ve been trying to build some systems and teach myself more about running Oracle.  I thought I’d gotten off to a good start too — I’d installed a standalone system, then gone on to build a two-node RAC.  I communicated my success to one of my sales colleagues:

“I’ve got a two-node RAC setup running on the z9 in Brisbane!”

“Great!  Good work,” he said.  “So the two nodes are running in different LPARs, so we can demonstrate high-availability?”

” . . . ”

In my haste I’d built both virtual machines in the same LPAR.  Whoops.  (I’ve fixed that now, by the way.  The two RAC nodes are in different LPARs and seem to be performing better for it.)
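(For the record, the quick way to check which LPAR a Linux on System z guest actually lives in is to look at /proc/sysinfo on each node; a sketch:)

# the LPAR this Linux image (or the z/VM system hosting it) is running in
grep -i 'LPAR Name' /proc/sysinfo
# and, for a z/VM guest, the name of the z/VM system itself
grep -i 'VM00 Name' /proc/sysinfo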

Over the coming weeks, I’ll write up some of the things that have caught me out.  I still don’t really know how all this stuff works, but I’m getting better!

Links:

IBM System z: www.ibm.com/systems/z or www.ibm.com/systems/au/z

Linux on System z: www.ibm.com/systems/z/os/linux/index.html

Oracle zSeries SIG: www.zseriesoraclesig.org

Oracle Database: www.oracle.com/us/products/database/index.html

[1] Miniconf is a term I picked up from linux.conf.au — the zSeries SIG didn’t advertise its event as a miniconf, but as a convenient name for a “conference-in-a-conference” I’m using the term here.

What a difference a working resolver makes

The next phase in tidying up my user authentication environment in the lab was to enable SSL/TLS on the z/VM LDAP server I use for my Linux authentication (I’ll discuss the process on the DeveloperWorks blog, and put a link here).  Apart from being the right way to do things, LDAP authentication appears to require SSL or TLS in Fedora 15.

After I got the Fedora system working, I thought it would be a good idea to have other systems in the complex using SSL/TLS also.  The process was moderately painless on a SLES 10 system, but on the first SLES 11 system I tried, YaST froze while saving the changes.  I (foolishly) rebooted the image, and it hung during boot.  Not fun.

After a couple of attempts to fix up what I thought were the obvious problems (each attempt involving logging off the guest, connecting its disk to another guest, mounting the filesystem, making a change, unmounting and disconnecting, and re-IPLing) with no success, I went into /etc/nsswitch.conf and turned off LDAP for everything I could find.  This finally allowed the guest to complete its boot — but I had no LDAP now.  I did a test using ldapsearch, which reported it couldn’t reach the LDAP server.  I tried to ping the LDAP server by address, which worked.  I tried to look up the hostname of the LDAP server, and name resolution failed with the traditional “no servers could be reached” message.  This was odd, as I knew I’d corrected the DNS setting on this guest after it had pointed to the wrong DNS server earlier…  I could ping the DNS by address, and another system resolved fine.
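The sequence of checks looked roughly like this (the hostnames and addresses are placeholders, not my real lab values):

# can we reach the LDAP server by name, with TLS?  (fails)
ldapsearch -x -ZZ -H ldap://vmldap.example.com -b '' -s base
# the LDAP server answers by IP address...  (works)
ping -c 1 192.0.2.10
# ...but resolving its name does not  (;; no servers could be reached)
host vmldap.example.com
# and the DNS server itself also answers by address  (works)
ping -c 1 192.0.2.53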

I thought it might have been a configuration problem — I had earlier had trouble with systems not being able to do recursive DNS lookups through my DNS server.  I went to YaST to configure the DNS Server, and it told me that I had to install the package “bind”.  WHAT?!?!?  How did the BIND package get uninstalled from the system…

Unless…  It’s the wrong system…

I checked /etc/resolv.conf on a working system and sure enough I had the IP address wrong.  I was pointing at a server that was NOT my DNS server.  Presumably the inability to resolve the name of the LDAP server I was trying to reach is what made the first attempt to enable TLS for LDAP fail in YaST, and whatever preload magic SLES uses to enable LDAP authentication got broken by the failure.  Setting the right DNS and re-running the LDAP Client module in YaST not only got LDAP authentication working but got me a bootable system again.

A simple fix in the end, but I’d forgotten the power of the resolver to cause untold and unpredictable havoc.  Now, pardon me while I lie in wait for the YaST-haters who will no doubt come out and sledge me…  🙂


Another IPv6 instalment (subtitled: Watch Your Tech Library Currency!)

I made a somewhat cryptic tweet a little while ago about how I spent a crazy-long period of time researching what was, I believed, the next-big-thing in DNS resolution for IPv6 (or so my 2002 edition of “IPv6 Essentials” told me).  I could not work out why I saw nothing about A6 records in any of the excellent Hurricane Electric IPv6 material or in any other documentation I came across.

The answer should have been obvious: DNS A6 records (and the corresponding DNAME records) never caught on.  RFC 3363 recommended that the RFC that defined A6 and DNAME (RFC 2874) be moved to Experimental status.  If I hadn’t been using an old edition of the IPv6 book, I might never have even known A6 existed, and wouldn’t have wasted any time.
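If you’re curious, the difference is easy to see with dig.  AAAA is the record type the world actually settled on, while an A6 query will typically get you nothing useful; www.example.com below is just a stand-in for any IPv6-enabled host:

# the forward-lookup record type that won
dig +short www.example.com AAAA
# the record type my 2002-vintage book was excited about (RFC 2874, now Experimental)
dig www.example.com A6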

In my previous post on IPv6 I theorised that we are in the early-adoption phase of IPv6 where things aren’t quite baked, and yet now I’ve picked up a 9 year old text on the topic and acted all surprised when it got something wrong.  It was a bit stupid of me; had I bought a book about IPv4 in 1976, might it have been similarly out of date in 1985?

As always though, I’m richer for the experience!  Or so I thought…  Like many, I’m becoming increasingly time-poor.  When I bought a book on IPv6 some years ago I thought I was making an investment, but it turned out that the investment actually worked against me in several ways:

  1. The book took up physical space in my bookshelf for all that time I wasn’t using it
  2. I didn’t actually use the information at the time I acquired it
  3. The time I could have got value from it was wasted by it idly sitting on the shelf
  4. Once I did try to use it, it actually cost me time rather than saving it

I came to think about the other books on my shelf.  It’s pretty easy to recognise that a book that proclaims to be up-to-date because it “Now covers Red Hat 5.2!” will be anything but.  Also, from the preface of a Perl programming book that says “this was written about Perl 5.8, but it should apply to 5.10 as well” I’ll be forewarned that things will be fairly applicable to 5.12 but maybe not to Perl 6 when it’s out.

Technology usually has a somewhat abbreviated lifespan, so the corresponding documentation will have a correspondingly short lifespan…  Here, however, is an example of a technology that will have a far greater lifespan (we hope) than much of the documentation that currently exists around it.  I emphasise “currently exists”, because it won’t always be that way: IPv4 was pretty well-baked by the time I had anything to do with it, so I could have bought a book on IPv4 with next to no concern that it was going to lead me astray (indeed, I bought W. Richard Stevens’ TCP/IP programming texts during the 1990s, and still use them to this day).  I keep forgetting that I’m on a completely different point of the IPv6 adoption curve, and the “experts” are learning along with me.

So, a new tech library plan then:

  • Reduce dependence on physical books (okay, this one is already a work-in-progress for me) — they don’t come with you on your travels as easily, and (more important in this context) they’re harder to keep up to date.
  • Before regarding the book on the shelf as authoritative, check its publication date.  If it’s more than three years old, depending on the subject matter it might be out of date.  Check if there’s a new edition available, and consider updating.  If there’s no new edition, check for recent reviews (Amazon, etc).  Someone who just bought it last month might have posted an opinion on its currency.
  • If you have to buy a paper book, don’t buy a book on any technology that is a moving target.  On the same shelf as my copy of “IPv6 Essentials” there is a book entitled “Practical VoIP Using VOCAL”.  I never even installed VOCAL, and I’m sure many current VoIP practitioners never heard of it.  (Side note: I think it’s strange that I bought that book, and a Cisco one, but still to this day have never owned a book on Asterisk.  Maybe I have some kind of inability to pick the right nascent-technology book to buy.)
  • Use bookmarking technology more! I have a Delicious account, and I went through a phase of bookmarking everything there.  I realise now that, if I was a bit more disciplined, I could actually use it (or a system like it, depending on what Yahoo! does to it) as my own personal index to the biggest tech library in existence: the Internet.

That first point is harder than it sounds (especially for someone like me who has a couple of books on his shelf with his name on the cover).  My Rich Stevens books are littered with sticky-note bookmarks for when I flick to-and-fro between different programming examples.  Electronic readers are still not there when it comes to the “handy-hints-I-keep-on-my-lap-while-coding” aspect of book ownership.

I have a Sony Reader which I purchased with the intent of making it my mobile tech library.  It’s just not that great for tech documents though, since it doesn’t render diagrams and illustrations well (it also isn’t ideal for PDFs, especially in A4 ratio).  This may change as publishers of tech docs start releasing more titles on e-reader formats like ePub.  The iPad is working much better for tech library tasks; I’m using an app called GoodReader which renders PDFs (especially RedBooks!) quite well and has good browsing and syncing capability as well.

More on these topics later, I’m sure!

Update: I omitted another option in my “tech library plan” — since IPv6 Essentials is an O’Reilly book, I could have registered with their site to get offers on updating to new editions.  Had I done so, the events of this post might not have happened!  Now that I’ve registered my books with O’Reilly, I’m getting offers of 40% off new paper editions and 50% off e-book editions.  Also, in line with my reduce-paper-book-dependence policy, I can “upgrade” any of the titles I own in paper to e-book for US$4.99.  If you haven’t already, I encourage anyone who has O’Reilly books that they rely on as part of their tech library to register them at members.oreilly.com.  (This is an unsolicited endorsement from a happy customer, nothing more!)


Another round of Gentoo fun

A little while back I did an “emerge system” on my VPS and didn’t think much more about it.  Today was my first time back on the box to emerge something else, and I was greeted with this:

>>> Unpacking source…
>>> Unpacking traceroute-2.0.15.tar.gz to /var/tmp/portage/net-analyzer/traceroute-2.0.15/work
touch: setting times of `/var/tmp/portage/net-analyzer/traceroute-2.0.15/.unpacked’: No such file or directory

…and the emerge error output.  It took me a little while to get to the answer, but it was (of course) caused by a new version of something that came in with the system update.  This bug comment had the crude hack I needed to get things working again, but longer term I obviously need to fix the mismatch between the version of linux-headers and the kernel version my VPS is using (it’s Xen on RHEL5).
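The underlying check is simple enough: compare the kernel the VPS is actually running (which comes from the Xen host, so I can’t just rebuild it to match) against the sys-kernel/linux-headers version that the system update pulled in.  A sketch:

# kernel the VPS is actually running (supplied by the Xen/RHEL5 host)
uname -r
# version of linux-headers that portage currently has installed
ls -d /var/db/pkg/sys-kernel/linux-headers-*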
