Archive for category DeveloperWorks

A new adventure: installing z/OS from scratch

From time to time, I’ve run z/OS on the System z machines I have access to.  Originally this was by obtaining the ADCD distribution (which I think stands for Application Development Customised Distribution, and there’s a hyphen or a slash in the name somewhere too), but of late I’ve had access to alternative methods of installation.  However I’ve obtained my z/OS builds, though, they’ve always been pre-built systems; as I’ve never actually been a z/OS systems programmer, I’ve never experienced a from-scratch installation of z/OS.
This is about to change.  I’ve set myself a challenge: equipped with my very basic z/OS systems programming knowledge, the z/OS Customised Offering Driver system on DVD, and IBM Shopz, my plan is to build a z/OS Parallel Sysplex.  Importantly, I plan to bring you along with me as I progress.  It won’t be a quick process, as I have to fit this around my day job (which for the next four weeks will be at the ITSO Poughkeepsie Center updating the “Security for Linux on System z” Redbook) but as I achieve milestones or hit major hurdles I’ll let you know what’s happening.
My first couple of milestones have already been achieved.  Firstly, I have managed to get the DVD-based COD system installed and running.  Some would say I’ve cheated a little, as I’ve used z/VM to avoid having to build a customised LPAR to match the IODF shipped with the COD.  I may yet take my working IODF from the running system and install it into the COD system to be able to run the COD in an LPAR natively.
The second milestone was to get TCP/IP connectivity to the COD.  Running under z/VM, I figured the easiest way to do this was to define a virtual OSA to connect to my z/VM VSWITCH.  Consulting the documentation for the COD, I found out what device address to use for an OSA.  This worked fine, but when I tried to bring up the TCP/IP interface I’d coded I got this nasty response:
EZZ0060I PROCESSING COMMAND: VARY TCPIP,,STA,OSAQDIO600
EZZ0053I COMMAND VARY START COMPLETED SUCCESSFULLY
EZZ4336I ERROR DURING ACTIVATION OF INTERFACE OSAQDIO600 - CODE 8010002A
 DIAGNOSTIC CODE 02
IST1631I OSATRL1E SUBCHANNEL 0601 QDIO DEVICE TYPE NOT OSD
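For reference, the interface I’d coded in the TCP/IP profile was along these lines (just a sketch; the interface name matches the messages above, but the port name and IP address are made-up examples):
INTERFACE OSAQDIO600 DEFINE IPAQENET
   PORTNAME OSA600P
   IPADDR 10.1.1.10/24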
When I displayed the channel paths, I saw all the paths defined as per my real IOCDS!  The “virtual” CHPID that z/VM had chosen for the virtual OSA did not actually exist in the real IOCDS, which I saw when I tried to vary the devices online:
IEE103I UNIT 0600 NOT BROUGHT ONLINE     538
IEE763I NAME= IOSVDSEO CODE= 0000000800000000
IOS576I OSA DEVICES REQUIRE AN OSA CHANNEL PATH BUT TYPE 00 FOUND
        TYPE=UNKNOWN
IEE764I END OF IEE103I    RELATED MESSAGES
The fix to this is to use an option on the z/VM DEFINE NIC command which is almost never used for Linux guests: the CHPID option.  I had to define the virtual OSA so that it appeared to the z/OS guest on a CHPID that is an actual OSA CHPID in the real IOCDS.  This solved my problem, and allowed me to bring up TCP/IP and TN3270.
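If you want the specifics, the commands were along these lines (the device number, CHPID and VSWITCH name here are made up; the important part is that the CHPID you give is one defined as an OSD CHPID in the real IOCDS):
CP DEFINE NIC 600 TYPE QDIO CHPID A0
CP COUPLE 600 TO SYSTEM VSW1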


Now I can look at what to do to start the ServerPac installation.  Before I do that though, I’m pretty sure I have to allocate some DASD.  In fact, the instructions for the COD say I need to add page and spool datasets to the COD before I can do anything productive with the system…
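For the page datasets at least, I expect it will be the usual IDCAMS job; something like this sketch (dataset name, size and volume are made up):
//DEFPAGE  EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DEFINE PAGESPACE(NAME(SYS1.LOCALA.PAGE) -
         CYLINDERS(500) -
         VOLUME(PAG001))
/*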
Wish me luck!


RACF Native Authentication with z/VM

 In 2009 I was part of the team that produced the Redbook "Security for Linux on System z" (find it at http://www.redbooks.ibm.com/abstracts/sg247728.html).  Part of my contribution was a discussion about using the z/VM LDAP Server to provide Linux guests with a secure password authentication capability.  I probably went a little overboard with screenshots of phpLDAPadmin, but overall I think it was useful.

I’ve come back to implement some of what I’d put together then, and unfortunately found…  not errors as such, but things I perhaps could have discussed in a little more detail.  I’ve been using the z/VM LDAP Server on a couple of systems in my lab but had not enabled RACF.  I realised I need to "eat my own cooking" though, so decided to implement RACF and enable the SDBM backend as well as switch to using Native Authentication in the LDBM backend.

Native Authentication provides a way for security administrators to present a standard RFC 2307 (or equivalent) directory structure to clients while at the same time taking advantage of RACF as a password or pass phrase store.  Have a look in our Redbook for more detail, but basically the usual schema is loaded into LDAP and records are created using the usual object classes like inetOrgPerson; the records just do not contain the userPassword attribute.  Instead of comparing a presented password against a value stored in LDAP, the z/VM LDAP Server (when Native Authentication is enabled) issues a RACROUTE call to RACF to have it check the password.
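
By way of illustration, a user entry in the LDBM backend ends up looking something like this (the DN and values are invented for the example); note that there is no userPassword attribute anywhere:

dn: uid=viccross,ou=people,o=example
objectClass: inetOrgPerson
objectClass: posixAccount
cn: Vic Cross
sn: Cross
uid: viccross
uidNumber: 1001
gidNumber: 100
homeDirectory: /home/viccross
loginShell: /bin/bash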

In my existing LDAP database, I had user records that were working quite successfully to authenticate logons to Linux.  My plan was simply to enable RACF, creating users in RACF with the same userid as the uid field in LDAP (my userid convention already fits RACF’s 8-character restriction, so there was no need to change anything).  After going through the steps in the RACF program directory, and a number of follow-up tasks to make sure the various service machines would work correctly, I did the LDAP reconfiguration to enable Native Authentication.
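
To sketch what that looked like (everything here is made up: userids, groups, passwords and suffixes; the backend module names are from memory, so treat it as a rough guide rather than gospel), the RACF side is just ordinary user administration, for example:

ADDUSER VICCROSS NAME('VIC CROSS') DFLTGRP(USERS) PASSWORD(TMP4NOW)
ALTUSER VICCROSS PHRASE('a suitably long pass phrase goes here')

…and on the LDAP side the server configuration file picks up SDBM and Native Authentication with entries along these lines:

database SDBM GLDBSD31
suffix "cn=racf,o=example"

database LDBM GLDBLD31
suffix "ou=people,o=example"
useNativeAuth all
nativeUpdateAllowed on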

At this point I probably need to clarify my userid plan.  The documentation for Native Authentication in the TCP/IP Planning and Administration manual says that the LDAP server needs to be able to work out which RACF userid corresponds to the user record in LDAP to be able to validate the password.  It does this either by having the RACF userid explicitly specified using the ibm-nativeId attribute (the object class ibm-NativeAuthentication has to be added to the user object), or by matching the existing uid attribute against RACF.  The latter is what I hoped to be able to do: by using the same ID in RACF as I was already using in LDAP, I hoped to avoid needing the extra object class and attribute.  In the Redbook, because my RACF ID was different from the LDAP one, I went straight to using the ibm-nativeId attribute and didn’t go back and test the uid method.
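
For the record, the explicit method is just a matter of adding the auxiliary object class and the attribute to the user entry, with an ldapmodify along these lines (DN invented for the example):

dn: uid=viccross,ou=people,o=example
changetype: modify
add: objectclass
objectclass: ibm-NativeAuthentication
-
add: ibm-nativeId
ibm-nativeId: VICCROSS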

So, I gave the uid-matching approach a try.  I had to disable SSH public-key authentication so that my password would actually get used, and once I did that I found that I couldn’t log on.  It didn’t matter whether I tried with my password or pass phrase, neither was successful.  I read and re-read all the LDAP setup tasks and checked the setup, but it all looked fine.  In one of those "let’s just see" moments, I decided to see if it worked with the ibm-nativeId attribute specified in uppercase…  and it did!
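
Incidentally, rather than reconfiguring the server you can force a password prompt for a single test login by telling the OpenSSH client to skip public keys; something like this does it (the host here is just the one from my test):

[viccross@laptop ~]$ ssh -o PubkeyAuthentication=no -o PreferredAuthentications=password viccross@zlinux1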

Okay, so it appeared that the matching of the LDAP userid against the RACF id was case-sensitive.  I decided to double-check by creating an ID in LDAP with an uppercase uid.  Since phpLDAPadmin wouldn’t let me create an uppercase version of my own userid (that would be non-unique), I created a different LDAP id for the test:

[viccross@laptop ~]$ ssh MAINT@zlinux1
Password:
Could not chdir to home directory /home/MAINT: No such file or directory
/usr/X11R6/bin/xauth:  error in locking authority file /home/MAINT/.Xauthority
MAINT@zlinux1:/>

My MAINT user in LDAP has no ibm-nativeId attribute, so the only operational difference is the uppercase uid (the error messages are caused by the LDAP userid not having a home directory; I use an NFS-shared home directory and hadn’t bothered setting up a homedir for a test userid).

The final test was to change the contents of the ibm-nativeId attribute in my LDAP user record to lower-case — and it broke my login.  So that would seem to indicate that the user check against RACF is case sensitive wherever LDAP gets the userid from.  I’m going to have a look through documentation to see if there’s something I need to change, but this looks like something to be aware of when using Native Authentication.

I also noticed that I didn’t describe the LDAP Server SSL/TLS support in the Redbook, but that’s a post for another day…


OpenSSL speed revisited

 I realised I never came back and reported the results of my OpenSSL "speed" testing after our 2096 got upgraded.  For reference, here was the original chart, from when the system was sub-capacity:

[Chart: OpenSSL speed results from the original, sub-capacity (F01) system]

… and the question was, does the CPACF run at the speed of the CP (i.e. it runs sub-capacity if the CP is sub-capacity) or does it run at full speed like an IFL, zIIP or zAAP?  If the latter, the result after the upgrade should be the same as before — that would indicate the speed of crypto operations does not change with the CP capacity, and that CPACF is always full speed.  If the former, we should see an improvement between pre- and post-upgrade, indicating that the speed of CPACF follows the speed of the CP.

Place your bets…  Okay, no more bets…  Here’s the chart:

[Chart: OpenSSL speed results before and after the upgrade]
The graph compares the results from the first chart in blue (when the machine was at capacity setting F01) with the full-speed (capacity setting Z01) results in red.

Okay, so did you get it right?  If you know your z/Architecture you would have!  As the name suggests, the Central Processor Assist for Cryptographic Function (or CPACF) is pretty much an adjunct to each CP, just like any standard execution unit (like the floating point unit, say).  It is not like the Crypto Express cards, which are actually I/O devices and totally separate from the CP.  Because it is directly associated with each CP, a sub-capacity CP’s CPACF is bound to the speed of that CP.

If you look closer, further evidence that CPACF performance scales with the capacity setting can be seen in the respective growth rates of each set of data points.  To see this a little more clearly (I don’t know the right mathematical terms to describe the shape of the curves, so I’ll just show you) I drew a couple more graphs:

[Charts: the same results drawn as line graphs (left), and the CPACF speed-up relative to software (right)]

Looking at the left graph (which is the same as the bar graph above, just drawn in lines) you can see that in both the software and the CPACF case the lines for before and after the upgrade follow the same trend with respect to the block size.  If these lines followed different trends — for example if the Z01 CPACF line was flat across the block size range instead of a gently falling slope like the F01 line — I’d suspect something else was affecting the result.  Looked at a different way, the right-hand graph above shows the "times-X" improvement of CPACF over software.  You can see that the performance multiplier (i.e. the relative improvement of hardware over software; CPACF is about 16x software at 8192-byte blocks) was the same before and after the upgrade at every block size.
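
If you want to run the same sort of comparison on your own system, the commands are along these lines (a sketch only; I’m assuming the ibmca engine is installed and configured, and the cipher is just an example):

# software implementation, no engine involved
openssl speed aes-128-cbc
# the EVP interface with the ibmca engine, which uses CPACF where it can
openssl speed -engine ibmca -evp aes-128-cbc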

Now, just to confuse things…  Although I’ve used OpenSSL on Linux as the testing platform for this experiment, most Linux customers will never see the effects I’ve demonstrated here.  Why?  Because Linux is usually run on IFLs, and the IFL always runs at full speed!  Even if there are sub-capacity CPs installed in a machine with IFLs, the IFLs run at full speed and so too does the CPACF associated with the IFLs.  I’ll say it again: CPACF follows the speed of the associated CP, so if you’re running Linux on IFLs the CPACF on those IFLs will be full capacity just like the IFLs themselves.  If you have sub-capacity CPs for z/OS workload on the same machine as IFLs, the CPACF on the CPs will appear slower than the CPACF on the IFLs.

As far as the actual peak number is concerned, it looks like a big number!  If I understand it right, 250 MB/sec would be more than enough to have a server doing SSL/TLS traffic driving a Gigabit Ethernet at line speed, since a Gigabit Ethernet link carries roughly 125 MB/sec of payload in each direction (I mean traffic over established sessions, NOT the certificate exchange at connection setup; the public key crypto for certificate verification takes more hardware than just CPACF, at least on the z9 anyway).  And that’s just one CP!  Enabling more CPs (or IFLs, of course) gives you that much more CPACF capacity again.  Keep in mind that these results are using hardware that is two generations old — I would expect z10 and z196 hardware to get higher results on any of these tests.  Regardless, these are not formal, official measurements and should not be treated as such — do NOT use any of these figures as input to system sizing estimates or other important business measurements!  Always engage IBM to work with you for sizing or performance evaluations.
