Archives for: September 2011

NFS and Mac OS X 10.5+

24/09/11 | by admin [mail] | Categories: Networking, Mac OS X

NFS has been part of Mac OS X from the start. In 10.5, it went from being managed through NetInfo to being obfuscated to an even more annoying level.

On Mac OS X 10.3+ Server it is an easy-to-manage service in Server Admin. However, life is more difficult on the 'client' versions of Mac OS X.

Anyway, running along to the point of this story: nfsd is in the usual place, and the config is where you'd hope it to be, in /etc/exports. But this being Mac OS X, there is also a nice launchd job, where you'd hope, at: /System/Library/LaunchDaemons/

You start the usual way with:


launchctl load /System/Library/LaunchDaemons/

Or, to make it permanent, you can use the -w flag:


launchctl load -w /System/Library/LaunchDaemons/

By default it will pick up the exports defined in /etc/exports

An example entry in /etc/exports to share a whole directory tree, allowing mounting of subfolders:

/Volumes/MyNFSShare -alldirs

NFS works only on UIDs, so you need to be careful about setup. Ideally, you use a shared Directory Service to synchronise UIDs across multiple systems. Alternatively, you might choose to use the -mapall flag to map all the UIDs in the export to a single user.
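As a fuller sketch (the user name and network range here are invented for illustration), an exports entry restricted to the local subnet and mapping all remote users to one local account might look like:

```
/Volumes/MyNFSShare -alldirs -mapall=sharinguser -network 192.168.1.0 -mask 255.255.255.0
```

The -network/-mask pair limits which clients may mount the export at all, which is worth doing given that -mapall removes per-user accountability.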

You can mount the exports on a client computer easily using the Connect to Server (Cmd-K) menu item in Finder, entering the share URL as nfs://server/path/to/export; these mount by default under /Volumes. From Terminal, open will work this way too. 10.6 at least has an option under Disk Utility's File menu to add an NFS export and define the mount point.

You can also mount at terminal using:


sudo mkdir /path/to/local/mountpoint
sudo mount -o rsize=32768,wsize=32768,intr,noatime -t nfs host:/full/path/to/export /path/to/local/mountpoint
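After editing /etc/exports and (re)loading nfsd, it's worth sanity-checking the result; from memory these two commands do the job on Mac OS X (the hostname is a placeholder):

```
sudo nfsd checkexports
showmount -e server.example.com
```

checkexports validates the syntax of the exports file, and showmount lists what the server is actually exporting.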

Remote install Mac OS X 10.7 Server

21/09/11 | by admin [mail] | Categories: Mac OS X

Mac OS X Server 10.7 Lion has the same issue that Snow Leopard Server had: it requires you to use the Server app to do a remote configuration.

And Server app only runs on Lion.

This is a real pain, because I have no desire to upgrade my MacBook Pro to 10.7 at this stage. I don't imagine it will be stable for another six months or so.

The workaround is pretty much the same as for Snow Leopard, if you need to set up from a Snow Leopard or Leopard computer (or even earlier).

You can ssh to the server and turn on remote desktop, then use remote desktop to complete the set up. Conceivably, you could also complete the install entirely via SSH, or use a VNC client on Linux/Windows/BSD etc.

SSH to the server as root using the default password. The default password is the WHOLE serial number (it used to be just the first 8 characters).
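So, with a hypothetical server name, the first step is simply:

```
ssh root@newserver.example.com
```

When prompted, the password is the machine's full serial number.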

Then start ARD on the new server by running the command:


/System/Library/CoreServices/RemoteManagement/ -activate -configure -access -on -restart -agent

Connect via ARD using a blank username and the default password (the whole serial number, again).

And remember to turn on SSH and ARD in the Server set up; if you don't, you might be locked out. Once you've created a new user and password, you will need to use those details for SSH and ARD on subsequent connections, as the default password will stop working.

Mac OS X Server Software Update Service and unmanaged clients

18/09/11 | by admin [mail] | Categories: Mac OS X

Mac OS X Server's Software Update Service (or SUS) is a service you can run on your Mac OS X Server to provide a local cache of software updates from Apple.

This means that the updates can be downloaded once by the server, where you can then choose which updates to enable, and then served out on the local network to the client machines.

Where the client machines have been bound to the Open Directory on the server, they will automatically be configured to use the SUS on the server if it is available.

Where the client machines are unmanaged, ie not bound to the Open Directory, you can modify a system plist on the client machine to refer to the server for updates instead of Apple.

On Mac OS X Server 10.5 and earlier, the SUS only provided updates for the corresponding version of Mac OS X. You might still get some updates, like iTunes, that were 'universal' across multiple versions of Mac OS X.

Mac OS X 10.6 Server introduced the possibility of using the one SUS to update clients running 10.4/10.5/10.6, and more recently with a minor modification, 10.7.

To modify a client machine to look to a specific location for updates, the easiest way is a defaults write command. The specifics vary depending on the version of SUS you are using.

The general command is:

defaults write CatalogURL [URL]

For versions of 10.6 prior to 10.6.7 there are three different URLs, depending on the flavour of the client:

Mac OS X 10.4:
Mac OS X 10.5:
Mac OS X 10.6:

Apparently, that last URL will also work on Leopard, although I've never tested it.

This MIGHT work for 10.7

Although, more likely, you should upgrade to 10.6.8 and use the config below (after updating the SUS on the server to serve the 10.7 updates).

For 10.6.7 and 10.6.8 there is just the one URL

defaults write /Library/Preferences/ CatalogURL

Which is the same as for Mac OS X 10.4 and 10.5

defaults write /Library/Preferences/ CatalogURL
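Pieced together, the full command looks something like the following. The preference domain is com.apple.SoftwareUpdate; the server name and catalog path here are made-up examples, though 8088 is the usual SUS port:

```
sudo defaults write /Library/Preferences/com.apple.SoftwareUpdate CatalogURL "http://sus.example.com:8088/index.sucatalog"
```

To point the client back at Apple's servers, delete the key again with sudo defaults delete /Library/Preferences/com.apple.SoftwareUpdate CatalogURL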

Command line install in a nutshell

18/09/11 | by admin [mail] | Categories: Mac OS X

Mac OS X is Unix. It has lovely command line tools, several flavours of shell, and remote access.

This means that you can do almost anything from the command line, handy for administering large numbers of computers.

This came about from a need to install the Open XML File Format Converter on a computer where the user had no rights to mount .dmg files or run the installer.

The first step is to get command line access. I did this using ssh and a local administrator account.

The second is to mount the disk image containing the install package, using hdid. This can even be done using a remote image, hosted via http (or indeed any one of a number of protocols).

hdid /path/to/imagetomount.dmg

or, if you are feeling brave, straight from a remote image (the URL here is illustrative):


hdid http://server.example.com/path/to/imagetomount.dmg

Once you've mounted the disk image file, you can use installer to install the pkg or mpkg (metapackage) file, specifying the source package and the target.

sudo installer -pkg /path/to/installpackage.pkg -target /path/to/target

For example:

sudo installer -pkg "/Volumes/Open XML File Format Converter for Mac 1.2.1/Open XML File Format Converter for Mac 1.2.1.mpkg" -target /
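Putting the steps together, a full session might look like this (the paths and volume name are illustrative); hdiutil detach cleans up the mounted image afterwards:

```
hdid /path/to/imagetomount.dmg
sudo installer -pkg "/Volumes/Example Installer/ExamplePackage.pkg" -target /
hdiutil detach "/Volumes/Example Installer"
```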

Cisco time and logs

18/09/11 | by admin [mail] | Categories: Networking, Cisco

Show the current time (useful when the time is not set correctly, eg when no NTP server is available).

Use show version to see the uptime.
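From memory, and worth checking against the IOS command reference for your platform, the relevant commands are along these lines:

```
show clock
show version | include uptime
show logging
```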

More Cisco commands

Scripts for Apple Mail

18/09/11 | by admin [mail] | Categories: Mac OS X

I'm really not sure if I should do this just as a link.

Anyway, this is a useful repository of scripts for Apple Mail.

Especially useful, I've found, is the remove duplicates script. I've found duplicates to be quite a problem when dealing with slow IMAP connections, and POP mail where the index of downloaded mails has been broken. The script for switching SMTP servers is also handy, although Mail handles this situation much better by default than does Entourage or Outlook.

You can drop these scripts into users' home folders (at ~/Library/Scripts/Mail Scripts), where they will then become available under the Scripts menu in Mail. Which is nice, because a simple script and five minutes of explanation can save hours of tedious work.

An alternative method of removing duplicates using Thunderbird.

Fixing bad mail migrations in Mac OS X 10.6 Server

18/09/11 | by admin [mail] | Categories: Mac OS X

In Mac OS X 10.6 Server, Apple moved from Cyrus to Dovecot. The implication is that mail is that little bit harder to migrate to Mac OS X 10.6.

I had a situation where I had to move a client from Mac OS X 10.4 Server to Mac OS X 10.6 Server. Normally, I would do this by exporting all the settings from Server Admin and Workgroup Manager, then importing them into Mac OS X 10.6, as well as taking screen shots of the settings so that they could be set up by hand.

This approach might seem ridiculous, but for the small networks I support it generally makes more sense, as Apple's provided 'upgrade' tools don't always work as advertised, Open Directory in 10.5 being a good example.

However, in this situation, I had a large IMAP mail store to contend with. Several gigabytes of emails, across a dozen or so accounts. Previously, where I have had to move IMAP mail stores, I have used the excellent imapsync running both under Mac OS X and Linux.

However, as I was upgrading the whole server OS, and network home folders, I decided to give Apple's upgrade option a try (after taking a back up, in case it went pear shaped).

The upgrade sort of failed: it reported failure, but the server booted and all the settings appeared to be ok. What had failed was the mail migration.

Checking the size of the mail stores, it became apparent that the email was there, but that there were issues.

The emails are still stored as maildir files, but the layout and naming scheme is different.

When you do an upgrade or migration to 10.6 from earlier versions of Server, the mailboxes are converted from Cyrus to Dovecot. However, a rather common problem has been that the modification dates of the maildir files are changed. The problem with this is that popular Mac mail clients, like Mail, Entourage and Outlook, use this modification date as their 'Received date', the default sort date.

So, I had a bunch of users with emails showing the wrong dates. The two most common issues are modification dates in 2020 (ie well in the future) or 0 modification dates (which show as 1970). Closer inspection revealed that where the modification date is 2020, the creation dates of the emails are unaffected by the upgrade. With this information, it is possible to create a script which reads the creation date and sets the modification date to the creation date.

With emails with 0 modification dates, the issue is complicated by the creation date also being 0. The solution here is to read the Date header from the email (the sent date, stamped by the sending mail service) and change the modification date to that date. This is also scriptable. You can use this solution for the 2020 files as well, although the creation-date approach is quicker and doesn't rely on the Date headers being correct.

The only other hard thing is finding the problem emails and running the appropriate script.

For the 2020 dates, I used the find command to search for files with modification dates in the future (ie files newer than 'now'). For the 1970 dates, I used find to search for files with creation dates more than 20 years ago (or any large number of years, as emails with creation dates that old will almost certainly be wrong), and then searched for files with modification dates older than 10 years, using the first script for files which may still have correct creation dates but incorrect modification dates.

Googling found these scripts.

The fun parts.

This is the script (Mac only, as it relies on GetFileInfo and SetFile from the Developer Tools) which will fix the file dates, or at least reset the modification date to the creation date:


#!/bin/bash
# Usage: fixdate.sh filenametofix [...]
# This script changes the modified date to the creation date
for file in "$@"; do
        createdDate=$(/usr/bin/GetFileInfo -d "$file")
        /usr/bin/SetFile -m "$createdDate" "$file"
done

This is the perl script which reads the mail's Date header and then sets (touches) the date to match. It has dependencies, which can be installed from CPAN.
From the terminal:


perl -MCPAN -e shell
install File::Touch

This is the script itself:


use strict;
use warnings;
use MIME::Parser;
use MIME::Entity;
use MIME::Body;
use Date::Parse;
use File::Touch;

if( !@ARGV ) {
    die( "No arguments provided.\n" );
}
if( !-d "/var/tmp/set_date" ) {
    system( "mkdir /var/tmp/set_date" );
}
foreach my $arg ( @ARGV ) {
    if( !-e $arg || !-f $arg ) {
        print( STDERR "File $arg not found or not a file\n" );
        next;
    }
    process( $arg );
}

sub process {
    my $file = shift @_;
    print "Processing $file ";
    my $parser = new MIME::Parser;
    # keep the parser's temporary output out of the mailstore
    $parser->output_under( "/var/tmp/set_date" );
    my $entity = $parser->parse_open( $file );
    my $header = $entity->head;
    my $date = $header->get( 'Date' );
    print( "with date $date... " );
    my $time = str2time( $date );
    # no_create: never create the file if it somehow doesn't exist
    my $touch = File::Touch->new( mtime => $time, no_create => 1 );
    if( $touch->touch( $file ) ) {
        print( "ok\n" );
    } else {
        print( "failed.\n" );
    }
}

And finally, these are the find commands I used, running them from the root of the mailstore (or alternatively, on the subfolders of accounts or mail folders which are known to have problems).

For files where you are setting the modified date to the creation date, selecting all future files (everything newer than now):

find . -type f -mtime -0s -exec / {} \;

For mails where you are fixing the date with the perl script, I've chosen files older than 1040 weeks (20 years):

find . -type f -Btime +1040w -exec / {} \;

For files in the past, where the date needs fixing.

find . -type f -mtime +1040w -exec / {} \;

Customising the Guest account in Mac OS X

18/09/11 | by admin [mail] | Categories: Mac OS X

Mac OS X 10.5 introduced the Guest account. The Guest account has three important features: it has no password, its home folder is reset at each login, and it can be managed using Parental Controls.

By default the account is disabled. It can be enabled for service sharing (typically file sharing over a local network), for log in, or both.

The purpose is to provide an account which guests (family, friends etc) can use to get basic access. Parental Controls allow that access to be more finely tuned: for example, restricting access to programs, using the 'simple Finder', controlling and monitoring web access, setting time limits and curfews, and placing some limitations on Mail and iChat.

However, the same underlying technology which allows Parental Controls, MCX or Managed Preferences, allows you far greater customisation possibilities.

The simplest way to control MCX settings for an account is to use an appropriate version of Workgroup Manager, part of the freely downloadable Server Tools from Apple.

Workgroup Manager is typically used for managing Users, Groups, Computers, and Preferences on Mac OS X Server in the context of a network. However, since Mac OS X moved from NetInfo to a local DS node, it has also become a powerful tool for managing accounts on local computers, and since the Guest account is just another local account, it too can be managed.

The first step is to get the Server Tools. They come as part of the Mac OS X Server distribution, or, as mentioned, as a free download. Make sure you get a version which matches your version of Mac OS X. By default, it will install at /Applications/Server

Open up Workgroup Manager. You will be presented with a window asking for the Address, User Name and Password.

The address is the address of the machine whose accounts (DS node or Open Directory) you wish to manage. You can use the IP, DNS name, Bonjour name, a name defined in hosts, or any other name which will resolve. Apple recommends using the Fully Qualified Domain Name. For the local computer, you can use localhost.

The User Name and Password are those of any account with privileges to read the DS node; on the local machine this will typically be any account which is an Administrator.

You may get a message warning that you are working in a local database. This is perfectly ok, and is designed as a warning for Server Administrators to ensure that they are editing the intended directory.

By default, you will just see the standard accounts, and no system accounts. This means you won't be able to see the Guest account.

Go to the View menu, and select "Show System Records". You will now get a much longer list of accounts. You can use the search field, above the list of accounts, to find the Guest Account.

Once you've found the account, select it, and then press the Preferences icon in the toolbar. You will now get a view which allows you to set a wide range of preferences for the account. For example, you can manage the Dock by clicking on Dock. The Dock preferences allow you to set the items which appear in the Dock itself, and also the appearance of the Dock (whether hiding is on or off, the location, size, magnification, minimise effect). There are options to manage each preference Never (ie the preference isn't managed and reverts to the default as defined in the user template), Once (set at the first login, after which the user is allowed to change it) or Always (the user cannot change the preference).

You can also set fine-grained controls on Applications, the look and feel of the Finder, whether the user can access external drives and/or burn CDs/DVDs, connect to network shares, shut down/restart, which Printers they can access and/or manage, Universal Access settings, login Items, Proxy settings, access to System Preferences (eg preventing them from viewing/changing Security or Network settings), and indeed anything which can be set using a plist stored in the user's Preferences folder.

This last item, the ability to import and set plists, is very useful for the Guest account. It allows you, for example, to stop Microsoft Office from running the initial set up script every time you log in to the Guest account (since the Guest account resets at each login, it loses the plists MS Office creates during the initial set up), and, for some applications, to set things like serial numbers.

To set these per application plists, click on the Details tab of Preferences, and then either drag and drop or use the + button to add more plists.

Using Workgroup Manager to set MCX preferences, I was able to create a customisation for the Guest account on local computers which set up an environment where the Dock was set up with the appropriate applications and shortcuts to documents and network shares, where access to System Preferences was restricted to the bare minimum, where default Printers were set, where MS Office 2008 ran with all preferences set, and where FileMaker Pro 11 preferences were set.

The effect was to give a kiosk like environment, where anyone could sit down at a computer, login and have access to what they needed, where access to the critical resources such as Network Shares and FileMaker Pro was still controlled using per user passwords, and where, since the home folder was reset at each login, nothing was stored in the keychain, and no files could be left on the desktop.

Also, since the local DS node stores the account information in a plist, it is fairly trivial to deploy these settings to a large number of computers.
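For instance, copying the configured record to another machine might look like this. The paths are from memory, so verify them on your own system, but the local DS node stores its user records as plists under /var/db/dslocal:

```
# on the configured machine
sudo cp /var/db/dslocal/nodes/Default/users/guest.plist ~/guest.plist
# on each target machine
sudo cp guest.plist /var/db/dslocal/nodes/Default/users/guest.plist
sudo killall DirectoryService    # make the node re-read its records (10.5/10.6)
```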

The album in the digital world

18/09/11 | by admin [mail] | Categories: Conceptual

Following on from an older post exploring "Open Source" in the context of fields outside of programming (e.g. scripts, takes, cuts, raw audio etc for a film), I was thinking about the nature of various media in general. Specifically, though, I'll start with the music recording.

The nature of the traditional music recording has remained fairly static since the first albums of the early 20th century.

Artists go into a studio, record, record, record, and produce a definitive recording (or more realistically the composite of several recorded parts, mixing, production etc). Then, this definitive take is released as a single or part of an album or EP.

The medium is, as is normally the case, defined in large part by the technological constraints that produce it. Where you need to mass produce a physical recording for sale, you want to create a single product to lower your costs. Creating many versions of the same song is not practical in terms of selling.

The effect is that the music recording is its own cultural product, distinct from live music. There are certain cultural expectations and phenomena that come out of hearing the exact same recording of a song over and over: an expectation, for example, that the band will perform the song live as close as possible to the recorded version.

To illustrate this, take the song Baba O'Riley by The Who. The long intro to this song is complex arpeggios produced electronically in a studio, Pete Townshend feeding a synth's output through an arpeggiator. Since the arpeggiator was doing the work, it cannot, in effect, be played live. So The Who use a recording during their live performances.

This has been the same problem for all artists whose music can be produced in a studio, but not easily performed live. In a world where so much music is now more produced than performed, a variety of approaches have been taken, from the extreme of the DJ to innovative steps which have seen performers take the tools of the studio (laptops, samplers, and keyboards both musical and qwerty) on stage.

The interesting thing here is that the work in the studio producing the original sound embodies the essence of a live performance: sound and composition created on the fly, a unique sound for a unique moment. But in an actual live performance, the cultural demand is for that performance to reflect the 'original' recording.

If you've been consuming music over the last 30-odd years, particularly since CDs became commonplace, then you've likely seen 'Deluxe' versions of albums or extended re-releases which do indeed feature more than one take of the original song.

CDs with their longer playing time allow more music to be included on an album, and the cost of 'pressing' a CD is basically the same whether it contains 3 minutes or 81 minutes of audio. So, for a successful band with a back catalog, it starts to make sense to re-release with an extra 20 or 30 minutes, or perhaps a whole second disc. The extra production costs are justified by the extra sales (particularly true of longer lived artists with older and generally more affluent fans).

The CD single starts to be more than an A and B side, and becomes something like an EP, featuring 'remixes' or live versions of the title track, as well as sometimes a 'B side' track.

However, in all this, we are still left with a 'Radio edit' and/or a music video edit which become the definitive version. Albums are still released, and might contain a longer, more 'artistic' version of the track.

The other big underlying artefact that remains with us from the days of celluloid and 78s is that the tracks are still designed more or less to be listened to linearly: one after another. For a single with 5 remixes of the same track, this can sometimes be unpleasant.

But we live in a digital age: portable digital music players are commonplace, the shuffle option is there, and we can download 5 versions of the same song without having to hear them one after the other.

So, why not take the obvious next step? A step beyond the paradigm of the last 80-90 years, and combine the advantages of our new technologies with the traditions of the past.

An album, or the modern single or EP, is in fact metadata. It is a specific collection of discrete tracks, with or without a specified order. If you then abstract this just one or two steps, you can produce an album which specifies not a discrete track, ie a single definitive recording, but a selection from one of several versions, so that at each listening the album is different, as a different version of each track is chosen at random (or by any other method). So, an album might consist of Song A through Song G, but might play one of several versions of Song A, followed by one of several versions of Song B, etc; the order may or may not be specified.
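To make the idea concrete, here's a toy sketch in bash (the file names are invented): each song is a folder holding any number of versions, and each 'listen' of the album queues one version of each song at random.

```shell
# The "album" is just metadata - here, a directory layout.
mkdir -p album/SongA album/SongB
touch album/SongA/studio.mp3 album/SongA/live.mp3
touch album/SongB/album-version.mp3 album/SongB/remix.mp3

# Each listen picks one version of each song at random.
for song in album/*/; do
    versions=( "$song"* )
    echo "queue: ${versions[RANDOM % ${#versions[@]}]}"
done
```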

The recorded experience begins to approach the live experience.

As the single disappears from the physical release of music, yet increasingly becomes the basic unit in which music is sold (in the form of digital downloads) and consumed, we now have a real possibility of changing the way the album exists.

We could have a world in which we can still buy discrete tracks, but in multiple versions. Where once you've bought a single, it can be part of one or several albums.

It might no longer make sense to embed in the metadata of these digital downloads simply the track name, and its album or single name, in the way that they are made to correspond to largely outmoded physical recordings. Metadata describing collections should be separate. If a track exists in your music library, it should be able to belong to multiple collections.

Music marketers could be freed, largely, from making decisions about what will be the definitive version or the album version or the marketing version. Artists could make as many recordings as they like of the same tracks. Producers could experiment, all those underground remixes could be re-legitimised and re-appropriated by the original artists as part of their collections. Consumers could choose just their favourite versions, or in a pay per listen model, more popular versions would create more revenue. Fans could 'collect them all' and have a listening experience that differs greatly from the set in stone model of the physical album.

By disentangling the metadata of the album from marketing and distribution, and embracing the potential that digital delivery gives us, we start to open up to new and unthought of possibilities. As a band continues to record and perform, or other artists produce remixes, the album could grow, perhaps not in length, but in depth. The album or compilation itself could become a new medium. Other people could select the tracks that they think go together, these collections could be distributed separately from the discrete tracks that they describe. The album, as metadata, has become open source.

This is all possible because today's technologies allow for music to be produced far more cheaply, distributed far more easily, and compiled with almost negligible cost.

New Blogs

18/09/11 | by admin [mail] | Categories: Announcements [A]

I'm adding three new blogs today. One on Programming, for all the bits and pieces of programming I come across, another on System Administration, for all the sysadmin stuff I discover day to day, and one dedicated to what is the central purpose of the site: freeing intellectual property.

I'm going to fold the Politics blog into the new Free IP blog, as the material is not so much about Politics in general, but about the political and philosophical considerations of Intellectual Property.

New System Admin Blog

18/09/11 | by admin [mail] | Categories: Mac OS X

I've decided to start a new blog to consolidate my System Adminy type posts. Which makes sense, as this is the kind of post I have the most material for.

There's also likely to be a fair bit of crossover with the other blogs, so I'll see how it goes. I might need to recategorise everything at some point to make better sense of it.

It's likely to contain a lot of bash, unixy, and Mac OS X stuff, as that is what I am doing most of, day to day.

Return a random line from a file with Bash

18/09/11 | by admin [mail] | Categories: Scripting, Bash, System Programming

I wanted to create a randomised signature generator for my email. I created a file with several different signatures, each on one line, and then used this piece of bashery to take a random line from the file.


head -$((${RANDOM} % $(wc -l < ~/file2.txt) + 1)) ~/file2.txt | tail -1 >> ~/file3.txt

What this does is take a random line from a file (file2.txt) and append it to another file (file3.txt).
How it does this is best explained out of order from how it is written.

First, we count the number of lines in the input file (file2.txt) using wc -l.

We then take bash's RANDOM value modulo this line count; with the +1, this gives us a value between 1 and the number of lines in the file.

We feed this into head, the effect of which is to return the first x lines of the file.

Then use tail to return the last line of this chunk of lines.

The last step directs the output to your output file (file3.txt), and appends it.

Caveats - RANDOM % lines can return zero (hence the +1); RANDOM returns values from 0 to 32767 (the non-negative half of a signed 16-bit range), so on very large files lines beyond 32768 can never be selected and the modulo introduces a slight bias; and RANDOM is only pseudo-random.
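For repeated use, the one-liner can be wrapped in a function; this is just a convenience sketch (the file names are invented):

```shell
# random_line FILE - print one randomly chosen line from FILE
random_line() {
    local n
    n=$(wc -l < "$1")
    head -n $(( RANDOM % n + 1 )) "$1" | tail -n 1
}

# Example usage with a throwaway signature file:
printf 'sig one\nsig two\nsig three\n' > /tmp/sigs.txt
random_line /tmp/sigs.txt
```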

Bars n Pips and Toccata

18/09/11 | by admin [mail] | Categories: Amiga Links

