Thursday, March 22, 2007

ARCast video up for Kiwibank Case Study

I've just found out that Ron Jacobs has posted a video interview with me and two members of my team on his ARCast site.

The interview was based on the Kiwibank Case Study we undertook with Microsoft last year. It takes a broad look at how Kiwibank has used technology to help it grow from zero to 12% of NZ's population as customers in 5 years. I speak along with David Grahame and Sushil Kamanahalli: David is the client applications architect and Sushil is the service layer architect.

Special thanks go out to Mark Carroll for helping to organise this and to Ron Jacobs for the interview and the work to put it together.

Fashion at Government House!

What a fantastic night. I had the absolute pleasure of seeing my wife, Miriam Gibson, present her winter collection of women's fashion wear at a charity event held at Government House in Wellington on Tuesday night.

It was magnificent. The show was held in the Ballroom with the Governor-General, His Excellency The Honourable Anand Satyanand, and his wife, Her Excellency Mrs Susan Satyanand, army staff in regalia, members of Rotary and the charitable organisation Refugees as Survivors, and over 200 guests who had come to see the launch of the 2007 winter range and support a good cause.

And I was very, very, very impressed. Miriam, Victoria, Sue, Sarah, Veronica, and all the models - you all did a helluva job!

And then to stand up there, following the Governor-General, the local Rotary head, and representatives of the charity, and give a speech at the podium with microphones, photographers, and press present. Crikey - my recent speaking engagements pale in comparison.

The range was fantastic, so I definitely recommend checking it out: head to the stores in Margaret Rd, Raumati and Hunter St, Wellington, or check out the web site to find out more (pictures from the event are promised over the next few days). Nothing for the guys - this is ladies only. And don't forget the charity - they're a very worthwhile cause in this country, which has been the fortunate recipient of many refugees over the years, including the Polish half of my ancestry...

Saturday, March 17, 2007

Rod Drury on investment for IP

http://www.drury.net.nz/2007/03/17/building-intellectual-property/

Notice the graph that Rod has obtained of normalised patents per country. See how NZ sits at about 0.5 in 22nd place. Finland leads at 4.5, and the OECD average sits at just under 2, roughly four times NZ's figure.

This is a graphic representation of the under-investment in R&D in NZ compared to other countries. This isn't about centralised R&D run out of government, but about the failure of companies to invest in product development. It's great that Rod has pointed this measure out.

The Business, You and Me; Get It Together!

If you've read some of the previous posts you may have realised that I currently work in the IT area of a bank: the chief architect role at Kiwibank, in fact (as those who attended the keynote at the recent Microsoft NZ Tech Briefings, or who have read our Microsoft case study, will know). Prior to Kiwibank I worked for a year at ENZA, before that a few years at Deloitte Consulting, and prior to that I undertook a physics PhD at Victoria University in association with Industrial Research Ltd, a Crown Research Institute.

Each of these organisations has taught me a little bit more about how people work together and what makes us succeed in delivering. Each has also highlighted the precarious and unappreciated position that the shared service line holds, especially the IT shared service line.

During the year I was there, ENZA went from an export-market-focussed apple and pear exporter with producer board status and mandated export control to a grower-focussed commercial entity that retreated from the political environment of Wellington to the safety and security of the grower stronghold of Hastings. As a company it had two strategic directions ahead of it: strive to become a global category specialist, based perhaps in one of the major trading hubs, or become a grower-focussed organisation that showed its value at the farm gate. Retreating was certainly the less risky option, and it was that path that led to the rationalisation with Turners and Growers in a 2003 merger.

At ENZA the (naive) question faced in 2000 was which part of the company represented the future of the business: the export-facing arm or the grower-facing arm. IT at the time was treated as a cost centre run under the finance group, and seeing as the organisation ran SAP it was certainly some cost.

In the middle of 2000, while working at ENZA, I happened to come across a chap at Turners and Growers who explained that they were running a home-grown software suite that was at the end of its tether. Two others and I went and visited them in Auckland, and it was apparent that they had problems. The obvious thought for the three of us was: imagine combining the two organisations and taking advantage of the SAP implementation at ENZA. It would be a great asset, right?

Well, that is indeed what happened. Someone out there saw the synergy: did Tony Gibbs think of this, I wonder? Whoever it was, they certainly knew a thing or two about SAP and IT in general. I note that there's now a customer success story about SAP and Turners and Growers on the SAP web site.

Prior to that I was at Deloitte Consulting, which presented me with opportunities to work in companies and organisations across government, the health sector, and telecommunications. For all the minor gripes many of us had there at the time - long hours, etc. - one thing definitely stands out: the value of good people. I worked with some very good people, and while often thrown in at the deep end we did OK. A small group went on to do especially well - witness Trademe and AMR. Being a consulting group we didn't have much of an IT function: information technology was a core attribute of our service line, and overlaid across the group was a matrix model representing sector and service advocacy. I think it worked well.

In comparison, many of the companies we worked for had well-defined structures with a strong vertical focus on product delivery. You'd walk into these organisations and there were barriers everywhere. Internal development was hardly ever undertaken. Individual business units would occasionally issue RFIs or RFPs, or succumb to the salesmanship of a clever vendor. Work would always proceed on a long-chain approach that ensured the people who understood what was possible never had a chance to really influence the development of new ideas in the organisation - certainly not outside of their immediate business unit.

In these environments you'd always hear the catchphrases: "it's up to the business to decide", or, often from the PMs/BAs, "we have to listen to the business", or the classic "the business wants...".

The depressing thing is that this is more often voiced by the staff of the IT department than by the business units themselves. If people in an IT department don't think they're contributing value then they deserve to be treated as a cost centre and outsourced to the likes of EDS or IBM.

It's a personal mission of mine that my application delivery group does not come out with the same nonsense. Value contribution in a company comes from combining the innovation of those who know what's possible with the people who can advocate for a customer, those who know the financial constraints and tools, and those who can market the products. Any organisation that forgets the value of the combined talent of all its people deserves to lose market value.

Oh, and Industrial Research? A depressing environment of disillusioned scientists with ideas but no knowledge of how to commercialise them...

Thursday, March 15, 2007

Wellington Microsoft Tech Briefing

It was another great event yesterday and it's just fantastic to be seeing so many people. Wellington is my home town so there were plenty (plenty) of faces I recognised in the audience. A big thank you for the opportunity goes out to Mark Carroll, Rebecca, Sean, Dean, Carol and all the others. Going through the short speech I give in the keynote for the second time gave me a chance to think twice about the message I was trying to present.

And it's confirmed in my mind that the main message I want to get across is for people to actively think about opportunities in their organisations, experiment with technologies and tools, and work on marketing any ideas they come up with. That's how to make things happen, and you know, life is too short to be doing dumb stuff when you could be doing cool stuff.

As a great way to finish off the day I got to attend the Microsoft Architects Council meeting at the hotel. I like to attend these events as it's a good chance to catch up with people I don't see every day. We have an active group of people up and down the country that attend these events and the chance to explore ideas is never something to pass up.

Next week is the final Tech Briefing in Christchurch. I can't wait for this as I know by then I'll be wanting to tune the message once more!

Prioritisation: Apples and Oranges

Every company seems to share the ritual of the prioritisation session. It has a common format and a common process.

Each business head gets to voice their opinion on what's most important to them. These are dutifully collated into a master list, and then a discussion takes place to rank one item above another based upon some set of criteria: typically financial, customer experience, and compliance. Finally, the agreed list is circulated for action.

Its value is normally limited because importance is not a good measure for prioritisation.

Importance is a measure of emotional conviction. It is a broad term that can mean many things depending upon the subject. The importance of a programme of work is not the same thing as the importance of an immediate fix, or the importance of a process review, or the importance of addressing a particular risk. In each case the definition of the term differs, and therefore it cannot be used for comparison. It is an accurate measure of emotional response, but little else.

What is the alternative? Perhaps it's better to ask what's the point.

The purpose of the prioritisation session is to allocate scarce resources. Scarcity can only be resolved through a process of trade-off (this is textbook economics). What complicates the task in an organisation are the differing time requirements, resource specialisations, and dependency effects.

We have a limited ability to weigh up the combination of time, resource, specialisation and dependency factors to determine how limited resources can be applied to a range of competing tasks. Our minds have to make best guess estimates and the wider the scope the greater the problem (I bet someone out there can prove this is a power law expansion).
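As a rough back-of-the-envelope sketch of that scaling (my own illustration, not a proof): even if you only consider pairwise trade-offs between competing items, the number of comparisons grows as n(n-1)/2, so doubling the scope roughly quadruples the work.

# Hypothetical illustration: pairwise trade-offs between n competing items
# grow as n(n-1)/2 - quadratic in n, i.e. a power law in the scope.
foreach ($n in 5, 10, 20, 40, 80) {
    "{0,3} items -> {1,5} pairwise trade-offs" -f $n, ($n * ($n - 1) / 2)
}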

Which drives us to smaller delivery teams to reduce the scope of the problem.

So what should the prioritisation session be?

Perhaps to set areas of focus and define the criteria for prioritisation. Little more than that, I suspect.

Tuesday, March 13, 2007

Innovation in the Corporate Environment

This is a hot topic for me. I work in one of these environments and I'm involved in a fairly traditional (these days) role of enterprise architect: nominally responsible for the overall design of systems to ensure they meet business needs, and typically driven more from the perspective of policy and process than the introduction of new ideas. I'm afraid I'm not a very good enterprise architect.

Being on the back foot and not contributing to the ideas that form the basis of many of the commercial opportunities seems quite daft to me.

My aim in life is instead to communicate the opportunities of technology, or in fact anything that comes to mind. You know, I went through university in a rather clueless manner, and it's only now that I see the possibilities of the methods taught to me at the time. There is just so much out there that can help give you an edge. (Wish I'd paid a bit more attention in the lecture rooms....)

Ensuring technology meets business needs is never going to see innovative solutions deployed; it's never going to see solutions applied to problems that aren't yet recognised to exist. How often have we looked around and seen only in retrospect that we missed the ball completely (trust me, in my 40 years it's happened a helluva lot!).

So, yes, innovation is a hot topic for me.

Today I read an article syndicated from some offshoot of The Economist called, innovatively enough, The Economist Newspaper, referring to the demise of traditional R&D and the rise of a new form of directed innovation concentrating on the D aspect. I'm not against this; a lot of the great ideas out there (that I've missed the boat on) have typically been only a couple of years ahead of everyone else's thinking. But the fact that they were ahead proved a significant advantage.

The article began by looking at the output of Vannevar Bush, a gifted thinker in his own right and an advisor to the Roosevelt and later administrations. It was Bush who spearheaded America's implementation of government- and military-funded R&D from the 1940s to the 1970s. "Industry is generally inhibited by preconceived goals, by its own clearly defined standards, and by the constant pressure of commercial necessity," he wrote in 1945. It still rings true today.

But the days of the big labs are gone. Bell Labs has fallen apart, IBM's research labs are far more highly directed now, and Microsoft Research nominally allows free rein but then look at the narrow range of papers on its site. Where's all the research, and what can a smaller company do?

It seems to me that in the mid-size corporate environment (with a few hundred staff) there is one classic failing: the creation of the product delivery chain. You know the one. It starts with the customer on the street, then there's marketing, then BAs, then project teams, and at the end of the line, the implementors.

Nothing driven down such a long chain will be innovative. The people at the end of the line act out a Dilbertesque cartoon, living in perpetual frustration. The customers only get what they ask for, and no more. The nimble, smart companies out there create their own new niches, and the slow ones are left to play catch-up.

I think the secret is to keep team sizes down and allow small teams to experiment with ideas, ensuring that at an overview level there is a process of nurturing and selection. Allowing failure to occur has to be an integral part. "Please fail very quickly - so that you can try again," says Eric Schmidt of Google.

Ground breaking products and processes are always due to the conceptual insights of individuals. So it should be the task of every innovative organisation to provide a mechanism to foster the intellectual output of their staff.

Response Time Distributions, IIS Log Files, and the question of the Missing Events

Over the last year I've been involved in a number of investigations attempting to find bottlenecks in systems consisting of clients, web service hosts, and databases (usually containing application logic in addition to data). The details of each system's implementation are not especially important to this discussion, because what I want to do here is just relate one of my recent experiences measuring response times. You might like to check whether you see similar behaviour.

Firstly, let's describe the typical situation I find myself looking at. It's very generic, so I'm sure the same thing will apply to you: I usually have some client systems accessing a service layer hosted in IIS 6, which in turn talks to a database server (usually with significant embedded application logic).





The Problem
The problem (or my lesson in this case) is how to interpret the numbers you get from the IIS log files on the web service hosts. These files give you the HTTP request duration and the arrival time of the request. Now what happens when you naively plot a distribution of the duration (i.e. request execution time)? You might expect a nice symmetrical peak centred on some value, but instead you might get something like the following.


I found that I was consistently getting this sort of shape across different types of requests. This isn't really all that unexpected. Each URI in the log file corresponds to a web service against which a number of web methods may be called. The web methods may have significantly different response times, so the graph for the service call is really just a summation of all the individual web method calls. We've now implemented duration timing on each individual web method call, and we do indeed get a much simpler distribution centred around one peak.
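If you want to reproduce that sort of bucketing straight from the raw logs, here's a rough PowerShell sketch. The field positions are assumptions about my particular log layout (check the #Fields: header line in your own files), the file name is just an example, and the 100ms bin width is arbitrary:

# Rough sketch: histogram of time-taken per .asmx service, in 100ms buckets.
$bins = @{}
Get-Content ex070223.log | Where-Object { $_ -match "\.asmx" } | ForEach-Object {
    $fields = $_.Split(" ")
    $uri = $fields[4]                          # cs-uri-stem - an assumed field position
    $ms  = [int]$fields[$fields.Length - 1]    # time-taken is the last field in my logs
    $bucket = [math]::Floor($ms / 100) * 100   # 100ms-wide bins
    $key = "{0} {1,7}ms" -f $uri, $bucket
    $bins[$key] = 1 + $bins[$key]
}
$bins.GetEnumerator() | Sort-Object Name | Format-Table Name, Value -AutoSize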

Anyhow, while I was looking into the double peak I decided to look at the arrival rate and compare that to the duration of the call. Theoretically you should get a Poisson distribution for random, uncorrelated arrivals, and on those occasions when there were many simultaneous arrivals you'd also expect the response time to slow down (although whether that relationship is linear or non-linear is another question).
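As an aside, the Poisson probabilities P(k) = e^(-lambda) * lambda^k / k! for a given mean arrival rate are easy to tabulate, so you can compare them against whatever histogram falls out of the logs. This is just a quick sketch of mine, and the rate is only a placeholder:

# Tabulate Poisson probabilities P(k) for a placeholder mean arrival rate.
$lambda = 4                       # requests/second - a placeholder, not a measurement
$p = [math]::Exp(-$lambda)        # P(0)
for ($k = 0; $k -le 12; $k++) {
    "{0,2} arrivals/s : {1:P2}" -f $k, $p
    $p = $p * $lambda / ($k + 1)  # P(k+1) = P(k) * lambda / (k + 1)
}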

So, I looked for a period of time during which we had fairly constant activity, and chose 2 hours in the middle of the day.



You can see that there's a nice distribution with the expected shape centred on 4 arrivals/second. Of course, the IIS log files only record data when a request actually arrives. Looking at the bar graph you'd therefore naively expect about 300 one-second intervals over the two-hour period during which no requests arrived.
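The counting itself is straightforward to reproduce from the raw log if you want to check your own systems. In this sketch the two-hour window boundaries are placeholders, the field positions are assumptions about my log layout, and remember that W3C extended logs record times in UTC:

# Count arrivals per one-second bucket over a fixed (placeholder) two-hour window.
$start = [datetime]"2007-02-23 11:00:00"            # placeholder window boundaries
$end   = [datetime]"2007-02-23 13:00:00"
$perSecond = @{}
Get-Content ex070223.log | Where-Object { $_ -match "\.asmx" } | ForEach-Object {
    $fields = $_.Split(" ")
    $t = [datetime]($fields[0] + " " + $fields[1])  # date and time are the first two fields
    if ($t -ge $start -and $t -lt $end) {
        $perSecond[$t.ToString("HH:mm:ss")] += 1
    }
}
$total = ($end - $start).TotalSeconds
"Seconds with no arrivals at all: " + ($total - $perSecond.Count) + " of " + $total

# Histogram of arrivals/second (the zero bucket is the line above):
$perSecond.Values | Group-Object | Sort-Object { [int]$_.Name } | Format-Table Name, Count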

And this is where I got a surprising result.

So, it appears that the IIS HTTP arrival time is not accurate. I think that what I'm seeing here is that IIS is already queuing the requests up for processing - presumably because processing downstream is taking too long.

If you get anything like this, have seen it before, or have a bit more knowledge of what is going on I'd love to hear about it.

Monday, March 12, 2007

PowerShell versus Perl

I've just been preparing some data for a post which I've been meaning to put up for a few months now. The data comes from an IIS log file and I need to pull out of it the time of the HTTP request, the HTTP service (uri-stem), and the duration of the request (time-taken). In the past I've always used Perl for this sort of task taking advantage of the regular expression syntax to extract my chosen data elements. For a 50MB sample log file (all logging options turned on) this takes approximately 12 seconds.

The Perl I've just used to test this follows:


use File::DosGlob 'glob';
use File::DosGlob 'GLOBAL_glob';

# Process every IIS log file in the current directory.
@logfiles = glob "ex*.log";

for my $logfile (@logfiles) {

    open(INFILE, "$logfile");
    $logfile =~ s/ex//g;    # name the output after the log file, minus the "ex" prefix

    open(OUTFILE, ">$logfile");

    # Pull out the timestamp, the .asmx service name, and the trailing time-taken field.
    while (<INFILE>) {
        if (m"^(\d\d\d\d-\d\d-\d\d\s\d\d:\d\d:\d\d).*/(.*\.asmx).*\s(\d+)$") {
            $file = lc($2);
            print OUTFILE "$1\t$file\t$3\n";
        }
    }
    close(INFILE);
    close(OUTFILE);
}


Now, since I've recently started using PowerShell to extract and manipulate data for analysis, I thought I'd also try the same thing with that. Note that I'm just a beginner at this, so I could be doing it the wrong way, but here's what I tried:


 
Get-Content ex070223.log |
  foreach-object {
    if ($_ -match "(?<occured>^\d\d\d\d-\d\d-\d\d\s\d\d:\d\d:\d\d).*/(?<service>.*\.asmx).*\s(?<duration>\d+$)") {
      $matches["occured"] + "," + $matches["service"] + "," + $matches["duration"]
    }
  }

The regular expression syntax is very powerful, and I like the named matches - I guess this is a straight .NET runtime feature, but I'm easily impressed. However, the time it takes to complete is abominable! It took 20 minutes, where the Perl program took 12 seconds.
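For what it's worth, here's a sketch of one way I might try to claw back some of that time, assuming (and it is only an assumption) that the cost is mostly the per-line pipeline and script block overhead rather than the regex itself: read the whole file in one hit and use a precompiled .NET Regex.

# Read the file in one go and match with a precompiled .NET regex instead of
# pushing every line through the pipeline individually.
$pattern = '(?<occured>^\d\d\d\d-\d\d-\d\d\s\d\d:\d\d:\d\d).*/(?<service>.*\.asmx).*\s(?<duration>\d+)$'
$options = [System.Text.RegularExpressions.RegexOptions]::Compiled
$re = New-Object System.Text.RegularExpressions.Regex $pattern, $options
# Note: .NET resolves relative paths against the process working directory.
foreach ($line in [System.IO.File]::ReadAllLines("ex070223.log")) {
    $m = $re.Match($line)
    if ($m.Success) {
        $m.Groups["occured"].Value + "," + $m.Groups["service"].Value + "," + $m.Groups["duration"].Value
    }
}

I haven't benchmarked this against the 12-second Perl run, so treat it as a starting point rather than a claim.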

Still, the power available from the command line is impressive...

Wednesday, March 07, 2007

Gadgets

I'm having a relaxing evening now after an exciting day mostly spent in Auckland at a Microsoft Tech Briefing. It was a great experience for me as I had the opportunity to be part of the keynote speech. I happen to work for an up-and-coming NZ bank, and I was able to present some of the experiences I've gained from my time there - from being a member of a 20-something project team in 2001, before the bank's establishment, to the present day: an organisation of 700 with a six-month profit of $11m.

The experience of being part of the keynote was enlightening in itself. The nervous energy beforehand, the videos, sound, lighting, the 900+ people in the room, being part of a team that consisted of some much brighter people than I - it was great. And I still have Wellington and Christchurch to come over the next two weeks! What a buzz! I've never spoken to so many people before. For better or worse, the nearest thing to it in my memory is being 14 and being asked to recite a poem at my uncle's funeral. He was a very popular person and the church was packed. As sad as the event was, I still remember the energy of the occasion clearly.

As for the event itself (and the reason for this posting), it cemented in my mind the importance of Vista's new gadgets. Jeremy Boyd of Mindscape demonstrated a remarkable gadget for the approval of Vista DreamScene videos on a community site (I can't find it - maybe they haven't put it live on the net?). This particular gadget used the capabilities of WCF to securely connect to a service and perform an operation. Working as enterprise architect at a bank, you can imagine my interest. I've been harping on about this at work for a while now, and recently noticed a blog post from someone I've met who used to work as a banking consultant at the Microsoft Sydney office (James Gardner). It's just a matter of time... will we get there first?

The question of who gets there first was actually one of the main themes of my presentation: looking at the ways in which innovation can occur. Looking back on my work experience, I can now see how large corporates fail so often at delivering innovation. They start off nimble and quick and then slow to a near grinding halt. Achieving change becomes increasingly difficult because of the burden of process and the competition between people. Does it have to be this way?

I'm sure I'm going to post more on this, but it seems to me that there's a lot to be learnt from the field of R&D in traditional high-risk, high-reward industries and from the application of R&D to service-based industries. I know the potential margins haven't traditionally been seen as high enough to counter the cost, but I think the time is right. I believe the risk-reward matrix increasingly favours small experimental developments to highlight problem domains and visualise potential solutions. If we just manage to do those two things we'll be making downstream project delivery so much better (let alone considering the commercial benefits).

I strongly feel that the opportunities for applying technology are stronger today than ever before. It's the technologists who are currently creating the business models of tomorrow, not the business school graduates.

In the meantime, I recommend following JB, JD, and Andrew at Mindscape - they are a clever bunch of guys. I'm sure they'll go far.