Friday, December 07, 2007

barcamp agile Wellington and the power of the individual

http://barcamp.org/BarCampAgileWellington

...is underway and it's proving more thought-provoking than I'd assumed. The development process isn't something I'm directly focused on in my everyday work, but it's certainly an important area for me. The biggest theme striking me today is the focus on the individual, which I find fascinating.

There was a trend for a while that worked hard to deliver results through process. Methodology x, prescriptive guidance y. The dehumanisation of the worker.

It just doesn't work that way - we're human after all. It really is all about individuals. Teams, yeah, sure - but every member of that team is an individual with ideas and beliefs. You either recognise that and work with it, or you ignore it at your peril.

I love the blogging world today and I love the way guru characters have appeared who lead through the posting of their ideas. They're enormously important in driving company direction and in earning respect and appreciation from those who watch and listen. Companies such as Google, IBM, Microsoft and Sun all have these people, but there are plenty of companies that don't yet: big companies, large consulting companies and product companies. I have no faith in their ideas, direction, or products - not until I see a passionate, articulate voice from someone I trust.

Friday, November 23, 2007

Schneier on Security: More "War on the Unexpected"

One can only hope that the increasing information transparency we see occurring on the Internet and in modern computing technology continues...

Schneier on Security: More "War on the Unexpected"

Thursday, October 18, 2007

Wellington Weather...

...is incomprehensible!

Yesterday morning I awoke to howling westerlies and heavy rain in Paraparaumu, then exited the train in Wellington (50km distance) to a warm, dry, windy day with the sun shining!

This morning I awake to incredibly heavy rain, take the train to Wellington and find it cold with a strong damp southerly.

What will it be tomorrow? Heck, what will it be tonight?

Living here you know that you just don't know. Looking on the bright side, at least we have something to talk about.

And as for tonight, best advice I can give is follow the MetVUW site (and keep looking out the window). Based upon the forecast for 7pm, I'm guessing it may have cleared up again!

Tuesday, October 16, 2007

Heat Mapping

I've recently had the pleasure of uncovering some new data visualisation (visualization for you Americans) capabilities, in particular heat mapping a measure that's a function of two input variables.

Generating eye-catching representations of information is crucial in the corporate world, and yet for the most part, business data visualisation is very poor. Back in the 90s I was busy with research for a physics PhD and I lived by gnuplot and a Windows package called Origin. In transitioning into the business world I also transitioned into Excel. I've enjoyed using Excel and I'm a great fan of Excel 2007, especially the data mining tools that Microsoft now provide with it, but the problem with the graphs is that everyone else at work generates graphs that look, well, exactly the same. So I thought I'd go and take a look at what else is now available.

First off I discovered that gnuplot and Origin still exist and their capabilities have advanced far beyond what I used when I produced a thesis on a 386 Linux PC with... was it 120MB of disk? But then I discovered Python, the scipy (scientific Python) and numpy (numeric Python) libraries, and the MatPlotLib library. These looked especially useful because of the image mapping capability, which can be used to generate a heat map.

Working with a range of people with different backgrounds in a corporate environment, you're constantly trying to re-represent information in different ways to achieve some connection with your audience. For some people an Excel bar chart works great, for others scatter plots, for others error bars, etc. So you want a range of graphing tools available. Now Excel is good, especially Excel 2007, but there's still a range of visualisations that it can't perform, especially image mapping or heat mapping and some 3D plot types.

Now MatPlotLib can help to fill some of the void, particularly with the image mapping. Check the screenshots to see a range of examples of image (heat) maps and polar plots mixed along with more conventional 2D plot types.

In one example I looked at the pattern of customer usage through our Internet channel as a function of customer age and their total relationship with the company. To do this with MatPlotLib I first collected the data from our database environment, then used Excel to manipulate the data into an array format (the same format as an Excel surface plot) and saved it to a file. I then loaded it into a Python array and used the imshow() command to generate a heat map. I regenerated the graph over successive half-year intervals, keeping the colour range set to run over a constant interval, then combined the images into a movie with Windows Movie Maker (you can also do this with command line tools such as mencoder, which is part of the open-source MPlayer suite).
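For illustration, here's a minimal sketch of the Python step, assuming the Excel-manipulated data has already been saved as a plain-text 2D array (the file name, value range, and axis labels below are hypothetical placeholders, not the real data):

import numpy as np
import matplotlib.pyplot as plt

# Hypothetical export from Excel: a 2D array where rows are customer age bands,
# columns are total-relationship bands, and values are Internet channel activity.
data = np.loadtxt("activity_2007H1.txt")

fig, ax = plt.subplots()
# imshow() renders the array as an image (heat map). Fixing vmin/vmax keeps the
# colour scale constant across successive half-year frames so they can later be
# stitched into a movie; interpolation smooths the display over sparse cells.
im = ax.imshow(data, interpolation="bilinear", vmin=0, vmax=100, origin="lower")
ax.set_xlabel("Total relationship band")
ax.set_ylabel("Customer age band")
fig.colorbar(im, label="Internet channel activity")
fig.savefig("activity_2007H1.png", dpi=150)

Each half-year file produces one frame; the fixed colour range is what keeps successive frames comparable.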

The output of this exercise was a movie that showed customer activity through the Internet increasing over time across all age ranges but accentuated by the degree of total transactional relationship they had with the company.

One of the biggest problems you face in generating heat maps/image plots is getting a full range of data. If the value you want to plot is a function of x and y, and you're not actually taking measurements but relying instead on historical data, you'll probably end up with a lot of missing data points. The imshow() command interpolates data points to smooth over the holes.

The easiest way to get MatPlotLib on your PC/laptop is to install the standard Python distribution and then install the IPython shell over the top. IPython provides you with a scipy item in your programs list, which gives you a Python command prompt with all the necessary libraries pre-loaded.

If you're interested in other visualisation tools then I've since found a good background reference at IBM: http://www.ibm.com/developerworks/linux/library/l-datavistools/. It says it's for Linux, but in reality most of this software runs on Windows (and in fact if you look at the download numbers for some of the packages, the majority of downloads are for Windows).

Thursday, September 06, 2007

The Business Shutdown Statement

http://www.andredurand.com/2007/05/10.html#a689

Not a day goes by that I don't get the very same. As if I'm not thinking about the business!

Friday, August 24, 2007

The Value Of An Employee

Some recent project activity I've been involved with has reminded me of the all too commonly occurring scenario that plays out in corporate offices up and down the land each day: the relegation of the employee to support while the contractors populate the projects. (Disclaimer: I've been a contract resource, an employee, a project manager, and a team manager, so I've covered pretty much all the bases...)

Now, it never ceases to amaze me that organisations persist in driving project delivery with their contract staff in preference to their corporate staff. The usual argument is that it allows better management of resources and budget for asset development, enabling easy capitalisation of time, depreciation of asset cost, and flexible staffing levels with project load. But consider this.

  • How will your employees feel if they're forever stuck on support and bug fixing while the glory of new project delivery (and the happy celebrations on project completion) are forever the domain of your contractors?
  • How will you stop your people believing that your employees are second rate staff to the contractors you bring on for the big projects?
  • How do you successfully manage operation of the project deliverables when the operational staff were not involved in the project development?
  • How do you continue long term development and idea innovation on a system produced by one team and operated by another?
  • And just how many projects really do deliver the value originally envisaged in the business case? Wouldn't it be better to manage the cost up front, knowing you can capitalise after the fact?

If you're building a valuable asset then you should be building a commensurately valuable human structure to continue development over time, not just hand over and dash off to the next engagement.

It's a naive view that sees a complex system developed, deployed and left alone. Again, anything valuable at a point in time needs to change to remain valuable in response to a changing environment. It's simply ridiculous to imagine a complex system can be simply bounded by the original project definition.

Relegating employees to operational activity because of concerns over managing budget is a strategic error for any company.

Thursday, August 16, 2007

Richard Rumelt on making choices and Disruptive Innovation

 

Thanks go to the McKinsey Quarterly Journal for this great interview. You'll need to register (it's free) to read the original article.

Now the title of this post isn't exactly what Rumelt wrote but the theme runs strongly through the interview.

While responding to questions on the nature of corporate strategy Rumelt gives the example of the resurgence of Apple (and really Steven Jobs) through the iPod:

"Jobs didn't give me a doorknob-polishing answer. He didn't say, 'We're cutting costs and we're making alliances.' He was waiting until the right moment for that predatory leap, which for him was Pixar and then, in an even bigger way, the iPod. That very predatory approach of leaping through the window of opportunity and staying focused on those big wins - not on maintenance activities - is what distinguishes a real entrepreneurial strategy."

And,

"Enter Jobs. He was perfectly positioned because he was a bit of an insider in the entertainment industry but didn't have any of those asset positions that were being threatened. He didn't need to make a fantastic leap of imagination into the far future. He found a set of ideas that needed to be quickly and decisively acted upon."

Two other good points in the interview are the power of writing down thoughts in sentences over bullet points and the concept of value denial.

Computerworld Article Response

Well, I was quoted in Computerworld yesterday (http://computerworld.co.nz/news.nsf/tech/DABFB2798D9A6E50CC257336007EAB04) as a result of statements made during my voice-of-the-customer track presentation on Kiwibank at TechEd 07 in Auckland. During the presentation I made the point that platform infrastructure is an important enabling factor in exploiting future opportunities, and that at Kiwibank we were embarking on a desktop upgrade, initially to XP and at a later date to Vista.

During question time at the end of the presentation I was asked to explain in more detail why we were not going directly from Windows 2000 to Vista. The answer is simply that we need to complete our natural desktop replacement to ensure people get reasonable performance. This is underway and, from my perspective, being mostly interested in application deployment, it is not a great inhibitor to future progress. Windows XP does provide organisations with the ability to deploy the .NET 3.0 components for presentation, workflow management, and communication, and while I'm sure that from an infrastructure perspective there are many good reasons to deploy Vista for improved management, I'm satisfied with the operating system for application hosting.

The remainder of the article referred to some of the more interesting ways in which we're trying to take advantage of a range of new technologies within the bank; if I had had time in the presentation I could've shown many more.

Tuesday, August 14, 2007

New Web

These notes come from Michael Platt's Web 2.0 presentation at TechEd07 Auckland.

Why TCP?
Why HTTP?

Why not UDP and BitTorrent...
This follows on from the SAF06 event and Bill Gates talk. Current web protocols were designed with low bandwidth environments in mind; now there is a high bandwidth environment and the time is right for a disruptive innovation. A new read/write peer to peer protocol could easily supplant HTTP with the PUT/POST/GET verbs.

If we do this then REST could provide a model for a new implementation.

Basic support available in WCF1.0 but it's native in Orcas.

Question could be what REST support exists in Silverlight?

Interesting point raised on process management and the Robotics SDK which contains graphical process designer. In a bidirectional web environment you need process support.

Monday, August 13, 2007

TechEd 2007 Auckland Presentation

Just finished! I can never tell in advance how the presentation will go until it's performed in front of a live audience. It's only then that you can gauge the feedback and know which areas of the presentation were actually important to them.

In this particular case I think the following areas stood out:
  • Innovation is not the result of projects
  • My rant on incubators never succeeding
  • The requirement to sell your ideas internally
  • The need to think ahead two, three or more years and use that to build your strategy for your technology development

Big thanks go out to Lou Carbone for his keynote - the best keynote speech I've seen - and I took a hint and used my time before the presentation to add some pictures; they speak a thousand words.

Main message from Lou's speech was that emotional clues are everywhere; and you're daft if you don't take heed.

Friday, June 01, 2007

Surfacing a new Internet

The message has been clear for months; the internet has been limited by protocols designed in 1994 and we're now on the verge of a new electronic world. I watched Bill Gates give this message last December at the Strategic Architects Forum in Seattle and it's now becoming very clear what he had in mind.

First the release of the Silverlight beta with vector graphics; then the Silverlight alpha with a downloadable sandbox; and now Surface. I've been telling people in my industry ever since that the world will be dramatically different in 5 years time. I don't think it's been well understood but maybe now it will be different.

When you think of a core Internet application such as Internet banking, you realise that a customer's expectations of what Internet banking is are based upon ten years of experience at most. Five years from now this experience will be very different. Silverlight and Surface will enable whole new styles of user interaction. Interacting with the bank becomes a flick of the wrist!

Now, one last memory from that conference in Seattle: the statement that the Internet is increasingly becoming a mirror of reality. Seems more accurate every day...

Monday, May 28, 2007

Almost Forgotten

A few weeks ago it was ANZAC Day in NZ; a day of remembrance originally defined by the failed WW1 landings at Gallipoli, but now extended to encompass those Australians and New Zealanders injured or lost during all the wars since. It's a day also celebrated in Turkey for the creation of a secular state: for one, a disastrous invasion; for the other, the rise of a modern state.

Now, I usually find these days disconcerting; there seems too great a yearning for days past, as though something about those historic events made us better people, proved our mettle, gave perhaps a reason to celebrate our forefathers' heroic achievements. It seems a dangerous thing.

Now that might be true, but on this occasion I think I've found something else that warrants mentioning.

My family background on my father's side is Polish. That side of my family came to NZ as refugees in the 1940s and 1950s via the Soviet Union, Persia, and Palestine. Of the 1.5 million Poles deported by Stalin in 1939/1940 to Siberian labour camps, only about 700 made their way to start new lives in NZ. The original arrivals consisted of orphaned kids, followed after the war, thanks to the Red Cross, by those close family members who managed to find their children or siblings. It was by this mechanism that what remained of my Polish ancestry came together in NZ.

The impact upon families of the trip from the labour camps was severe, with many dying along the way, including members of my father's family. The men of fighting age went into the army, navy or airforce, variously split between those fighting for the West and those fighting for the Soviets.

I recently came across a web site which attempts to document some of the Polish forces during the war, a force which included members of my family. I know that the internet is unreliable but the site seems to confirm what I heard from my family as I grew up.

The Polish forces were a major part of WW2 and yet also now largely forgotten.

The Soviet-controlled Polish army fought through to the battle for Berlin and put a Polish flag on the Brandenburg Gate next to the Soviet flags, albeit temporarily, as it was rapidly taken down. They numbered 396,000 through the war with 23,000 killed or missing in action.

The British controlled forces numbered 255,000 across airforce, army, and navy with 13,000 killed or missing in action.

This means that, ignoring the initial invasion of Poland and the 15,000 officers believed killed by the Soviets in camps in 1940, Poland still fielded a combined total of approximately 650,000 servicemen through WWII, making it the 4th largest Allied armed force.

Comparing the casualty rate with that suffered by the US forces during the war is enlightening: 5% of those fighting in the West were killed or missing; 6% of those fighting in the East were killed or missing. As many Poles fought in WW2 under British and Soviet command as did US Marines, but they suffered twice the casualty rate (http://www.usmm.org/casualty.html). And that's excluding the initial invasion of Poland by Germany and the Soviet Union in 1939.

Yet you'll never hear that mentioned on the History channel!

With Yalta and Stalin's desire to incorporate Poland into the Eastern Bloc, the fate of those two armies was effectively sealed. In 1946 the British command demobilised the Polish army. They were not allowed to join the London victory parade and, according to an interview on the Polish Soldier website, they were given identity cards with "Enemy Alien" written on them.

In the case of my family, three members were mobilised into the army, navy and cadets. All survived and ended up coming to live in NZ, a country that interestingly fought alongside the Polish 2nd Corps throughout North Africa and Italy.

I think it's about time that the efforts of these forgotten soldiers were better brought to light.

SQL Server 2005 Data Mining Classification Matrix

Finally figured it out; so easy, and yet so poorly described in the documentation. The classification matrix tab in SQL Server 2005 data mining shows you the accuracy of your model's predictions. Figuring out how to use it had me confused for quite some time. If you're as daft as me then perhaps the following might help.

The easiest way to discover how to correctly read the classification matrix is to generate a table from your testing data set (you do split your original data into training and test data sets, don't you...!?) with the ID, the actual outcome you're trying to predict, and the predicted value. You can do this in the Mining Model Prediction tab. It's easier there because the option to save to a table in your database is accessible from the little disk icon in the top left corner. You choose the ID, the actual value, and the predicted value in the bottom half of the screen. The first time I looked at this it was a bit of a mystery how to drive that half of the screen; I'm guessing you've mastered that part.

Load the table that is saved from your mining query into SQL Server's query window and count up the number of entries that are correctly predicted, and the number of actual values for each state of the actual attribute. You'll then see how it maps to the classification table. Basically, the succinct description at the top of the table is correct: columns correspond to actuals; rows to predicted values. The bit they leave out that would've been useful to me is that you get the total number of cases of any one actual value by summing vertically, and the total number of predicted cases of a value by summing across the row.
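If it's easier to sanity-check outside SQL Server, here's a rough equivalent in Python with pandas, assuming you've exported the prediction table to a CSV (the file and column names here are hypothetical); the crosstab reproduces the classification matrix described above:

import pandas as pd

# Hypothetical export of the prediction table: one row per test case with the
# actual value and the value predicted by the mining model.
df = pd.read_csv("predictions.csv")  # columns: CaseID, Actual, Predicted

# Rows correspond to predicted values, columns to actual values - the same
# layout as the classification matrix.
matrix = pd.crosstab(df["Predicted"], df["Actual"])
print(matrix)

# Summing vertically (per column) gives the total cases for each actual value;
# summing across a row gives the total cases predicted as that value.
print(matrix.sum(axis=0))
print(matrix.sum(axis=1))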

Thursday, April 19, 2007

Singleton DMX

If you've used DMX to generate predictions from a data mining model you will be familiar with the singleton query. This is where you obtain a prediction for a particular set of input attributes; either within a case table or in a nested table. There was an example of one of these in a previous post.

Now the form of that query was:

select prediction
from data mining model
prediction join
(select (select something as nested row 
          union select somethingelse as nested row etc) as nestedtable)
as t

Now, what I wanted to do was use openquery() to return all the nested rows, but I did not have an explicit case. How do you make it work?

After bugging the team on the www.sqlserverdatamining.com web site in the newsgroup section the answer was obvious. Just go ahead and create a dummy case. A sample query that works for one of my models follows.

 

SELECT
    Predict([Model_NaiveBayes].[Bucket]),
    PredictProbability([Model_NaiveBayes].[Bucket])
From [Model_NaiveBayes]
NATURAL PREDICTION JOIN
SHAPE { OPENQUERY(Test, 'SELECT 1 as CaseKey') }
APPEND (
    { OPENQUERY(Test,'
    select 1 as ForeignKey, Term
    from Terms
    cross apply Matches(''some long discourse containing many terms that I want to characterise'',''\b('' + Term + '')\b'')')
    } RELATE [CaseKey] TO [ForeignKey]
) AS [Msg Term Vectors]
AS T

In this case it uses the Matches TVF which I describe in a previous post to identify the terms in the text.

Wednesday, April 18, 2007

Exposing the Regular Expression Match Collection to SQL Server as a Table-Value Function

For a recent pet project I've been attempting to create a text mining model in SQL Server 2005 to analyse incoming messages and automatically bucket them into one of a number of categories. This follows straight on from the text mining example contained in the SQL Server tutorials.

With the model implemented (using Decision Trees, Naive Bayes etc) it's easy to create a singleton query by hand that has the following form:

 

SELECT
[Model_NB].[Bucket],
TopCount(PredictHistogram([Bucket]), $AdjustedProbability, 3)
From
[Model_NB]
NATURAL PREDICTION JOIN
(SELECT (SELECT 'some defining term' AS [Term]
UNION SELECT 'another identifying noun or phrase' AS [Term]) AS [Msg Term Vectors]) AS t

But I still needed to extract the identifying noun phrases that make up the terms. Given a dictionary of terms and a length of freeform text, how do you find all the term occurrences?

Using the SQL Server string functions is painful so I thought I'd try the Match Collection object in the CLR. To expose this you need to perform the following operations.

Firstly, enable CLR integration with

EXEC sp_configure 'clr enabled', '1'
RECONFIGURE -- apply the configuration change

Then create a SQL Server function in .NET to expose the MatchCollection, e.g.

using System;
using System.Data;
using System.Data.SqlClient;
using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;
using System.Text.RegularExpressions;
using System.Collections;
public partial class UserDefinedFunctions
{
     [Microsoft.SqlServer.Server.SqlFunction (FillRowMethodName="RowFiller", TableDefinition="Match NVARCHAR(MAX)")]
     public static IEnumerable Matches(String text, String pattern)
    {
        MatchCollection mc = Regex.Matches(text, pattern);
        return mc;
    }
    // this gets called once each time the framework calls the iterator on the underlying matchcollection
    public static void RowFiller(object row, out string MatchTerm) {
        Match m1 = (Match)row;
        MatchTerm = m1.Value;
    }
};

Then deploy using Visual Studio, or if you want to do it manually try:

CREATE ASSEMBLY MatchesAssembly FROM 'c:\somewhere\some.dll' WITH PERMISSION_SET = SAFE

CREATE FUNCTION Matches(@text NVARCHAR(MAX), @term NVARCHAR(MAX))
RETURNS TABLE
(Match NVARCHAR(MAX))
AS
EXTERNAL NAME MatchesAssembly.UserDefinedFunctions.Matches

And voila, you can do the following...

select *

from terms cross apply Matches(@text,'\b(' + terms.term + ')\b')

Where terms is a table of noun phrases you're searching for in the variable @text that contains the message text.

Tuesday, April 10, 2007

Measuring how quickly we learn from our mistakes

For 10 years I've heard how you need to learn from your mistakes and it seems quite a reasonable mantra. But the problem I've got with learning from mistakes is how quickly you learn.

Some people and some organisations seem to learn really quickly, others, more like myself, take a while longer, others take the whole of their existence.

So the question I've got is: how do you measure the responsiveness of someone, or some organisation, in learning from their mistakes?

What metric could we use to tune performance?

Not sure, but maybe we can think something up.

Sunday, April 08, 2007

Whales off the Beach

Wonderful sight - whales off the beach, maybe 300m from shore on a beautiful still, sunny day. Not sure what type, but I don't think they were Orcas - they didn't seem large enough. The shame of it is that if it weren't for the cold I've had for the last few days, my plan was to be out kayaking this morning. Wouldn't that have been amazing!

Oh, perhaps a bigger shame, especially from the point of view of the whales, was the idiot out there going round and round in circles on a jetski. Confirms every opinion I've ever had of them.

Update: Talking to someone this morning I found out they were indeed Orcas. Now I really wish I'd been up to going out on the kayak.

Friday, April 06, 2007

The value of a demonstrable prototype

I've just been reading an interview by Ron Jacobs with Scott Guthrie; it's up on The Microsoft Architect Journal (http://msdn2.microsoft.com/en-us/library/bb266332.aspx).

What's caught my attention is the importance Scott put on having a demonstrable prototype of their ASP.Net technology early in the development. "There were three or four of us" and yet they managed to create one of the key product offerings of one of the world's largest companies. A truly remarkable outcome.

Now I'm sure that the team itself was extraordinary, but still I bet in many companies they would have been driven into the ground under the weight of the stakeholders. The more people involved, the more diluted the good ideas become and the greater the effort that needs to be put into selling ideas and managing the stakeholders. The processes companies typically put into place to manage new delivery put a stranglehold on true innovation.


'...That's one of the successful things that we did with ASP.NET. We said that we're
going to throw away every line of code we're going to write for the next couple
of months. Let's all agree on that. We're not going to say, "Oh let's take this
and adapt it; we can clean it up." No. We're going to throw it away. We're going
to "deltree" this subdirectory at some point, and that way we can be more
adventurous about trying new things. We don't have to worry about making sure
that everything's robust because it's going to be in the final version.


We actually did that for a few months and said, "We're done, delete it; let's start
over from scratch; now let's write the full production code and make sure we
bake in quality at the time." I think a lot of teams could benefit from that.
The hardest thing is making sure you delete the prototype code. Too often,
projects develop with "Well, it's kind of close." It's very difficult to start
with a prototype and make it robust. I'm a firm believer in starting with a
prototype phase and then deleting it.
'

The typical corporate company approach would not have sustained this type of development.

It would take an incredibly gifted individual to convince a steering committee to support expenditure on experimentation when little or nothing could be easily capitalised at the end of a 3-month development. Most companies want functional delivery early, with a clear line of connection from end-user requirements to final deliverable.

Scott talks specifically about the value of a prototype to help sell the idea.

"We certainly had to persuade a number of people along the way. One thing we
did early in the project was get running code in prototypes that we could show
people. Often when you're working on a project that's new or something that
hasn't been done before, it's easy to put together a bunch of PowerPoint slides
that sound good, but it's especially valuable to actually show code and walk
people through code that's running. Not only does the prototype prove that it's
real, but also you just learn a terrific amount by doing it.
"

Perhaps their greatest achievement was the successful marketing of the development from initial concept through to the current deliverables. The early prototyping and demonstrations were obviously a critical part of this achievement.

At the end of the day all of this makes great fodder for the creation of a myth list: the My Myth List. My first two myths are going to be:

  1. End users know what they require
  2. Product delivery is a linear process

Because it's apparent that people don't know what they want. It's the visualisation of what is possible that provides a basis for requirements. And product delivery is never linear.

I'm sure I'll think of more over the next few weeks.

Sunday, April 01, 2007

21C in the pool on April 1st

Global warming? Well it was a late start to summer...

Commotion at the beach

One of the Kapiti Island ferries was caught out this afternoon. I joined a small crowd to watch as the tractor that was supposed to haul it out of the water got stuck in the sea. The engine appeared to have failed and, in a scene reminiscent of the Little Digger, three tractors lined up in a row to try and tow the stranded tractor and ferry out, without success. They temporarily gave up on the ferry, hauled out the tractor before the tide came in further, then towed the ferry onto the shore to allow the passengers out. A bit of excitement while I was down at the playground with Lil!

Thursday, March 22, 2007

ARCast video up for Kiwibank Case Study

I've just found out that Ron Jacobs has posted a video interview with me and two members of my team on his ARCast site.

The interview was based on the Kiwibank Case Study which we undertook last year with Microsoft. The interview takes a broad look at how Kiwibank has used technology to help it go from 0 to 12% of NZ's population in 5 years. I speak along with David Grahame and Sushil Kamanahalli. David is the client applications architect and Sushil is the service layer architect.

Special thanks go out to Mark Carroll for helping to organise this and to Ron Jacobs for the interview and the work to put it together.

Fashion at Government House!

What a fantastic night. I had the absolute pleasure of seeing my wife, Miriam Gibson, present her winter collection of women's fashion wear at a charity event held at Government House in Wellington on Tuesday night.

It was magnificent. The show was held in the ballroom with the Governor-General, His Excellency The Honourable Anand Satyanand, and his wife, Her Excellency Mrs Susan Satyanand, army staff in regalia, members of Rotary and the charitable organisation Refugees as Survivors, and over 200 guests who had come to see the launch of the 2007 winter range and support a good cause.

And I was very, very, very impressed. Miriam, Victoria, Sue, Sarah, Veronica, and all the models - you all did a helluva job!

And standing up there, following the Governor-General, the local Rotary head, and representatives of the charity, giving a speech on the podium with microphones, photographers, and press present. Crikey - my recent speaking engagements pale in comparison.

The range was fantastic so I definitely recommend checking it out: head to the stores in Margaret Rd, Raumati and Hunter St, Wellington, or check out the web site to find out more (pictures from the event are promised over the next few days). Nothing for the guys, this is ladies only. And don't forget the charity - they're a very worthwhile cause in this country which has been the fortunate recipient of many refugees in the past, including the Polish half of my ancestry...

Saturday, March 17, 2007

Rod Drury on investment for IP

http://www.drury.net.nz/2007/03/17/building-intellectual-property/

Notice the graph that Rod has obtained of normalised patents per country. See how NZ sits at about 0.5, in 22nd place. Finland rides at the head on 4.5 and the OECD average sits at just under 2 - roughly four times the NZ figure.

This is a graphic representation of the under-investment in R&D in NZ compared to other countries. This isn't about centralised R&D run out of the government, but about the failure of companies to invest in product development. It's a great thing that Rod pointed this measure out.

The Business, You and Me; Get It Together!

If you've read some of the previous posts you may have realised that I currently work in the IT area of a bank: the chief architect role at Kiwibank in fact (as those who attended the keynote at the recent Microsoft NZ Tech Briefings, or have read our Microsoft case study will know). Prior to Kiwibank I worked for a year at ENZA, then before that a few years at Deloitte Consulting, and prior to that I undertook a physics PhD from Victoria University in association with Industrial Research Ltd, a Crown Research Institute.

Each of these organisations has taught me a little bit more about how people work together and what makes us succeed in delivering. It has also highlighted the precarious and unappreciated position that the shared service line holds, especially the IT shared service line.

ENZA was an organisation that went, during the year that I was there, from an external-market-focussed apple and pear exporter with producer board status and mandated export control, to a grower-focussed commercial entity that retreated from the political environment of Wellington to the safety and security of the grower stronghold of Hastings. As a company it had two strategic directions ahead of it: either strive to become a global category specialist based perhaps in one of the major trading hubs, or become a grower-focussed organisation that showed its value at the farm gate. Retreating was certainly the less risky option and it was that path that led to the rationalisation with Turners and Growers in a 2003 merger.

At ENZA the (naive) question faced in 2000 was which part of the company represented the future of the business: the export-facing arm or the grower-facing arm. IT at this time was treated as a cost centre run under the finance group. And seeing as the organisation ran SAP, it was certainly some cost.

In the middle of 2000, while working at ENZA, I happened to come across a chap at Turners and Growers who explained that they were running a home-grown software suite that was at the end of its tether. Two others and I actually went and visited them in Auckland and it was apparent that they had problems. The obvious thought to the three of us was: imagine combining the two organisations and taking advantage of the SAP implementation at ENZA. It would be a great asset, right?

Well, that is indeed what happened. Someone out there saw the synergy: did Tony Gibbs think about this I wonder? Whoever it was they certainly knew a thing or two about SAP and IT in general. I note that there's now a customer success story about SAP and Turners and Growers on the SAP web site.

Prior to that experience I was at Deloitte Consulting, which presented me with opportunities to work in companies and organisations across government, the health sector, and telecommunications. Now, for all the minor gripes that many of us had there at the time - long hours etc. - one thing definitely stands out: the value of good people. I did work with some very good people and, while often thrown in at the deep end, we did OK. A small group went on to do especially well - witness Trademe and AMR. Being a consulting group we didn't have much of an IT function. Information technology was a core attribute of our service line, and overlaid across the group was a matrix model representing sector and service advocacy. I think it worked well.

In comparison, many of the companies we worked for had well-defined structures with a strong vertical focus on product delivery. You'd walk into these organisations and there were barriers everywhere. Internal development was hardly ever undertaken. Individual business units would occasionally issue RFIs or RFPs, or succumb to the salesmanship of a clever vendor. Work would always proceed on the basis of a long-chain approach that ensured the people who understood what was possible never had a chance to really influence the development of new ideas in the organisation. Certainly not outside of the immediate business unit.

In these environments you'd always hear the catchphrases: "it's up to the business to decide", or often from the PMs/BAs, "we have to listen to the business", or the classic "the business wants...".

The depressing thing is that this is more often voiced by the staff of the IT department than the business units themselves. If people in an IT department don't think they're value contributing then they deserve to be treated as a cost centre and outsourced to the likes of EDS or IBM.

It's a personal mission of mine that my application delivery group does not come out with the same nonsense. It's the innovation that comes from those who know what's possible, combined with the people who can advocate for a customer, those who know the financial constraints and tools, and those who can market the products, that creates value contribution in a company. Any organisation that forgets the value of the combined talent of all its resources deserves to lose market value.

Oh, and Industrial Research? A depressing environment of disillusioned scientists with ideas but no knowledge of how to commercialise them...

Thursday, March 15, 2007

Wellington Microsoft Tech Briefing

It was another great event yesterday and it's just fantastic to be seeing so many people. Wellington is my home town so there were plenty (plenty) of faces I recognised in the audience. A big thank you for the opportunity goes out to Mark Carroll, Rebecca, Sean, Dean, Carol and all the others. This being the second time through the short speech I make in the keynote, I had a chance to think twice about the message I was trying to present.

And it's confirmed in my mind that the main message I want to get across to people is to actively think about opportunities in their organisations, experiment with technologies and tools, and work on marketing any ideas they may come up with. It's how you make things happen and, you know, life is too short to be doing dumb stuff when you could be doing cool stuff.

As a great way to finish off the day I got to attend the Microsoft Architects Council meeting at the hotel. I like to attend these events as it's a good chance to catch up with people I don't see every day. We have an active group of people up and down the country that attend these events and the chance to explore ideas is never something to pass up.

Next week is the final Tech Briefing in Christchurch. I can't wait for this as I know by then I'll be wanting to tune the message once more!

Prioritisation: Apples and Oranges

Every company seems to share the ritual of the prioritisation session. It has a common format and a common process.

Each business head gets to voice their opinion on what's most important to them. These are dutifully collated into a master list and then a discussion takes place to rank one above the other based upon some number of criteria; typically financial, customer experience, and compliance. Finally the agreed list is circulated for action.

Its value is normally limited because importance is not a good measure for prioritisation.

Importance is a measure of emotional conviction. It is a broad term that can mean many things depending upon the subject. The importance of a programme of work is not the same thing as the importance of an immediate fix, or the importance of a process review, or the importance of addressing a particular risk. In each case the definition of the term "importance" differs and therefore it cannot be used for comparison. It is accurately a measure of emotional response, but of little else.

What is the alternative? Perhaps it's better to ask what's the point.

The purpose of the prioritisation session is to allocate scarce resources. Scarcity can only be resolved through a process of trade-off (this is text book economics). What complicates the task in an organisation are the differing time requirements, resource specialisations, and dependency effects.

We have a limited ability to weigh up the combination of time, resource, specialisation and dependency factors to determine how limited resources can be applied to a range of competing tasks. Our minds have to make best guess estimates and the wider the scope the greater the problem (I bet someone out there can prove this is a power law expansion).

Which drives us to smaller delivery teams to reduce the scope of the problem.

So what should the prioritisation session be?

Perhaps to set areas of focus and define the criteria for prioritisation. Little more than that, I suspect.

Tuesday, March 13, 2007

Innovation in the Corporate Environment

This is a hot topic for me. I work in one of these environments and I'm involved in a fairly traditional (these days) role of enterprise architect: nominally responsible for the overall design of systems to ensure they meet business needs, and typically driven more from the perspective of policy and process than the introduction of new ideas. I'm afraid I'm not a very good enterprise architect.

Being on the back foot and not contributing to the ideas that form the basis of many of the commercial opportunities seems quite daft to me.

My aim in life is instead to communicate the opportunities of technology - or, in fact, anything that comes to mind. You know, I went through university in a rather clueless manner and it's only now that I see the possibilities of the methods taught to me at the time. There is just so much out there that can help give you an edge. (Wish I'd paid a bit more attention in the lecture rooms....)

Ensuring technology meets business needs is never going to see innovative solutions deployed; it's never going to see solutions applied when problems aren't yet realised to exist. How often have we looked around and seen only in retrospect that we missed the ball completely? (Trust me, in my 40 years it's happened a helluva lot!)

So, yes, innovation is a hot topic for me.

Today I read an article syndicated from some offshoot of The Economist called, innovatively enough, The Economist Newspaper, referring to the demise of traditional R&D and the rise of a new form of directed innovation concentrating on the D aspect. I'm not against this; a lot of the great ideas out there (that I've missed the boat on) have typically only been a couple of years ahead of everyone else's thinking. But the fact they were ahead proved a significant advantage.

The article began by looking at the output of Vannevar Bush, a gifted thinker in his own right, and an advisor to the Roosevelt and later administrations. It was Vannevar that spearheaded America's implementation of government and military funded R&D from the 1940s to the 1970s. "Industry is generally inhibited by preconceived goals, by its own clearly defined standards, and by the constant pressure of commercial necessity," he wrote in 1945. It still rings true today.

But the days of the big labs are gone. Bell Labs has fallen apart, IBM's research labs are far more highly directed now, and Microsoft Research nominally allows free rein, but then look at the narrow range of papers on their site. Where's all the research, and what can a smaller company do?

It seems to me that in the mid-size corporate environment (with a few hundred staff) there is one classic failing: the creation of the product delivery chain. You know the one. It starts with the customer on the street, then there's marketing, then BAs, then project teams, and at the end of the line, the implementors.

Nothing driven down such a long chain will be innovative. The people at the end of the line act out a Dilbertesque cartoon living in perpetual frustration. The customers only get what they ask for; and no more. The nimble, smart companies out there create their own new niches and the slow ones are left to play catch up.

I think the secret to this is to keep team sizes down and allow small teams to experiment with ideas ensuring that at an overview level there is a process of nurturing and selection. Allowing failure to occur has to be an integral part. "Please fail very quickly - so that you can try again" says Eric Schmidt from Google.

Ground breaking products and processes are always due to the conceptual insights of individuals. So it should be the task of every innovative organisation to provide a mechanism to foster the intellectual output of their staff.

Response Time Distributions, IIS Log Files, and the question of the Missing Events

Over the last year I've been involved in a number of investigations attempting to find bottlenecks in systems consisting of clients, web service hosts, and databases (usually containing application logic in addition to data). The details of each system's implementation is not especially important to this discussion because what I want to do here is just relate one of my recent experiences regarding measurement of response times. You might like to check if you get similar behaviour.

Firstly, let's just describe the typical situation I find myself looking at. It's very generic; I'm sure the same thing will apply to you. I usually have some client systems accessing a service layer hosted in IIS6, talking to a database server (usually with significant embedded application logic).





The Problem
The problem (or my lesson in this case) is how to interpret the numbers you get from the IIS log files on the web service hosts. These files give you the HTTP request duration and the arrival time of the request. Now what happens when you naively plot a distribution of the duration (i.e. request execution time)? You might expect a nice symmetrical peak centred on some value, or you might get something such as the following.


I found that I was consistently getting this sort of shape across different types of requests. This isn't really all that unexpected. Each URI in the log file corresponds to a web service against which a number of web methods may be called. The web methods may have significantly different response times, so the graph of the service call is really just a summation of all the individual web method calls. We've since implemented duration timing on each individual web method call, and we do indeed get a much simpler distribution centred round one peak.

Anyhow, while I was looking into the double peak I decided to look at the arrival rate and compare that to the duration of the call. Theoretically you should get a Poisson distribution for random, uncorrelated events and on those occasions when there were many simultaneous arrivals you'd also expect the response time to slow down (although whether this was linear/non-linear is another question).

So, I looked for a period of time during which we have fairly constant activity and chose 2 hours in the middle of the day.



You can see that there's a nice distribution with the expected shape, centred on 4 arrivals/second. Of course the IIS log files only record data when a request actually arrives. Looking at the bar graph you'd therefore naively expect about 300 one-second intervals over the two-hour period during which no requests arrived.
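As an aside, here's a minimal sketch of how the arrivals can be bucketed into one-second intervals and compared against the Poisson expectation, assuming the tab-separated timestamp/service/duration format produced by the extraction script in the Powershell versus Perl post below (the file name is hypothetical):

import math
from collections import Counter

# One line per request: "YYYY-MM-DD HH:MM:SS<TAB>service<TAB>duration".
arrivals = Counter()
with open("070223_extract.log") as f:
    for line in f:
        timestamp = line.split("\t")[0]   # second-resolution arrival time
        arrivals[timestamp] += 1

total_seconds = 2 * 60 * 60               # the two-hour window
busy_seconds = len(arrivals)              # seconds with at least one arrival
mean_rate = sum(arrivals.values()) / total_seconds

# Distribution of arrivals per second; seconds absent from the log had no arrivals.
histogram = Counter(arrivals.values())
histogram[0] = total_seconds - busy_seconds

# For a Poisson process the expected number of empty one-second intervals is
# total_seconds * exp(-lambda); compare this against the observed histogram[0].
expected_empty = total_seconds * math.exp(-mean_rate)
print(mean_rate, histogram[0], expected_empty)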

And this is where I got a surprising result.

So, it appears that the IIS HTTP arrival time is not accurate. I think that what I'm seeing here is that IIS is already queuing the requests up for processing - presumably because processing downstream is taking too long.

If you get anything like this, have seen it before, or have a bit more knowledge of what is going on I'd love to hear about it.

Monday, March 12, 2007

Powershell versus Perl

I've just been preparing some data for a post which I've been meaning to put up for a few months now. The data comes from an IIS log file and I need to pull out of it the time of the HTTP request, the HTTP service (uri-stem), and the duration of the request (time-taken). In the past I've always used Perl for this sort of task taking advantage of the regular expression syntax to extract my chosen data elements. For a 50MB sample log file (all logging options turned on) this takes approximately 12 seconds.

The Perl I've just used to test this follows:


use File::DosGlob 'glob';
use File::DosGlob 'GLOBAL_glob';

@logfiles = glob "ex*.log";

for my $logfile (@logfiles) {

open(INFILE, "$logfile");
$logfile =~ s/ex//g;

open(OUTFILE, ">$logfile");

while (<INFILE>) {
if (m"^(\d\d\d\d-\d\d-\d\d\s\d\d:\d\d:\d\d).*/(.*\.asmx).*\s(\d+)$") {
$file = lc($2);
print OUTFILE "$1\t$file\t$3\n";
}
}
close(INFILE);
close(OUTFILE);
}


Now, since I've started using Powershell recently to extract and manipulate data for analysis I thought I'd also try the same thing with that. Note I'm just a beginner at this so I could be doing this the wrong way but here's what I tried:


 
Get-Content ex070223.log | 
 
foreach-object { if ($_ -match "(?<occured>^\d\d\d\d-\d\d-\d\d\s\d\d:\d\d:\d\d).*/(?<service>.*\.asmx).*\s(?<duration>\d+$)")
 { & { $matches["occured"] + "," + $matches["service"] + ", " + $matches["duration"] } } }

The regular expression syntax is very powerful and I like the named matches - I guess this is a straight .NET runtime feature, but I'm easily impressed. However, the time it takes to complete is abominable! It took 20 minutes, where the Perl program took 12 seconds.

Still, the power available from the command line is impressive...

Wednesday, March 07, 2007

Gadgets

I'm having a relaxing evening now after an exciting day mostly spent in Auckland at a Microsoft Tech Briefing. A great experience for me, as I had the opportunity to be part of the keynote speech. I happen to work for an up-and-coming NZ bank and I was able to present some of the experiences I've gained from my time in the bank - from being a member of a 20-something-person project team in 2001, before the bank's establishment, to the present day: an organisation of 700 with a six-month profit of $11m.

The experience of being part of the keynote was enlightening in itself. The nervous energy beforehand, the videos, sound, lighting, the 900+ people in the room, being part of a team that consisted of some much brighter people than I - it was great. And I still have Wellington and Christchurch to come over the next two weeks! What a buzz! I've never spoken to so many people before. For better or worse, the nearest thing to it in my memory is being 14 and being asked to recite a poem at my uncle's funeral. He was a very popular person and the church was packed. As sad as the event was, I still remember the energy of the occasion clearly.

As for the event itself, and the reason for this posting: it cemented in my mind the importance of Vista's new gadgets. Jeremy Boyd of Mindscape demonstrated a remarkable gadget for the approval of Vista DreamScene videos on a community site (can't find it - maybe they haven't put it live on the net?). This particular gadget used the capabilities of WCF to securely connect to a service and perform an operation. Working as enterprise architect at a bank, you can imagine my interest. I've been harping on about this at work for a while now and recently noticed a blog post from someone I've met who used to work as a banking consultant at the Microsoft Sydney office (James Gardner). It's just a matter of time... will we get there first?

The question of who gets there first was actually one of the main themes of my presentation, looking at the way in which innovation can occur. Looking back on my historical work experiences I can now see how large corporates fail so often at delivering innovation. They start off so nimble and quick and then slow down to a near grinding halt. Achieving change becomes increasingly difficult because of the burden of process and the competition between people. Does it have to be this way?

I'm sure I'm going to post more on this, but it seems to me that there's a lot to be learnt from the field of R&D in traditional high-risk, high-reward industries and the application of R&D to service-based industries. I know the potential margins haven't traditionally been seen as high enough to counter the cost, but I think the time is right. I believe the risk-reward matrix increasingly favours small experimental developments to highlight problem domains and visualise potential solutions. If we just manage to do those two things we'll make downstream project delivery so much better (let alone considering the commercial benefits).

I strongly feel that the opportunities for applying technology are stronger today than ever before. It's the technologists who are currently creating the business models of tomorrow, not the business school graduates.

In the meantime, I recommend following JB, JD, and Andrew at Mindscape - they are a clever bunch of guys. I'm sure they'll go far.