Programme Management – Planning and Estimation

When was the last time you ‘estimated’ your project? At bid time? After contract award? When the Project Manager was put in post? When the project ran into trouble? And at what point was the requirement secured, complete and unchanging?

If you have defined your software lifecycle, and the processes to be used in each part of that lifecycle, a programme plan should be a natural output: an instance of that lifecycle for the particular product, in which much of the opportunity for task independence is defined by the product architecture.

Most software programmes consist of many parallel activities, which are likely to be in different phases of the lifecycle and following different processes. If the lifecycle management tools are sufficiently instrumented, many ‘waypoints’ are available as tracking information, with little or no overhead on the developers! The ability to convert all this information into a meaningful measure of progress depends on understanding the expected progress through the plan; a relationship that has to be adjusted in the light of previous experience if estimation is to improve.
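
As a purely illustrative sketch – the waypoint names, dates and data layout below are my own assumptions, not drawn from any particular toolset or plan – comparing instrumented waypoints against their planned dates might look like this:

```python
from datetime import date

# Hypothetical waypoint records exported from a lifecycle tool:
# (waypoint name, planned date, actual date or None if not yet reached).
waypoints = [
    ("design review: comms component", date(2024, 3, 1), date(2024, 3, 4)),
    ("unit test complete: comms component", date(2024, 4, 12), date(2024, 4, 25)),
    ("design review: ui component", date(2024, 4, 1), None),
]

today = date(2024, 4, 20)

for name, planned, actual in waypoints:
    if actual is not None:
        slip = (actual - planned).days
        print(f"{name}: reached, {slip:+d} days against plan")
    elif planned < today:
        print(f"{name}: overdue by {(today - planned).days} days")
    else:
        print(f"{name}: due in {(planned - today).days} days")
```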

Because all programmes are unique, calibration values are purely ‘seeds’ for a new programme. Estimation should be a periodic process that reassesses the remaining work in the light of the progress made on the portion of the programme executed so far.
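
A minimal sketch of that periodic reassessment, with entirely hypothetical figures for the calibration ‘seed’ and the progress to date, might look like this:

```python
# Illustrative re-estimation: the initial calibration value (a productivity 'seed'
# taken from earlier programmes) is progressively replaced by the rate actually
# observed on this programme. All figures are hypothetical.

seed_rate = 2.5          # components completed per person-week, from past programmes
total_components = 400   # components scoped for this programme
done_components = 120    # components completed so far
effort_spent = 60.0      # person-weeks consumed so far

observed_rate = done_components / effort_spent      # rate achieved on this programme
remaining = total_components - done_components

# Blend the seed with observed performance, trusting actuals more as data accrues.
weight = done_components / total_components
blended_rate = (1 - weight) * seed_rate + weight * observed_rate

estimate_at_completion = effort_spent + remaining / blended_rate
print(f"Observed rate: {observed_rate:.2f}, blended rate: {blended_rate:.2f}")
print(f"Estimated effort at completion: {estimate_at_completion:.1f} person-weeks")
```

The design choice here is simply to trust the programme’s own actuals more, and the seed less, as the programme progresses; any real scheme would also have to account for the differing phases and processes of the parallel activities.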

Traditional Programme Managers don’t get Software Programme Management! There is nothing physically generated that is a clear indication of progress. Components that are generated may just as easily have their entire value destroyed by what appears to be an innocuous change, so ‘Chickens’, ‘Eggs’ and ‘Hatched’ come to mind.

The key to software programme management is differentials – i.e. the rate of change. There are two competing processes: i) the generation of components and ii) the removal of errors. Even in agile, i) is not complete until ii) is satisfied. During the generation process we also create errors, whose correction is itself a generation process, and so on. With good skills, focus on the objective and carefully controlled correction, this recursion is finite, because the number of errors diminishes with each iteration. We will see in a later instalment that measuring this recursion – its number of iterations and its place relative to plan – can provide useful progress indicators.
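
To see why the recursion is finite, here is a toy model with an assumed injection ratio (the number of new errors introduced per error corrected); the figures are illustrative, not real defect data:

```python
# Toy model of the correction recursion: each pass fixes the known errors but
# injects a fraction of new ones. With an injection ratio below 1 the series
# converges; the numbers here are purely illustrative.

errors = 100.0          # errors found after the initial generation pass (assumed)
injection_ratio = 0.3   # new errors introduced per error corrected (assumed)

iteration = 0
total_corrected = 0.0
while errors >= 1:
    total_corrected += errors
    errors *= injection_ratio   # corrections are themselves a generation activity
    iteration += 1
    print(f"iteration {iteration}: ~{errors:.1f} errors remain")

print(f"Converged after {iteration} passes, ~{total_corrected:.0f} corrections in total")
```

With an injection ratio below one, the remaining errors shrink geometrically; it is the observed rate of that shrinkage, and where it sits relative to plan, that provides the progress indicators mentioned above.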

Armed with knowledge of when things are supposed to happen (the Programme Plan and its associated Lifecycle processes), it should be possible to predict where errors will be detected and thus when a component is complete. This depends on the component’s nature and interfaces, the point at which that component makes a feature contribution to the product, and the point in the process at which the test environment (in which the errors are due for detection and removal) is planned to be used.

For economic reasons, all software programmes strive towards a single goal: the removal of errors as close as possible to their point of introduction. The lifecycle processes are therefore usually defined with this objective in mind.

The most significant risk that a Project Manager must deal with, however, is volatility of requirement: it can change the components, their interfaces and the architecture, and may result in significant rework of both componentry and plan. Requirement change is inevitable; how you plan to minimise its impact is key to the success of the programme. Recognising that risk emerges from volatility gives you an understanding of that risk and the ability to plan its management.

___________________________________________________________________________

Let us help you maximise the business potential of your product and its software

Email me on stuart.jobbins@sofintsys.com

Visit our website, or follow us on your preferred Social Media for our latest views.

___________________________________________________________________________

 
