Life in a highly-regulated development environment
High-integrity software requires not only a well-defined product structure but also a clear, robust development process: one that recognises the potential sources of error injection and responds with suitable mechanisms for identifying and removing those errors. To do this effectively (and economically), the point of correction needs to be near the point of injection. The longer an error goes undiscovered, the harder it becomes to reproduce it under controlled circumstances, and the cost of the test environments needed to do so increases exponentially with each development stage.
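The escalation can be illustrated with a small sketch. The stage names and the tenfold multiplier per stage are assumptions for illustration (a commonly quoted rule of thumb); real programmes will have their own figures.

```python
# Illustrative only: relative cost of fixing a defect, by the stage at which
# it is found, assuming a hypothetical 10x escalation per development stage.
STAGES = ["requirements", "design", "implementation", "integration", "in-service"]
ESCALATION = 10  # assumed multiplier per stage; real programmes vary

def relative_fix_cost(injected: str, found: str) -> int:
    """Relative cost of fixing a defect injected at one stage, found at another."""
    gap = STAGES.index(found) - STAGES.index(injected)
    if gap < 0:
        raise ValueError("a defect cannot be found before it is injected")
    return ESCALATION ** gap

# A requirements error caught at requirements review costs 1 unit...
assert relative_fix_cost("requirements", "requirements") == 1
# ...but the same error surviving into service costs 10,000x as much.
assert relative_fix_cost("requirements", "in-service") == 10_000
```

Whatever the true multiplier, the shape of the curve is what drives the process design: correction close to injection is cheap; correction stages later is not.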
The regulatory environment not only expects plans that deliver such rigour in the development process; it also requires those plans to generate the evidence that confirms, or at least assures, that the process has been followed and that the expected outcomes have been achieved.
A similar expectation is placed on electronic hardware developments.
Although the regulator is not, per se, interested in the architecture and design of the particular product solution, there is clearly a relationship. Separation of concerns, simplicity of structure and independence of safeguards all play a part in managing product complexity towards a deterministic, or at least highly predictable, solution.
The point of concern for the regulator is usually the detailed implementation: that a well-defined set of requirements on the electronic hardware or software is faithfully translated into a sufficiently predictable solution, and that this can subsequently be demonstrated. Add assurance and evidence that no 'hidden functionality' exists beyond what the designers have intentionally disabled, with that evidence gathered under the degree of independence the assurance level demands, and we may satisfy ourselves and the regulator that we have built the product right... but this does little to assure that we have built the right product.
The systems engineering that precedes these 'well-defined requirements' for the implementation disciplines must itself be sufficiently mature (usually through trade studies, prototyping and customer feedback) to ensure we have described the right product. System validation must then confirm that the faithfully implemented solution still matches those needs. Relevant experience includes:
- Electronic System Product Strategy Experience, Electronic and Software System architecture trades
- Design, Development and Test Expertise including Application and Safety separations and Redundant behaviour
- Software Development Process Experience for various assurance levels
- System, Subsystem and Software Design, Development and Test Expertise
- System and Software Architecture, Modelling
- Development and Integration Experience
- Processing Platform understanding, Microcontroller use, Hardware Abstraction Layer design and implementation
- Control Sensitivity analysis - know which aspects are a problem and what you do to bring them under control
- Multi-domain control strategies (e.g. Electromagnetic to Hydraulic for controlling solenoid operated valves or pilots)
- Technical Training for all of the above, in abstract or product-specific form
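The hardware abstraction mentioned in the list above can be sketched briefly (all names here are hypothetical). The point is that application code depends only on an abstract interface, so the hardware-specific driver can be swapped for a simulated implementation during host-based testing, keeping the application/safety separation testable well before target hardware exists.

```python
from abc import ABC, abstractmethod

class DiscreteOutput(ABC):
    """Hypothetical HAL interface: a single discrete (on/off) output line."""
    @abstractmethod
    def set(self, state: bool) -> None: ...

class SimulatedOutput(DiscreteOutput):
    """Host-side stand-in for a register-level driver; records states for test."""
    def __init__(self) -> None:
        self.history: list[bool] = []
    def set(self, state: bool) -> None:
        self.history.append(state)

def energise_valve(output: DiscreteOutput) -> None:
    """Application code written against the abstraction, not the hardware."""
    output.set(True)

# On target, `output` would be a hardware driver; on the host, the same
# application logic is exercised against the simulated output.
pin = SimulatedOutput()
energise_valve(pin)
assert pin.history == [True]
```

The same shape applies in C on a microcontroller, typically as a struct of function pointers or a compile-time selected driver layer.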
High-Integrity and Agile?
Not obvious bedfellows, but there are techniques that can be borrowed to help de-risk high-integrity development programmes.
Incremental delivery supports steady achievement; it may not yield the lowest development cost, but it significantly alters the risk profile.
Know your strengths to know what to adopt and why.
Good process tooling helps reduce errors, or at least makes the process sufficiently repeatable to enable you to understand and control errors.
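One concrete example of such tooling is an automated traceability check: every requirement must be covered by at least one test, and no test may cite a requirement that does not exist. The sketch below is illustrative, with a hypothetical ID scheme and data format; real programmes would pull these sets from their requirements and test management tools.

```python
# Illustrative traceability check (hypothetical IDs and data format).
requirements = {"REQ-001", "REQ-002", "REQ-003"}                  # assumed IDs
test_trace = {"TC-01": {"REQ-001"}, "TC-02": {"REQ-002", "REQ-003"}}

def trace_gaps(reqs: set[str], tests: dict[str, set[str]]) -> tuple[set[str], set[str]]:
    """Return (requirements with no verifying test, cited-but-unknown requirements)."""
    covered = set().union(*tests.values()) if tests else set()
    uncovered = reqs - covered   # requirements no test claims to verify
    dangling = covered - reqs    # tests citing requirements that don't exist
    return uncovered, dangling

# A clean trace: every requirement covered, nothing dangling.
assert trace_gaps(requirements, test_trace) == (set(), set())
```

Run on every build, a check like this turns a hand-off gap into an immediate, repeatable failure rather than a late audit finding.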
The traditional gaps lie in the hand-off between Systems and Software or Hardware (e.g. requirements formally imposed versus constraints implied by an expected product architecture) and in the transition from description to implementation and test.
At what point do you stop conveying philosophy and structure and formalise detailed interfaces?