Safety is Key
The nuclear industry is commercially very risky. Whilst fossil fuels remain available and relatively cheap, it is difficult to make a short-term business case for nuclear power that competes economically with the simpler fossil fuel plant alternatives. With long development programmes, regulatory standards that differ by geographic region and nationality, and constant change in response to fielded failures, developments have a history of running late. The 'long tail' of the development cycle can quickly eat through any profit margin in the business case.
The ecological benefit of carbon-free power generation has to run a political gauntlet created by widely publicised failure events and the social angst of living in proximity to such a plant.
Defence in depth
The systematic answer in the design of extremely robust systems is to employ a number of independent operational systems, developed independently and possibly implemented in diverse technologies (both measures to remove common-mode failure), layered as successive safety measures. In both commercial and military solutions, software systems are employed for many of those layers of control and instrumentation, including systems of warnings, alarms, safety mechanisms and containment systems.
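As a minimal sketch of one such layering pattern, consider 2-out-of-3 voting across independent measurement channels; the channel values, threshold and trip logic below are illustrative assumptions, not any particular plant's design.

```python
# A minimal sketch of one defence-in-depth pattern: 2-out-of-3 voting across
# independently developed measurement channels. The threshold and trip logic
# here are illustrative assumptions, not any specific plant's design.

TRIP_THRESHOLD_C = 350.0  # hypothetical coolant temperature limit


def channel_votes(readings_c):
    """Each independent channel votes 'trip' if its reading exceeds the limit."""
    return [r > TRIP_THRESHOLD_C for r in readings_c]


def should_trip(readings_c, votes_required=2):
    """Trip if at least two of three diverse channels agree.

    Voting tolerates a single failed channel without either missing a
    genuine excursion or tripping on one faulty sensor.
    """
    return sum(channel_votes(readings_c)) >= votes_required


# One channel reading high does not trip; two agreeing channels do.
assert should_trip([351.0, 348.0, 349.0]) is False
assert should_trip([351.0, 352.0, 349.0]) is True
```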
Simple solutions
Whilst it is possible to bring more automation to these systems, there is a reluctance, born of the inability to prove deterministically that such large systems will operate correctly in all scenarios (most critically, that they take the correct action on failure). Ultra-safe systems tend to be simple, cognitively intuitive (i.e. matched with common perceptions of physical laws) and palpable (i.e. you can see what they do and how they do it). Unfortunately, software systems may lack many of these attributes.
Software-Intensive Systems Imply System Complexity
Those 'obvious' systems are generally continuous, linear systems. They are relatively easy to describe in formal mathematics and can be translated to 'mechanical' implementations. Whilst it is possible to design such solutions in software, in general the reason for deploying a software solution is precisely that we wish to define discontinuous or arbitrary system relationships that could not be realised with other implementations. The complexity is then born of the system need, rather than being a result of employing a software solution.
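The contrast can be made concrete with a hedged sketch (all values hypothetical): the first relationship below could be realised with a spring or an analogue circuit; the second is exactly the kind of discontinuous, conditional relationship we reach for software to express.

```python
# Illustrative contrast (all values hypothetical): a continuous, linear
# relationship versus a discontinuous, arbitrary one.

def linear_valve_position(temperature_c):
    """Continuous and linear: realisable with a spring, lever or analogue circuit."""
    return 0.002 * temperature_c  # opens proportionally with temperature


def staged_valve_position(temperature_c, pump_running, maintenance_mode):
    """Discontinuous and conditional: step changes, mode logic and interlocks
    with no simple mechanical analogue."""
    if maintenance_mode:
        return 0.0           # interlock: never open during maintenance
    if not pump_running:
        return 1.0           # fail open if circulation is lost
    if temperature_c < 100.0:
        return 0.1           # trickle flow
    if temperature_c < 300.0:
        return 0.5           # normal operating band
    return 1.0               # full flow above the operating band
```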
In this conservative environment, employing software solutions inevitably means a tougher-than-usual software development.
The rigour required to ensure the correct implementation, verification and validation of any single non-linear component is in itself difficult to achieve. The rigour required to understand the emergent behaviour where many discontinuous systems interact can quickly defy recognition. In these systems we rely on high-fidelity simulations to validate the designs and the resultant behaviour under a range of (possibly exhaustive) conditions. Unfortunately these simulations are often software-intensive systems in their own right, with the inevitable recursion of problems.
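A minimal sketch of why 'possibly exhaustive' is only feasible for small, discrete pieces: every input combination of a simple 2-out-of-3 trip function can be enumerated and checked against an independently written oracle (all values here are illustrative assumptions).

```python
# Exhaustive checking of a small, discrete function. Tractable at this scale;
# the state space explodes as more interacting systems are added.
from itertools import product


def should_trip(readings_c, limit_c=350.0):
    """Trip if at least two of three channels exceed the limit."""
    return sum(r > limit_c for r in readings_c) >= 2


# Three representative readings per channel gives 3**3 = 27 cases.
for readings in product([340.0, 351.0, 360.0], repeat=3):
    # Diverse oracle: the median of three exceeds the limit exactly
    # when at least two channels do.
    assert should_trip(list(readings)) == (sorted(readings)[1] > 350.0), readings

print("all 27 input combinations satisfy the trip property")
```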
Complexity = unsafe?
Human beings are unsafe. We bound our behaviour by risk perception.
Having a 'man-in-the-loop' is inherently unsafe... but socially acceptable.
As system complexity becomes incomprehensible to an individual, we use simpler abstracts (a model) to communicate what the system does.
By definition, this less refined abstract is inaccurate, but cognitively useful.
We use software to enable us to define arbitrary relationships in a system that we can alter easily (with no significant cost of rework).
Most software systems rely on a series of abstracts at increasing levels of detail to define their behaviour and make them cognitively 'comprehensible' (sketched after these points).
By eating the elephant in bite-sized pieces, we can comprehend the complex systems we use software to execute.
Once logically described and correctly implemented, software systems are consistent in their operation - unlike humans, especially in moments of stress.
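As a hedged illustration of that layering (the names and decomposition are invented for this example), the top-level abstract below reads like the simple model we communicate, while each level beneath it supplies the detail the level above deliberately hides.

```python
# Illustrative sketch of layered abstracts (all names invented).

def protect_reactor(plant):
    """Top-level abstract: comprehensible at a glance."""
    if excursion_detected(plant):
        shut_down(plant)


def excursion_detected(plant):
    """Middle level: what 'excursion' actually means here."""
    return plant["temperature_c"] > 350.0 or plant["pressure_bar"] > 155.0


def shut_down(plant):
    """Lowest level shown: the concrete actions the abstract glosses over."""
    plant["control_rods_inserted"] = True
    plant["coolant_pumps"] = "full"


plant = {"temperature_c": 360.0, "pressure_bar": 150.0,
         "control_rods_inserted": False, "coolant_pumps": "normal"}
protect_reactor(plant)
assert plant["control_rods_inserted"]
```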