I originally came across The Power of 10 by watching the video How NASA writes space-proof code. These are a set of rules for producing code that can be reviewed and statically analysed. As they’ve come from the space industry you can understand why they want to be really sure what their code does. You can’t send someone up to turn it off and on again.

The rules

The creator of the rules, Gerard Holzmann, admits:

…the rules I will propose are somewhat strict – some might say even draconian. The trade-off, though, should be clear.

Here we go:

  1. Avoid complex flow constructs, such as goto and recursion.
  2. All loops must have fixed bounds. This prevents runaway code.
  3. Avoid heap memory allocation.
  4. Restrict functions to a single printed page.
  5. Use a minimum of two runtime assertions per function.
  6. Restrict the scope of data to the smallest possible.
  7. Check the return value of all non-void functions, or cast to void to indicate the return value is useless.
  8. Use the preprocessor sparingly.
  9. Limit pointer use to a single dereference, and do not use function pointers.
  10. Compile with all possible warnings active; all warnings should then be addressed before release of the software.

While that is strict, it’s not actually all bad. Some of it makes sense in everyday code whether you’re writing safety critical systems or not.

Flow control

  1. Avoid complex flow constructs, such as goto and recursion.

Back in the day people only had if-statements and goto-statements. It’s really easy to make confusing code with gotos. In my first year of university they told us never to touch them. I have used them as a stepping stone when I was reverse engineering assembly code back into C. I’ve seen them inside automatically generated parser code and they can indeed be confusing. They are a fundamental internal component of all other flow control, but there is almost always going to be a better way to do things than a direct goto-statement. I’m okay with a blanket ban.
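As a sketch of why the ban is easy to live with, here is the classic C cleanup pattern next to a structured version. Both functions are invented for illustration:

```cpp
#include <cstdio>

// The classic C pattern the rule targets: goto used for cleanup.
// Control jumps between labels, and it is easy to skip a step or
// run one twice when the function grows.
int with_goto(bool fail_early) {
    int status = 0;
    std::FILE* f = std::fopen("/dev/null", "w");
    if (f == nullptr) { status = 1; goto done; }
    if (fail_early) { status = 2; goto cleanup; }
cleanup:
    std::fclose(f);
done:
    return status;
}

// Structured replacement: an early return plus a single cleanup
// path, with no labels for the reader to chase.
int without_goto(bool fail_early) {
    std::FILE* f = std::fopen("/dev/null", "w");
    if (f == nullptr) {
        return 1;
    }
    int status = fail_early ? 2 : 0;
    std::fclose(f);
    return status;
}
```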

Recursion would be harder to do without. It’s not something that necessarily comes up often but sometimes it’s really useful. Some problems are even described in a recursive manner so it’s a very direct translation. On the other hand, seeing a recursive function accidentally blow up the stack because of a bug isn’t uncommon. Actually, even more common would be a function that’s indirectly recursive, one where foo() calls bar() and bar() calls foo(), or something more complicated. That’s not mentioned here but could lead to the same sort of problems, so presumably they’d ban that too. You can always rewrite something in a non-recursive way but it’s nice to have the option of recursion.
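A small sketch of that rewrite, using factorial as the illustrative problem: the recursive form mirrors the mathematical definition, while the iterative form trades that directness for constant stack usage.

```cpp
#include <cstdint>

// Recursive form: reads like the definition n! = n * (n-1)!,
// but each call consumes a stack frame.
std::uint64_t factorial_recursive(std::uint64_t n) {
    return n <= 1 ? 1 : n * factorial_recursive(n - 1);
}

// Iterative form: same result, constant stack usage, and the
// loop bound is explicit, which also happens to satisfy rule 2.
std::uint64_t factorial_iterative(std::uint64_t n) {
    std::uint64_t result = 1;
    for (std::uint64_t i = 2; i <= n; ++i) {
        result *= i;
    }
    return result;
}
```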

  2. All loops must have fixed bounds. This prevents runaway code.

All loops having a fixed upper bound is a bit different. Of course some loops will already have a fixed upper bound. If you’re looping through an array often the size of the array is the upper bound. However if you’re looping through a string the end condition might be finding the null terminator. If you’ve used null-terminated strings before then you’ve probably experienced a string that hasn’t been terminated. Often that’s because most of the string was copied but someone forgot to account for the + 1 needed for the null terminator. If your string processing loop also has a fixed upper bound, perhaps the maximum expected size of any string, then the loop is still going to stop. The code is broken but at least it’s not stuck in an infinite loop. One for the space industry.
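The string case above might look like this sketch, where max_len is an assumed application-specific limit rather than anything from the rules themselves:

```cpp
#include <cstddef>

// A strlen-style scan with a fixed upper bound: even if the string
// was never null-terminated, the loop still stops at max_len.
std::size_t bounded_strlen(const char* s, std::size_t max_len) {
    std::size_t i = 0;
    while (i < max_len && s[i] != '\0') {
        ++i;
    }
    return i;
}
```

A caller can compare the result against max_len to detect the broken, unterminated case rather than looping forever.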

Memory allocation

  3. Avoid heap memory allocation.

Avoiding memory allocation avoids a host of potential bugs: no running out of memory, no dangling pointers, no heap fragmentation. These rules were designed for use with C, so there’s no garbage collection to fall back on, and garbage collection brings a different set of potential issues anyway. On the other hand, without memory allocation and with a fixed stack size there are definite limits on what can be done. So far I’ve worked in computing environments where I don’t have to worry about this, but any constrained system might benefit from restricted allocation.

For different reasons games often go down a similar path. Games might allocate fixed numbers of objects early on and reuse them throughout. While these systems are used for less serious purposes, they are performance critical and might push the limits of even powerful hardware.
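The allocate-up-front pattern might look something like this minimal pool sketch. Particle and the capacity of 256 are illustrative assumptions, not anything from the rules:

```cpp
#include <array>

struct Particle {
    float x = 0.0f;
    float y = 0.0f;
    bool alive = false;
};

// All storage is reserved inside the object itself, so nothing is
// heap-allocated after the pool is constructed.
class ParticlePool {
public:
    // Returns a free slot, or nullptr when the pool is exhausted.
    // The caller must handle exhaustion instead of allocating more.
    Particle* acquire() {
        for (Particle& p : pool_) {
            if (!p.alive) {
                p.alive = true;
                return &p;
            }
        }
        return nullptr;
    }

    void release(Particle& p) { p.alive = false; }

private:
    std::array<Particle, 256> pool_{};
};
```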

Functions

  4. Restrict functions to a single printed page.

I’ve talked about the benefits of seeing things in one glance and apparently NASA agrees with me. We even landed on a similar size limit for functions: 70 lines for me and 60 for them. I planned mine around screen size but they were thinking about the printed page. A smaller function is easier to understand and test, so you can be sure of its behaviour.

  5. Use a minimum of two runtime assertions per function.

This has been simplified a bit from its original statement, which was “…average to a minimum of two assertions per function.” The assertions could be used to check that the data is in an expected form on entry to the function and has been correctly transformed on exit.

It’s good if you can write code that can only be used properly. You can use an object reference for a function parameter rather than an object pointer to show that only a real object should be passed in. However, we all make mistakes and asserts can catch those.
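The entry/exit pattern might look like this sketch; the clamp function is an invented example:

```cpp
#include <cassert>

// One assertion checks the inputs are sane before the work,
// a second checks the result is sane after it.
int clamp_to_range(int value, int low, int high) {
    assert(low <= high);  // precondition: the range is valid

    int result = value < low ? low : (value > high ? high : value);

    assert(low <= result && result <= high);  // postcondition
    return result;
}
```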

Restrict access

  6. Restrict the scope of data to the smallest possible.

I tend to restrict the scope of data: protecting class variables behind private, declaring static variables inside the cpp file or within a function. This might not necessarily be the smallest possible scope. Sometimes it feels like there is too much going on in a class and it would be good to tighten it further. That’s probably an indication I should have broken the class into separate parts. This relates to the class’s cohesion, which is on my list to post about.
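Two of those narrowing techniques, sketched with invented names:

```cpp
namespace {
// An anonymous namespace keeps this visible only within this
// translation unit, much like a static variable in a .cpp file.
int call_count = 0;
}

int next_call_id() {
    // Declared at the point of first use, inside the function,
    // rather than at a wider scope.
    const int id = ++call_count;
    return id;
}
```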

Check return values

  7. Check the return value of all non-void functions, or cast to void to indicate the return value is useless.

This isn’t about checking the return values when you wanted the return value. It’s about dealing with the return values when you didn’t want the return value. When you call printf it returns a value. You can cast this value to (void) to show readers of the code that you know about this but don’t care. In modern C++ you can use std::ignore instead.

I feel conflicted about this. Part of me likes having to explicitly show the reader that the return value is known but unwanted. It feels like coding should be very deliberate. However, part of me rebels at the extra code that would have to be strewn everywhere. Having discard variables would make it more compact.

It often indicates that we’re not dealing with pure functions. Certainly in the printf case we assume that the function is successful. It just happens to return the number of characters printed. It could return void and provide a separate function if we really wanted to know the number of characters. That would be less efficient but cleaner.

I’m not planning to start checking all my return values, but I will try to use the [[nodiscard]] attribute to force the behaviour where it makes sense.
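The three mechanisms mentioned above, in one sketch; parse_digit is an invented example:

```cpp
#include <cstdio>
#include <tuple>  // std::ignore

// [[nodiscard]] asks the compiler to warn whenever a caller
// silently drops this result.
[[nodiscard]] int parse_digit(char c) {
    return c - '0';
}

void demo_discards() {
    // Two ways to show the reader the result is known but unwanted:
    (void)std::printf("hello\n");          // classic cast to void
    std::ignore = std::printf("world\n");  // modern C++ alternative
}
```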

Limit preprocessor

  8. Use the preprocessor sparingly.

Apparently NASA limits the preprocessor to file includes and simple conditional macros because:

The C preprocessor is a powerful obfuscation tool that can destroy code clarity and befuddle many text-based checkers.

This seems very true and clever preprocessor use can be synonymous with complicated code.

Another objection is around flags that are used to control the build process. If you have 10 build flags then you effectively have 2^10 build targets. How do you go about testing all of those?

I have been caught out before by code that was behind a build flag. If a build flag has disabled a section of code then the compiler doesn’t get to see it at all. It won’t get checked for build errors. It can be safer to keep code behind an if-statement where the if-expression is always false. The code is checked at build time and the optimiser should remove it from the binary.
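A minimal sketch of the two approaches, with an invented tracing flag:

```cpp
#include <cstdio>

// Preprocessor flag: when it is 0 the compiler never even parses
// the disabled code, so errors in it go unnoticed.
#define ENABLE_TRACING 0

#if ENABLE_TRACING
void trace(const char* msg) { std::printf("trace: %s\n", msg); }
#endif

// Compile-time constant plus a plain if: both branches are always
// type-checked, and the optimiser removes the dead one.
constexpr bool enable_tracing = false;

bool log_step(const char* msg) {
    if (enable_tracing) {
        std::printf("trace: %s\n", msg);  // still checked at build time
        return true;
    }
    return false;
}
```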

Simple pointers

  9. Limit pointer use to a single dereference, and do not use function pointers.

A single dereference of a pointer isn’t too difficult a limitation. You are allowed to build more complicated chains but you’ll have to mix it up with intermediate structures.
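The intermediate-structure style might look like this sketch; Car and Engine are invented for illustration:

```cpp
struct Engine { int rpm; };
struct Car { Engine* engine; };

// Instead of car->engine->rpm (two dereferences in one expression),
// introduce an intermediate so each statement dereferences once.
int engine_rpm(const Car* car) {
    const Engine* engine = car->engine;  // first dereference
    return engine->rpm;                  // second, in its own step
}
```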

Not using function pointers, and presumably things like lambdas, is a big drawback. These are really powerful in reducing the amount of code that needs to be written. Apparently they can really make it difficult for static analysis. I suspect a lot of my use cases aren’t that difficult. Providing a comparison function to a sort algorithm sounds possible to analyse. However the general case could get very complicated.
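The sort-comparator case, sketched with std::sort, is exactly the kind of indirection the rule would forbid: the analyser has to work out which function the sort will actually call.

```cpp
#include <algorithm>
#include <array>

bool descending(int a, int b) { return a > b; }

// Passing a comparison function into a generic algorithm: concise
// and reusable, but an indirect call from the analyser's view.
std::array<int, 5> sort_descending(std::array<int, 5> values) {
    std::sort(values.begin(), values.end(), descending);
    return values;
}
```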

Address warnings

  10. Compile with all possible warnings active; all warnings should then be addressed before release of the software.

This can be a difficult one to adopt in the middle of a project. However, once it’s achieved it does feel good and is beneficial. In particular, once the code is warning-free any new warnings are obvious. Warnings are often useful but can get buried.

On balance

I think about half of that makes sense for day-to-day programming. Given these are meant to be strict, even draconian, rules that’s surprising. The one I’m going to keep thinking about is checking return values: how to be deliberate about discarded values without complicating the code.
