Writing Solid Code

My apologies to Steve Maguire for "borrowing" a title.

I constantly see code, examples, and advice that perpetuate unsafe coding practices.  As programmers, we have the habit of getting something to a "working" state and calling it "done".  This is especially true in processes that have no real architecture or design phases.  Over the years, programmers as a community have come to recognize some of the obvious flaws and have promoted practices and code checkers to avoid them.  But there's still the mentality of "but it works [in my limited tests], how could it be wrong?"

For example, I don't know of any programmers who would sanction the following C++ code:

    int * const p = new int[10];
    p[1] = 10;
    delete[] p;

    if(p[1] == 10)
    {
        puts("ten");
    }
    else
    {
        puts("not ten");
    }

But, "it works" in a release build.

There are many, many similar examples of code that "works" in limited circumstances, and this is deemed acceptable by what seems to be a majority of programmers.  I've seen many discussions of programming constructs that can't work 100% of the time, with impassioned participants who argue that they can prove the construct works with an example, while dismissing proofs that it fails as "contrived" or statistically insignificant.  Then again, I don't know of a single programmer who can claim they've never been guilty of this.

From a bricks-and-mortar building standpoint, we, as a society, realized the error of assuming that 99% is good enough.  That realization led to engineering certification and licensing, building standards, and so on, all to ensure that the remaining 1% is treated as being just as important as the other 99%, and that engineers don't unintentionally kill someone.  Even with all this, we're still reminded how important it is to abide by these standards and what happens when we don't (the Hyatt Regency walkway collapse, or the Sampoong Department Store collapse), however unlikely such failures may seem.

To a certain extent, our tools, processes, and training all seem to perpetuate the "good enough" mentality.  The ANSI C library is a prime example.  Largely designed in the '70s, before security was a concern, it's rife with functionality that lets programmers write buffer overflows to their heart's content.  For example:

#pragma pack(push, 1)
    struct MyStruct
    {
        char s[10];
        int i;
    } myStruct = {"", 1};
#pragma pack(pop)

    sprintf(myStruct.s, "1234567890"); // writes 11 bytes (including the null terminator) into the 10-byte s
    printf("%d", myStruct.i);

…where the output is "0", not "1", with nary a compiler warning or runtime error: the string's terminating null spills past s into i.  It's APIs like these, and the mentality of "when is that ever going to happen", that lead to software security flaws.  Even under a continual bombardment of security patches, developers still can't get past the "works 99% of the time" hurdle.
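
For contrast, here's a minimal sketch of the same structure written against the bounds-checked snprintf (standardized in C99, and available in C++ via <cstdio>), which truncates rather than overflows:

    #include <cstdio>

    #pragma pack(push, 1)
        struct MyStruct
        {
            char s[10];
            int i;
        } myStruct = {"", 1};
    #pragma pack(pop)

    int main()
    {
        // snprintf writes at most sizeof(myStruct.s) bytes, including the
        // terminating null, so s holds the truncated "123456789" and the
        // adjacent i is left untouched.
        snprintf(myStruct.s, sizeof(myStruct.s), "1234567890");
        printf("%d", myStruct.i);   // prints "1"
    }

Truncation is a failure of its own, but it's a visible one; the failure mode becomes a shortened string rather than silent corruption of whatever happens to live next in memory.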

Here is a small list of some of the "hot spots" that still cause heated discussions, even amongst experienced developers:

  • .NET: Avoid catch(Exception) in anything other than a last-chance handler.
  • C++: Avoid catch(...) in anything other than a last-chance handler (see the sketch after this list).
  • Windows: Don't access a window's data from a thread that didn't create the window.
  • .NET: Avoid Control.Invoke.
  • .NET/VB: Avoid DoEvents.
  • Performing potentially lengthy operations on the main/GUI thread.
  • Testing for valid pointers and IsBad*Ptr().
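
To illustrate the first two items, here's a minimal sketch (the DoWork function is hypothetical) of confining catch(...) to a last-chance handler at the top of main; deeper in the call tree, code catches only what it can actually handle and lets everything else propagate:

    #include <cstdio>
    #include <stdexcept>

    void DoWork()
    {
        // Deep in the call tree: throw (or catch only what this code can
        // actually handle) and let everything else propagate.
        throw std::runtime_error("unrecoverable failure");
    }

    int main()
    {
        try
        {
            DoWork();
        }
        catch(const std::exception& e)
        {
            // A known failure: report it meaningfully.
            fprintf(stderr, "fatal: %s\n", e.what());
            return 1;
        }
        catch(...)
        {
            // The last-chance handler: the one legitimate home for catch(...).
            // Log and exit; don't pretend the program can continue.
            fputs("fatal: unknown exception\n", stderr);
            return 1;
        }
        return 0;
    }

The same reasoning applies to the .NET items: a blanket handler anywhere else swallows failures the program has no way to recover from.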

2 thoughts on “Writing Solid Code”

  1. Amen, brother.
    Of course, one could note that most of the code security failures are also code reliability failures. The example you quote may or may not be exploitable, but it’s certainly a cause of the application behaving outside of its designed expectations.
    That’s why, whenever I hear programmers complain that security is a hard sell, I try to get them to sell it to management as a reliability initiative, as well as a security one.
    [Of course, depending on the industry of your target users, you can also raise the spectre of compliance.]
    It doesn’t hurt to mention that this also makes the code more maintainable in the future.

  2. Hi alunj. With regard to the sample: likely exploitable: no; exploitable: yes. It's less than trivial to suspend any thread, access (write to) another process's memory, and resume the thread. It's unlikely that, in this example, someone could write a reliable exploit to do that. But the example is academic; similar code put into a "useful" application would be even more likely to be exploitable.

    With regard to maintainability: yes, a fine point. With the clients I've dealt with, that's sadly not a big criterion (at least until I arrive). Simply not doing the items on my list makes for a more maintainable code base; but there are lots of other things about maintainable source that get ignored…
