C-Based Toolchain Hardening


C-Based Toolchain Hardening is a treatment of project settings that will help you deliver reliable and secure code when using the C, C++, and Objective C languages in a number of development environments. This article will examine Microsoft and GCC toolchains for C, C++, and Objective C. It will guide you through the steps you should take to create executables with firmer defensive postures and increased integration with the available platform security. Effectively configuring the toolchain also means your project will enjoy a number of benefits during development, including enhanced warnings and static analysis, and self-debugging code.

There are four areas to be examined when hardening the toolchain: configuration, preprocessor, compiler, and linker. Nearly all areas are overlooked or neglected when setting up a project. The neglect appears to be pandemic, and it applies to nearly all projects, including auto-configured, Makefile-based, Eclipse-based, Visual Studio-based, and Xcode-based projects.

The article will also detail steps which quality assurance personnel can perform to ensure third party code meets organizational standards. Many organizations have Security Testing and Evaluation (ST&E) programs or operate in the US Federal arena where supply chain audits are necessary. If you audit a program and find a lot of gaps, it could indicate the company providing the binaries does not have a mature engineering process or has gaps in its internal QA processes. For organizations that lack mature quality assurance or acceptance and testing criteria, this article will also provide helpful suggestions.

Proper use of auditing tools such as checksec and readelf on Linux and BinScope on Windows means source code will rarely be needed for some portions of an audit. Not requiring source code clears a number of legal obstacles in the acceptance testing process, since NDAs or other agreements may not be required. For those who are not aware, the US's DMCA (Public Law 105-304) has proper exceptions for reverse engineering and security testing and evaluation. The reverse engineering exemption is in Section 1201(f), Reverse Engineering; and the ST&E exemption is in Section 1201(j), Security Testing. If you don't need source code access, then you can decompile, re-engineer, and test without the need for consent or worry of reprisals.

A secure toolchain is not a silver bullet. It is one piece of an overall strategy in the engineering process. A secure toolchain will complement existing processes such as static analysis, dynamic analysis, secure coding, negative test suites, and the like. And a project will still require solid designs and architectures.

This is a prescriptive article, and it will not debate semantics or speculate on behavior. As such, it will specify semantics, assign behaviors, and present a position. The semantics, behaviors, and position have worked extremely well in the past for the author (Jeffrey Walton). They have been so effective in his development strategy that code in the field often goes years between bug reports.

Finally, the OWASP ESAPI C++ project eats its own dog food. Many of the examples you will see in this article come directly from the ESAPI C++ project.


Code must be correct. It should be secure. It can be efficient.

Dr. Jon Bentley: "If it doesn't have to be correct, I can make it as fast as you'd like it to be".

Dr. Gary McGraw: "Thou shalt not rely solely on security features and functions to build secure software as security is an emergent property of the entire system and thus relies on building and integrating all parts properly".


Authors in the toolchain work hard to help projects deliver reliable, secure, and efficient programs. Their efforts are available to you at all stages of the engineering process - from project configuration and preprocessing to compiling and linking. For example, it's non-trivial to ensure line numbers in source files match the debug information after #include processing, especially considering a project can be configured with no macros, the DEBUG macro, or the NDEBUG macro.

Compiler writers also provide a rich set of warnings from the analysis of code during compilation. Both GCC and Visual Studio have static analysis capabilities to help find mistakes early in the development process. In addition, both toolchains have options to produce a hardened executable by taking advantage of the security offered by the platform. Since users expect trouble-free and safe code, it would be wise to use every tool available in your war chest.

The built-in static analysis capabilities of GCC and Visual Studio are usually sufficient to ensure proper API usage and catch a number of mistakes, such as using an uninitialized variable or comparing a negative signed int and a positive unsigned int. As a concrete example (and for those not familiar with C/C++ promotion rules), a warning will be issued if a signed integer is promoted to an unsigned integer and then compared, because a side effect is -1 > 1 after promotion! GCC and Visual Studio will not currently catch, for example, SQL injections and other tainted data usage. For that, you will need a tool designed to perform data flow analysis or taint analysis.
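The warning is easy to reproduce. Below is a minimal sketch (the file and variable names are illustrative) which GCC flags with -Wsign-compare:

// promote.cpp - minimal sketch of the promotion pitfall described above.
// Compile with warnings enabled: g++ -Wall -Wextra promote.cpp
#include <iostream>

int main()
{
  int s = -1;          // signed
  unsigned int u = 1;  // unsigned

  // s is converted to unsigned before the comparison, so -1 becomes a very
  // large value and the test is true. GCC warns with -Wsign-compare.
  if (s > u)
    std::cout << "-1 > 1 after promotion!" << std::endl;

  return 0;
}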

Some in the development community resist static analysis or refute its results. For example, when static analysis warned the Linux kernel's sys_prctl was comparing an unsigned value against less than zero, Jesper Juhl offered a patch to clean up the code. Linus Torvalds howled “No, you don't do this… GCC is crap” (referring to compiling with warnings). For the full discussion, see [PATCH] Don't compare unsigned variable for <0 in sys_prctl() from the Linux Kernel mailing list.


Configuration tools are popular on many Linux and Unix based systems, and they present the first opportunity for hardening. Configuration tools and auto tools include Autosetup, Autoconf, Automake, config, and Configure. In addition to the command line tools, you also have Integrated Development Environments (IDEs) such as Eclipse and Visual Studio. Both allow you to adjust settings for a project, and there are three opportunities at this stage. The first opportunity is optimizations (and debugging), the second is language and platform, and the third is project-specific settings.

When using Linux and Unix command line tools for auto configuration, there are a few files of interest. The files are part of the auto tools chain and include m4 and the various *.in, *.ac (autoconf), and *.am (automake) files. A project often supplies "one size fits all" settings in these files.

There are two downsides to some of the command line configuration tools in the toolchain: (1) security is often not a goal (for modern expectations of 'secure'), and (2) they cannot create multiple build configurations. The latter presents significant challenges even for switching between optimization levels (-O0 or -O2) and debug levels (-g or -g3).

You will probably be disappointed to learn tools such as Autoconf and Automake miss many security related opportunities and ship insecure out of the box. There are a number of compiler switches and linker flags that improve the defensive posture of a program, but they are not 'on' by default. Tools like Autoconf - which are supposed to handle this situation - often provide settings that serve the lowest common denominator.

A recent discussion on the Automake mailing list illuminates the issue: Enabling compiler warning flags. Attempts to improve default configurations were met with resistance and no action was taken. The resistance is often of the form, "<some obscure platform> does not support <established security feature>" or "<some useful warning> also produces false positives". It's noteworthy that David Wheeler, the author of Secure Programming for Linux and Unix HOWTO, was one of the folks trying to improve the posture.


Optimizations and debug symbolication are controlled through two switches: -O and -g. You should use the following as part of your CFLAGS and CXXFLAGS to get the most from a debug session:

* -O0 -g3 -ggdb

-O0 turns off optimizations and -g3 ensures maximum debug information is available. At -g3, symbolic constants and #defines are available under the debugger. Finally, -ggdb includes extensions to help with a debug session under GDB. For completeness, Jan Krachtovil stated in a private email that -ggdb currently has no effect.
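To see what -g3 buys you, consider the following sketch (the file and macro names are illustrative). Built with -O0 -g3 -ggdb, the macro definition is recorded in the debug information, so commands such as print PAGE_SIZE and info macro PAGE_SIZE work under GDB; at -g2 and below, the macro is unavailable to the debugger.

// macros.cpp - sketch showing the benefit of -g3.
// Build: g++ -O0 -g3 -ggdb macros.cpp
#define PAGE_SIZE 4096

int main()
{
  char page[PAGE_SIZE] = { 0 };  // PAGE_SIZE is visible to the debugger at -g3
  return page[0];
}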

Conversely, you should use the following as part of CFLAGS and CXXFLAGS for release builds:

* -On -g2

-On sets optimizations for speed or size, and -g2 ensures debugging information is created. The debug information should be stripped from the shipping binary and retained separately in case a crash report from the field needs symbolication. While not desired, debug information can be left in place without a performance penalty. See How does the gcc -g option affect performance? for details.

Another reason you might open template files such as Makefile.am or Makefile.in is to tune optimizations for an environment. Many default configurations simply hardcode -O3 in CFLAGS. -Os is often a better choice in embedded and mobile spaces because devices do not have a page file, they are memory constrained, and minimizing code size helps keep the caches "hot".


IDEs and command line tools offer opportunities to tune settings for a platform. IDEs usually provide the settings with point-and-click convenience, but you will usually need to verify and override settings from command line tools. For example, some command line configuration tools don't configure securely and don't honor command line settings, so the following might not produce expected results:

$ configure CFLAGS="-g2 -O2 -Wall -fPIE" LDFLAGS="-pie"

The unexpected loss of position independent code or layout randomization is due to tools not detecting PIE or ASLR support and ignoring CFLAGS and LDFLAGS. Because the auto tools don't enable some flags by default and ignore requested settings, you will need to open some of the template files (such as Makefile.in) and add the settings by hand to ensure the project is configured to specification.

It's truly unfortunate that many projects do not honor a user's preferred settings, since it is so easy to do. Below is an excerpt (with representative merge lines) from the ESAPI C++ Makefile:

# Merge ESAPI flags with user supplied flags. We perform the extra step to ensure
# user options follow our options, which should give user options precedence.
override CFLAGS := $(ESAPI_CFLAGS) $(CFLAGS)
override CXXFLAGS := $(ESAPI_CXXFLAGS) $(CXXFLAGS)

Finally, the problem is not limited to auto tools. Non-auto'd projects, such as GCC and OpenSSL, do the same.


Configuration at the project level presents opportunities to harden the application or library with domain specific knowledge. For example, suppose you are building OpenSSL, and you know (1) SSLv2 is insecure, (2) SSLv3 is insecure, and (3) compression is insecure (among others). In addition, suppose you don't use hardware and engines, and only allow static linking. Given the knowledge and specifications, you would configure the OpenSSL library as follows:

$ Configure darwin64-x86_64-cc -no-hw -no-engines -no-comp -no-shared -no-dso -no-sslv2 -no-sslv3 --openssldir=…

If you think the misconfiguration cannot be found or will be overlooked, then you should rethink your position. Auditing (discussed below) will reveal that compression is enabled, and the following command will bear witness:

$ nm /usr/local/ssl/iphoneos/lib/libcrypto.a 2>/dev/null | egrep -i "(COMP_CTX_new|COMP_CTX_free)"
0000000000000110 T COMP_CTX_free
0000000000000000 T COMP_CTX_new

In fact, any symbol guarded by the OPENSSL_NO_COMP preprocessor macro will bear witness, since -no-comp is translated into a CFLAGS define.
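The pattern is easy to demonstrate outside of OpenSSL. In the hypothetical sketch below, the symbol compress_block disappears from the nm listing when the guard macro is defined on the command line:

// guard.cpp - hypothetical sketch of the OPENSSL_NO_COMP pattern.
// Build both ways and compare the symbol tables:
//   g++ -c guard.cpp && nm guard.o | grep compress_block                    (present)
//   g++ -DOPENSSL_NO_COMP -c guard.cpp && nm guard.o | grep compress_block  (absent)
#ifndef OPENSSL_NO_COMP
extern "C" int compress_block(const char* in, unsigned int len)
{
  return (in != 0) ? (int)len : 0;  // placeholder body
}
#endif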


Suppose you want to support Debug, Test, and Release configurations. If you are working in an IDE such as Eclipse or Visual Studio, you will be in good shape and won't encounter any hardships. The development environment will manage the settings for you, and allow you to change those settings with point-and-click convenience.

Now, suppose you are using auto tools or a makefile. You would like to type make debug, make release or make test and build the desired configuration. You will probably be disappointed to learn Automake does not support the concept of configurations. It's not entirely Autoconf's or Automake's fault - Make is the underlying problem.

Make does not track changes in the build environment, such as CFLAGS, between invocations. Consider what happens when you (1) type make debug, and then (2) type make release. Each requires different CFLAGS due to optimizations and the level of debug support. In your makefile, you would extract the relevant goal and set CFLAGS accordingly (taken from the ESAPI C++ Makefile):

# Makefile
DEBUG_GOALS = $(filter $(MAKECMDGOALS), debug)
ifneq ($(DEBUG_GOALS),)
  WANT_DEBUG := 1
  WANT_TEST := 0
endif

ifeq ($(WANT_DEBUG),1)
  ESAPI_CFLAGS += -DDEBUG=1 -UNDEBUG -g3 -ggdb -O0
endif

Make will first build the program in a debug configuration for a session under the debugger. When you later want a release build, Make will do nothing because it considers everything up to date, despite the fact that CFLAGS and CXXFLAGS have changed. Hence, your program will actually be in a debug configuration and could SIGABRT at runtime because, for example, debug instrumentation is present (assert calls abort() when NDEBUG is not defined).

If you think this scenario is unlikely, then you should rethink your position. A number of projects have been guilty of executing instrumented code in production, including critical infrastructure. The defect is due to failures in the engineering process, including quality assurance controls.


The preprocessor is crucial to setting up a project for success. The C committee provided one macro - NDEBUG - and that macro can be used to derive a number of configurations and drive engineering processes. Unfortunately, the committee also left many related items to chance, which has resulted in programmers abusing built-in facilities. This section will help you set up your projects to integrate well with other projects and ensure reliability and security.

There are three topics to discuss when hardening the preprocessor. The first is well defined configurations which produce well defined behaviors, the second is useful behavior from assert, and the third is proper use of macros when integrating vendor code and third party libraries.


To remove ambiguity, you should recognize two configurations: Release and Debug. Release is for production code on live servers, and its behavior is requested via the C/C++ NDEBUG macro. It's also the only macro observed by the C and C++ committees and Posix. Diametrically opposed to Release is Debug. While there is a compelling argument for !defined(NDEBUG), you should have an explicit macro for the configuration, and that macro should be DEBUG. This is because vendors and outside libraries use the same (or a similar) macro for their configurations. For example, Carnegie Mellon's Mach kernel uses DEBUG, Microsoft's CRT uses _DEBUG, and Wind River Workbench uses DEBUG_MODE.

In addition to NDEBUG (Release) and DEBUG (Debug), you have two additional cross products: both defined or neither defined. Defining both should be an error, and defining neither should default to a release configuration. Below is from ESAPI C++ EsapiCommon.h, which is the configuration file used by all source files:

// Only one or the other, but not both
#if (defined(DEBUG) || defined(_DEBUG)) && (defined(NDEBUG) || defined(_NDEBUG))
# error Both DEBUG and NDEBUG are defined.
#endif

// The only time we switch to debug is when asked. NDEBUG or {nothing} results
// in release build (fewer surprises at runtime).
#if defined(DEBUG) || defined(_DEBUG)
# define ESAPI_BUILD_DEBUG 1
#endif

When DEBUG is in effect, your code should receive full debug instrumentation, including the full force of assertions.


Asserts will help you create self-debugging code. They help you find the point of first failure quickly and easily. Asserts should be used everywhere throughout your program, including parameter validation and return value checking. If you have thorough code coverage, you will spend less time debugging and more time developing because programs will debug themselves.

To use asserts effectively, you should assert everything. That includes parameters upon entering a function, return values from function calls, and any program state. Everywhere you place an if statement for validation or checking, you should have an assert. Everywhere you have an assert for validation or checking, you should have an if statement. They go hand-in-hand.
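Below is a hypothetical example of the pairing (plain assert is used here; ESAPI C++'s improved ASSERT appears later in this article):

// copy.cpp - hypothetical example of pairing asserts with if statements.
#include <cassert>
#include <cstddef>

int CopyBuffer(char* dest, size_t dsize, const char* src, size_t ssize)
{
  // Asserts find the point of first failure during development...
  assert(dest != NULL);
  assert(src != NULL);
  assert(dsize >= ssize);

  // ...and the if statement keeps release builds safe once NDEBUG
  // removes the asserts.
  if (dest == NULL || src == NULL || dsize < ssize)
    return -1;

  for (size_t i = 0; i < ssize; i++)
    dest[i] = src[i];

  return 0;
}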

There is one problem with using asserts - Posix states assert should call abort() if NDEBUG is not defined. When debugging, NDEBUG will never be defined since you want the "program diagnostics" (quote from the Posix description). That makes assert and its accompanying abort() completely useless.

Live hosts running production code should always define NDEBUG (i.e., release configuration), which means they do not assert or auto-abort. Auto-abortion is not acceptable behavior, and anyone who asks for the behavior is completely abusing the functionality of "program diagnostics". If a program wants a core dump, then it should create the dump rather than crashing.


Aborting during a debug session while trying to troubleshoot a problem is utterly useless, to the point that it's almost insulting. The result of "program diagnostics" calling abort() due to standard C/C++ behavior is disuse - developers simply don't use them. It's incredibly bad for the development community because self-debugging programs can help eradicate so many stability problems.

Since self-debugging programs are so powerful, you will have to supply your own assert with improved behavior. Your assert will exchange auto-aborting behavior for auto-debugging behavior. With properly instrumented code, you will find the point of first failure in under five minutes. The auto-debugging facility will ensure the debugger snaps when a problem is detected.

ESAPI C++ supplies its own assert with the behavior described above. In the code below, ASSERT raises SIGTRAP when in effect; otherwise, it evaluates to void.

// A debug assert which should be sprinkled liberally. This assert fires and then continues rather
// than calling abort(). Useful when examining negative test cases from the command line.
#if (defined(ESAPI_BUILD_DEBUG) && defined(ESAPI_OS_STARNIX))
#  define ESAPI_ASSERT1(exp) {                                    \
    if(!(exp)) {                                                  \
      std::ostringstream oss;                                     \
      oss << "Assertion failed: " << (char*)(__FILE__) << "("     \
          << (int)__LINE__ << "): " << (char*)(__func__)          \
          << std::endl;                                           \
      std::cerr << oss.str();                                     \
      raise(SIGTRAP);                                             \
    }                                                             \
  }
#  define ESAPI_ASSERT2(exp, msg) {                               \
    if(!(exp)) {                                                  \
      std::ostringstream oss;                                     \
      oss << "Assertion failed: " << (char*)(__FILE__) << "("     \
          << (int)__LINE__ << "): " << (char*)(__func__)          \
          << ": \"" << (msg) << "\"" << std::endl;                \
      std::cerr << oss.str();                                     \
      raise(SIGTRAP);                                             \
    }                                                             \
  }
// Map the generic ASSERT onto the debug assert
#  define ASSERT(exp) ESAPI_ASSERT1(exp)
#elif (defined(ESAPI_BUILD_DEBUG) && defined(ESAPI_OS_WINDOWS))
#  define ASSERT(exp)             assert(exp)
#  define ESAPI_ASSERT1(exp)      assert(exp)
#  define ESAPI_ASSERT2(exp, msg) assert(exp)
#else
// Release configuration: asserts evaluate to void
#  define ASSERT(exp)             ((void)(exp))
#  define ESAPI_ASSERT1(exp)      ((void)(exp))
#  define ESAPI_ASSERT2(exp, msg) ((void)(exp))
#endif

At program startup, a SIGTRAP handler is installed if one is not provided by another component:

#include <signal.h>  // sigaction, sigemptyset, SIGTRAP

struct DebugTrapHandler
{
  DebugTrapHandler()
  {
    struct sigaction new_handler, old_handler;

    do
      {
        int ret = 0;

        ret = sigaction (SIGTRAP, NULL, &old_handler);
        if (ret != 0) break; // Failed

        // Don't step on another's handler
        if (old_handler.sa_handler != NULL) break;

        new_handler.sa_handler = &DebugTrapHandler::NullHandler;
        new_handler.sa_flags = 0;

        ret = sigemptyset (&new_handler.sa_mask);
        if (ret != 0) break; // Failed

        ret = sigaction (SIGTRAP, &new_handler, NULL);
        if (ret != 0) break; // Failed

      } while(0);
  }

  static void NullHandler(int /*unused*/) { }
};


// We specify a relatively low priority, to make sure we run before other CTORs
static const DebugTrapHandler g_dummyHandler __attribute__ ((init_priority (110)));

On a Windows platform, you would call _set_invalid_parameter_handler (and possibly set_unexpected or set_terminate) to install a new handler. For details on the various Microsoft handlers, see Handling Exceptions from STL.
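Below is a minimal sketch of installing the handler (the handler body is illustrative; it snaps the debugger in the spirit of the SIGTRAP approach above):

// winhandler.cpp - Windows-only sketch; build with cl.exe winhandler.cpp.
#include <windows.h>
#include <cstdlib>
#include <cstdint>

// Invoked by the CRT in place of terminating the process.
static void InvalidParameterHandler(const wchar_t* /*expression*/,
                                    const wchar_t* /*function*/,
                                    const wchar_t* /*file*/,
                                    unsigned int /*line*/,
                                    uintptr_t /*reserved*/)
{
  if (IsDebuggerPresent())
    DebugBreak();  // snap the debugger rather than abort
}

int main()
{
  _set_invalid_parameter_handler(InvalidParameterHandler);
  // ... program proper ...
  return 0;
}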

Additional Macros

Additional macros include any macros needed to integrate properly and securely. That includes integrating the program with the platform (for example, MFC or Cocoa/CocoaTouch) and with libraries (for example, Crypto++ or OpenSSL). Integration can be a challenge because you must be proficient with your platform and all included libraries and frameworks.

It's nearly impossible to enumerate all the possible combinations of platforms and libraries, but the table below highlights the level of detail you will need when integrating.

In addition to what you should define, defining some macros and undefining others should be flagged as a security-related defect. For example, defining _CRT_SECURE_NO_WARNINGS=1 on Windows or undefining _FORTIFY_SOURCE (-U_FORTIFY_SOURCE) on Linux.
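A quick sketch shows why undefining the macro is a defect (the file name is illustrative). With the macro in force, glibc replaces the overflowing memcpy below with a checked version that aborts at runtime; building with -U_FORTIFY_SOURCE silently removes the check:

// fortify.cpp - sketch of a check lost by -U_FORTIFY_SOURCE.
// Build: g++ -O2 -D_FORTIFY_SOURCE=2 fortify.cpp
#include <cstring>

int main()
{
  char dst[8];
  const char src[16] = "0123456789ABCDE";
  memcpy(dst, src, sizeof(src));  // overflow caught by the fortified memcpy
  return dst[0];
}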

Table 1: Additional Platform/Library Macros

* libstdc++ - Debug: _GLIBCXX_DEBUG=1 a
* Android (4.2 and above) - Debug and Release: _FORTIFY_SOURCE=1
* SQLite - Debug and Release: SQLITE_SECURE_DELETE=1 b, SQLITE_DEFAULT_FILE_PERMISSIONS=N c, SQLITE_TEMP_STORE=3 d
* SQLCipher - Debug: remove NDEBUG from Debug builds (Xcode)
* Cocoa/CocoaTouch - Release: #define NSLog(...) (define to nothing, preempt ASL)

a Be careful with _GLIBCXX_DEBUG when using pre-compiled libraries such as Boost from a distribution. There are ABI incompatibilities, and the result will likely be a crash. You will have to omit _GLIBCXX_DEBUG or compile Boost with _GLIBCXX_DEBUG.

b SQLite secure deletion zeroes memory on destruction. Define as required, and always define it in US Federal environments since zeroization is required for FIPS 140-2, Level 1.

c N is 0644 by default, which means everyone has some access.

d Force temporary tables into memory (no unencrypted data to disk).
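Regarding note a, the class of defect _GLIBCXX_DEBUG catches is easy to demonstrate (a minimal sketch; the file name is illustrative):

// vec.cpp - sketch of libstdc++'s debug mode catching an out-of-range access.
// Build: g++ -D_GLIBCXX_DEBUG vec.cpp
#include <vector>

int main()
{
  std::vector<int> v(4);
  // Without the macro, the read below is silent undefined behavior. With it,
  // the checked container terminates with a diagnostic.
  return v[4];  // one past the end
}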




After all the discussions on hardening in the development stage, there are still opportunities to improve the engineering process.


Auditing a binary for compliance against an SDLC focuses on the compilation phase and the linking phase.