C-Based Toolchain Hardening
C-Based Toolchain Hardening is a treatment of project settings that will help you deliver reliable and secure code when using C, C++, and Objective-C in a number of development environments. This article examines the Microsoft and GCC toolchains for those languages and guides you through the steps you should take to create executables with firmer defensive postures and increased integration with the available platform security. Effectively configuring the toolchain also means your project will enjoy a number of benefits during development, including enhanced warnings and static analysis, and self-debugging code.

There are four areas to examine when hardening the toolchain: configuration, preprocessor, compiler, and linker. Nearly all areas are overlooked or neglected when setting up a project. The neglect appears to be pandemic, and it applies to nearly all projects, including auto-configured, Makefile-based, Eclipse-based, Visual Studio-based, and Xcode-based projects.

The article will also detail steps which quality assurance personnel can perform to ensure third party code meets organizational standards. Many organizations have Security Testing and Evaluation (ST&E) programs or operate in the US Federal arena where supply chain audits are necessary. If an audit of a program reveals a lot of gaps, it could indicate the company providing the binaries does not have a mature engineering process or has gaps in its internal QA processes. For those who lack mature quality assurance or acceptance and testing criteria, this article will also provide helpful suggestions.

Proper use of auditing tools such as checksec and readelf on Linux and BinScope on Windows means source code will rarely be needed for some portions of an audit. Not needing source code clears a number of legal obstacles in the acceptance testing process since NDAs or other agreements may not be required. For those who are not aware, the US's DMCA (PUBLIC LAW 105–304) has proper exceptions for reverse engineering and security testing and evaluation. The RE exemption is in Section 1201(f) REVERSE ENGINEERING, and the ST&E exemption is in Section 1201(j) SECURITY TESTING. If you don't need source code access, then you can decompile, re-engineer, and test without the need for consent or worry of reprisals.

This is a prescriptive article, and it will not debate semantics or speculate on behavior. As such, it will specify semantics, assign behaviors, and present a position. The semantics, behaviors, and position have worked extremely well in the past for the author (Jeffrey Walton). They have been so effective in his development strategy that code for secure channels and secure containers in the field often goes years between bug reports.

Finally, the OWASP ESAPI C++ project eats its own dog food.

Wisdom

Code must be correct. It should be secure. It can be efficient.

Dr. Jon Bentley: "If it doesn't have to be correct, I can make it as fast as you'd like it to be".

Dr. Gary McGraw: "Thou shalt not rely solely on security features and functions to build secure software as security is an emergent property of the entire system and thus relies on building and integrating all parts properly".

Introduction

Authors in the toolchain work hard to help projects deliver reliable, secure, and efficient programs. Their efforts are available to you at all stages of the engineering process - from project configuration and preprocessing to compiling and linking. For example, it's non-trivial to ensure line numbers in source files match debug information due to the processing of #include, considering projects can be configured with no macros, the DEBUG macro, or the NDEBUG macro.

Compiler writers also provide a rich set of warnings from the analysis of code during compilation. Both GCC and Visual Studio have static analysis capabilities to help find mistakes early in the development process. In addition, both toolchains have options to produce a hardened executable by taking advantage of the security offered by the platform. Since users expect trouble-free and safe code, it would be wise to use all the tools available in your war chest.

The built in static analysis capabilities of GCC and Visual Studio are usually sufficient to ensure proper API usage and catch a number of mistakes such as using an uninitialized variable or comparing a negative signed int and a positive unsigned int. As a concrete example, (and for those not familiar with C/C++ promotion rules), a warning will be issued if a signed integer is promoted to an unsigned integer and then compared because a side effect is -1 > 1 after promotion! GCC and Visual Studio will not currently catch, for example, SQL injections and other tainted data usage. For that, you will need a tool designed to perform data flow analysis or taint analysis.
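As an illustration of the promotion pitfall (a minimal sketch, not from the original article), the following program prints its message because -1 converts to UINT_MAX before the comparison; GCC diagnoses it with -Wsign-compare (enabled by -Wextra for C):

#include <stdio.h>

int main(void)
{
    int s = -1;
    unsigned int u = 1;

    /* s is converted to unsigned int (becoming UINT_MAX), so the
       comparison is true; GCC warns with -Wsign-compare */
    if (s > u)
        printf("-1 > 1 after promotion\n");

    return 0;
}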

Some in the development community resist static analysis or refute its results. For example, when static analysis warned the Linux kernel's sys_prctl was comparing an unsigned value against less than zero, Jesper Juhl offered a patch to clean up the code. Linus Torvalds howled “No, you don't do this… GCC is crap” (referring to compiling with warnings). For the full discussion, see [PATCH] Don't compare unsigned variable for <0 in sys_prctl() from the Linux Kernel mailing list.

Configuration

Configuration tools are popular on many Linux and Unix based systems, and they present the first opportunity for hardening. Configuration tools and auto tools include Autosetup, Autoconf, Automake, config, and Configure. In addition to the command line tools, you also have Integrated Development Environments (IDEs) such as Eclipse and Visual Studio. Both allow you to adjust settings for a project, and there are three opportunities at this stage. The first opportunity is optimizations (and debugging), the second is language and platform, and the third is project specific settings.

When using command line tools for auto configuration, there are files of interest worth mentioning when working with Linux and Unix command line tools. The files are part of the auto tools chain and include m4 and the various *.in, *.ac (autoconf), and *.am (automake) files. A project often supplies "one size fits all" settings in these files.

There are two downsides to some of the command line configuration tools in the toolchain: (1) security is often not a goal (for modern expectations of 'secure'), and (2) they cannot create configurations! The latter presents significant challenges even for switching between optimization levels (-O0 or -O2) and debug levels (-g or -g3).

You will probably be disappointed to learn tools such as Autoconf and Automake miss many security related opportunities and ship insecure out of the box. There are a number of compiler switches and linker flags that improve the defensive posture of a program, but they are not 'on' by default. Tools like Autoconf - which are supposed to handle this situation - often provide settings to serve the lowest common denominator.

A recent discussion on the Automake mailing list illuminates the issue: Enabling compiler warning flags (https://lists.gnu.org/archive/html/autoconf/2012-12/msg00038.html). Attempts to improve default configurations were met with resistance and no action was taken. The resistance is often of the form, "<some obscure platform> does not support <established security feature>" or "<some useful warning> also produces false positives". It's noteworthy that David Wheeler, the author of Secure Programming for Linux and Unix HOWTO (http://www.dwheeler.com/secure-programs/), was one of the folks trying to improve the posture.

Optimizations

Optimizations and debug symbolication are controlled through two switches: -O and -g. You should use the following as part of your CFLAGS and CXXFLAGS to get the most from a debug session:

* -O0 -g3 -ggdb

-O0 turns off optimizations, and -g3 ensures maximum debug information is available. At -g3, symbolic constants and #defines are available under the debugger. Finally, -ggdb includes extensions to help with a debug session under GDB. For completeness, Jan Krachtovil stated in a private email that -ggdb currently has no effect.

Conversely, you should use the following for release builds:

* -On -g2

-On sets optimizations for speed or size, and -g2 ensures debugging information is created. Debugging information should be stripped from the shipped binary and retained for symbolicating crash reports from the field. While not desired, debug information can be left in place without a performance penalty. See How does the gcc -g option affect performance? for details.
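On Linux, a typical strip-and-retain workflow uses objcopy and strip; below is a minimal sketch (the file name program is hypothetical):

$ objcopy --only-keep-debug program program.debug
$ strip --strip-debug --strip-unneeded program
$ objcopy --add-gnu-debuglink=program.debug program

The first command saves the debug information, the second removes it from the shipped binary, and the third records a link so GDB can locate program.debug later.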

Another reason you might open files such as Makefile.in is to tune optimizations for an environment. Many default configurations simply use -O3 as a hardcoded CFLAG. -Os is often a better choice in embedded and mobile spaces because devices do not have a page file, they are memory constrained, and minimizing code size helps keep the caches "hot".

Platforms

IDEs and command line tools offer opportunities to tune settings for a platform. IDEs usually provide the settings with point-and-click convenience, but you will usually need to verify and override settings from command line tools. For example, some command line configuration tools don't configure securely and don't honor command line settings, so the following might not produce expected results:

$ configure CFLAGS="-g2 -O2 -Wall -fPIE" LDFLAGS="-pie"

The unexpected loss of position independent code or layout randomizations is due to tools not detecting PIE or ASLR and ignoring CFLAGS and LDFLAGS. Because the auto tools don't enable some flags by default and ignore requested settings, you will need to open some of the template files (such as Makefile.in) and add the settings by hand to ensure the project is configured to specification.
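As a hedged sketch of such a hand edit (the variable substitutions follow autoconf's template conventions; the exact lines vary by project), you might append the desired flags after the substituted values in Makefile.in:

# Makefile.in (hypothetical excerpt): append hardening flags
# after autoconf's @...@ substitutions
CFLAGS = @CFLAGS@ -g2 -O2 -Wall -fPIE
LDFLAGS = @LDFLAGS@ -pie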

It's truly unfortunate that many projects do not honor a user's preferred settings, since it is so easy to do. Below is from the ESAPI C++ Makefile:

# Merge ESAPI flags with user supplied flags. We perform the extra step to ensure 
# user options follow our options, which should give the user's options preference.
override CFLAGS := $(ESAPI_CFLAGS) $(CFLAGS)
override CXXFLAGS := $(ESAPI_CXXFLAGS) $(CXXFLAGS)
override LDFLAGS := $(ESAPI_LDFLAGS) $(LDFLAGS)

Finally, the problem is not limited to auto tools. Non-auto'd projects, such as GCC and OpenSSL, do the same.

Projects

Configuration at the project level presents opportunities to harden the application or library with domain specific knowledge. For example, suppose you are building OpenSSL, and you know (1) SSLv2 is insecure, (2) SSLv3 is insecure, and (3) compression is insecure (among others). In addition, suppose you don't use hardware and engines, and only allow static linking. Given the knowledge and specifications, you would configure the OpenSSL library as follows:

$ Configure darwin64-x86_64-cc -no-hw -no-engines -no-comp -no-shared -no-dso -no-sslv2 -no-sslv3 --openssldir=…

If you think the misconfiguration cannot be found or will be overlooked, then you should rethink your position. Auditing (discussed below) will reveal that compression is enabled, and the following command will bear witness:

$ nm /usr/local/ssl/iphoneos/lib/libssl.a 2>/dev/null | egrep -i "ERR_load_COMP_strings"

In fact, any symbol within the OPENSSL_NO_COMP preprocessor macro will bear witness since -no-comp is translated into a CFLAGS define.
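The guard pattern looks roughly like the following (a sketch, not OpenSSL's verbatim source):

/* Sketch of the pattern: when the builder passes -no-comp, the
   resulting -DOPENSSL_NO_COMP removes these symbols entirely */
#ifndef OPENSSL_NO_COMP
void ERR_load_COMP_strings(void);
/* ... other compression routines ... */
#endif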

Config Management

Suppose you want to support Debug, Test, and Release configurations. If you are working in an IDE such as Eclipse or Visual Studio, you will be in good shape and won't encounter any hardships. The development environment will manage the settings for you, and allow you to change those settings with point and click convenience.

Now, suppose you are using auto tools or a makefile. You would like to type make debug, make release or make test and build the desired configuration. You will probably be disappointed to learn Automake does not support the concept of configurations. It's not entirely Automake's fault - Make is the underlying problem.

Make has not evolved along with the platforms it serves. For example, most modern platforms support executables and shared objects, yet Make only supports one CFLAGS (and CXXFLAGS) and one LDFLAGS per invocation. Assuming you have a library and a test suite and you want to use position independent code (or layout randomizations), you must make a choice: either -fPIE (compiler) and -pie (linker) for an executable, or -fPIC (compiler) and -shared (linker) for a shared object.
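If you are using GNU Make specifically, target-specific variable values offer a partial workaround; below is a minimal sketch (file and library names hypothetical):

# GNU Make target-specific variables: different flags per artifact
lib.o: CFLAGS += -fPIC
test.o: CFLAGS += -fPIE

libfoo.so: lib.o
	$(CC) $(LDFLAGS) -shared -o $@ $^

test: test.o libfoo.so
	$(CC) $(LDFLAGS) -pie -o $@ test.o -L. -lfoo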

As another example, consider a modern engineering process that includes debug, test, and release configurations. Each requires different CFLAGS, so you would detect the requested goal and set CFLAGS accordingly, like so (taken from the ESAPI C++ Makefile):

# Makefile
DEBUG_GOALS = $(filter $(MAKECMDGOALS), debug)
ifneq ($(DEBUG_GOALS),)
  WANT_DEBUG := 1
  WANT_TEST := 0
  WANT_RELEASE := 0
endif
…

ifeq ($(WANT_DEBUG),1)
  ESAPI_CFLAGS += -DDEBUG=1 -UNDEBUG -g3 -ggdb -O0
  ESAPI_CXXFLAGS += -DDEBUG=1 -UNDEBUG -g3 -ggdb -O0
endif
…

Now consider what happens when you (1) type make debug, and then (2) type make release. Make will first build the program in a debug configuration for a session under the debugger. Next, you want a release build to ensure there are no "optimized" side effects. Upon make release, Make will do nothing because it considers everything up to date, despite the fact that C{XX}FLAGS have changed. Hence, your program will actually be in a debug configuration and could raise SIGABRT at runtime because, for example, debug instrumentation is present (assert calls abort() when NDEBUG is not defined).

If you think it's unlikely, then you should rethink your position. A number of projects have been guilty of executing debug instrumented code in production, including critical infrastructure. The defect is due to failures in the engineering process, including quality assurance controls.
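One defensive workaround (a sketch, not from the ESAPI Makefile; it assumes the usual all and clean targets exist) is to make each configuration goal rebuild from scratch so stale objects never leak across configurations:

# Force a full rebuild when switching configurations
.PHONY: debug release test
debug: clean
	$(MAKE) all WANT_DEBUG=1
release: clean
	$(MAKE) all WANT_RELEASE=1
test: clean
	$(MAKE) all WANT_TEST=1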

Preprocessor

The preprocessor is crucial to setting up a project for success. The C committee provided one macro - NDEBUG - and the macro can be used to derive a number of configurations and drive engineering processes. Unfortunately, the committee also left many related items to chance, which has resulted in programmers abusing built in facilities. This section will help you set up your projects to integrate well with other projects and ensure reliability and security.

There are two areas to discuss when hardening the preprocessor. The first is well defined configurations which produce well defined behaviors, and the second is proper use of macros when integrating vendor code and third party libraries.

Configurations

To remove ambiguity, you should recognize two configurations: Release and Debug. Release is for production code for a live server, and its behavior is requested via the C/C++ NDEBUG macro. Diametrically opposed to release is Debug. While there is a compelling argument for !defined(NDEBUG), you should have an explicit macro for the configuration and that macro should be DEBUG. This is because vendors and outside libraries use the same (or similar) macro for their configuration.

In addition to NDEBUG (Release) and DEBUG (Debug), you have two additional cross products: both are defined, or neither is defined. Defining both should be an error, and defining neither should default to a release configuration. Below is from ESAPI C++'s EsapiCommon.h, which is the configuration file used by all source files:

// Only one or the other, but not both
#if (defined(DEBUG) || defined(_DEBUG)) && (defined(NDEBUG) || defined(_NDEBUG))
# error Both DEBUG and NDEBUG are defined.
#endif

// The only time we switch to debug is when asked. NDEBUG or {nothing} results
// in release build (fewer surprises at runtime).
#if defined(DEBUG) || defined(_DEBUG)
# define ESAPI_BUILD_DEBUG 1
#else
# define ESAPI_BUILD_RELEASE 1
#endif

When DEBUG is in effect, your code should receive full debug instrumentation, and you should redefine the behavior of assert. Debug instrumentation and asserts go hand-in-hand, and help ensure your code is self-debugging. Self-debugging code will not only validate parameters and return values, it will also assert those values. The assert will alert you to a problem in the code (for example, an unexpected parameter). The real beauty of asserts is that they fire whenever there is a problem, without the need to set a breakpoint.
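Below is a minimal sketch of self-debugging code (the function is hypothetical, not from ESAPI C++): parameters are asserted for debug diagnostics and still validated so release builds fail safely:

#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical example: assert in debug, validate in release */
int copy_string(char* dst, size_t dsize, const char* src)
{
    assert(dst != NULL);   /* fires in debug builds */
    assert(src != NULL);
    assert(dsize != 0);

    if (dst == NULL || src == NULL || dsize == 0)
        return -1;         /* release builds still fail safely */

    strncpy(dst, src, dsize - 1);
    dst[dsize - 1] = '\0';
    return 0;
}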

There is one problem with using asserts - Posix states a failed assert should call abort() if NDEBUG is not defined. When debugging, NDEBUG will never be defined since you want the "program diagnostics" (quote from the Posix description). That makes abort(), and the SIGABRT it raises, completely useless behavior in this case.
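One way around this (a hedged sketch; ESAPI C++ takes a similar approach, but the macro name here is hypothetical) is a project-specific assert that signals the debugger instead of aborting:

#include <signal.h>
#include <stdio.h>

/* Hypothetical macro: trap into the debugger rather than calling abort() */
#if defined(ESAPI_BUILD_DEBUG)
#  define MY_ASSERT(exp)                                                \
     do {                                                               \
       if (!(exp)) {                                                    \
         fprintf(stderr, "Assertion failed: %s, file %s, line %d\n",    \
                 #exp, __FILE__, __LINE__);                             \
         raise(SIGTRAP);  /* break into GDB; execution can continue */  \
       }                                                                \
     } while (0)
#else
#  define MY_ASSERT(exp) ((void)0)
#endif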

The end result

Additional Macros

Compiler

Linker

Runtime

After all the discussions on hardening in the development stage, there are still opportunities to improve the engineering process.

Auditing

Auditing a binary for compliance against an SDLC focuses on the compilation phase and the linking phase.
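For example, on Linux the following commands surface whether hardening flags made it into the final artifact (a sketch; the binary path is hypothetical, and checksec's exact invocation varies by version):

$ checksec --file=/usr/local/bin/program
$ readelf -l /usr/local/bin/program | grep GNU_RELRO
$ readelf -d /usr/local/bin/program | grep BIND_NOW

GNU_RELRO in the program headers together with BIND_NOW in the dynamic section indicates full RELRO; checksec summarizes PIE, stack canaries, and related protections.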