Test Network/Infrastructure Configuration (OTG-CONFIG-001)


Summary

The intrinsic complexity of interconnected and heterogeneous web server infrastructure, which can include hundreds of web applications, makes configuration management and review a fundamental step in testing and deploying every single application. It takes only a single vulnerability to undermine the security of the entire infrastructure, and even small and seemingly unimportant problems may evolve into severe risks for another application on the same server. In order to address these problems, it is of utmost importance to perform an in-depth review of configuration and known security issues, after having mapped the entire architecture.


Proper configuration management of the web server infrastructure is very important in order to preserve the security of the application itself. If elements such as the web server software, the back-end database servers, or the authentication servers are not properly reviewed and secured, they might introduce undesired risks or new vulnerabilities that compromise the application itself.


For example, a web server vulnerability that would allow a remote attacker to disclose the source code of the application itself (a vulnerability that has arisen a number of times in both web servers and application servers) could compromise the application, as anonymous users could use the information disclosed in the source code to leverage attacks against the application or its users.
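
As a concrete illustration, the following minimal sketch probes for a few source-disclosure tricks that have historically been reported against web and application servers, such as an appended null byte. The target URL and the suffix list are illustrative assumptions, not a definitive test set, and such probes should only be run against systems the tester is authorized to test.

<pre>
# Minimal sketch: probe for classic source-code disclosure variants.
# The target URL and suffix list are illustrative assumptions.
import urllib.request

TARGET = "http://target.example.com/app/login.jsp"   # hypothetical target
SUFFIXES = ["%00", "::$DATA", "%2e", "+", "%20"]      # historic disclosure tricks

for suffix in SUFFIXES:
    url = TARGET + suffix
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            # A 200 response containing raw JSP markup instead of rendered
            # HTML suggests the server-side handler was bypassed.
            print(url, resp.status, resp.read(60))
    except Exception as exc:
        print(url, "->", exc)
</pre>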


The following steps need to be taken to test the configuration management infrastructure:

  • The different elements that make up the infrastructure need to be determined in order to understand how they interact with the web application and how they affect its security.
  • All the elements of the infrastructure need to be reviewed in order to make sure that they don’t contain any known vulnerabilities.
  • A review needs to be made of the administrative tools used to maintain all the different elements.
  • The authentication systems need to be reviewed in order to ensure that they serve the needs of the application and that they cannot be manipulated by external users to gain access.
  • A list of defined ports which are required for the application should be maintained and kept under change control (a verification sketch follows this list).
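
As a minimal sketch of the last step, the following compares the ports observed open on a host against the approved, change-controlled baseline. The host name, baseline, and candidate range are illustrative assumptions; in practice the observed set would come from a full port scan such as nmap.

<pre>
# Minimal sketch: compare observed open ports against the approved,
# change-controlled baseline. Host and baseline are assumptions.
import socket

HOST = "target.example.com"        # hypothetical host
APPROVED_PORTS = {80, 443}         # the change-controlled list
CANDIDATES = range(1, 1025)

observed = set()
for port in CANDIDATES:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.3)
        if s.connect_ex((HOST, port)) == 0:   # 0 means connect succeeded
            observed.add(port)

print("Unexpected open ports:", sorted(observed - APPROVED_PORTS))
print("Approved but not open:", sorted(APPROVED_PORTS - observed))
</pre>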


After having mapped the different elements that make up the infrastructure (see Map Network and Application Architecture) it is possible to review the configuration of each element found and test for any known vulnerabilities.

Test Objectives

Map the infrastructure supporting the application and understand how it affects the security of the application.
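
One early mapping question is whether a firewall sits between the tester and the server. A minimal sketch, assuming a hypothetical host and port: a closed port that answers with a TCP reset (connection refused) suggests the host is directly reachable, while a silent timeout suggests traffic is being filtered at the network edge.

<pre>
# Minimal sketch: infer whether traffic to the server is firewalled.
# Host and port are illustrative assumptions.
import socket

HOST = "target.example.com"   # hypothetical host
PORT = 81                     # a port the application is not expected to use

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.settimeout(3)
try:
    s.connect((HOST, PORT))
    print("open: a service is listening on this port")
except ConnectionRefusedError:
    print("closed (RST received): the port is likely not filtered")
except socket.timeout:
    print("no answer: traffic is probably filtered by a firewall")
finally:
    s.close()
</pre>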

How to Test

Known Server Vulnerabilities

Vulnerabilities found in the different areas of the application architecture, be it in the web server or in the back-end database, can severely compromise the application itself. For example, consider a server vulnerability that allows a remote, unauthenticated user to upload files to the web server or even to replace files. This vulnerability could compromise the application, since a rogue user may be able to replace the application itself or introduce code that would affect the back-end servers, as that code would be run just like any other application code.


Reviewing server vulnerabilities can be hard to do if the test needs to be done through a blind penetration test. In these cases, vulnerabilities need to be tested from a remote site, typically using an automated tool. However, testing for some vulnerabilities can have unpredictable results on the web server, and testing for others (like those directly involved in denial of service attacks) might not be possible due to the service downtime involved if the test was successful.


Some automated tools will flag vulnerabilities based on the web server version retrieved. This leads to both false positives and false negatives. On one hand, if the web server version has been removed or obscured by the local site administrator, the scan tool will not flag the server as vulnerable even if it is. On the other hand, if the vendor providing the software does not update the web server version when vulnerabilities are fixed, the scan tool will flag vulnerabilities that do not exist. The latter case is actually very common, as some operating system vendors backport patches for security vulnerabilities to the software they provide in the operating system but do not do a full upgrade to the latest software version. This happens in most GNU/Linux distributions such as Debian, Red Hat, or SuSE. In most cases, vulnerability scanning of an application architecture will only find vulnerabilities associated with the “exposed” elements of the architecture (such as the web server) and will usually be unable to find vulnerabilities associated with elements which are not directly exposed, such as the authentication back-ends, the back-end database, or reverse proxies in use.
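
To see what such tools work from, the following minimal sketch retrieves the Server banner of a hypothetical host. A banner advertising an old version may be a false positive on a distribution that backports fixes, and a removed banner defeats version matching entirely.

<pre>
# Minimal sketch: grab the Server banner that version-matching scanners
# rely on. The host is an illustrative assumption.
import urllib.error
import urllib.request

req = urllib.request.Request("http://target.example.com/", method="HEAD")
try:
    with urllib.request.urlopen(req, timeout=5) as resp:
        headers = resp.headers
except urllib.error.HTTPError as err:
    headers = err.headers   # error responses also carry a Server header

print("Server banner:", headers.get("Server", "<removed or obscured>"))
</pre>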


Finally, not all software vendors disclose vulnerabilities in a public way, and therefore these weaknesses do not become registered within publicly known vulnerability databases[2]. This information is only disclosed to customers or published through fixes that do not have accompanying advisories. This reduces the usefulness of vulnerability scanning tools. Typically, vulnerability coverage of these tools will be very good for common products (such as the Apache web server, Microsoft’s Internet Information Server, or IBM’s Lotus Domino) but will be lacking for lesser known products.


This is why reviewing vulnerabilities is best done when the tester is provided with internal information of the software used, including versions and releases used and patches applied to the software. With this information, the tester can retrieve the information from the vendor itself and analyze what vulnerabilities might be present in the architecture and how they can affect the application itself. When possible, these vulnerabilities can be tested to determine their real effects and to detect if there might be any external elements (such as intrusion detection or prevention systems) that might reduce or negate the possibility of successful exploitation. Testers might even determine, through a configuration review, that the vulnerability is not even present, since it affects a software component that is not in use.
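
With an exact version in hand, published vulnerabilities can be looked up directly. A minimal sketch, assuming the public NVD 2.0 REST API and an illustrative CPE name; the results still require manual review, since backported patches may already address some of the reported findings.

<pre>
# Minimal sketch, assuming the NVD 2.0 REST API: list published CVEs for an
# exact product version taken from the internal software inventory.
# The CPE name is an illustrative assumption.
import json
import urllib.request

cpe = "cpe:2.3:a:apache:http_server:2.4.54"   # from the provided inventory
url = "https://services.nvd.nist.gov/rest/json/cves/2.0?cpeName=" + cpe

with urllib.request.urlopen(url, timeout=15) as resp:
    data = json.load(resp)

print(data.get("totalResults", 0), "CVE entries found")
for item in data.get("vulnerabilities", [])[:10]:
    print(item["cve"]["id"])
</pre>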


It is also worthwhile to note that vendors will sometimes silently fix vulnerabilities and make the fixes available with new software releases. Different vendors will have different release cycles that determine the support they might provide for older releases. A tester with detailed information of the software versions used by the architecture can analyze the risk associated with the use of old software releases that might be unsupported in the short term or are already unsupported. This is critical, since if a vulnerability were to surface in an old software version that is no longer supported, the systems personnel might not be directly aware of it. No patches will ever be made available for it, and advisories might not list that version as vulnerable, as it is no longer supported. Even in the event that they are aware that the vulnerability is present and the system is vulnerable, they will need to do a full upgrade to a new software release, which might introduce significant downtime in the application architecture or might force the application to be re-coded due to incompatibilities with the latest software version.


Administrative Tools

Any web server infrastructure requires the existence of administrative tools to maintain and update the information used by the application. This information includes static content (web pages, graphic files), application source code, user authentication databases, etc. Administrative tools will differ depending on the site, technology, or software used. For example, some web servers are managed using administrative interfaces which are, themselves, web servers (such as the iPlanet web server), some are administered through plain text configuration files (as in the Apache case[3]), and others use operating-system GUI tools (when using Microsoft’s IIS server or ASP.Net).


In most cases, however, the server configuration is handled using different tools than those used to maintain the files served by the web server, which are managed through FTP servers, WebDAV, network file systems (NFS, CIFS), or other mechanisms. Obviously, the operating system of the elements that make up the application architecture will also be managed using other tools. Applications may also have administrative interfaces embedded in them that are used to manage the application data itself (users, content, etc.).
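
In a blind test, mapping these interfaces often starts with probing well-known paths. A minimal sketch, with an illustrative base URL and a path list drawn from common products; a real review would use a wordlist appropriate to the technology stack. Note that a 401 or 403 response still reveals that an interface exists behind access control.

<pre>
# Minimal sketch: probe for common administrative interfaces over HTTP.
# The base URL and path list are illustrative assumptions.
import urllib.error
import urllib.request

BASE = "http://target.example.com"                     # hypothetical host
PATHS = ["/admin/", "/manager/html", "/web-console/",  # Tomcat, JBoss, ...
         "/phpmyadmin/", "/console/"]

for path in PATHS:
    try:
        with urllib.request.urlopen(BASE + path, timeout=5) as resp:
            print(path, "->", resp.status)
    except urllib.error.HTTPError as err:
        # 401/403 still indicate that an interface is present
        print(path, "->", err.code)
    except Exception as exc:
        print(path, "->", exc)
</pre>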


After having mapped the administrative interfaces used to manage the different parts of the architecture, it is important to review them, since an attacker who gains access to any of them can compromise or damage the application architecture. To do this it is important to:

  • Determine the mechanisms that control access to these interfaces and whether they are susceptible to attack. This information may be available online.
  • Change the default username and password (a verification sketch follows this list).
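
For the second item, the following minimal sketch checks whether documented default credentials still work against a hypothetical admin interface protected by HTTP Basic authentication. The URL and credential pairs are illustrative assumptions; consult the vendor documentation for the actual defaults of the product under review.

<pre>
# Minimal sketch: verify that default credentials no longer work on an
# admin interface using HTTP Basic auth. URL and pairs are assumptions.
import base64
import urllib.error
import urllib.request

URL = "http://target.example.com/admin/"   # hypothetical interface
DEFAULTS = [("admin", "admin"), ("admin", "password"), ("root", "root")]

for user, pwd in DEFAULTS:
    token = base64.b64encode(f"{user}:{pwd}".encode()).decode()
    req = urllib.request.Request(URL, headers={"Authorization": "Basic " + token})
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            print(user, pwd, "-> ACCEPTED (status", resp.status, ") - change these!")
    except urllib.error.HTTPError as err:
        print(user, pwd, "-> rejected (HTTP", err.code, ")")
</pre>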


Some companies choose not to manage all aspects of their web server applications and may have other parties manage the content delivered by the web application. These external parties might either provide only parts of the content (news updates or promotions) or might manage the web server completely (including content and code). It is common to find administrative interfaces available from the Internet in these situations, since using the Internet is cheaper than providing a dedicated line that connects the external company to the application infrastructure through a management-only interface. In this situation, it is very important to test if the administrative interfaces can be vulnerable to attacks.


References

  • [1] WebSEAL, also known as Tivoli Authentication Manager, is a reverse proxy from IBM which is part of the Tivoli framework.
  • [2] Such as Symantec’s Bugtraq, ISS’ X-Force, or NIST’s National Vulnerability Database (NVD).
  • [3] There are some GUI-based administration tools for Apache (like NetLoony) but they are not in widespread use yet.