Current REAPOFF Examples

REAPOFF is very flexible and configurable. By choosing a set of rules from the template files it is possible to build many different types of proxies, each serving a different purpose. However, the many choices may appear confusing to the novice. For this reason REAPOFF provides a number of examples in the examples/ directory. These examples are common configurations that serve particular purposes or illustrate a particular feature of REAPOFF.

These examples are documented here. Users are welcome to contribute specific examples to be put in this directory. Note that by convention each example file holds one proxy, or a number of related proxies that perform a particular task; a real configuration file will typically hold many such proxies. If you want to add these examples to your configuration file, use the "File/Merge File" menu option.


This proxy demonstrates a web filtering proxy, similar to junkbuster or webwasher. As usual we use policies to define the type of things we can do, and then select those policies according to certain conditions:
  1. The default policy in this case is ad_block. This takes care of blocking certain sites and URLs with specified substrings in them.
  2. The second policy is rewriting. This policy removes certain headers, removes ActiveX and rewrites certain words in the HTML body returned.
  3. Finally the last policy is fragile. This represents those sites which should not be tampered with in any way.
  4. When the rules are evaluated in order, the first Policy Selection by URL rule selects the rewriting policy for all sites with the word sourceforge in them. The policy is not executed yet.
  5. The second Policy Selection by URL rule selects the fragile policy for a more specific site. Note that in this case that site is fragile, while other matching sites, for example, will still be rewriting.
  6. Note how the policies are merged into one another, for example, the default policy also has all the other policies applied to it. This makes it easy to build hierarchical policies from less restrictive to more restrictive.
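The selection-and-merge behaviour above can be sketched in plain Python. This is a minimal illustration with hypothetical names, not REAPOFF's actual engine: rules are checked in order, the first match selects a policy, and each policy inherits the actions of every less restrictive policy below it in the chain.

```python
# Policies ordered from least to most restrictive; each policy's own
# actions are listed separately, and the effective action set for a
# policy is the merge of everything up to and including it.
POLICY_CHAIN = ["fragile", "rewriting", "ad_block"]

ACTIONS = {
    "fragile":   [],  # fragile sites are passed through untouched
    "rewriting": ["strip_headers", "remove_activex", "rewrite_body"],
    "ad_block":  ["block_ad_urls"],
}

# (substring, policy) pairs, checked in order; first match wins.
RULES = [
    ("sourceforge", "rewriting"),
]

def select_policy(url, default="ad_block"):
    """Return the policy name for a URL; the default applies when no rule matches."""
    for pattern, policy in RULES:
        if pattern in url:
            return policy
    return default

def effective_actions(policy):
    """Merge the actions of the selected policy with every policy below it."""
    idx = POLICY_CHAIN.index(policy)
    merged = []
    for name in POLICY_CHAIN[: idx + 1]:
        merged.extend(ACTIONS[name])
    return merged
```

With this chain, the default ad_block policy carries the rewriting actions as well, while a site selected as fragile gets no actions at all.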


This proxy implements an authenticator service. Basically it acts like an HTTP server: when a browser connects to it, the user is asked for credentials. Depending on these credentials, certain programs will be executed. In this case, if the user authenticates successfully, a plug proxy is launched allowing connections from that exact source IP address to a destination server inside the network.

This mechanism is very useful for remote administration. For example, suppose that you really need to allow ssh access to a server inside the DMZ from the internet. If you create a permanent iptables rule or a permanent proxy, then your ssh server will be susceptible to every script-kiddie scanning tool out there. When a new vulnerability appears in ssh, you will be immediately vulnerable. The risk can be mitigated by reducing the exposure of the ssh server to the internet: only after successful authentication is the ssh server accessible.

As usual we define a policy first and then specify the conditions under which the policy will apply:

  1. The only policy here is Administrator. We could, however, have several more policies if we need more than one service exposed. This policy writes a welcome message and then executes a command in the foreground.
  2. The command executed will start a plug proxy. The proxy will connect to a destination on port 22 (ssh) and will wait for a connection for 30 seconds. The plug will listen locally on port 222. Note that only connections from the source IP address the web request came from will be allowed.
  3. After the command finishes executing (i.e. the plug proxy is terminated) the message OK finished is echoed to the user. This way the user can tell when the tunnel is terminated.
  4. The main proxy configuration is geared toward serving up pages and requesting authentication from the user. The Present Page Auth screen rule requires the user to present authentication for any URL they requested.
  5. The Policy selection by Authentication rule tests the credentials returned. Usernames and passwords are given as regular expressions in a colon-separated line, for example /^root:letmein$/. If these are correct, the policy Administrator is selected and executed.
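The two pieces this example combines can be sketched as follows. This is a hedged illustration with hypothetical function names, not REAPOFF's API: a regex check of a colon-separated credential line, and a one-shot "plug" that forwards a single connection to an inner server, accepting only the source IP that authenticated.

```python
import re
import socket
import threading

def check_credentials(line, pattern=r"^root:letmein$"):
    """Match a "user:password" line against a regex, as in the example rule."""
    return re.match(pattern, line) is not None

def run_plug(listen_port, allowed_ip, dest_host, dest_port, timeout=30):
    """Listen once on listen_port for up to `timeout` seconds; if the peer's
    address is allowed_ip, pipe bytes both ways to dest_host:dest_port."""
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.settimeout(timeout)
    srv.bind(("", listen_port))
    srv.listen(1)
    try:
        client, (peer_ip, _) = srv.accept()
    except socket.timeout:
        return
    finally:
        srv.close()
    if peer_ip != allowed_ip:           # only the authenticated source IP
        client.close()
        return
    upstream = socket.create_connection((dest_host, dest_port))

    def pipe(src, dst):
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
        dst.close()

    threading.Thread(target=pipe, args=(client, upstream)).start()
    pipe(upstream, client)
```

A call like `run_plug(222, "203.0.113.5", "dmz-host", 22)` would correspond to the example's plug: listen on port 222, accept one connection from the authenticated address, and tunnel it to ssh.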


The problem with the previous Authenticator scheme is that the passwords are passed in the clear. One way to mitigate this is to allow Authenticator.conf to run in an SSL-enabled proxy. However, this assumes there are no vulnerabilities in the OpenSSL library which REAPOFF is using.

A better idea is to use one-time passwords. These allow the administrator to carry a list of passwords with them; once a password has been used, it can never be used again. So even if a password has been compromised, it is useless to the attacker. This feature allows one to use cleartext for the authentication screens, and makes the system far simpler.

One-time passwords are implemented in REAPOFF via the Policy selection by one time passwords rule. This rule can be found in the HTTP family.

This rule requires three parameters: a password file containing the list of all one-time passwords, a filename in which to store the used passwords, and a policy to select. The remainder of the proxy is identical to the Authenticator.conf proxy.

In order to generate a good sequence of one-time passwords, REAPOFF comes with a helper shell script. The script takes two arguments, the username and the number of passwords to generate, for example:

bash$ ./ root 4
The output may be redirected to the password file and printed on a card. Note that you must keep the passwords secure. One-time passwords are cryptographically perfect, but very difficult to manage and transfer.
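The generate-then-retire idea behind the helper script and the selection rule can be sketched like this. The function names are hypothetical, not the bundled script: generate N random "user:password" lines, and consume a password by moving it from the live list to the used list so it can never match again.

```python
import secrets
import string

ALPHABET = string.ascii_lowercase + string.digits

def generate(user, count, length=8):
    """Return `count` lines of "user:password", analogous to the helper
    script's two arguments (username, number of passwords)."""
    return [
        "%s:%s" % (user, "".join(secrets.choice(ALPHABET) for _ in range(length)))
        for _ in range(count)
    ]

def authenticate(line, live, used):
    """Accept `line` only if it is still live, then retire it; the used
    list plays the role of the old-passwords file."""
    if line in live and line not in used:
        live.remove(line)
        used.append(line)
        return True
    return False
```

Note that a password succeeds exactly once: a second attempt with the same line fails even though it was valid a moment earlier.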


This proxy implements a typical configuration in an enterprise. It demonstrates how policies and selection rules can be used to implement a fine-grained, yet easy to understand configuration:
  1. There are 3 general policies defined:
    The first blocks particular sites, both optionally and unconditionally.
    The second blocks ActiveX; it also chains to another proxy via the Hand off proxy directive.
    The third allows unfettered access via a handoff proxy, but requires authentication.
  2. Policies are selected first by URL. The effect of this is that to reach certain URLs, you must be qualified to use the admin policy.
  3. A power_users policy may be selected by authentication. In order to authenticate, power users must first go to the URL http://reapoff/authenticate, which is a bogus URL. Otherwise they will never be asked to authenticate.
  4. Once power users authenticate they may use the power_users policy, which allows them to bypass the blocking of URLs.
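The power_users flow above can be sketched as a small decision function. All names here are hypothetical illustrations, not REAPOFF configuration: hitting the bogus URL triggers an authentication challenge, and once a request carries accepted credentials the power_users policy bypasses URL blocking.

```python
AUTH_URL = "http://reapoff/authenticate"     # the bogus trigger URL
BLOCKED_URLS = ("ads.example",)              # assumed blocked pattern

def handle(url, credentials=None, authenticated_ok=False):
    """Return the action taken for a request under the enterprise example's rules."""
    # The bogus URL exists only to force the authentication screen.
    if url == AUTH_URL and credentials is None:
        return "request-auth"
    policy = "power_users" if authenticated_ok else "default"
    # The default policy blocks matching URLs; power_users bypasses blocking.
    if policy != "power_users" and any(b in url for b in BLOCKED_URLS):
        return "block"
    return "allow"
```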


This proxy is designed to relay all SMTP connections to an ISP mail server. This creates a "virtual server" as illustrated in the diagram below:

                                  +----------+      +--------+
--------------Internet----------> | Firewall |<-----| Domain |
                                  +----------+      +--------+
                                       |
                                       v
                                  +-------------+
                                  | ISP         |
                                  | Mail Server |
                                  +-------------+
If a connection arrives from the internet to our firewall, our firewall will relay that connection to the ISP's mail server. The internet mail server believes that our firewall is a real mail server and continues with the SMTP transaction. The ISP's mail server believes that we want to send mail, and so will receive the email for us. This configuration allows our firewall to act like a virtual server.

Not only can our firewall accept mail, it can act for a completely different domain than the ISP's mail server. For example, our firewall may advertise one domain, while the ISP's domain is actually another.

REAPOFF can rewrite email addresses so that emails addressed to our domain will be relayed to the ISP's mail server transparently. This is very similar to a relaying SMTP server (e.g. Gauntlet's smapd, or sendmail) but much simpler to implement:

  1. There are 2 policies in this proxy.
  2. The first rule limits the SMTP commands that the proxy will accept. Commands such as VRFY and EXPN will not be allowed.
  3. Next we select which policy to apply depending on the source IP address of the connection. If the connection arrives from the 192.168.1.x address space, it is deemed to be part of the domain and relaying is allowed.
  4. For all other connections, we deem those to be incoming connections, and connect them directly to the ISP mail server.
  5. We define a series of translations replacing certain addresses with addresses on the real mail server. Note that we also disallow addresses destined directly for the real mail server by changing their domain to "no_relay_here".
  6. In the Anti Relay Rule we specify those domains for which we will accept relaying. We must allow our own domain here, otherwise none of the redirected email addresses will work. Here we specify the domains we wish to service. (Note that we obviously also need MX records for those.)
This configuration is most suitable for diskless machines that still need to offer SMTP services. Since they are diskless, there is nowhere to store the mail, and a real mail server cannot be installed. Note that we don't really need to do anything special on the ISP's mail server, which in fact does not know that it is servicing a couple of other domains. This makes the setup simple and easy to use in an ISP situation, because all you need to do is buy an MX record for your own domain and let your ISP manage the rest.
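The command filtering, address translation, and anti-relay checks described above can be sketched as three small functions. The domain names and rule lists here are invented for illustration; REAPOFF's own rule syntax is not shown.

```python
# Translations are applied first-match. The second rule pushes mail that
# is addressed straight to the real server into a dead domain, which is
# the "no_relay_here" trick from step 5.
TRANSLATIONS = [
    ("@ourdomain.example", "@isp-mail.example"),   # assumed names
    ("@isp-mail.example", "@no_relay_here"),
]

SERVICED_DOMAINS = {"ourdomain.example"}   # domains we accept relaying for

def accept_command(verb):
    """Reject dangerous SMTP verbs, as the first rule does."""
    return verb.upper() not in {"VRFY", "EXPN"}

def translate(addr):
    """Rewrite a recipient address using the first matching translation."""
    for old, new in TRANSLATIONS:
        if addr.endswith(old):
            return addr[: -len(old)] + new
    return addr

def accept_recipient(addr):
    """Anti-relay check: accept only recipients in domains we service."""
    domain = addr.rsplit("@", 1)[-1]
    return domain in SERVICED_DOMAINS
```

A recipient in our own domain passes the anti-relay check and is rewritten toward the ISP's server; anything else is refused, so the proxy cannot be abused as an open relay.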


This is probably the most interesting of all REAPOFF proxies. I know of only one other proxy that can do this at this time, and it's commercial. Most HTTP content filters are completely blind when the user is communicating over SSL: the communication is encrypted and is simply relayed through the proxy. One of the most dangerous services is IE5.5's web folders, which implement WebDAV over SSL. This allows users to mount remote shares over SSL and upload and download any files to those shares with no auditing or content controls.

REAPOFF is capable of doing a "man in the middle" attack, decrypting the traffic on the firewall, inspecting it and re-encrypting this traffic. The result is that the traffic can be audited as well as controlled over SSL. The details about how this process actually happens can be found in the documentation, but here we will see how the example configuration is built.

Basically there are 2 proxies in this configuration, the regular HTTP proxy and the SSL proxy. In this example there are no policies, that is, everyone is treated the same, but policies would typically be added in a real installation. There is nothing exciting about the standard HTTP proxy, except for the rule "Full, Non-transparent SSL proxy".

This rule relays SSL connections to the SSL master, a separate process which manages SSL connections. We need to specify which port the master should listen on. The master is usually bound to localhost, since only the HTTP proxy ever needs to talk to it. Finally, the "SSL proxy name" is the name of the proxy which will be invoked to tunnel the SSL connections. When a new SSL request is made, the master will launch the SSL proxy as a point-to-point plug proxy.

The SSL proxy is just a standard HTTP proxy and can contain all the regular rules found in the HTTP family. The difference is that since the master launches this proxy point to point, it does not need an HTTP requests rule or a handoff proxy rule.
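The man-in-the-middle flow can be sketched with Python's standard ssl module. This is a greatly simplified, one-directional illustration, not REAPOFF's master/proxy protocol: terminate the client's SSL session with our own certificate, open a fresh SSL connection to the real server, and pass the decrypted bytes through an inspection hook in between.

```python
import socket
import ssl

def inspect_log(data, log):
    """Example inspection hook: record what we saw, pass the bytes through."""
    log.append(data)
    return data

def intercept(listen_port, server_host, certfile, keyfile, inspect, log):
    """Accept one SSL client, decrypt, inspect, and re-encrypt toward the server."""
    ctx_in = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx_in.load_cert_chain(certfile, keyfile)   # our own cert, not the server's
    ctx_out = ssl.create_default_context()

    srv = socket.socket()
    srv.bind(("", listen_port))
    srv.listen(1)
    raw, _ = srv.accept()
    client = ctx_in.wrap_socket(raw, server_side=True)   # decrypt client side
    upstream = ctx_out.wrap_socket(
        socket.create_connection((server_host, 443)),
        server_hostname=server_host,
    )
    while True:
        data = client.recv(4096)                 # plaintext is visible here
        if not data:
            break
        upstream.sendall(inspect(data, log))     # audit / filter, then re-encrypt
    client.close()
    upstream.close()
```

The key point is the two independent SSL contexts: the client trusts (or is made to trust) our certificate, while we make an ordinary outbound SSL connection to the real server, so cleartext exists only inside the proxy where it can be audited.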

Last modified: Sun Nov 10 00:23:08 EST 2002