
Posted by lee on Fri 7 Jan 2011 at 19:48
UPDATE: a later entry uses a simplified config.

Support for DKIM signing has been available in Exim since version 4.70, and the configuration supplied with Debian makes it fairly straightforward to implement. However, it suggests an all-or-nothing configuration in which all outgoing mail is signed with the same domain authority.

Where multiple domains are in use it may be necessary to switch DKIM signing on selectively, and to specify the signing domain per sender. The following provides a mechanism to do so within the standard Debian Exim configuration.

(This assumes that the keys have been created and the requisite records have been added to DNS for the affected domains. It also assumes a split config.)
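
If the keys still need creating, something along these lines should work; this is only a sketch, and the key size, path and selector name are assumptions chosen to match the transport further down (the key must also end up readable by the Exim user, Debian-exim):

# openssl genrsa -out /etc/ssl/private/dkim.key 2048
# chown root:Debian-exim /etc/ssl/private/dkim.key
# chmod 640 /etc/ssl/private/dkim.key
# openssl rsa -in /etc/ssl/private/dkim.key -pubout

The public key output, with its PEM header/footer and line breaks removed, goes into a TXT record for each signing domain named after the selector, e.g. yourhostname._domainkey.example.com with content along the lines of "v=DKIM1; k=rsa; p=MIIB...".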

Set up a simple lookup file such as /etc/exim4/dkim_senders:

*@example.com: example.com
test@example.org: example.org

This config should mean that anything sent from any address at example.com is signed as example.com, but only test@example.org will be signed with the example.org key. If default DKIM is not enabled, then no other example.org mail will be signed.

Now create a new router that sits in front of the main router for external mail (whatever uses remote_smtp as its transport, e.g. dnslookup), such as /etc/exim4/conf.d/router/180_local_primary_dkim (basically a copy of dnslookup with a modified transport):

dnslookup_dkim:
  debug_print = "R: dnslookup_dkim for $local_part@$domain"
  driver = dnslookup
  domains = ! +local_domains
  senders = lsearch*@;/etc/exim4/dkim_senders
  transport = remote_smtp_dkim
  same_domain_copy_routing = yes
  # ignore private rfc1918 and APIPA addresses
  ignore_target_hosts = 0.0.0.0 : 127.0.0.0/8 : 192.168.0.0/16 :\
                        172.16.0.0/12 : 10.0.0.0/8 : 169.254.0.0/16 :\
                        255.255.255.255
  no_more

Then add a new transport, /etc/exim4/conf.d/transport/30_local_remote_smtp_dkim (basically a modified version of remote_smtp):

remote_smtp_dkim:
  debug_print = "T: remote_smtp_dkim for $local_part@$domain"
  driver = smtp
.ifdef REMOTE_SMTP_HOSTS_AVOID_TLS
  hosts_avoid_tls = REMOTE_SMTP_HOSTS_AVOID_TLS
.endif
.ifdef REMOTE_SMTP_HEADERS_REWRITE
  headers_rewrite = REMOTE_SMTP_HEADERS_REWRITE
.endif
.ifdef REMOTE_SMTP_RETURN_PATH
  return_path = REMOTE_SMTP_RETURN_PATH
.endif
.ifdef REMOTE_SMTP_HELO_DATA
  helo_data=REMOTE_SMTP_HELO_DATA
.endif
  dkim_domain = ${lookup{$sender_address}lsearch*@{/etc/exim4/dkim_senders}}
  dkim_selector = yourhostname
  dkim_private_key = /etc/ssl/private/dkim.key
  dkim_canon = relaxed
  dkim_strict = false
  #dkim_sign_headers = DKIM_SIGN_HEADERS

I've left the selector and keys the same since there doesn't appear to be any problem sharing these across domains, but these could also be found via lookups if needed.

 

Posted by lee on Sun 5 Dec 2010 at 03:27

This assumes you want to allow uploads to a webhost from a third party who has generated an SSH public key for the purpose.

Set up the account

The following will create a new user and user directory in the standard location
sudo adduser --disabled-password --gecos 'rsync user' rsync01
Alternatively, the home can be set to an existing location as configured in Apache. (Note that this shouldn't itself be a directory served by Apache.)
sudo adduser --disabled-password --gecos 'rsync user' \
  --no-create-home --home /srv/web/example.com rsync01
Then add the id_rsa.pub file into the user's authorized_keys file
sudo su -l rsync01
mkdir -m 700 ~/.ssh
cat /tmp/id_rsa.pub >> .ssh/authorized_keys
chmod 600 .ssh/authorized_keys
mkdir ~/docs

Restricting further access

You'll want to tie the remote user to only using rsync and only in a specific sub-directory, so you probably want to install rrsync.

It's already included in the Debian distribution of rsync.

sudo cp /usr/share/doc/rsync/scripts/rrsync.gz  /usr/local/bin/
sudo gzip -d   /usr/local/bin/rrsync.gz
sudo chmod 755 /usr/local/bin/rrsync
Then modify the new user's authorized_keys
sudo vim ~rsync01/.ssh/authorized_keys
And prefix the key with a command option specifying the sub-directory to be used, e.g. ~/docs:
command="/usr/local/bin/rrsync docs" ssh-rsa AAA...

Note: by locking the command to the specified subdirectory, the "full path" from the point-of-view of the uploader is "/".
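
From the third party's side the upload then looks something like this (the local directory and hostname are placeholders); rrsync maps "/" onto ~/docs and refuses anything outside it:

rsync -av ./site-files/ rsync01@www.example.com:/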

 

Posted by lee on Mon 6 Jul 2009 at 16:16

My mail system has been generating a lot of log noise about temporary DNS failures recently. I took a look at the logs and tracked the issue down to a certain (apparently US-based) spammer sending mail out from domains with many MX records associated with them. So many, in fact, that the MX response exceeds the 512-byte limit for UDP, requiring that a TCP query then be made. It's the UDP failure before the TCP retry that's causing the warning in the logs.

While this is technically valid behaviour, it's very unusual and bad practice.

Firstly: TCP-only DNS is unreliable (especially in NAT environments) and considered wasteful, network-wise, if it can be avoided.

Secondly: If you actually need many backup MX records (and you probably don't), it's better to give multiple addresses to a few distinct host names. The algorithm for mail delivery requires going to each host name, not each IP address. In the event of issues on the MX servers, it's an unfair burden for a sender to iterate through each of many hosts before concluding that delivery is not currently possible.

I actually suspect the many-MX design to be some technique for bypassing anti-spam systems, but I don't have any clear example I can point to.

So for now, I'd just like to track them, and later possibly incorporate the information into an anti-spam heuristic.

I'm currently just tagging mails in an ACL, based on the number of MX records associated with the sender's domain. Oddly, for such a rich set of operators, Exim doesn't seem to have one that counts the number of items in a list. (Note: while this returns the number of MX records, it isn't conclusive in recording whether TCP was required for a DNS lookup.)

   warn    set acl_m_sender_mx_count = ${reduce {${lookup dnsdb{>: \
            mx=$sender_address_domain}}}{0}{${eval:$value+1}}}
           add_header = X-Sender-MX-Count: ${acl_m_sender_mx_count}

If I actually wanted to act on this information I can apply a test such as:

    condition = ${if >{$acl_m_sender_mx_count}{10}}
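
For a quick manual check of the count the ACL will see for a given domain (example.com here is just a stand-in), the same number can be reproduced from the shell:

$ dig +short mx example.com | wc -l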

 

Posted by lee on Wed 17 Sep 2008 at 11:31

Mail for a specific domain is passed to an external app via a custom router. When the external app fails, the router delays the delivery; but for the case where we need to do a live test on a new installation or configuration, we want to freeze any incoming mails and then selectively deliver them from the command line.

A custom router to freeze mail based on the existence of a specific file (in this example "/etc/exim4/eh-freeze") should be placed before the router that hands mail to the external app.

externalhandler_test_freeze:
   debug_print = "R: externalhandler_test_freeze for $local_part_prefix$local_part@$domain"
   condition = "${if exists{CONFDIR/eh-freeze}{true}{false}}"
   driver = redirect
   domains = +eh_domains
   user = www-data
   allow_filter
   allow_freeze 
   data = "#Exim filter \n freeze"

The freezing only works once: a mail manually thawed on the command line will bypass this router regardless of whether the "eh-freeze" file exists.
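
In practice the test cycle looks roughly like this (the message ID is a placeholder); -bp lists the queue, and -M forces a delivery attempt, thawing the message if it is frozen:

# touch /etc/exim4/eh-freeze
# exim4 -bp | grep frozen
# exim4 -M 1abcde-000001-XY
# rm /etc/exim4/eh-freeze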

 

Posted by lee on Mon 14 Jul 2008 at 19:17

If you're being deluged by backscatter email, there is a way to block at least some of it with Exim using a DNSBL. However, you need to treat these sources differently from normal spam sources.

A database of backscatter IPs is available for use via backscatterer.org but, as it warns, you'll want to use it in "SAFE" mode.

Firstly, if you don't already have one, you'll want to add a local ACL file for the RCPT ACL check. On a split config, add something like the following to /etc/exim4/conf.d/main/00_local_config:

CHECK_RCPT_LOCAL_ACL_FILE=/etc/exim4/local_acl_check_rcpt

Then edit this file, or your local equivalent, and add the following:

deny senders = :
     dnslists = ips.backscatterer.org
     log_message = $sender_host_address listed at $dnslist_domain
     message = Backscatter: $dnslist_text

The trick here is that the senders line contains a single colon, which matches the NULL sender used by the vast majority of bounce sources.

If you want to test it out before activating a deny rule, use a warn rule to begin with:

warn senders = :
     dnslists = ips.backscatterer.org
     log_message = $sender_host_address listed at $dnslist_domain
     message = X-Backscatter: $dnslist_text

Update the config with update-exim4.conf and restart the exim daemon to activate.
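
To check by hand whether a particular source address is currently listed (192.0.2.1 here is just an example; the octets are reversed, as with any DNSBL):

$ dig +short 1.2.0.192.ips.backscatterer.org

An answer (typically 127.0.0.x) means the address is listed; no answer means it isn't.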

Note: mail to postmaster is, by default, not affected by locally applied ACLs on a standard configuration. You'd need to make additional changes if you want to block backscatter sources from mailing postmaster - but this is not advised.

 

Posted by lee on Tue 17 Jun 2008 at 17:47

The standard Exim configuration for mail delivery in Debian is usually one of two methods: do DNS lookups and deliver direct to IP, or send via a smarthost.

In my specific case I want to keep the server configured to deliver via a smarthost, but for specific domains I want it to do direct delivery (bypassing an issue caused by delivery congestion on the smarthost).

The default Debian Exim config already has the concept of setting up manual routes for specific domains ("hubbed_hosts"), but what I want is dnslookup-based routing for specific domains while everything else continues to go via the smarthost.

The solution is to drop a new router into the config - conf.d/router/190_local_notsmart - that's a cross between hubbed_hosts and dnslookup.

notsmart_dnslookup:
  debug_print = "R: notsmart_dnslookup for $local_part@$domain"
  driver = dnslookup
  domains = "${if exists{CONFDIR/notsmart}\
               {partial-lsearch;CONFDIR/notsmart}\
            fail}"
  transport = remote_smtp
  same_domain_copy_routing = yes
  no_more

Now, if I add "example.com" to "/etc/exim4/notsmart", any mail to test@example.com (or test@foo.example.com) will be delivered directly rather than via the smarthost, but test@example.net will go via the smarthost.
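
Routing decisions like this can be checked without sending any mail, using Exim's address-testing mode (the addresses are just examples):

# exim4 -bt test@example.com
# exim4 -bt test@example.net

The first should be reported as handled by the notsmart_dnslookup router with the real MX hosts listed; the second should still show the smarthost router.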

 

Posted by lee on Thu 29 May 2008 at 15:09

If you're already familiar with using Apache as a reverse proxy between frontend and backend servers, then using mod_proxy_balancer to load-balance sites across multiple backend servers is straightforward.

Well, it's straightforward unless you need to track sessions - for example on a site with a shopping cart. Fortunately, you can use the "stickysession" parameter to keep sessions tied to specific backend servers. This uses the value of a session-tracking cookie to choose the correct backend server (or "worker") to use.

What the Apache documentation fails to make clear is that the session-tracking value has to have a specific format: it must end with a period followed by the backend server's route identifier.

If your cookie isn't in the right format, and you're not in a position to monkey with your app's session-management code, you're probably going to want to force a cookie to be set via the Apache config as suggested by Mark Round. This solution works fine, provided you can individually modify the config on each server - but the servers I'm using have identical configs managed centrally using a revision control system.

To get around this, I'm using a RewriteMap to do a table lookup on the server's IP address to get the correct route identifier. Once a session starts being tracked for stateful purposes, the next request gets a balancer cookie. The following appears in the configuration on each backend server for example.com:

RewriteEngine On
RewriteMap      routemap txt:/path/to/routemap
RewriteCond     %{HTTP_COOKIE} (existing_session_cookie) [NC]
RewriteRule     .* - [CO=balanceid:route.${routemap:%{SERVER_ADDR}|error}:%{HTTP_HOST}]

Note that this approach only starts working from the first request containing a session cookie. If this can cause a loss of state information (for example: login credentials) you'll need to either utilise an app-level cookie, or remove the RewriteCond and track all sessions.

Replace "%{HTTP_HOST}" with ".example.com" if the session needs to be sticky across subdomains.

The routemap file is maintained centrally and consists of simple IP address and route pairs:

10.16.4.4       www1
10.16.4.6       www2
10.16.4.7       www3
10.12.2.13      www0

Then the following is added to the apache config on the reverse-proxy:

<IfModule !proxy_balancer_module>
ProxyPass / http://backend.example.com/
</IfModule>
<IfModule proxy_balancer_module>
ProxyPass /balancer-manager !
ProxyPass / balancer://backend.example.com/ stickysession=balanceid nofailover=On
<Proxy balancer://backend.example.com>
  BalancerMember http://10.16.4.4:80 route=www1 loadfactor=60
  BalancerMember http://10.16.4.6:80 route=www2 loadfactor=60
  BalancerMember http://10.16.4.7:80 route=www3 loadfactor=60
  BalancerMember http://10.12.2.13:80 route=www0 loadfactor=20
  ProxySet lbmethod=byrequests
</Proxy>
<IfModule status_module>
<Location /balancer-manager>
  SetHandler balancer-manager
  Order Deny,Allow
  Deny from all
  Allow from 10.8.2.
</Location> 
</IfModule>
</IfModule>

Note that the extended parameters to BalancerMember are only read at Apache startup (or restart); a reload will have no effect, even for a fresh entry.

Also note that, if you're caching static content on the reverse proxy, you'll probably want to prevent the cookies from being cached. If you need to keep other cookies for some reason, you'll need to modify the RewriteCond to avoid adding balance cookies to cached content.

CacheIgnoreHeaders Set-Cookie
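
A quick way to confirm a backend is issuing the balancer cookie (the IP and cookie names match the examples above; the session cookie value is arbitrary) is to request a page from it directly and inspect the headers:

$ curl -sI -H 'Cookie: existing_session_cookie=test' http://10.16.4.4/ | grep -i set-cookie

The response should include a Set-Cookie header for balanceid with a value ending in .www1.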

 

Posted by lee on Thu 7 Feb 2008 at 13:05

One of the primary pieces of advice given to mail administrators regarding backscatter is "reject mail during the SMTP transaction rather than accepting the mail and generating a DSN (bounce)".

It's solid advice to be sure, but it breaks down in situations where mail is accepted by one system to be forwarded on to a system with stricter criteria for message acceptance. The two main scenarios are "dumb" secondaries, and primaries that rewrite certain addresses which then get forwarded on to remote systems with, for example, different policies regarding spam filtering. For the purposes of backscatter these are essentially the same thing.

Both of these scenarios are quite common but, since they complicate matters, their usefulness is routinely denied by the advocates of various email spam-control schemes. Secondaries are frequently deemed unnecessary because the internet is "more reliable these days" (hah!).

"Dumb" secondaries (secondaries that accept mail without performing the same acceptance checks as primaries) are commonly found on low profile machines not specced-out for the needs of mail filtering. It's not unusual, in my experience, for two companies on different networks to reciprocally host a box installed for the purpose of secondary MX and secondary DNS. It's also common for sites to use secondaries provided by a third-party, such as their upstream network provider.

(And I'd note that there are major email providers out there generating bounce messages on their secondaries; it's not just dusty legacy setups.)

So if I'm administering a machine using a secondary that's out of my administrative control, I usually put an exception in my transaction ACLs to blindly accept mails from secondaries in order to prevent them from generating DSNs (and possibly filling their queues with undeliverable bounces).

But then I'm faced with dealing with how to suppress the bogus DSNs.

My exim-fu is weak, and I don't know how to avoid generating bounces for bogus mail that has already been accepted. I'm not sure if any of the ACL rules can actually apply to internally generated DSN mail.

If there's an elegant solution, I don't know it yet.

My solution in the meantime is just to write custom routers for common scenarios.

Using the split config, I place custom routers after the local delivery routers, but before the remote delivery routers, e.g. /etc/exim4/conf.d/router/188_local_drop_backscatter .

I then use "match" regexps against $message_body. By default, $message_body is the first 500 bytes of the message body - this is usually enough to capture the rejection explanation at the top, but if you want to match something from the headers of the rejected mail you may need to set something like "message_body_visible = 2000" in the main section of the Exim config.
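
On a Debian split config that could go into a small main-configuration snippet, something like the following (the filename is only an assumption):

# /etc/exim4/conf.d/main/01_local_message_body
message_body_visible = 2000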

In the following example, assume your server doesn't have virus scanning, but you forward mail to one that does. I've taken the decision here that a scenario in which a bounce message contains "This message contains a virus" (the rejection message from the remote site) near the top of the message body will not be delivered to the listed recipient, but will instead be dropped.

drop_backscatter_virus:
   debug_print = "R: drop_backscatter_virus for $local_part@$domain"
   driver = redirect
   senders = :
   domains = ! +local_domains : ! +relay_to_domains
   data = :blackhole:
   condition = "${if \
       match{$message_body}{This message contains a virus} {true} \
   }"
   allow_defer
   allow_fail
I suppose this approach can be further enhanced by customizing bounce messages to include matchable hints.

 

Posted by lee on Thu 21 Jun 2007 at 19:37

I recently started using an LDAP addressbook, and given my correspondents are automatically added into it, I decided to use it for whitelisting in my Exim user filter.

Firstly I added an entry into the LDAP server for exim and assigned it read access to the Addressbook database.

Then I added the following into my user exim filter file. I've placed it below the entries dealing with mailing lists, so lookups only happen for non-list (or "direct") email.

if "${lookup ldap {\
      user=\"cn=exim,dc=example,dc=com\" pass=TRUSTNO1 \
      ldap://localhost/o=Addressbook?cn?sub?(mail=${address:$h_From:}) \
      }{yes}{no}}" is "yes"
then
   #logwrite "userfilter_whitelist_ldap ${address:$h_From:}"
   save /var/mail/foo/bar/whitelist/
   finish
endif

Basically the filter extracts an email address from the "From:" header (alternatively the $reply_address or $sender_address variables could be used) and checks to see if there is a corresponding name listed in the database.
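
The same lookup can be sanity-checked from the shell using the exim bind account (password and test address as in the example above):

$ ldapsearch -x -D "cn=exim,dc=example,dc=com" -w TRUSTNO1 \
    -b "o=Addressbook" "(mail=someone@example.com)" cn

A cn entry in the output corresponds to the "yes" branch of the filter condition.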

 

Posted by lee on Tue 19 Jun 2007 at 10:45

This post details the steps needed to set up an LDAP server for use as an Evolution network addressbook. Note, I'm not an LDAP expert by any stretch so suggestions/fixes are welcome.

The first thing to do, before installing the server, is to use the TCP wrapper support to restrict LDAP access to just the ranges you need, for example 192.0.2.0/24. In /etc/hosts.deny:

ldap: ALL

And in /etc/hosts.allow:

ldap: 192.0.2.

Alternatively you can make arrangements at your relevant firewalls to allow traffic on port 389 (TCP).

Then install the LDAP packages:

# apt-get install slapd ldap-utils

Aside from the admin password, I accepted all of the default configuration choices. My "base" is "dc=example,dc=com". Once slapd is installed, add a user account, e.g. for John Smith, first by creating the file john.ldif:

# Organisational Unit, only needed once
dn: ou=People,dc=example,dc=com
objectclass: organizationalUnit
ou: People

dn: cn=John Smith,ou=People,dc=example,dc=com
cn: John Smith
sn: Smith
givenname: John
objectclass: top
objectclass: person
objectclass: organizationalPerson
objectclass: inetOrgPerson
ou: People
mail: smith@example.com
userpassword: topsecret

This is then added into slapd using the admin password

$ ldapadd -x -D "cn=admin,dc=example,dc=com" -W -f john.ldif

Now I add TLS support. There's no out-of-the-box TLS set-up in the Debian package I installed (as you might get with MTA/MRA packages). For our purposes we don't need a properly signed certificate; a self-signed "snakeoil" cert will be OK. I already had a self-signed certificate set up for something else, so I just copied it to ldap.pem.

You need to protect the private key, and add openldap to the ssl-cert group (if you're not using the debian setup, other key access methods may vary).

# chown root:openldap /etc/ssl/private/ldap.pem
# chmod 640 /etc/ssl/private/ldap.pem
# adduser openldap ssl-cert

Then you need to add the following to the "global" section of /etc/ldap/slapd.conf:

TLSCACertificateFile    "/etc/ssl/certs/ssl-cert-snakeoil.pem"
TLSCertificateFile      "/etc/ssl/certs/ldap.pem"
TLSCertificateKeyFile   "/etc/ssl/private/ldap.pem"
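
Once slapd has been restarted, the TLS setup can be tested from a client (ldap.example.com is a stand-in for your server); -ZZ makes ldapsearch insist on a successful StartTLS negotiation, and with a self-signed cert the client may also need TLS_REQCERT allow, or the CA cert, configured in /etc/ldap/ldap.conf:

$ ldapsearch -x -ZZ -H ldap://ldap.example.com -b "" -s base namingContexts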

You can have multiple separate databases served by the LDAP server, and you could make the addressbook part of the primary database. But to simplify things, I've set it up as a separate database.

Firstly set up a directory. You can use a subdirectory of /var/lib/ldap/ but I prefer to use /srv.

# mkdir -p /srv/ldap/addressbook
# chown openldap:openldap /srv/ldap/addressbook
# chmod 700 /srv/ldap/addressbook

Now I add the configuration section for the addressbook database to the bottom of /etc/ldap/slapd.conf

database        bdb
suffix          "o=Addressbook"
rootdn          "cn=admin,dc=example,dc=com"
directory       "/srv/ldap/addressbook/"
schemacheck     on
lastmod         on
#
dbconfig set_cachesize 0 2097152 0
dbconfig set_lk_max_objects 1500
dbconfig set_lk_max_locks 1500
dbconfig set_lk_max_lockers 1500
#
access  to dn="" by * read
access  to dn="cn=Subschema" by * read
access  to *
        by dn="cn=admin,dc=example,dc=com" write
        by dn="cn=John Smith,ou=People,dc=example,dc=com" write
        by anonymous auth
        by * none
index           cn,mail,sn,givenname    eq,subinitial

We've given the "John Smith" account write access to this LDAP database, but other accounts, including anonymous in a LAN-only set-up, could just be given read access.

Restart slapd to enable the new database.

# /etc/init.d/slapd restart

Then, before you can add new entries, create addressbook.ldif

dn: o=Addressbook
objectclass: organization
o: Addressbook
description: Online Addressbook

Then load it in with the admin password:

$ ldapadd -x -D "cn=admin,dc=example,dc=com" -W -f addressbook.ldif

Assuming everything went OK during this setup, you should now have an operational LDAP addressbook.

To try it out, open Evolution's Contact tab and select "New Address Book".

Set the type as "LDAP" and put your server name under server information. If you're using it across the Internet and have TLS set up, ensure TLS encryption is selected (it should be by default). Click on the "Details" tab and click "Find possible search bases". This should return the base DN of both databases - if it doesn't there's a problem somewhere (sorry!). Select "o=Addressbook". Set the search scope to "Sub".

Go back to the "General" tab and set the login method to "Using distinguished name" and set the Login to

cn=John Smith,ou=People,dc=example,dc=com

Then click OK. You'll then be prompted for a password (the one set in john.ldif, not the LDAP admin password).

There'll be no entries in the Addressbook yet, so create a new contact and set the "Where" to be your LDAP addressbook. Or, to pull down a few entries search for fields containing "@".

You'll notice that some of the properties are "greyed out", uneditable. This is because they're not covered by the standard schemas.

If you have a copy of evolutionperson.schema somewhere (e.g. in /usr/share/evolution-data-server-1.12/), copy it into /etc/ldap/schema/, add the following to the global section of /etc/ldap/slapd.conf, and restart slapd:

include         /etc/ldap/schema/evolutionperson.schema

This will allow Evolution-specific fields to be stored in your LDAP database [UPDATED: actually it doesn't seem to work; more research needed], but these won't be available from other apps using LDAP (Mozilla Thunderbird, for example, also requires a specific schema. Ugh.)

Another schema is needed for the iCal related fields (TODO) and the Instant Messenger fields (?!).