Elasticsearch with NGINX

Posted on 2022-03-25 in technical • 3 min read

For quite a while, I have been running a five-node Elasticsearch cluster internally. The nodes all talk to each other happily, but there is no authentication. I wanted to add some security with SSL and authentication. I checked out the Elastic documentation, and the thought of managing a bunch of SSL certs across all the nodes sounded like a nightmare I didn't want to deal with. We already use HAproxy for SSL termination, so I would rather just channel everything through that.

HAproxy will take care of the SSL part, but I also wanted to authenticate the front-end users and deny access if the user isn't part of an allowed group. I know Apache can do basic auth with mod_authnz_ldap, but I wanted to use NGINX because it's quite a bit lighter in terms of server resources. I found an NGINX blog post that detailed how to set up an auth server (which I didn't know was a thing, and which has now completely changed my NGINX life :) ).
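The trick is NGINX's auth_request module: every request to a protected location triggers an internal subrequest to an auth endpoint, and the request is only proxied on if that subrequest returns a 200. Stripped down to the bare pattern (the hostname and backend port here are placeholders, not my real setup), it looks like this:

server {
    listen 80;
    server_name protected.example.com;

    location / {
        # Only proxy the request if the /auth subrequest returns 200;
        # a 401 or 403 from the auth service goes back to the client instead
        auth_request /auth;
        proxy_pass http://127.0.0.1:8080;
    }

    location = /auth {
        # Internal only; clients can't request /auth directly
        internal;
        proxy_pass http://auth.example.com/;
        # The auth service only needs headers, not the request body
        proxy_pass_request_body off;
        proxy_set_header Content-Length "";
    }
}

The full configs below are just this pattern, plus the LDAP plumbing.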

The Setup

There are essentially three parts here: the auth service, which points to the nginx-ldap-auth daemon; HAproxy, which terminates the connections and handles SSL; and finally the Elasticsearch server itself.

nginx-ldap-auth

I am currently running this on a stand-alone server, which also sits behind HAproxy. It isn't using SSL yet, but I will probably add a cert to it eventually. The basic configuration is:

proxy_cache_path cache/ keys_zone=auth_cache:10m;

 server {
   server_name auth.example.com;
   listen 80;

   location = / {
     proxy_pass http://127.0.0.1:8888;
     proxy_cache auth_cache;
     proxy_cache_valid 200 10m;

     # URL and port for connecting to the LDAP server
     proxy_set_header X-Ldap-URL "ldaps://ldap.example.com:636";

     # STARTTLS isn't needed here, since the URL above already uses ldaps://
     proxy_set_header X-Ldap-Starttls "false";

     # Base DN
     proxy_set_header X-Ldap-BaseDN "dc=example,dc=com";

     # Bind DN
     proxy_set_header X-Ldap-BindDN "cn=some bind user,dc=example,dc=com";

     # Bind password
     proxy_set_header X-Ldap-BindPass "someSecretPassword";

     proxy_set_header X-Ldap-Template "(uid=%(username)s)";
   }
 }

By default, the X-Ldap-Template is (cn=%(username)s), but we use uid as the username, so I had to change this. Additional filters can (and eventually will) be added here to further restrict access to a particular group of accounts, with something like (&(uid=%(username)s)(memberof=cn=somegroup,dc=example,dc=com)).
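For reference, that group restriction would just be a drop-in replacement for the template header above (the group DN is an example, not a real one):

     # Only allow accounts that belong to a specific group
     proxy_set_header X-Ldap-Template "(&(uid=%(username)s)(memberof=cn=somegroup,dc=example,dc=com))";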

HAproxy

The config for this is:

frontend http-in-80
    bind 10.0.0.2:80
    # Plain-HTTP requests for the auth service go straight to its backend;
    # everything else is bounced to HTTPS.
    acl is_auth hdr(host) -i auth.example.com
    use_backend farm-auth-80 if is_auth
    redirect scheme https code 301 if !is_auth !{ ssl_fc }

frontend http-in-elastic-443
    bind 10.0.0.2:443 ssl crt kibana_example_com.pem crt elastic_example_com.pem
    default_backend farm-elastic-80

backend farm-elastic-80
    server elasticserver 10.0.0.3:80 check

backend farm-auth-80
    server auth auth.example.com:80 check
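Before reloading HAproxy, the config can be sanity-checked (assuming the usual /etc/haproxy/haproxy.cfg location; adjust the path to your setup):

haproxy -c -f /etc/haproxy/haproxy.cfg

It should report that the configuration file is valid before you reload the service.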

NGINX

On the Elasticsearch server itself, there is the following NGINX configuration. I have split the config into two files: one for elastic and one for kibana.

Config for elastic:

upstream "elastic" {
    server 127.0.0.1:9200;
    keepalive 15;
}

server {
    listen 80;
    server_name elastic.example.com;

    location / {
      auth_request /auth;

      proxy_pass http://elastic;
      proxy_redirect off;
      proxy_buffering off;
      proxy_read_timeout 90;

      proxy_http_version 1.1;
      proxy_set_header Connection "Keep-Alive";
      proxy_set_header Proxy-Connection "Keep-Alive";
    }

    location = /auth {
      internal;
      proxy_pass http://auth.example.com/;
      proxy_pass_request_body off;
      proxy_set_header Content-Length "";
      proxy_set_header X-Original-URI $request_uri;
      proxy_set_header X-Original-Remote-Addr $remote_addr;
      proxy_set_header X-Original-Host $host;
    }
}

The configuration for kibana is identical except for the server name and upstream server. I could probably do something more clever here, but it works the way it is, so I'm going to leave it:

upstream "kibana" {
    server 127.0.0.1:5601;
}

server {
    listen 80;
    server_name kibana.example.com;

    location / {
      auth_request /auth;

      proxy_pass http://kibana;
      proxy_redirect off;
      proxy_buffering off;

      proxy_http_version 1.1;
      proxy_set_header Connection "Keep-Alive";
      proxy_set_header Proxy-Connection "Keep-Alive";
    }

    location = /auth {
      internal;
      proxy_pass http://auth.example.com/;
      proxy_pass_request_body off;
      proxy_set_header Content-Length "";
      proxy_set_header X-Original-URI $request_uri;
      proxy_set_header X-Original-Remote-Addr $remote_addr;
      proxy_set_header X-Original-Host $host;
    }
}
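On the "something more clever" point above: the duplicated /auth location could be pulled out into a shared snippet and included from both server blocks. A rough sketch, using a hypothetical /etc/nginx/snippets/ldap_auth.conf:

# /etc/nginx/snippets/ldap_auth.conf (hypothetical path)
location = /auth {
    internal;
    proxy_pass http://auth.example.com/;
    proxy_pass_request_body off;
    proxy_set_header Content-Length "";
    proxy_set_header X-Original-URI $request_uri;
    proxy_set_header X-Original-Remote-Addr $remote_addr;
    proxy_set_header X-Original-Host $host;
}

Each server block would then carry a single include snippets/ldap_auth.conf; line in place of its own copy of that location.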

What happens on the client side?

Any request to the elastic server is now gated by NGINX's auth_request directive, so if you attempt to connect without a username and password, you will get a 401 back. You can specify the credentials in the URL, like https://user:password@elastic.example.com, or, if you are using curl, with curl -u "user:pass" https://elastic.example.com.
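For example (someuser and somepassword are placeholders for a real LDAP account):

# Without credentials, the auth subrequest fails and NGINX returns a 401
curl -i https://elastic.example.com/

# With valid LDAP credentials, the request is proxied through to Elasticsearch
curl -u "someuser:somepassword" "https://elastic.example.com/_cluster/health?pretty"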