28 September 2015

The Day the DNS Died / BIND Triage Server Array

(Credit Jon Watson)

DNS is such a pivotal, central part of not just internet browsing, but of everyone's IT infrastructure.... and yet, when it died, I was left to eulogize, alone, at its funeral.  I don't mean to be melodramatic, and I am not going to start singing any Don McLean songs, but I do want to complain.  I AM the IT department, so when something like this breaks and I come up with a great solution, I have no one to share it with.

The best I ever get to do is give analogies... and since I usually send them before lunch, they tend to involve food or cars, or both:

Our internet is running slow... Technically, our name resolution is running slow.  The internet connection itself is now much faster.  It’s kind of like a pizza delivery guy who drives an Aston Martin DB9 with a supercharged V8, but doesn’t know the area, doesn’t have a phone book, and has a GPS device from the ’90s whose batteries have run out.  We have been having collisions on the network, and our DNS (internet phone book) is trying to resolve every one of them.  I am loading up a second DNS server right now and separating internal and external traffic so we can get through testing, and you all can take advantage of our faster internet.  Please be patient this morning and I will keep you all up to date.
- Tom
We should probably start at the beginning.  I am the sysadmin for an "Urban-Ring" school in the Midwest.  We have a fairly standard setup: fiber running from every rack and between every building, and a standard AD Windows domain with backup DCs.  Our DHCP server is also one of the Domain Controllers (DCs).  It feeds both the primary Windows DNS server and the slave.  Because of the continual need for both internal and external resolution of names and addresses, our Windows machines serve up everything, and they are the only DNS servers our clients see.  We have a SonicWall with content filter, firewall, etc. for a gateway.  All very standard.

I am very happy with our Windows servers.  They are easy to manage, they are scalable, and they have always done exactly what I wanted them to.  We have several virtual networks on our one physical network, and we are in the process of running more fiber so that we will have two more physical networks.  The problem arrived when we doubled the clients on the network: we are in the middle of switching our security cameras to IP cameras, and we just added 55 iPads to the system.  We have the IP addresses, and we have the outgoing bandwidth.  But one afternoon, everything screeched to a halt.

I was frantic.  I ran every test I knew and ruled out the obvious causes.  In the end I found an overworked primary DNS server, a useless secondary, and lots of collisions on the network.  I needed a fast solution so that we could get through some standardized testing and keep our IT services going until I could finish our expansion and put things right.

My solution was a BIND Triage Server Array.  You will not find anything on Google about a BIND TSA, but it seemed like a good name for what I was doing.  Essentially, I needed a few forward-only DNS servers to separate our internal traffic from our outgoing traffic, and get each to the right place.  I wanted to continue using my DHCP and AD DCs for internal resolution because it is more efficient and verbose for Windows machines.  All of our iPads and such were not going to be authenticated by the firewall and content filter at the SonicWall.  BIND is a fast, easy solution.

Here is how I did it:

Spin up CentOS 7, minimal install, headless.

Give the machine a static IP during the install.  Its DNS server is going to be changed later; at this point, use your standard DNS so that it can download additional programs.

Once you are up, install BIND and its utilities, nano (a text editor), wget (to update your root hints), and tcpdump (to monitor things).

$ sudo yum install bind bind-utils
$ sudo yum install nano
$ sudo yum install wget
$ sudo yum install tcpdump

Or use the quick and clean method of installing them all at once:

$ sudo yum install -y bind bind-utils nano wget tcpdump

The first thing I do, before I start editing my config file, is make sure my root hints are up to date.

$ sudo wget --user=ftp --password=ftp ftp://ftp.rs.internic.net/domain/db.cache -O /var/named/named.root
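The root hints file changes only rarely, but it does change. A small monthly cron script keeps it current; this is just a sketch — the /etc/cron.monthly location and the reload step are my additions, not part of the original setup.

```shell
#!/bin/sh
# Hypothetical /etc/cron.monthly/update-root-hints:
# re-fetch the root hints, and reload BIND only if the download succeeded.
wget -q --user=ftp --password=ftp \
    ftp://ftp.rs.internic.net/domain/db.cache -O /var/named/named.root \
  && systemctl reload named
```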

Now, we can edit.

$ sudo nano /etc/named.conf

I changed a few things to make it simple.  This is for a test network; we use different subnetting in production, but that is a story for another day.  Change the IPs to match your network.

The listen-on port 53 address is the internal static IP of this DNS server.
Change allow-query to your network.
Set the forwarders to your ISP's DNS servers.
Change the name of the root hints file to named.root.
The two other zones are for your internal network; their forwarder, in this case, is the test network's AD DC, DHCP, and DNS server.

// named.conf
// The IPs were stripped when this was posted; they appear below as
// <placeholders> -- substitute your own.
options {
        listen-on port 53 { 127.0.0.1; <server-static-IP>; };
        listen-on-v6 port 53 { ::1; };
        forwarders { <ISP-DNS-1>; <ISP-DNS-2>; };
        directory       "/var/named";
        dump-file       "/var/named/data/cache_dump.db";
        statistics-file "/var/named/data/named_stats.txt";
        memstatistics-file "/var/named/data/named_mem_stats.txt";
        allow-query     { localhost; <your-subnet>; };
        recursion yes;
        dnssec-enable yes;
        dnssec-validation yes;
        dnssec-lookaside auto;
        managed-keys-directory "/var/named/dynamic";
        pid-file "/run/named/named.pid";
        session-keyfile "/run/named/session.key";
};

logging {
        channel default_debug {
                file "data/named.run";
                severity dynamic;
        };
};

zone "." IN {
        type hint;
        file "named.root";
};

zone "myschool.edu" {
        type forward;
        forward only;
        forwarders { <internal-DNS-IP>; };
};

zone "0.168.192.in-addr.arpa" {
        type forward;
        forward only;
        forwarders { <internal-DNS-IP>; };
};

include "/etc/named.rfc1912.zones";
include "/etc/named.root.key";

Make sure you didn't jack it up....

$ sudo named-checkconf

Make sure clients can connect through your firewall.

$ sudo firewall-cmd --permanent --add-port=53/tcp
$ sudo firewall-cmd --permanent --add-port=53/udp
$ sudo firewall-cmd --reload

And, we want it to start on boot.

$ sudo systemctl enable named

and GO.....

$ sudo systemctl start named
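Before pointing any clients at it, I like a quick smoke test from the server itself. dig comes with bind-utils; the names below are examples that mirror the config above (query the server's own static IP instead of loopback if you did not leave localhost in listen-on).

```shell
# An external name exercises the ISP forwarders;
# an internal name exercises the myschool.edu forward zone.
dig @127.0.0.1 www.example.com +short
dig @127.0.0.1 myschool.edu +short
```

An empty answer or a timeout on one of these tells you which set of forwarders to go back and check.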

Now we want to change our DNS server's own DNS server.  First, check what the name of your NIC connection is.

$ sudo nmcli connection show

This DNS server's connection was p3p1.... so ...

$ sudo nmcli con mod p3p1 ipv4.dns ""
$ sudo nmcli con up p3p1

(The change only takes effect once the connection is reactivated.)

At this point, change the DNS servers for your clients to this DNS server's static IP, and we can test the system.  The following command will let you log all the queries going to the new DNS server.

$ sudo rndc querylog

To actually see the logs....

$ sudo tail -f /var/log/messages

or get fancy with perl.... (do install perl first)

$ sudo tail -f /var/log/messages | perl -pe 's/.*named.*/\e[1;31m$&\e[0m/g'

If you want more information, use tcpdump.  Note that you use the name of the NIC connection, in this case p3p1.

$ sudo tcpdump -n -s 1500 -i p3p1 port 53

Spin up a second one to make sure you are redundant.  I actually found that this works really well, and I may leave it like this for a while.  It is easy-peasy, and quick to change.
