Data Networking/Spring 2017/ERB

TELE 5330 Project 3 focuses on the application-layer protocols required to configure a basic enterprise network: HTTP, DNS, and DHCP.


The HTTP, DNS, and DHCP services are implemented on separate servers for robustness and load balancing. Each server runs Ubuntu 16.04 in a virtual machine hosted on VMware Workstation Pro.


Team Members

  • Elliot Landsman
  • Bhoomi Waghela
  • Rishabh Waghela

Component Design & Implementation


Secure (HTTPS) Webserver


We configured a webserver serving a template website (index.html and several other associated pages) using the native Apache 2 service on Ubuntu 16.04. HTTP traffic is delivered securely using Transport Layer Security (TLS); connections from clients over the older SSL 2.0 and SSL 3.0 implementations are not allowed. Unsecured HTTP requests to ports 80 and 8080 are redirected to the secure HTTPS port 443.

Instead of relying on the built-in Ubuntu HTTPS certificate (ssl-cert-snakeoil), we issued a new, self-signed certificate. This offers marginally better security than relying on a mass-distributed, default Ubuntu certificate; however, the certificate can still be falsified because its identity cannot be verified with an established Certificate Authority such as Verisign.

Protocol & Component Overview


The following application-layer components were used to configure a secure Apache webserver in Ubuntu.

HTTP (Hypertext Transfer Protocol)

HTTP uses a client-server paradigm where a server, with a known IP address and name, listens on the well-known ports 80 or 8080 for TCP connections carrying an HTTP request with the following format:

GET /path/index.html HTTP/v.v
Host: www.host.tld:80
AdditionalHeader1: Value
AdditionalHeader2: Value
<NewLine>

In HTTP 1.1, the Host header is required because it is expected that a single server may host several websites. Additional headers make provisions for supporting caching (If-Modified-Since), cookies (Cookie), and different languages and character encoding, among other features.

If the requested resource is found on the server, it responds with 200 OK; otherwise, it responds with 404 Not Found. Other possible responses include 201 Created, 202 Accepted, and 206 Partial Content, which indicate that the request succeeded but the content is being created, has been queued for processing, or is being delivered in parts; these codes are typically not shown to the user. The code 301 Moved Permanently indicates that a resource has permanently moved and supplies its new location; the browser typically makes a request to the new location automatically, again without showing the message to the user.

A response message has the following format:

HTTP/v.v NNN CODENAME
Date: [Date/Time Stamp]
Content-Type: [text, image...]
Content-Length: [bytes]
AdditionalHeader1: Value
AdditionalHeader2: Value
<NewLine>
<HTML>HTML content with the byte size specified in header Content-Length</HTML>
<NewLine>

Additional headers may include support for cookies, caching, and headers for data formatting and text encoding.
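
A quick way to observe the request and response headers described above is curl's verbose mode; the hostname below is the placeholder from the request example, not a real site:

   # -v prints the outgoing request line and headers (prefixed ">") and the
   # response status line and headers (prefixed "<") before the body
   curl -v http://www.host.tld/path/index.html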

TLS (Transport Layer Security)

TLS 1.2 replaces the older SSL (Secure Sockets Layer) 2.0 and 3.0 standards, both of which have been compromised; they were formally retired in 2011 and 2015, respectively.

TLS and HTTPS use public-key cryptography on the server side only, which means that only the server must carry an identity certificate verified by a known party (such as Verisign). Once the server shares its public key with the client, the two sides establish session keys (either by the client encrypting a random secret with the server's public key, or via a Diffie-Hellman key exchange). The following steps are required:

  1. The Client contacts a Server requesting a secure connection.
  2. The Server responds back with its certificate, which the Client verifies with a 3rd party before proceeding.
  3. The Server also responds with its public key.
  4. The Client and Server establish session keys using one of several accepted methods (RSA key transport, or an ephemeral Diffie-Hellman exchange; the latter provides forward secrecy if the Server's private key is later compromised). Any key material the Client must send is encrypted with the Server's public key.
  5. A session is now established, and all further communications are encrypted with the agreed session keys.
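
The handshake can be observed against the project webserver using OpenSSL's built-in client; the IP address below is the webserver address assumed elsewhere in this guide:

   # Forces TLS 1.2 and prints the certificate chain and negotiated cipher;
   # </dev/null closes the connection once the handshake completes
   openssl s_client -connect 192.168.58.128:443 -tls1_2 </dev/null
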
Apache 2 HTTP Server

Apache is a suite of webserver technologies first released in 1995. It evolved from the earlier NCSA HTTPd (HTTP Daemon) server, released in 1993, which offered one of the earliest widely available webserver capabilities. Apache splits its features into loadable modules and spreads request handling across multiple worker processes and threads, which improves performance in multi-processor environments and improves stability.

Apache 2 is available for free with Ubuntu, and can be enabled and configured quickly. However, care must be taken to properly configure security settings, as well as generally configure the webserver host for secure network access. The default Apache 2 configuration is inherently insecure and should not be exposed to external users.

Implementation


The following broad configuration steps must be taken to configure a basic website with TLS:

  1. Install and enable Apache 2 and HTTPS components
  2. Create a configuration file for custom website based on default-ssl.conf
  3. Import custom HTML content, including an index.html page
  4. Configure a self-signed SSL certificate in place of the default ssl-cert-snakeoil
    • Note that this only marginally improves security because self-signed certificates are easy to fake if generated with the same input parameters on a different machine
  5. Configure redirection of all insecure HTTP traffic from ports 80/8080 to 443
  6. Configure Apache 2 settings for enhanced security


Detailed Configuration Instructions:

Enable Apache 2 HTTP and HTTPS Components

Webserver components are not enabled by default on Ubuntu; this section explains how to deploy and start them.

  1. Install Apache 2 Services:
  2. sudo apt install apache2
  3. Enable the SSL service module:
  4. sudo a2enmod ssl
  5. Navigate to site templates directory:
  6. cd /etc/apache2/sites-available
  7. Create the Project 3 site configuration from the default-ssl template:
  8. sudo cp default-ssl.conf project3-site.conf
  9. Enable site Project 3:
  10. sudo a2ensite project3-site
  11. Restart the Apache 2 server to apply settings:
  12. sudo service apache2 restart
  13. Test configuration from a client machine that has IP network access to this server:
    1. Open a browser (e.g. Firefox)
    2. Navigate to the webserver's IP address with the HTTPS prefix; e.g.
    3. https://192.168.58.128
    4. When a warning is shown that the server's certificate is not signed by a valid Certificate Authority, add the site to an exception list, or bypass the warning
    5. The default Apache 2 webpage ("It Works!") is shown
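
As an optional command-line check from the same client (the IP below is the webserver address used throughout this guide), confirm that the server answers over HTTPS:

   # -I requests headers only; -k skips verification of the self-signed certificate
   curl -kI https://192.168.58.128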


Upload Custom HTML Content

This explains how to upload custom index.html and other content to your site.

  1. Download a website template online. Free templates are widely available.
    1. Ensure that the template includes an index.html page on the root level.
    2. Alternatively, construct a simple index.html page in a text editor.
    3. Note: if an index.html page is not available, additional configuration steps must be taken to hide the website's file structure from visitors for security.
  2. Create a directory for your site where all the HTML content will be stored in the Ubuntu www store (e.g. project3):
  3. sudo mkdir /var/www/project3
  4. Copy HTML files for selected template into the site's HTML folder; -r option indicates recursive copy (include subfolders):
  5. sudo cp -r /home/elandsman/desktop/lawfirm/* /var/www/project3
  6. Navigate to enabled Apache 2 sites directory:
  7. cd /etc/apache2/sites-enabled
  8. Edit your site's configuration file, which is based on default-ssl.conf:
  9. sudo vim project3-site.conf
    1. Set the DocumentRoot property to the site's www folder; e.g. /var/www/project3
    2. Set the ServerName property to the site's URL path; e.g. project3.home

    Example site .conf configuration:

           ServerName project3.home
           DocumentRoot /var/www/project3
    
  10. Restart the Apache 2 server to apply settings:
  11. sudo service apache2 reload
  12. Test configuration from a client machine that has IP network access to this server:
    1. Open a browser (e.g. Firefox)
    2. Navigate to the webserver's IP address with the HTTPS prefix; e.g.
    3. https://192.168.58.128
    4. If a warning is shown that the server's certificate is not signed by a valid Certificate Authority, add the site to an exception list, or bypass the warning
    5. The custom template content is shown
    6. The default Apache 2 webpage ("It Works!") is NOT shown


Configure Self-Signed SSL Certificate

Note that a self-signed certificate is easy to fake when recreated on a different machine with the same input settings. It is better than using the distro default key (snakeoil), however.

  1. Instantiate a new 2048-bit SSL certificate with a 365-day expiration:
  2. sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/ssl/private/project3.key -out /etc/ssl/certs/project3.crt
  3. Edit the Project 3 site configuration to reference correct key and certificate files:
  4. sudo vim /etc/apache2/sites-enabled/project3-site.conf
  5. Set the following properties:
    1. Set the SSLCertificateFile property to the generated certificate file; e.g. /etc/ssl/certs/project3.crt
    2. Set the SSLCertificateKeyFile property to the generated private key file; e.g. /etc/ssl/private/project3.key

    Example secure site .conf file configuration:

           SSLCertificateFile      /etc/ssl/certs/project3.crt
           SSLCertificateKeyFile   /etc/ssl/private/project3.key
    

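Optionally, confirm the details of the certificate Apache will serve (the paths are the ones generated in step 1), then reload Apache as in the earlier steps so the new certificate takes effect:

   # Prints the certificate subject and validity window
   openssl x509 -in /etc/ssl/certs/project3.crt -noout -subject -dates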

Redirect HTTP Requests to Secure HTTPS Content

Visitors that request content on the default HTTP ports 80/8080 must be redirected to the secure HTTPS content on port 443.

  1. Edit your site's configuration file:
  2. sudo vim /etc/apache2/sites-enabled/project3-site.conf
  3. Add the following VirtualHost blocks, for *:80 and *:8080, before the regular site configuration:
  4.  <IfModule mod_ssl.c>
            <VirtualHost *:8080>
                    ServerName www.project3.home
                    RedirectPermanent / https://www.project3.home/
            </VirtualHost>
            <VirtualHost *:80>
                    ServerName www.project3.home
                    RedirectPermanent / https://www.project3.home/
            </VirtualHost>
            <VirtualHost _default_:443>
                    ...Existing Project 3 site configuration
    
  5. Test configuration from a client machine that has IP network access to this server:
    1. Open a browser (e.g. Firefox)
    2. Navigate to the webserver's IP address with the non-secure HTTP prefix; e.g.:
    3. http://192.168.58.128
    4. Traffic is redirected to the secure HTTPS site automatically:
    5. https://192.168.58.128
    6. The custom template content is shown
    7. The default Apache 2 webpage ("It Works!") is NOT shown
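
An optional command-line check of the redirect (same assumed webserver IP); the response should be a 301 with a Location header pointing at the HTTPS site:

   # -I fetches headers only; expect "HTTP/1.1 301 Moved Permanently"
   # and "Location: https://www.project3.home/"
   curl -I http://192.168.58.128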


Configure Apache 2 for Enhanced Security

The following settings must be changed from default to ensure structural information about the OS, Apache, and installed modules is hidden.

  1. Edit the Apache 2 core configuration file:
  2. sudo vim /etc/apache2/apache2.conf
  3. Add the following properties to the .conf file:
  4.  #
      # Security enhancements
      # Reference: https://geekflare.com/10-best-practices-to-secure-and-harden-your-apache-web-server/
      #
      # 1. Disable TRACE, which could allow an attacker to steal another user's cookie information:
      TraceEnable off
      # 2. Prevent HTTP responses from including Apache version information headers:
      ServerSignature Off
      # 3. Prevent HTTP responses from including OS and installed module information headers:
      ServerTokens Prod
      # 4. Restrict HTTPS to use TLS only, as opposed to SSL 2.0/3.0, which have been compromised:
      SSLProtocol -ALL +TLSv1
      # 5. Allow only strong HTTPS ciphers:
      SSLCipherSuite ALL:!aNULL:!ADH:!eNULL:!LOW:!EXP:RC4+RSA:+HIGH:+MEDIUM
    
  5. Verify that the properties are not set elsewhere in the document, which could conflict with your settings.
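
Optional checks after editing apache2.conf (the URL assumes the webserver IP used throughout this guide): validate the configuration syntax, then confirm that response headers no longer leak version details:

   sudo apache2ctl configtest                          # should report "Syntax OK"
   curl -skI https://192.168.58.128 | grep -i server   # should show "Server: Apache" with no version details
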
Configure Webserver Backups

This section explains how to back up the entire contents of the project www folder to a remote backup machine. We selected the DHCP server as the backup location for the HTML content.

On the selected remote backup machine:

  1. Install the SSH server components:
  2. sudo apt-get install openssh-server
  3. Make a .ssh directory for the webserver's public key:
  4. mkdir ~/.ssh/

On the webserver machine:

  1. Install the SSH client components:
  2. sudo apt-get install openssh-client
  3. Generate public/private key pair for secure SSH connection without password entry:
  4. sudo ssh-keygen -t rsa
  5. Push the public key to the remote backup server:
  6. cat ~/.ssh/id_rsa.pub | ssh user@192.168.58.2 'cat >> .ssh/authorized_keys'
  7. Create a folder to stage the backup files, and to contain the backup shell script:
  8. mkdir ~/backup
  9. Create a shell script called backup.sh in the backup directory:
  10. vim ~/backup/backup.sh
  11. Add the following shell script logic to backup the www/project3 directory periodically:
  12. #!/bin/bash
    #Backs up www/project3 directory on DHCP server, which has a fixed
    #IP address of 192.168.58.2
    BUPSTORE=/home/elandsman/backup
    tar -cvzf $BUPSTORE/bup-$(date +%Y-%m-%d-%H-%M-%S).tar.gz  /var/www/project3 #Create a time-stamped tar file from the www/project3 directory
    rsync -avz $BUPSTORE/bup-* -e "ssh -i /home/elandsman/.ssh/id_rsa" elandsman@192.168.58.2:$BUPSTORE/ #Push the tar file to the remote backup server; note that username and private key are specified explicitly to avoid a password prompt
    mv $BUPSTORE/bup-* $BUPSTORE/archived/ #Move any backed-up files to an archive folder
    
    1. Note that the username and private key file must be explicitly specified in the ssh call to ensure public key authentication is successful.
    2. If authentication falls back on password entry, the cron job will fail, since it is automated and cannot supply a password autonomously.
  13. Add a cron job to run the script at 2:00am every day - off-hours (e.g. nighttime) are preferred to ensure files are not being edited:
  14. sudo crontab -e
    1. Add the following job schedule definition to execute the backup shell script:
    2. 0 2 * * * /home/elandsman/backup/backup.sh
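
Before relying on the cron job, the script can be exercised once by hand (paths and the backup host are the ones assumed above):

    chmod +x ~/backup/backup.sh    # the script must be executable for cron to run it
    mkdir -p ~/backup/archived     # archive folder used by the script's final mv
    ~/backup/backup.sh             # run once and confirm the tar file arrives on 192.168.58.2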


Firewall & Network Security


Component Overview


We used IPTables, the built-in firewall mechanism in Linux. Note that the newer ufw front-end for IPTables has been disabled to ensure its settings do not conflict with IPTables.

Note that policies and rules configured in IPTables are not persistent and are erased on reboot. Several mechanisms exist to preserve them across reboots, including the iptables-persistent package and startup scripts. We used iptables-persistent.

Standard firewalling practices recommend implicitly denying all traffic except for explicitly enabled components. This is appropriate in server configuration, as only explicitly used/enabled components should be allowed to communicate.

We set the following implicit policies:

  1. Policy: Deny all incoming traffic
  2. Policy: Deny all forward traffic
  3. Policy: Permit all outgoing traffic
    • Note: Permitting all outgoing traffic has the potential to allow compromised applications to download malicious content from a remote server. Proper security policy guidelines recommend only allowing outgoing traffic to specific subnets, or from specific trusted applications (such as Apache modules for a Linux Webserver). However, timing constraints did not allow for fully configuring a proper outgoing traffic policy, as exceptions must be made for DHCP, DNS, HTTP/HTTPS and Ping traffic, greatly complicating the policy implementation.

The following incoming traffic formats were permitted:

  1. Rule: Incoming traffic belonging to established connections. This permits TCP (e.g. HTTP), ICMP (e.g. Ping), and UDP (e.g. DNS and DHCP) traffic initiated from this server.
  2. Rule: Incoming ping requests (ICMP Type 8) from the project IPv4 subnet (192.168.58.0/24) only
  3. Rule: Incoming HTTP/HTTPS traffic (TCP to port 80, 8080 and 443) from the project IPv4 subnet (192.168.58.0/24) only
  4. Rule: Incoming traffic to interface lo (Localhost); this helps with maintenance and debugging activities

All other incoming traffic is implicitly blocked by policy 1 above. Note that since HTML content is only served to IPv4 clients at this time, incoming HTTP/HTTPS traffic was not permitted in the IPv6 tables (ip6tables).

Implementation

IPv4 Firewall Rules & Policies

We used IPTables to implement the above policies and rules.

  1. Disable the Ubuntu firewall front-end (ufw) so it does not conflict with the IPTables configuration:
  2. sudo ufw disable
  3. Install the iptables-persistent package:
  4. sudo apt-get install iptables-persistent
  5. Flush (clear) all existing IPTables rules - the default rules are inappropriate for a secure server configuration:
  6. sudo iptables -F
  7. Configure the base firewall policies:
  8. sudo iptables -P INPUT DROP      #Block all incoming traffic
     sudo iptables -P FORWARD DROP    #Block all forward traffic
     sudo iptables -P OUTPUT ACCEPT   #Allow all outgoing traffic
  9. Permit incoming return traffic for established connections, including TCP, DHCP (UDP), DNS (UDP), and Ping (ICMP):
  10. sudo iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
  11. Permit ping (ICMP Echo Request) packets from the 192.168.58.0/24 subnet only - for troubleshooting:
  12. sudo iptables -A INPUT -s 192.168.58.0/24 -p icmp --icmp-type 8 -j ACCEPT
  13. Permit HTTP and HTTPS (TCP to 80, 8080 and 443) traffic from the 192.168.58.0/24 subnet only:
  14. sudo iptables -A INPUT -s 192.168.58.0/24 -p tcp --dport 80 -j ACCEPT
      sudo iptables -A INPUT -s 192.168.58.0/24 -p tcp --dport 8080 -j ACCEPT
      sudo iptables -A INPUT -s 192.168.58.0/24 -p tcp --dport 443 -j ACCEPT
  15. Permit incoming and outgoing traffic on the Localhost interface - for testing and maintenance:
  16. sudo iptables -A INPUT -i lo -j ACCEPT
      sudo iptables -A OUTPUT -o lo -j ACCEPT
  17. Inspect active IPTables rules:
  18. sudo iptables -S

    Active network policy rules on server:

    #Policy: drop all incoming traffic unless otherwise noted
    -P INPUT DROP
    #Policy: drop all forward traffic unless otherwise noted
    -P FORWARD DROP
    #Policy: allow all outgoing traffic
    -P OUTPUT ACCEPT
    #Rule: allow incoming HTTP (TCP 80 & 8080) from the private subnet
    -A INPUT -s 192.168.58.0/24 -p tcp -m tcp --dport 80 -j ACCEPT
    -A INPUT -s 192.168.58.0/24 -p tcp -m tcp --dport 8080 -j ACCEPT
    #Rule: allow incoming HTTPS (TCP 443) from the private subnet
    -A INPUT -s 192.168.58.0/24 -p tcp -m tcp --dport 443 -j ACCEPT
    #Rule: allow incoming ping (ICMP type 8 - echo request) from the private subnet
    -A INPUT -s 192.168.58.0/24 -p icmp -m icmp --icmp-type 8 -j ACCEPT
    #Rule: allow all incoming/outgoing traffic on interface lo (localhost)
    -A INPUT -i lo -j ACCEPT
    -A OUTPUT -o lo -j ACCEPT
    #Rule: allow all incoming response traffic for established connections (includes TCP, DNS, DHCP, ping echo reply)
    -A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
    
  19. Save configured rules using IPTables-Persistent:
  20. sudo netfilter-persistent save
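
A quick verification from another host on the 192.168.58.0/24 subnet (the webserver IP is the one assumed throughout this guide):

   ping -c 3 192.168.58.128          # should succeed: ICMP echo requests are allowed from the subnet
   curl -kI https://192.168.58.128   # should succeed: TCP 443 is allowed from the subnet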


IPv6 Firewall Rules & Policies

We used ip6tables to implement the above policies and rules.

Note that since HTTP/HTTPS content is not served to IPv6 clients at this point, no provisions were made to accept TCP traffic at ports 80, 8080 and 443.

  1. Flush (clear) all policies and rules from IPv6 Tables:
  2. sudo ip6tables -F
  3. Block all incoming traffic:
  4. sudo ip6tables -P INPUT DROP
     sudo ip6tables -P FORWARD DROP
  5. Accept incoming return traffic for connections originating from this server:
  6. sudo ip6tables -I INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
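
As with IPv4, the IPv6 rules can be persisted across reboots; this assumes the iptables-persistent package installed earlier, which also saves the ip6tables rule set:

   sudo netfilter-persistent save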

DNS


Protocol & Component Overview


The Domain Name System (DNS) is a hierarchical decentralized naming system for computers, services, or other resources connected to the Internet or a private network. It associates various information with domain names assigned to each of the participating entities. Most prominently, it translates more readily memorized domain names to the numerical IP addresses needed for locating and identifying computer services and devices with the underlying network protocols. By providing a worldwide, distributed directory service, the Domain Name System is an essential component of the functionality of the Internet; it has been in use since 1985.

Domain Terminology

Domain Name System

The domain name system, more commonly known as "DNS" is the networking system in place that allows us to resolve human-friendly names to unique addresses.

Domain Name

A domain name is the human-friendly name that we are used to associating with an internet resource. For instance, "google.com" is a domain name. Some people will say that the "google" portion is the domain, but we can generally refer to the combined form as the domain name.

The domain name "google.com" is associated with the servers owned by Google Inc. The domain name system allows us to reach the Google servers when we type "google.com" into our browsers.

IP Address

An IP address is what we call a network addressable location. Each IP address must be unique within its network. When we are talking about websites, this network is the entire internet.

IPv4 addresses, the most common form, are written as four sets of numbers, each with up to three digits and separated by dots. For example, "111.222.111.222" could be a valid IPv4 address. With DNS, we map a name to that address so that you do not have to remember a complicated set of numbers for each place you wish to visit on a network.

Top-Level Domain

A top-level domain, or TLD, is the most general part of the domain. The top-level domain is the furthest portion to the right (as separated by a dot). Common top-level domains are "com", "net", "org", "gov", "edu", and "io".

Top-level domains are at the top of the hierarchy in terms of domain names. Certain parties are given management control over top-level domains by ICANN (Internet Corporation for Assigned Names and Numbers). These parties can then distribute domain names under the TLD, usually through a domain registrar.

Hosts

Within a domain, the domain owner can define individual hosts, which refer to separate computers or services accessible through a domain. For instance, most domain owners make their web servers accessible through the bare domain (example.com) and also through the "host" definition "www" (www.example.com).

You can have other host definitions under the general domain. You could have API access through an "api" host (api.example.com) or you could have ftp access by defining a host called "ftp" or "files" (ftp.example.com or files.example.com). The host names can be arbitrary as long as they are unique for the domain.

SubDomain

A subject related to hosts is subdomains.

DNS works in a hierarchy. TLDs can have many domains under them. For instance, the "com" TLD has both "google.com" and "ubuntu.com" underneath it. A "subdomain" refers to any domain that is part of a larger domain. In this case, "ubuntu.com" can be said to be a subdomain of "com". This is typically just called the domain, or the "ubuntu" portion is called an SLD (second-level domain).

Likewise, each domain can control "subdomains" that are located under it. This is usually what we mean by subdomains. For instance you could have a subdomain for the history department of your school at "www.history.school.edu". The "history" portion is a subdomain.

The difference between a host name and a subdomain is that a host defines a computer or resource, while a subdomain extends the parent domain. It is a method of subdividing the domain itself.

Implementation

IPv4

1) Initially, the network manager assigns a dynamic IP address to the interface, but servers need a static IP address. This is done by changing the configuration in the "/etc/network/interfaces" file.

   sudo nano /etc/network/interfaces

In this file, we add the address configuration for the required interface and save it using Ctrl+X followed by Y.

   auto lo 
   iface lo inet loopback
   auto eth0
   iface eth0 inet static
   address 192.168.58.3
   netmask  255.255.255.0
   network   192.168.58.0
   broadcast  192.168.58.255

2) After changing the /etc/network/interfaces file, reboot the system with the following command

   sudo init 6

3) Restart the network manager

   sudo service network-manager restart

4) Install the bind9 server

   sudo apt-get install bind9

5) After installing the bind9 server we need to make changes in the configuration file in the Bind directory.

   cd /etc
   cd bind
   sudo nano named.conf.options

6) In the named.conf.options we need to add the forwarders

   forwarders
   { 
   192.168.58.3;
   };

7) Configure forward and reverse lookup zones in the named.conf.local

Forward and reverse lookup zones for IPv4 on the slave server

   sudo nano named.conf.local
   // Forward and reverse lookup zones
   zone "project3.home" {
       type slave;
       masters {192.168.58.3;};
       file "/etc/bind/for.project3.home";};
   zone "58.168.192.in-addr.arpa" {
       type slave;
       masters {192.168.58.3;};
       file "/etc/bind/rev.project3.home";};

Forward and reverse lookup zones for IPv4 on the master server

   zone "project3.home" {
       type master;
       file "/etc/bind/for.project3.home";};
   zone "58.168.192.in-addr.arpa" {
       type master;
       file "/etc/bind/rev.project3.home";};


8) Create a subdirectory called 'zones' and create the forward and reverse lookup database files. The forward lookup database file:

   $TTL    604800
   @       IN      SOA     project3.home. dns1.project3.home. (
                             6         ; Serial
                        604800         ; Refresh
                         86400         ; Retry
                       2419200         ; Expire
                        604800 )       ; Negative Cache TTL
   ;
   @       IN      NS      dns1.project3.home.
   @       IN      A       192.168.58.3
   @       IN      AAAA    ::1
   ; Additional computers in the network
   www     IN      A       192.168.58.128
   dns1    IN      A       192.168.58.3
   dns2    IN      A       192.168.58.4


9) Create the reverse lookup database file

   $TTL    604800
   @       IN      SOA     project3.home. dns1.project3.home. (
                             6         ; Serial
                        604800         ; Refresh
                         86400         ; Retry
                       2419200         ; Expire
                        604800 )       ; Negative Cache TTL
   @       IN      NS      dns1.project3.home.
   3       IN      PTR     dns1.project3.home.

   ; Other computers in the network

   128     IN      PTR     www.project3.home.
   4       IN      PTR     dns2.project3.home.
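
Optionally, check both zone files for syntax errors before restarting BIND (file names as configured above):

   sudo named-checkzone project3.home /etc/bind/for.project3.home
   sudo named-checkzone 58.168.192.in-addr.arpa /etc/bind/rev.project3.home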
                                                 

10) Set the nameservers in the resolv.conf file

   sudo nano /etc/resolv.conf
   nameserver 192.168.58.3 
   nameserver 192.168.58.4 
   search project3.home

11) Restart the bind9 server

    sudo service bind9 restart

12) On the slave server, configure the resolv.conf file as in step 10 and restart the bind9 server
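
Forward and reverse resolution can then be verified from any client that uses these nameservers (host names and addresses are the ones defined in the zone files above):

   dig @192.168.58.3 www.project3.home       # forward lookup against the master
   dig @192.168.58.3 -x 192.168.58.128       # reverse (PTR) lookup for the webserver
   dig @192.168.58.4 www.project3.home       # repeat against the slave to confirm the zone transfer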

IPv6

1) Set a static IPv6 address on the master and slave servers with the following configuration

    sudo nano /etc/network/interfaces 
    auto eth0
    iface eth0 inet6 static
    address  fe80::aba9:79f6:5321:963c
    netmask  64
    

2) In the named.conf.local file add the reverse IPv6 domain for master and slave.

In the master configuration file add

  zone "project3.home"
  {
  type master;
  allow-transfer {192.168.58.3;};
  file "/etc/bind/rev.project3.home";
  };

In the slave configuration file add

  zone "project3.home"
  {
  type slave;
  masters {192.168.58.4;};
  file "/etc/bind/rev.project3.home";
  };


3) Restart both the master and slave DNS servers.
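
The restart command is the same one used for the IPv4 configuration, run on each server:

    sudo service bind9 restart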

DHCP


We configured dual-stack IPv4 and IPv6 dynamic addressing using ISC DHCP (also referred to as the DHCP daemon, or dhcpd) on Ubuntu.

Protocol & Component Overview


The Dynamic Host Configuration Protocol (DHCP) is used to issue dynamic IP addresses to hosts configured to request them. DHCP may also be configured to issue a specific IP address to nodes with a specific layer-2 MAC. This is necessary when fixed-address nodes have an address inside the DHCP server's configured address space. The addresses are issued for a specific lease duration identified during the negotiation process. At the end of the lease, if a renewal is not requested, the server will assume the address is available again.

DHCP servers communicate through UDP port 67 (aliased as BOOTPS - Boot Protocol Server - on many systems). The requesting client communicates through UDP port 68 (aliased as BOOTPC - Boot Protocol Client - on many systems).

There are four steps involved in requesting a DHCP address assignment:

  1. Discovery: A client configured to request dynamic DHCP addressing sends a broadcast discovery message looking for a server willing to issue a lease. Because the requesting host is unaware of what subnetwork it is on, or of the identity of the DHCP server(s) on the network, the source IP of the DHCP datagram is 0.0.0.0 and the destination is 255.255.255.255 (the latter is a reserved IP address for broadcasting on "this network", used specifically when the identity of the local network is unknown). Source: 0.0.0.0:68; Destination: 255.255.255.255:67.
  2. Offer: A server willing to issue a lease responds with an offer message, which includes the IP address available for issuance. The source IP address is the IP of the DHCP server, and the destination is 255.255.255.255, since the client does not yet have an IP address and cannot be contacted directly. Source: Server IP:67; Destination: 255.255.255.255:68.
  3. Request: The client formally requests the offered address. The request is still broadcast, so that other servers whose offers were not accepted can see which offer was chosen; the client does not use the address until it is acknowledged, so the source IP is still 0.0.0.0. Source: 0.0.0.0:68; Destination: 255.255.255.255:67.
  4. Acknowledgement: The server sends an acknowledgement confirming that the address was issued to the requester. The destination IP is still 255.255.255.255 because the client is not expected to begin using the address until the acknowledgement arrives. Source: Server IP:67; Destination: 255.255.255.255:68.

Since the source and destination IP addresses change several times during the exchange, the communication stream is identified via a 32-bit transaction ID.
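
The four-step exchange can be observed on the DHCP server itself; the interface name below is the one assumed later in the radvd configuration:

   # Shows the DISCOVER/OFFER/REQUEST/ACK messages and their transaction IDs
   sudo tcpdump -ni ens38 port 67 or port 68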

Implementation

edit

We have implemented a dual-stacked IPv4 & IPv6 server with the following properties:

Property          IPv4 Configuration            IPv6 Configuration
Subnet Address    192.168.58.0/24               2001:1:1:1::/64
Address Range     192.168.58.1-192.168.58.254   2001:1:1:1::1-2001:1:1:1::254
DNS Servers       192.168.58.3, 192.168.58.4    2001:1:1:1::3, 2001:1:1:1::4
Domain Name       project3.home                 project3.home


The following fixed-address leases were defined for servers:

Node         Host Name   MAC Address         IPv4 Address     IPv6 Address
Webserver    www         00:0c:29:3c:52:ed   192.168.58.128   2001:1:1:1::128
DNS Master   dns1        00:0c:29:50:da:95   192.168.58.3     2001:1:1:1::3
DNS Slave    dns2        00:0c:29:40:7a:49   192.168.58.4     2001:1:1:1::4


The Ubuntu ISC DHCP service was used to implement DHCP server functionality.

  1. Install radvd for IPv6 DHCP functionality:
    1. Deploy radvd using aptitude:
    2. sudo apt-get install radvd
    3. Edit the radvd configuration file:
    4. sudo vim /etc/radvd.conf
    5. Add the following clauses to advertise the appropriate physical interface as IPv6 capable:
    6. interface ens38 {
           AdvSendAdvert on;
           AdvManagedFlag off;
           AdvOtherConfigFlag on;

           prefix 2001:1:1:1::/64 {
               AdvOnLink on;
               AdvAutonomous on;
               AdvRouterAddr on;
           };
       };
      
    7. Edit the System Control configuration file:
    8. sudo vim /etc/sysctl.conf
    9. Enable IPv6 forwarding in the System Control configuration file (apply it afterwards with sudo sysctl -p, or reboot):
    10. net.ipv6.conf.default.forwarding=1
    11. Restart the radvd service for changes to take effect:
    12. sudo service radvd restart
    13. Check the radvd service status to ensure is started without errors:
    14. sudo service radvd status
    15. Sample output (first 3 rows only):
    16. ● radvd.service - LSB: Router Advertising Daemon
          Loaded: loaded (/etc/init.d/radvd; bad; vendor preset: enabled)
          Active: active (running) since Tue 2017-04-11 21:11:26 PDT; 1 day 18h ago
  2. Install ISC DHCP components from Aptitude:
  3. sudo apt-get install isc-dhcp-server
  4. Edit the ISC DHCP service configuration file:
  5. sudo vim /etc/default/isc-dhcp-server
    1. Change the following stanza to enable IPv6 functionality:
    2. OPTIONS="-6"
  6. Edit the DHCP IPv4 lease configuration file:
  7. sudo vim /etc/dhcp/dhcpd.conf
    1. Comment-out the following stanzas, if defined; they will be set later per-subnet:
    2. #option domain-name #The domain name (e.g. "example.local")
      #option domain-name-servers #The DNS server addresses
      
    3. Set the following stanza to identify this server as the official server for this network:
    4. authoritative;
    5. Add the following declaration to define an IPv4 address range using the properties defined in the tables above:
    6. subnet 192.168.58.0 netmask 255.255.255.0 {
              range 192.168.58.1 192.168.58.254;
              option domain-name-servers 192.168.58.3, 192.168.58.4;
              option domain-name "project3.home";
              host www {
                      hardware ethernet 00:0c:29:3c:52:ed;
                      fixed-address 192.168.58.128;
              }
      
              host dns1 {
                      hardware ethernet 00:0c:29:50:da:95;
                      fixed-address 192.168.58.3;
              }
      
              host dns2 {
                      hardware ethernet 00:0c:29:40:7a:49;
                      fixed-address 192.168.58.4;
              }
      }
      
  8. Edit the DHCP IPv6 lease configuration file:
  9. sudo vim /etc/dhcp/dhcpd6.conf
    1. Comment-out the domain-name and domain-name-server stanzas, as for IPv4;
    2. Set the server as authoritative, as for IPv4;
    3. Add the following declaration to define an IPv6 address range using the properties defined in the tables above. Note the use of the IPv6-specific stanzas (subnet6, range6, dhcp6.name-servers, fixed-address6):
    4. subnet6 2001:1:1:1::/64
      {
         range6 2001:1:1:1::10 2001:1:1:1::254;
              option dhcp6.name-servers 2001:1:1:1::3, 2001:1:1:1::4;
              option domain-name "project3.home";
              host www {
                      hardware ethernet 00:0c:29:3c:52:ed;
                      fixed-address6 2001:1:1:1::128;
              }
      
              host dns1 {
                      hardware ethernet 00:0c:29:50:da:95;
                      fixed-address6 2001:1:1:1::3;
              }
      
              host dns2 {
                      hardware ethernet 00:0c:29:40:7a:49;
                      fixed-address6 2001:1:1:1::4;
              }
      }
      
  10. Check the IPv4 & IPv6 DHCP service configuration:
  11. sudo dhcpd -t
      sudo dhcpd -6 -t
    1. Example Output:
    2. Internet Systems Consortium DHCP Server 4.3.3
       Copyright 2004-2015 Internet Systems Consortium.
       All rights reserved.
       For info, please visit https://www.isc.org/software/dhcp/
       WARNING: Host declarations are global. They are not limited to the scope you declared them in.
       Config file: /etc/dhcp/dhcpd.conf
       Database file: /var/lib/dhcp/dhcpd.leases
       PID file: /var/run/dhcpd.pid
    3. If no output indicating syntax error in the configuration files is shown, the configuration is OK.
  12. Restart the IPv4 and IPv6 DHCP services for changes to take effect:
  13. sudo service isc-dhcp-server restart
      sudo service isc-dhcp-server6 restart
  14. Check the status of the IPv4 and IPv6 DHCP services:
  15. sudo service isc-dhcp-server status
      sudo service isc-dhcp-server6 status
    1. Example IPv4 service output (first 3 rows):
    2. ● isc-dhcp-server.service - ISC DHCP IPv4 server
         Loaded: loaded (/lib/systemd/system/isc-dhcp-server.service; enabled; vendor preset: enabled)
         Active: active (running) since Sat 2017-04-08 16:09:17 PDT; 19h ago
    3. Example IPv6 service output (first 3 rows):
    4. ● isc-dhcp-server6.service - ISC DHCP IPv6 server
         Loaded: loaded (/lib/systemd/system/isc-dhcp-server6.service; enabled; vendor preset: enabled)
         Active: active (running) since Sat 2017-04-08 17:12:00 PDT; 18h ago
    5. If both services have an active status with no warning or errors indicated, the configuration is correct.
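
A client-side check (the interface name is an assumption; substitute the client's actual interface): release any existing lease, request a new one, and confirm the assigned address:

    sudo dhclient -r eth0      # release the current lease
    sudo dhclient -v eth0      # request a new lease; -v prints the DORA exchange
    ip addr show eth0          # confirm the assigned address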

Backup


Configuration


1. Install the SSH server on one virtual machine

   sudo apt-get install openssh-server

2. Install the SSH client on the second virtual machine

   sudo apt-get install openssh-client

3. Generate public and private keys on the client machine

   sudo ssh-keygen -t rsa

4. Copy the public key to ssh server

   ssh backupserver@192.168.58.4 mkdir -p .ssh
   cat ~/.ssh/id_rsa.pub | ssh backupserver@192.168.58.4 'cat >> .ssh/authorized_keys'

5. For executing backup, use the following

   sudo tar -cvpzf backupfile.tar.gz /var/www/html/index.html

6. For executing automatic backup, use the following

   sudo crontab -e
   * * * * * sudo tar -cvpzf backupfile.tar.gz /var/www/html/index.html
   * * * * * sudo scp backupfile.tar.gz backupserver@192.168.58.4:/home/backupserver/

Optional Components


IPSec VPN


Introduction


A VPN establishes a secure tunnel between two hosts. Data traversing the tunnel is encrypted with 128-bit AES, protecting it from eavesdropping and tampering by attackers. IPsec supports two modes: tunnel mode (network-to-network) and transport mode (host-to-host). Transport mode is used here because both endpoints are within the same network.

Test Plan


1) Connect to the VPN peer; once connected, a point-to-point tunnel session is established, which can be seen in the interface list.

   ifconfig                                    - Retrieves the detected network interfaces and their information
   ppp0  Link encap:Point-to-Point Protocol    - Shows that the device is connected over a point-to-point tunnel

Benefits


An IPSec Virtual Private Network (VPN) is a virtual network that operates across the public network, but remains “private” by establishing encrypted tunnels between two or more end points. VPNs provide:

  • Data integrity: Data integrity ensures that no one has tampered with or modified data while it traverses the network. Data integrity is maintained with hash algorithms.
  • Authentication: Authentication guarantees that data you receive is authentic; that is, that it originates from where it is supposed to, and not from someone masquerading as the source. Authentication is also ensured with hash algorithms.
  • Confidentiality: Confidentiality ensures data is protected from being examined or copied while transiting the network. Confidentiality is accomplished using encryption.

An IP Security (IPSec) VPN secures communications and access to network resources for site-to-site access using encryption, authentication, and key management protocols. On a properly configured VPN, communications are secure, and the information that is passed is protected from attackers.


Configurations

Server

Step 1: Install the following package used to configure VPN
Command:

               sudo apt-get install ipsec-tools strongswan-starter 

Step 2: Open and edit the following file
Command:

               sudo nano /etc/ipsec.conf

Step 3: Add the following
Command:

   conn webserver-to-nfs
       authby=secret
       auto=route
       keyexchange=ike
       left=192.168.58.3
       right=192.168.58.4
       type=transport
       esp=aes128gcm16!

Step 4: Create the file which will have the pre shared keys
Command:

        sudo nano /etc/ipsec.secrets

Step 5: Add the following
Command:
192.168.58.3 192.168.58.4 : PSK "your keys"

Step 6: Restart IPSec
Command:
ipsec restart

Step 7: To check the status use statusall
Command:
ipsec statusall

Host 2

Step 1: Install the following
Command:
sudo apt-get install ipsec-tools strongswan-starter

Step 2: Open and edit the following file
Command:

sudo nano /etc/ipsec.conf

Step 3: Add the following
Command:

   conn webserver-to-nfs
       authby=secret
       auto=route
       keyexchange=ike
       left=192.168.58.3
       right=192.168.58.4
       type=transport
       esp=aes128gcm16!

Step 4: Create the file which will have the pre shared keys
Command:

sudo nano /etc/ipsec.secrets

Step 5: Add the following
Command:

192.168.58.3 192.168.58.4 : PSK "your keys"

Step 6: Restart IPSec
Command:

ipsec restart

Step 7: To check the status use statusall
Command:
ipsec statusall


Testing: Step 1: Use this on any one host:
Command:
ping -s 4048 192.168.58.3

Step 2: Watch the status from the other host
Command:
watch ipsec statusall
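
To confirm that traffic between the peers is actually protected by IPsec, a packet capture should show ESP packets rather than plain ICMP or TCP; the interface name below is an assumption:

   sudo tcpdump -ni eth0 esp and host 192.168.58.4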

ARP Poisoning


What Is ARP Spoofing?


ARP spoofing is a type of attack in which a malicious actor sends falsified ARP (Address Resolution Protocol) messages over a local area network. This results in the linking of an attacker’s MAC address with the IP address of a legitimate computer or server on the network. Once the attacker’s MAC address is connected to an authentic IP address, the attacker will begin receiving any data that is intended for that IP address. ARP spoofing can enable malicious parties to intercept, modify or even stop data in-transit. ARP spoofing attacks can only occur on local area networks that utilize the Address Resolution Protocol.

ARP Spoofing Attacks

The effects of ARP spoofing attacks can have serious implications for enterprises. In their most basic application, ARP spoofing attacks are used to steal sensitive information. Beyond this, ARP spoofing attacks are often used to facilitate other attacks such as:

  • Denial-of-service attacks: DoS attacks often leverage ARP spoofing to link multiple IP addresses with a single target's MAC address. As a result, traffic that is intended for many different IP addresses will be redirected to the target's MAC address, overloading the target with traffic.
  • Session hijacking: Session hijacking attacks can use ARP spoofing to steal session IDs, granting attackers access to private systems and data.
  • Man-in-the-middle attacks: MITM attacks can rely on ARP spoofing to intercept and modify traffic between victims.

Network File System (NFS)


Introduction


NFS (Network File System) was developed by Sun Microsystems in 1984 for sharing files and folders between Linux/Unix systems. It allows remote hosts to mount file systems over a network and interact with them as though they were mounted locally. With the help of NFS, we can set up file sharing between Unix and Linux systems.

Benefits

  1. NFS allows local access to remote files.
  2. It uses standard client/server architecture for file sharing between all *nix based machines.
  3. With NFS it is not necessary that both machines run on the same OS.
  4. With the help of NFS we can configure centralized storage solutions.
  5. Users get their data irrespective of physical location.
  6. No manual refresh needed for new files.
  7. Newer versions of NFS also support ACLs and pseudo-root mounts.
  8. Can be secured with Firewalls and Kerberos.

Scenario


In this scenario we export a file system from the host at 192.168.58.4 (NFS server) and mount it on the host at 192.168.58.3 (NFS client). Both the NFS server and the NFS client run Ubuntu Linux.

Configurations

Server


Installing the nfs-common package on both the NFS client and the NFS server using:

      apt-get install nfs-common 


Installing the additional server package on the NFS server using:

      apt-get install nfs-kernel-server 


Used the following command to check whether NFS is installed correctly on the server side:

      rpcinfo -p


Used the following command to load the NFS kernel module on the server side:

      modprobe nfs


Now we created a directory /public and created 3 empty files in it using the following commands:

      mkdir /public
      touch /public/nfs1 /public/nfs2 /public/nfs3


Now we edited the file /etc/exports, adding the following export entry:

   /home/rishabh/Public/nfs.p3 192.168.58.3/24(rw,nohide,insecure,no_subtree_check,async,no_root_squash) 
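
After editing /etc/exports, the export table must be re-read; this step is assumed here (it is not in the original notes) but is required before clients can mount the share:

      sudo exportfs -ra                        # re-export everything listed in /etc/exports
      sudo service nfs-kernel-server restart   # restart the NFS service so the new export is active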


Mount the exported folder on the client machine:

   mount -t nfs 192.168.58.4:/home/rishabh/Public/nfs.p3 /home/nfs_local.p3
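
Optional verification from the client (the server address follows the scenario above):

   showmount -e 192.168.58.4     # list the exports offered by the NFS server
   df -h /home/nfs_local.p3      # confirm the share is mounted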

References


Sites Referred:
1. https://help.ubuntu.com/community/BIND9ServerHowto
2. https://help.ubuntu.com/community/Postfix
3. http://www.bind9.net
4. http://net.tutsplus.com/tutorials/other/the-linux-firewall
5. https://help.ubuntu.com/community/OpenVPN
6. https://www.veracode.com/security/arp-spoofing
7. http://www.tecmint.com/how-to-setup-nfs-server-in-linux/
8. https://www.vultr.com/docs/setup-autobackup-on-linux
9. https://www.forum.qnap.com
10. https://www.youtube.com/
11. https://help.ubuntu.com/community/isc-dhcp-server
Books Referred:
1. Computer Networking: A Top-Down Approach, 6/e James F. Kurose, Keith W. Ross