Amazon’s AWS is the leader in public Cloud services, bringing Amazon USD 17.5 billion in revenue at a 25% profit margin in 2017. The AWS business grew at a whopping rate of 49% in 2018. This is strong testimony that companies all over the world are embracing cloud technology in their IT strategy and deployment.
Microsoft’s Azure Cloud services had a later start than Amazon’s AWS, but Azure is now a very close second behind AWS. Azure benefits from the fact that its parent, Microsoft, owns the Windows server software and has had well-established relationships with IT departments for many years. Also, Azure has more data centers around the world (i.e., lower network latency) than Amazon’s AWS, and it supports nested virtualization, which some other Cloud Service Providers (CSPs) cannot. By the way, if you are planning to run any software or NFV appliance (e.g., GNS3) that itself runs on top of a hypervisor such as KVM on the cloud, you will need a CSP that supports nested virtualization.
This article discusses N-Tier application deployment in a private Data Center (DC) and its migration to Azure Cloud. The focus of this article is to highlight important network and system design issues such as scalability, security, and redundancy for running N-Tier applications on these two platforms. To make this article easier to read, I will divide it into multiple parts. Part 1 discusses the deployment of a traditional N-Tier application in a private DC; most of the information in Part 1 is theory based. Parts 2 and 3 show how to run this N-Tier application on Azure Cloud while maintaining application control from the private DC (i.e., Hybrid Cloud). The setup of the N-Tier application and the configuration of the DMZ, Firewalls, Load-Balancers, etc. on Azure Cloud are discussed in detail in Parts 2 and 3.
Keeping with the tradition of this blog of enabling readers to try out any network and software design and deployment at zero or minimal cost, I will show in Part 2 of this article how you can run this N-Tier application, with all the networking equipment set up on Azure Cloud, for free. In the past, manual configuration of all the Firewalls, DMZ, Load-Balancers, IPSec Gateway, Apache VMs, servers, etc. to support N-Tier applications in a DC could take weeks if not months; compute and network virtualization and programmable networks can expedite the process dramatically. In Parts 2 and 3 of this article, I will show how you can deploy this web application with all the associated network setup on Azure Cloud by simply running two commands and waiting a few hours before you can start testing the setup on Azure!
Many web applications used by billions of users over the Internet daily, such as Facebook and eBay, are examples of N-Tier web applications running in private DCs. Most web applications can be developed in a 3-Tier architecture as follows:
- Presentation or Web Tier
- Business Logic Tier
- Database Tier
The following shows a typical 3-Tier application deployed in a DC:
The Presentation or Web Tier offers a graphical interface for user interaction via the HTTP (TCP port 80) and/or HTTPS (TCP port 443) protocols. Apache web servers are commonly used in this tier.
The Business Logic Tier is the soul of the web application. For example, in a shopping cart web application, it can contain logic to promote relevant merchandise to logged-in users based on their previous purchases or web browsing history.
The Database Tier in the shopping cart web application example can contain all the merchandise’s product information and inventory counts as well as the previous purchase history of registered users. A MySQL database is often used in this tier.
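The division of labor among the three tiers can be sketched in a few lines of Python. This is purely illustrative (the names, records, and call pattern below are made up for this sketch; real tiers run as separate services such as Apache, an application server, and MySQL):

```python
# Database tier: product catalog and inventory (stand-in for MySQL)
DB = {
    "sku-1": {"name": "widget", "price": 9.99, "stock": 12},
    "sku-2": {"name": "gadget", "price": 19.99, "stock": 0},
}

def db_get(sku):
    """Database tier: look up a raw product record."""
    return DB.get(sku)

def product_view(sku):
    """Business Logic tier: apply rules on top of the raw data."""
    rec = db_get(sku)
    if rec is None:
        return {"error": "not found"}
    return {"name": rec["name"],
            "price": rec["price"],
            "available": rec["stock"] > 0}

def handle_http(path):
    """Web tier: translate an HTTP path into a business-logic call."""
    sku = path.rsplit("/", 1)[-1]
    return product_view(sku)

print(handle_http("/products/sku-1"))
```

The key point is the one-way dependency: the Web tier only talks to the Business Logic tier, which is the only tier that touches the database.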
Network and System Design – Scalability, Redundancy and Security
The following discusses key network and system design issues related to scalability, redundancy, and security for deploying a 3-Tier application in a DC:
- Two or more DC Gateways have eBGP connections to two different ISPs for load balancing and redundancy purposes. eBGP export policies using BGP’s Multi-Exit Discriminator (MED) and Local Preference (LP) attributes are commonly used in the DC Gateways and the ISPs’ eBGP routers to load balance Internet traffic to the web applications inside the DC. The load-balancing algorithm can be based on, say, the origin AS of the web traffic. For more information about eBGP peering with traffic steering policy, please refer to my previous article, ISP Peering with BGP Traffic Steering Policy – Nokia SR and Cisco XR
- The two DC Gateways perform Network Address Translation (NAT) to map the public IP addresses offered by the ISPs to the internal private IP subnet of 10.0.0.0/16 used by the web application example. The public IP addresses of the two DC Gateways are added as A-Records in the ISPs’ DNS servers for round-robin FQDN-to-IP-address lookup for redundancy purposes
- iBGP with an IGP (OSPF or IS-IS) is used among the DC Gateways and the Firewalls within the DC to support the IP prefix distribution policies set up by the eBGP export policies of the DC Gateways
- Bidirectional Forwarding Detection (BFD) is enabled on each iBGP and IGP link in the DC Gateways and Firewalls for fast link and node failure detection and switchover
- Internet traffic to the web application should be completely separated from Intranet traffic. If multi-service edge routers such as Nokia’s 7750 service routers are used as the DC Gateways, Internet and Intranet VPRNs can be set up on the DC Gateways to separate Internet and Intranet traffic within the DC using the VPRNs’ Route Target and Route Distinguisher attributes. When VPRNs are used, the iBGP setup described above should be enhanced to support the relevant Multi-Protocol BGP (MP-BGP) address families
- The use of multi-service edge routers as DC Gateways can reduce the number of physical boxes and connections in the DC, as the routers can support Firewall, VPRN, IPSec, etc.
- The interconnection of the two Firewalls in the DC allows operators on the Intranet to manage the 3-Tier applications, such as updating database records. Care must be taken to ensure Internet web traffic cannot reach the Intranet. The following are some of the techniques commonly employed in the Firewalls to block inbound Internet traffic to the Intranet:
- A stateful Firewall to allow only desired traffic from the Intranet to the DMZ hosting the web application, but NOT in the reverse direction
- If SSH is required for application maintenance, a randomly selected port instead of the default TCP port 22 should be used
- Implement Just-in-Time access, where the login user needs to have write access to the VMs or servers and the randomly selected SSH port is only opened for a pre-defined duration
- Each tier (e.g., Web, Business Logic, and Database) has its own IP subnet, and only the desired TCP and/or UDP ports are opened between tiers. For example, the Firewall of the Web tier has only TCP port 80 (i.e., HTTP) opened, whilst for the Database tier, only the ports needed by the database, such as MySQL’s TCP port 3306, are enabled. The idea is that if any component within a tier is compromised, intruders can only use the opened ports to attack the next tier, which limits the potential damage caused by the security violation. The intruders also cannot use the compromised components to attack other subnets that are not in the subnet’s routing table
- Redundancy and scalability of each tier are achieved by running multiple VMs or servers behind Load-Balancers. A common load-balancing strategy is to distribute ingress traffic round-robin among the available VMs or servers for processing
- Use network segmentation whenever practical. For example, the above DMZ has a frontend IP address of 10.0.0.20/24 that is different from any IP subnet used in the 3-Tier web application. The Load-Balancer frontend IP addresses of each tier (e.g., in 10.0.1.0/24, 10.0.2.0/24, and 10.0.3.0/24) are totally hidden or encapsulated inside the DMZ, and all access to the web application is via the DMZ’s frontend address of 10.0.0.20. If a Linux VM or server is used to implement the DMZ, iptables is normally used to automatically change the destination and source IP addresses of traffic to and from the 3-Tier web application to support network segmentation. The following shows the iptables rule implemented at the DMZ to redirect ingress Internet web traffic destined for 10.0.0.20:80 (i.e., the DMZ’s frontend address) to the Web tier’s frontend address of 10.0.1.100:80. Network segmentation offers additional security by encapsulating or hiding all IP addresses inside a DMZ and controlling all access to components inside the DMZ via a single entry point or IP address
iptables -t nat -A PREROUTING -p tcp --dport 80 -j DNAT --to-destination 10.0.1.100:80
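As a rough illustration of the eBGP traffic-steering bullet above, the following Python sketch models two of the tie-breakers mentioned: a path with a higher Local Preference wins, and when Local Preference is equal, the lower MED wins. This is only a toy model (real BGP best-path selection involves many more steps), and the route attributes are made up:

```python
def best_path(routes):
    """Pick the winning route: highest Local Preference first,
    then lowest MED as the tie-breaker."""
    return max(routes, key=lambda r: (r["local_pref"], -r["med"]))

routes = [
    {"nexthop": "ISPa", "local_pref": 200, "med": 50},
    {"nexthop": "ISPb", "local_pref": 100, "med": 10},
]
print(best_path(routes)["nexthop"])  # ISPa: higher Local Preference wins
```

In a real deployment these attributes would be set by the eBGP export/import policies on the DC Gateways and the ISPs’ routers rather than computed in application code.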
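The Just-in-Time access idea in the bullets above can be sketched in a few lines of Python: a random high port is chosen instead of the default TCP port 22, and an expiry timestamp enforces the pre-defined duration. The port range and duration below are illustrative assumptions, not recommendations:

```python
import random
import time

def open_jit_port(duration_s):
    """Pick a random high SSH port and compute when the access
    window closes (a sketch; a real system would program the Firewall)."""
    port = random.randint(20000, 65000)  # random port instead of 22
    expires = time.time() + duration_s   # end of the access window
    return port, expires

def is_open(expires):
    """The port is usable only while the window has not expired."""
    return time.time() < expires

port, expires = open_jit_port(duration_s=3600)
print(port, is_open(expires))
```

A production implementation would also tie the window to the user’s authenticated session and remove the Firewall rule when the window closes.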
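The per-tier port policy described above can be modeled as a simple lookup table: traffic between two tiers passes only if its port is in the allowed set for that tier pair. The application port 8080 for the Business Logic tier is an illustrative assumption (the article only specifies ports 80 and 3306):

```python
# Allowed (source tier, destination tier) -> permitted TCP ports.
# Anything not listed is dropped, so a compromised tier can only
# reach the next tier on its explicitly opened ports.
ALLOWED = {
    ("dmz", "web"): {80},       # HTTP into the Web tier
    ("web", "logic"): {8080},   # illustrative app port (assumption)
    ("logic", "db"): {3306},    # MySQL
}

def permitted(src_tier, dst_tier, port):
    """Return True only if the port is opened between these tiers."""
    return port in ALLOWED.get((src_tier, dst_tier), set())

print(permitted("web", "logic", 8080))  # True
print(permitted("dmz", "db", 3306))     # False: DMZ cannot reach the DB directly
```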
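The round-robin load-balancing strategy mentioned above can be sketched with Python's `itertools.cycle`; each ingress request is handed to the next backend in turn, wrapping around at the end of the pool. The backend addresses are illustrative:

```python
from itertools import cycle

# Backend VMs of the Web tier (illustrative addresses in 10.0.1.0/24)
backends = cycle(["10.0.1.11", "10.0.1.12", "10.0.1.13"])

def pick_backend():
    """Return the next backend in round-robin order."""
    return next(backends)

for _ in range(4):
    print(pick_backend())  # cycles through the pool, wrapping around
```

Real Load-Balancers typically add health checks so that a failed VM is skipped, which is also how the redundancy in each tier is realized.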
If the 3-Tier web application demands very high network and system availability, a geo-redundant High Availability (HA) Cluster should be employed in each tier to journal traffic and state information among VMs or servers within a cluster deployed in a different geographical location. An HA Cluster can be set up in an active/standby or active/active configuration. A detailed explanation of geo-redundant HA Clusters is beyond the scope of this article.
If the above network is set up correctly, an operator on the Intranet can access the 3-Tier web application at 10.0.0.20 (i.e., the DMZ’s frontend address) even though the web page served by the Apache web server is at 10.0.1.100.
Similarly, Internet users can point their web browsers to the public IP address or FQDN of the 3-Tier web application (i.e., the public IP address NATed to the private IP address 10.0.0.20) and the above test page will show up.
Application Migration to Azure
We have just reviewed some basic network and system design issues related to redundancy, scalability, and security for deploying N-Tier applications in a DC. The following shows the 3-Tier application after it is migrated to Azure. Effectively, we are extending the private DC to include Azure as part of the IT infrastructure (i.e., a Hybrid Cloud).
The above network design shows that the private DC still keeps its Internet connections to ISPa and ISPb for load balancing and redundancy, but Internet traffic to/from the 3-Tier web application is now diverted from the DC to Azure through an IPSec tunnel for data security. Only web traffic (e.g., TCP port 80) is diverted to the IPSec tunnel. Again, if multi-service edge routers such as Nokia’s 7750 service routers are used as the Gateways, the Firewall and IPSec gateway functions can be provisioned on the multi-service edge routers to reduce the number of physical boxes and connections required to support the design.
For web application maintenance traffic such as SSH and Remote Desktop Protocol (RDP), a Jumpbox is added so that maintenance traffic from the Internet and Intranet to the web application on Azure requires additional authentication for enhanced security.
Some DCs may even decide to move the Internet connections for the web application to Azure and simply maintain the Jumpbox for application maintenance on Azure.
In Parts 2 and 3 of this article, we will discuss in detail the setup of the 3-Tier web application and all the networking equipment that supports it on Azure.
Not long ago, deploying N-Tier applications in a DC required the IT department of an enterprise to manually install and configure many pieces of networking equipment, software, OSes, and servers, and it could take weeks if not months to complete. With network and compute virtualization and programmable networks, enterprises can now deploy these kinds of applications in hours. Please see my article, EVPN NVO and Data Center Gateway with Nokia SR and Juniper MX – Part 1 for details. In Parts 2 and 3 of this article, we will cover the setup of the 3-Tier web application on Azure while at the same time enabling operators on the enterprise’s Intranet to manage the Azure web application. Effectively, the enterprise is now including Azure as part of its private DC (i.e., Hybrid Cloud). Stay tuned.