FileCloud High Availability Architecture
FileCloud servers can be configured for a high availability (HA) environment to improve service reliability and reduce downtime in your IT environment. FileCloud supports HA on both Linux and Windows installations.
The load balancer routes traffic to the FileCloud application nodes. Load balancers (LB) offer many advantages when serving requests from FileCloud (FC) servers because they let you control how traffic is handled in order to provide the best performance. If one or more app server nodes fail, the load balancer automatically reroutes traffic to the remaining app server nodes.
Typically there is no need to scale the number of load balancers because these servers can handle a very large amount of traffic. However, more than one load balancer can be used to provide additional reliability in the event that one load balancer fails.
To protect against load balancer hardware failure, you can publish multiple A records for the load balancer host name in your DNS service.
The idea is that different clients receive differently ordered lists of IP addresses for your domain name, which distributes requests across the group of IPs. If an IP address does not respond within an appropriate amount of time, the client times out on that request and moves on to the next IP address until the list is exhausted or it finds a valid connection.
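The client-side behavior described above can be sketched as follows. This is a minimal illustration, not FileCloud code: it walks an ordered list of resolved IPs and falls through to the next address when a connection times out or is refused.

```python
import socket

def connect_with_failover(addresses, port, timeout=2.0):
    """Try each IP from a DNS round-robin answer until one connects.

    A client times out on a dead load balancer IP and moves on to the
    next A record until the list is exhausted or a connection succeeds.
    """
    for ip in addresses:
        try:
            sock = socket.create_connection((ip, port), timeout=timeout)
            return sock, ip  # first responsive load balancer wins
        except OSError:
            continue  # timed out or refused -- try the next A record
    raise ConnectionError("no load balancer responded")
```

Real clients (browsers, the FileCloud apps) implement this retry logic themselves; the sketch only shows why multiple A records give you failover without any extra hardware.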
FileCloud Component: App server node
The FileCloud app server node consists of the Apache web server as well as the FileCloud application code that serves client requests. The FileCloud app server nodes do not contain any application-specific data; the data is retrieved from the MongoDB replica sets. Because of this, app server nodes can be added or removed without disrupting the service.
FileCloud Component: Mongo DB Replica set
The MongoDB database replica sets provide high availability with automatic failover support. Failover allows a secondary member to become primary in the event of a failure of the primary DB node. The minimum number of DB nodes needed for MongoDB is three. All app server nodes connect to the primary; if the primary node fails, a new primary is elected and all app server nodes switch to it.
This document describes the classic 3-tier approach: a load balancer handling client traffic, application server nodes serving requests, and redundant database servers storing application data.
- You have at least three systems, because the database replica set requires a minimum of three servers
- If you are using local storage, it must be a location accessible by all the webserver nodes. The local storage CANNOT be a location inside any of the computers that run the FileCloud service. The location must be mounted at the same path on each of the nodes (e.g., /mount/fcstorage or H:\storage)
- Port 27017 (the MongoDB port) must not be blocked by the firewall (ideally disable the firewall until the install is complete)
- Temp storage must also be commonly accessible (a network-mounted location). Mount the temp storage on each of the nodes and specify the path in amazons3storageconfig.php (on each node) using the key "TONIDOCLOUD_NODE_COMMON_TEMP_FOLDER":
define ("TONIDOCLOUD_NODE_COMMON_TEMP_FOLDER", "/mount/tempspace");
- Each webserver node must have a UNIQUE host name, otherwise temp folder clean-up will not work properly
The following setup will be created with this set of instructions.
The load balancer is not part of this install, but for completeness we use HAProxy as an example.
Skip this section if you already have a load balancer set up.
Creating MongoDB Cluster
- MongoDB HA requires an odd number of nodes to vote for a primary
- MongoDB requires a majority of nodes to be available in order to hold an election (or a majority of votes, which is controlled by each node's priority)
- A timeout parameter may be needed to reduce latency when nodes are lost (e.g., mongodb://Ha-WS1,Ha-WS2,Ha-WS3/?replicaSet=rs0&connectTimeoutMS=1000)
- Use host names instead of IP addresses for robustness
- Ensure port 27017 is open so DB communication works
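The majority rule above can be sketched numerically. This small illustration shows why three nodes is the minimum useful replica set and why a fourth node buys no extra fault tolerance:

```python
def majority(n_nodes: int) -> int:
    # A MongoDB election needs a strict majority of voting members.
    return n_nodes // 2 + 1

def tolerated_failures(n_nodes: int) -> int:
    # How many nodes can be lost while an election is still possible.
    return n_nodes - majority(n_nodes)

for n in (2, 3, 4, 5):
    print(f"{n} nodes: majority={majority(n)}, "
          f"tolerates {tolerated_failures(n)} failure(s)")
```

A 3-node set survives the loss of one node, while a 4-node set still only survives one (its majority rises to three), which is why odd member counts are recommended.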
Ensure every node is at the same software level (the OS, the FileCloud software, and its dependencies must all be at the same version).
Step 1: Install MongoDB on all the designated DB nodes. These nodes can be co-located with the Apache server or run on separate machines. In this section, we assume there are three nodes (the minimum number needed for a MongoDB cluster).
Step 2: Edit mongodb.conf (on Linux it is at /etc/mongodb.conf; on Windows it is c:\xampp\mongodb\bin\mongodb.conf) on each DB node and enable DB replication.
For MongoDB on Windows (all versions) and MongoDB v2.x on Linux, uncomment replSet and set it as follows (or add the line if not present).
For MongoDB v3.x on Linux, uncomment the line containing replication and add the replica set name as follows:
Important: Comment out (or remove) the bind directive so the nodes can reach each other.
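For illustration, and assuming the replica set name rs0 used elsewhere in this guide, the relevant mongodb.conf fragments look like this:

```
# MongoDB v2.x (Linux) and all Windows versions -- mongodb.conf
replSet = rs0

# MongoDB v3.x (Linux) -- YAML-style configuration
replication:
  replSetName: "rs0"

# Comment out the bind directive so nodes can reach each other:
# bind_ip = 127.0.0.1
```

Use whichever of the two forms matches your MongoDB version; do not add both to the same file.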
Step 3: Open the mongo shell by running the mongo command (on Linux it is /usr/bin/mongo; on Windows it is c:\xampp\mongodb\bin\mongo).
Step 4: This applies to ONLY one node. Select a node (say Ha-WS1) and issue the following command. If you issue this on more than one system, the configuration will become invalid!
Initialize the replica set with the following command
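As a sketch, assuming the three hosts are named Ha-WS1, Ha-WS2, and Ha-WS3 and the replica set is named rs0 as in the earlier connection string, the initialization command looks like this:

```javascript
rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "Ha-WS1:27017" },
    { _id: 1, host: "Ha-WS2:27017" },
    { _id: 2, host: "Ha-WS3:27017" }
  ]
})
```

Substitute your own host names; they must match the names used in the FileCloud connection string.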
Step 5: On each of the three database server nodes, connect to the mongo shell and run rs.status() to see the actual state (one node should show as PRIMARY and the other two as SECONDARY).
It should show something like
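An abbreviated sketch of the expected rs.status() output (most fields trimmed; host names follow this guide's examples):

```javascript
{
  "set" : "rs0",
  "members" : [
    { "name" : "Ha-WS1:27017", "stateStr" : "PRIMARY" },
    { "name" : "Ha-WS2:27017", "stateStr" : "SECONDARY" },
    { "name" : "Ha-WS3:27017", "stateStr" : "SECONDARY" }
  ],
  "ok" : 1
}
```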
It is important that the "name" field for each of the members in the replica set matches the name used in the connection string.
The name can be changed using mongo shell commands on the primary. For example, to change the name of the first member of the replica set (element 0 of the rs.conf() output), reconfigure its host field, which is what rs.status() reports as name (the host name below is this guide's example):
cfg = rs.conf()
cfg.members[0].host = "Ha-WS1:27017"
rs.reconfig(cfg)
Configuring FileCloud With MongoDB Cluster
After the MongoDB cluster is installed and configured, use the following steps to configure FileCloud to use this cluster as its database.
Step 1: If the app servers are different from the DB servers, install the app server portion (Apache web server) of FileCloud on the app server nodes using the latest FileCloud server installer. If they are co-located, proceed to the next step.
Step 2: Open the file $XAMPPROOT/config/cloudconfig.php (on Linux it is /var/www/html/config/cloudconfig.php; on Windows it is c:\xampp\htdocs\config\cloudconfig.php).
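As a sketch, the database host setting in cloudconfig.php is pointed at the replica set using the connection string from the prerequisites section. The key name TONIDOCLOUD_DB_HOST is an assumption here; verify the exact key names shipped in your FileCloud version's cloudconfig.php.

```php
// Point FileCloud at the MongoDB replica set instead of a single host.
// TONIDOCLOUD_DB_HOST is assumed -- confirm the key name in your
// version's cloudconfig.php before applying.
define("TONIDOCLOUD_DB_HOST",
    "mongodb://Ha-WS1,Ha-WS2,Ha-WS3/?replicaSet=rs0&connectTimeoutMS=1000");
```

Apply the same change on every app server node so all nodes fail over together.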
Step 3: Edit localstorageconfig.php and add/replace the following keys (on Linux it is /var/www/html/config/localstorageconfig.php; on Windows it is c:\xampp\htdocs\config\localstorageconfig.php).
Step 4 (required only for S3 storage): If you are using Amazon S3 for backend storage, edit amazons3storageconfig.php and add/replace the following keys (on Linux it is /var/www/html/config/amazons3storageconfig.php; on Windows it is c:\xampp\htdocs\config\amazons3storageconfig.php).
If this file is not found, copy the storage sample file and rename it (on each of the nodes). Temp space must be mounted at the same mount point on every node (for example /mount/fctemp on Linux or F:\fctemp on Windows).
Setup Managed Storage
Since the FileCloud app server nodes do not store any of the application data, the managed storage must be an external location (a NAS, iSCSI, SAN, Amazon S3, or OpenStack location).
In this example, we assume that a NAS or NFS mount is already available and mounted on each of the webserver nodes.
Open the FileCloud Admin portal at http://<load balancer IP>/ui/admin/index.html and log into the Administration portal.
Navigate to the Settings > Storage tab, set the mounted path, and click "Save".
Once the setup is complete, create user accounts by connecting to the admin portal, and then log into a user account using the load balancer IP (which routes the traffic to one of the app server nodes).
To test app server HA, turn off one of the app servers by logging into it (say Ha-WS1) and stopping Apache (using service apache2 stop). The service will remain accessible because HAProxy reroutes traffic to Ha-WS2 or Ha-WS3 (depending on the routing algorithm selected).