Archive for the ‘Advanced Computer Administration and Architecture’ Category

FileCloud High Availability Architecture

Enterprise Cloud Infrastructure is a Critical Service

Enterprise-hosted cloud services have opened up huge potential for companies to manage files effectively. Files can be stored, shared, and exchanged within the enterprise and with partners efficiently, while existing security and audit controls stay in place. The service provides the power and flexibility of a public cloud while maintaining control of the data.

The main challenge of enterprise-hosted cloud services is to guarantee high uptime (on the order of five nines) while maintaining a high quality of service. Because organizations come to depend on such services, any disruption can have a significant impact on productivity. Enterprise cloud services typically consist of multiple distinct services working together, and to be effective, a high availability architecture must build redundancy into every critical service. Moreover, detection and handling of failures must be reasonably quick and must not require any user interaction.

FileCloud Enterprise Cloud

FileCloud enables enterprises to seamlessly access their data using a variety of external agents: browsers, mobile devices, and client applications. The data that FileCloud makes accessible can be stored locally, on internal NAS devices, or in public cloud locations such as AWS S3 or OpenStack Swift.

Depending on the specific enterprise requirements, a FileCloud deployment may include multiple software services, such as the FileCloud Helper service, the Solr service, a virus scanner service, and the Open Office service. Moreover, FileCloud may use enterprise identity services such as Active Directory, LDAP, or ADFS. A failure in any of these services can affect the end user experience.

High Availability Architecture

The FileCloud solution can be implemented using the classic three-tier high availability architecture. Tier 1 is the web tier, made up of load balancers and access control services. Tier 2 holds the stateless application servers; in a FileCloud implementation, this layer consists of Apache nodes and helper services. Tier 3 is the database layer. Other dependencies, such as Active Directory or data servers, are not addressed here. The advantage of this architecture is the separation of stateless components from stateful ones, allowing great flexibility in deploying the solution.

Tier 1 – Web Tier

Tier 1 is the front end of the deployment and acts as the entry point for all external clients. The components in Tier 1 are stateless and primarily forward requests to the web servers in Tier 2; each web server node is capable of handling any request. Because the load balancers are stateless, the web tier can be scaled simply by adding or removing load balancer instances. This layer can also be configured to perform SSL offloading, allowing lighter-weight communication between Tier 1 and Tier 2, and to provide simple affinity based on source and destination addresses. It monitors the available application servers, forwards traffic only to healthy nodes, and automatically distributes traffic according to load.
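As an illustration, a Tier 1 node of this kind could be built with a load balancer such as HAProxy. This is a hypothetical minimal sketch, not FileCloud's shipped configuration; the backend addresses and certificate path are placeholders:

```
# /etc/haproxy/haproxy.cfg (sketch)
frontend www
    mode http
    bind *:443 ssl crt /etc/ssl/filecloud.pem   # SSL offloading at Tier 1
    default_backend app_servers

backend app_servers
    mode http
    balance source                  # simple source-address affinity
    option httpchk GET /            # health check: only healthy nodes get traffic
    server app1 10.0.1.11:80 check
    server app2 10.0.1.12:80 check
```

With `balance source` a given client keeps hitting the same Apache node, and the `check` keyword makes HAProxy stop routing to a node that fails its health check.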

Tier 2 – Application Servers

Tier 2 in a FileCloud deployment consists of the following services:

  • Apache servers
  • FileCloud helper
  • Antivirus service
  • Memcache service
  • Open Office service

The Apache servers in FileCloud primarily execute application code to service requests. All state-specific data is stored in database tables, so the Apache servers themselves are stateless; they do, however, cache data for faster performance (for example, converting and caching documents for display). If an application server node fails, the request can be handled by a different node, provided the client retries the failed request. Capacity can be increased or reduced, automatically or manually, by adding or removing Apache server nodes.

The FileCloud Helper service provides additional capabilities such as indexed search and NTFS permission retrieval. FileCloud Helper is stateless, so instances can be added or removed as needed.

Similar to the FileCloud Helper service, the antivirus service is also stateless; it scans every file that is uploaded to FileCloud.

The Memcache service is an optional, stateless service that is required only if local storage encryption is enabled. It is started on the same node as the Apache service.

The Open Office service is an optional, stateless service used to create document previews in the browser. It is started on the same node as the Apache server.

Tier 3 – Database Nodes

Tier 3 consists of the stateful services:

  • MongoDB servers
  • Solr Servers

The high availability strategy for each of these servers varies with the complexity of the deployment, and the failure of these services can have limited or system-wide impact. For example, a MongoDB server failure is critical and causes a solution-wide FileCloud outage, while a FileCloud Helper failure affects only part of the functionality, such as network folder access.

MongoDB Server High Availability

MongoDB servers store all application data in FileCloud and provide high availability using replica sets. A MongoDB replica set provides redundancy and increases data availability by keeping multiple copies of the data on different database servers, giving fault tolerance against the loss of a single database server. MongoDB can also be configured to increase read capacity. The minimum configuration for MongoDB HA is a three-node member set (two data-bearing nodes plus an arbiter is also possible). If the primary MongoDB node fails, one of the secondary nodes takes over and becomes the new primary.

The heartbeat interval can be tuned depending on system latency. The replica set can also be configured to allow reads from secondaries to improve read capacity.
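As an illustrative sketch of the three-member replica set described above (hostnames and the set name are placeholders, and this is not FileCloud's documented procedure; consult FileCloud support for the supported configuration):

```
# Run once against the node intended to become primary
mongosh --host db1.example.com --eval '
rs.initiate({
  _id: "filecloud0",
  members: [
    { _id: 0, host: "db1.example.com:27017" },
    { _id: 1, host: "db2.example.com:27017" },
    { _id: 2, host: "db3.example.com:27017" }
  ]
})'

# Verify: expect one PRIMARY and two SECONDARY members
mongosh --host db1.example.com --eval 'rs.status().members.forEach(m => print(m.name, m.stateStr))'
```

If the primary becomes unreachable, the remaining members hold an election and one secondary is promoted automatically, which is the failover behavior the text relies on.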

Putting It All Together

The three-tier structure of the FileCloud components is shown below; the actual configuration details are available from FileCloud support. This arrangement provides a robust FileCloud implementation with high availability and extensibility. As new services are added to extend functionality, they can be assigned to a layer according to whether they are stateless or store state: stateless (Tier 2) nodes can be added or removed without disrupting service, while Tier 3 nodes store state and require a service-specific implementation.

Alternative to WatchDox – Why FileCloud is better for Business File Sharing?


FileCloud competes with WatchDox for business in the Enterprise File Sync and Share (EFSS) space. Before we get into the details, I believe an ideal EFSS system should work across all the popular desktop OSes (Windows, Mac and Linux) and offer native mobile applications for iOS, Android, BlackBerry and Windows Phone. In addition, the system should offer all the basics expected of EFSS: Unlimited File Versioning, Remote Wipe, Audit Logs, Desktop Sync Client, Desktop Map Drive and User Management.

The feature comparisons are as follows:

Features (FileCloud vs. WatchDox)
On Premise
File Sharing
Access and Monitoring Controls
Secure Access
Document Preview
Document Edit
Outlook Integration
Role Based Administration
Data Loss Prevention
Web DAV
Endpoint Backup
Amazon S3/OpenStack Support
Public File Sharing
Customization, Branding
SAML Integration
Anti-Virus
NTFS Support
Active Directory/LDAP Support
Multi-Tenancy
API Support
Application Integration via API
Large File Support
Network Share Support (WatchDox: buy additional product)
Mobile Device Management
Desktop Sync: FileCloud – Windows, Mac, Linux; WatchDox – Windows, Mac
Native Mobile Apps: FileCloud – iOS, Android, Windows Phone; WatchDox – iOS, Android
Encryption at Rest
Two-Factor Authentication
File Locking
Pricing for 20 users/year: FileCloud – $999; WatchDox – $3,600

From the outside looking in, the offerings all look similar. However, the two take completely different approaches to satisfying the enterprise's primary need: easy access to files without compromising privacy, security, and control. The fundamental areas of difference are as follows:

Feature benefits of FileCloud over WatchDox

Unified Device Management Console – FileCloud's unified device management console simplifies the management of mobile devices enabled to access enterprise data, irrespective of whether the device is enterprise-owned or employee-owned, and regardless of mobile platform or device type. Manage and control thousands of iOS and Android devices from FileCloud's secure, browser-based dashboard. FileCloud's administrator console is intuitive and requires no training or dedicated staff. FileCloud's MDM works on any vendor's network, even if the managed devices are on the road, at a café, or used at home.

Amazon S3/OpenStack Support – Enterprises wanting to use Amazon S3 or OpenStack storage can easily set it up with FileCloud. This feature not only gives enterprises the flexibility to switch storage back ends but also makes the switch very easy.

Embedded File Upload Website Form – FileCloud's Embedded File Upload Website Form enables users to embed a small FileCloud interface onto any website, blog, social networking service, intranet, or any public URL that supports HTML embed code. Using the form, you can easily allow file uploads to a specific folder within your account. The feature works like a file drop box, letting your customers or associates send any type of file without having to log in or create an account.

Multi-Tenancy Support – The multi-tenancy feature allows Managed Service Providers (MSPs) to serve multiple customers using a single instance of FileCloud. The key value proposition of FileCloud's multi-tenant architecture is that data separation among tenants is maintained, and every tenant retains the flexibility of customized branding.

NTFS Shares Support – Many organizations use NTFS permissions to manage and control access to internal file shares, and it is very hard to duplicate those permissions in other systems and keep them in sync. FileCloud enables access to internal file shares via web and mobile while honoring the existing NTFS file permissions. This is a great time saver for system administrators and provides a single point of management.

Conclusion

Based on our experience, enterprises that look for an EFSS solution want two main things: one, easy integration with their existing storage system without any disruption to access permissions or network home folders; and two, the ability to easily expand integration into highly available storage systems such as OpenStack or Amazon S3.

WatchDox neither provides OpenStack/Amazon S3 storage integration support nor NTFS share support. On the other hand, FileCloud provides easy integration support into Amazon S3/OpenStack and honors NTFS permissions on local storage.

With FileCloud, enterprises get one simple solution with all features bundled. For the same 20-user package, the cost is $999/year, almost a quarter of the WatchDox price.

Here’s a comprehensive comparison that shows why FileCloud stands out as the best EFSS solution.

Try FileCloud For Free & Receive 5% Discount

Take a tour of FileCloud

A Primer on Windows Server Disaster Recovery

In this primer, we explore some of the best ways to restore your Windows Server with minimal impact. Though basic, the following technical pointers will help you recover Windows servers from disaster faster.

1. RAM and Hard Disk Check

Blue screens are Windows' way of telling you about a hardware failure, such as faulty RAM. Before taking any immediate action such as a software repair, it is important to run a thorough RAM and hard disk check. To analyze blue screens, you can use the BlueScreenView tool, which can also be loaded from a USB stick. If you are experiencing blue screens, define the restart behavior under Control Panel -> System and Security -> System -> Advanced System Settings: go to Startup and Recovery -> Settings and disable the Automatically Restart option under System Failure. Choose Automatic memory dump or Small memory dump so that BlueScreenView can parse the generated memory.dmp file. Further hard disk errors can be checked in Event Viewer under Windows Logs -> System.

2. Boot Manager Failure

A boot manager failure prevents the server from loading. A Windows Server DVD or a repair technician can help here; another solution is to access the boot manager through the command prompt and take the necessary steps to reactivate it. To overwrite the master boot record (at the beginning of the disk), use the command bootrec /fixmbr. To view operating systems not currently listed, run bootrec /scanos. To reinstate systems in the boot manager, use bootrec /rebuildbcd, which re-registers previously installed systems with the boot manager. After this, run bootrec /fixboot to write a new boot sector. Beyond this, run bootsect /nt60 SYS followed by bootsect /nt60 ALL to repair the boot code further.
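Collected in one place, the repair sequence described above looks like this when run from the installation DVD's recovery command prompt (run the commands one at a time and check each result before continuing):

```
REM Recovery-console session for boot manager repair (Windows Server install DVD)
bootrec /fixmbr
bootrec /scanos
bootrec /rebuildbcd
bootrec /fixboot
bootsect /nt60 SYS
bootsect /nt60 ALL
```

The first four commands repair the MBR and the boot configuration data; the two bootsect commands rewrite the boot code on the system partition (or on all partitions).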

3. Windows Startup Failure

Startup failures result from system files being displaced after a crash, which leads to the server powering on but Windows failing to launch. One option is to do a system restore and select an earlier restore point. Another is to open an elevated command prompt, run sfc /scannow, and allow Windows to scan and restore the system files.

4. Restoring Server Backup

A server that is backed up to an external drive through Server Manager can restore its data completely. Server Manager offers the Windows Server Backup feature, which you can launch from the Tools menu or by searching for wbadmin.msc in the Start menu. Backups are block-based, although it is possible to select particular partitions in the Backup Schedule wizard. To start a full backup (restorable via the Repair Your Computer option on the installation DVD) from the command line, use wbadmin start backup -allCritical -backupTarget:<insert_disk_of_choice> -quiet; a system state backup can be created with wbadmin start systemstatebackup. The backup can then be used to restore from in case of system failure: boot the server from the DVD, select Repair Your Computer, then Troubleshoot -> System Image Recovery.
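The two wbadmin invocations mentioned above, as a command listing (the backup target stays a placeholder; exact switches can vary between Windows Server releases, so check wbadmin /? on your system):

```
REM Full backup of all critical volumes, restorable via the DVD's repair option
wbadmin start backup -allCritical -backupTarget:<insert_disk_of_choice> -quiet

REM System state backup (Active Directory, registry, boot files)
wbadmin start systemstatebackup -backupTarget:<insert_disk_of_choice>
```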

5. Hardware Restore

Windows Server 2008 and Windows Server 2012 can restore system backups onto different hardware if you select the Bare Metal Recovery option. Here you need to use the Exclude Disks option, which lets you deselect disks that are not required during the restore; a disk holding data rather than OS files is a good candidate. Select Install Drivers if you want to include drivers in your recovery data so they are installed as well during a complete system restore from the initial backup point. Advanced options are also available, such as automatic restart after disk-defect verification and server restore.

6. Active Directory Backup & Restore

The native backup program in Windows Server is sufficiently capable of backing up Active Directory services and restoring them. It not only creates a backup of the directory but also saves all associated data necessary for it to function. To run a backup, enable the System State and System Reserved options and then back up the data. To restore Active Directory, start the domain controller and press F8 until the boot menu appears (the key may vary depending on the make and model of the computer). In the boot options, select Directory Services Restore Mode, log in with the Directory Services Restore Mode credentials, and complete the restore. To boot the domain controller into restore mode from a running system, run bcdedit /set safeboot dsrepair; once in Directory Services Restore Mode, run bcdedit /deletevalue safeboot to boot normally again. Enter shutdown /t 0 /r to reboot.
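The restore-mode boot switches mentioned above, as a console session (run from an elevated prompt on the domain controller):

```
REM Boot the domain controller into Directory Services Restore Mode
bcdedit /set safeboot dsrepair
shutdown /t 0 /r

REM After the restore is complete, return to a normal boot
bcdedit /deletevalue safeboot
shutdown /t 0 /r
```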

7. Active Directory Cleanup

In DNS Manager, open Properties, select the Name Servers tab, and remove the server entry, being careful not to remove the host entry. Ensure the domain controller is not explicitly registered as such, then remove AD-dependent services (e.g., VPN). If a global catalog exists on the server, configure another one with the same details from the AD Sites and Services snap-in, then go to Sites -> Servers -> right-click NTDS Settings -> Properties and uncheck Global Catalog on the General tab. To demote the domain controller, use the PowerShell Uninstall-ADDSDomainController cmdlet, with -Force if you wish to remove it completely. Metadata can be cleaned up via ntdsutil -> metadata cleanup -> connections. After cleanup, delete the domain controller from the site it was assigned to: go to the snap-in -> Domain Controller -> Delete. Check the NTDS settings in AD to confirm it is no longer registered with a replication partner (remove it if required).
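For reference, the PowerShell demotion step described above is a single cmdlet (it prompts for the local administrator password; -Force skips the confirmation prompts):

```
# Demote this server from domain controller to member server
Uninstall-ADDSDomainController -Force
```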

8. Active Directory Database Rescue

Boot into Directory Services Restore Mode and run ntdsutil, then enter activate instance ntds and choose files. Enter integrity, then quit to leave file maintenance. A detailed report can be produced by the semantic database analysis command if you keep verbose on. Enter go fixup to start the diagnostic repair of the database. Finally, quit twice to exit ntdsutil.
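As an interactive-session sketch of the steps above (indented lines are typed at the ntdsutil prompts, in Directory Services Restore Mode):

```
ntdsutil
    activate instance ntds
    files
        integrity
        quit
    semantic database analysis
        verbose on
        go fixup
        quit
    quit
```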

9. Backup for Win Exchange

Begin with Select Application under Select Recovery Type, navigate to the Exchange option, and choose View Details to see the backups. The backup is current if the Do Not Perform a Roll-Forward checkbox appears at this stage. For a roll-forward recovery, the transaction logs created during the backup are required, as Exchange uses them to write into the database and complete the recovery. Enabling the Recover to Original Location option restores all databases to their original locations. Beyond the system restore, the backup is integrated with the database and can also be moved back manually.

Author: Rahul Sharma

Image courtesy: Salvatore Vuono, freedigitalphotos.net

Alternative to Pydio – Why FileCloud is better for Business File Sharing?


FileCloud competes with Pydio for business in the Enterprise File Sync and Share (EFSS) space. Before we get into the details, I believe an ideal EFSS system should work across all the popular desktop OSes (Windows, Mac and Linux) and offer native mobile applications for iOS, Android, BlackBerry and Windows Phone. In addition, the system should offer all the basics expected of EFSS: Unlimited File Versioning, Remote Wipe, Audit Logs, Desktop Sync Client, Desktop Map Drive and User Management.

The feature comparisons are as follows:

Features (FileCloud vs. Pydio)
On Premise
File Sharing
Access and Monitoring Controls
Secure Access
Document Preview
Document Edit
Outlook Integration
Role Based Administration
Data Loss Prevention
Web DAV
Endpoint Backup
Amazon S3/OpenStack Support
Public File Sharing
Customization, Branding
SAML Integration
Anti-Virus
NTFS Support
Active Directory/LDAP Support
Multi-Tenancy
API Support
Application Integration via API
Large File Support
Network Share Support
Mobile Device Management
Desktop Sync: FileCloud – Windows, Mac, Linux; Pydio – Windows, Mac, Linux
Native Mobile Apps: FileCloud – iOS, Android, Windows Phone; Pydio – iOS, Android
Encryption at Rest
Two-Factor Authentication
File Locking
Pricing for 100 users/year: FileCloud – $2,999; Pydio – $1,772

From the outside looking in, the offerings all look similar. However, the two take completely different approaches to satisfying the enterprise's primary need: easy access to files without compromising privacy, security, and control. The fundamental areas of difference are as follows:

Feature benefits of FileCloud over Pydio

Document Quick Edit – FileCloud's Quick Edit feature supports extensive edits of files such as Microsoft® Word, Excel®, Publisher®, Project® and PowerPoint® documents, right from your desktop. It's as simple as selecting a document to edit from the FileCloud web UI and editing it in Microsoft Office; on save, FileCloud takes care of the uninteresting details in the background, such as uploading the new version, syncing, sending notifications, and sharing updates.

Embedded File Upload Website Form – FileCloud's Embedded File Upload Website Form enables users to embed a small FileCloud interface onto any website, blog, social networking service, intranet, or any public URL that supports HTML embed code. Using the form, you can easily allow file uploads to a specific folder within your account. The feature works like a file drop box, letting your customers or associates send any type of file without having to log in or create an account.

Unified Device Management Console – FileCloud's unified device management console simplifies the management of mobile devices enabled to access enterprise data, irrespective of whether the device is enterprise-owned or employee-owned, and regardless of mobile platform or device type. Manage and control thousands of iOS and Android devices from FileCloud's secure, browser-based dashboard. FileCloud's administrator console is intuitive and requires no training or dedicated staff. FileCloud's MDM works on any vendor's network, even if the managed devices are on the road, at a café, or used at home.

Device Commands and Messaging – The ability to send on-demand messages to any device connecting to FileCloud gives administrators a powerful tool for interacting with the enterprise workforce. Any information on security threats or access violations can be easily conveyed to mobile users. And, above all, messages carry no SMS cost.

Amazon S3/OpenStack Support – Enterprises wanting to use Amazon S3 or OpenStack storage can easily set it up with FileCloud. This feature not only gives enterprises the flexibility to switch storage back ends but also makes the switch very easy.

Multi-Tenancy Support – The multi-tenancy feature allows Managed Service Providers (MSPs) to serve multiple customers using a single instance of FileCloud. The key value proposition of FileCloud's multi-tenant architecture is that data separation among tenants is maintained, and every tenant retains the flexibility of customized branding.

Endpoint Backup – FileCloud provides the ability to back up user data from any computer running Windows, Mac or Linux to FileCloud. Users can schedule a backup, and FileCloud automatically backs up the selected folders at the scheduled time.

Conclusion

An enterprise's preference will depend on whether it wants to rely on Pydio, whose focus is split between being open source and selling a commercial enterprise offering, or on FileCloud, whose only focus is satisfying all of an enterprise's EFSS needs, with unlimited product upgrades and support at a very affordable price.

Here’s a comprehensive comparison that shows why FileCloud stands out as the best EFSS solution.

Try FileCloud For Free & Receive 5% Discount

Take a tour of FileCloud

Architectural Patterns for High Availability

As the number of mission-critical web-based services deployed by enterprise customers continues to increase, a deeper understanding of how to design optimal network availability solutions has never been more important. High Availability (HA) has become a critical aspect of the development of such systems. High availability simply refers to a component or system that remains continuously operational for a desirable length of time. Availability is generally measured relative to "100 percent operational"; however, since it is nearly impossible to guarantee 100 percent availability, goals are usually expressed in numbers of nines. The most coveted availability goal is "five nines", which translates to 99.999 percent availability, the equivalent of less than a second of downtime per day.
Five nines availability can be achieved using standard commercial-quality software and hardware. The design of high availability architectures is largely based on combining redundant hardware components with software that manages fault detection and correction without human intervention. The patterns below address the design and architectural considerations to make when designing a highly available system.
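As a quick sanity check on these numbers, the downtime implied by an availability target can be computed directly. A minimal shell sketch (the targets shown are illustrative; a 365-day year is assumed):

```shell
# Downtime per year implied by an availability percentage
for a in 99.9 99.99 99.999; do
  awk -v a="$a" 'BEGIN { printf "%s%% -> %.1f minutes of downtime/year\n", a, (100 - a) / 100 * 365 * 24 * 60 }'
done
# 99.9%  -> 525.6 minutes/year
# 99.99% -> 52.6 minutes/year
# 99.999% -> 5.3 minutes/year (about 0.86 seconds/day)
```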

Server Redundancy

The key to a solid design for a highly available system lies in identifying and addressing single points of failure. A single point of failure is any part whose failure will result in a complete system shutdown. Production servers are complex systems whose availability depends on multiple factors, including hardware, software, and communication links; each of these is a potential point of failure. Introducing redundancy is the surest way to address single points of failure. It is accomplished by replicating a part of the system that is crucial to its function; replication guarantees that a secondary component is always available to take over when a critical component fails. Redundancy relies on the assumption that the system will not experience multiple faults simultaneously.
The most widely known example of redundancy is RAID (Redundant Array of Inexpensive Disks), which combines multiple drives. Server redundancy can be achieved in a standby form, also referred to as active-passive redundancy, or through active-active redundancy, where all replicas are concurrently active.
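The payoff of replication can be quantified: if each server is independently available with probability A, then n redundant servers are all down only with probability (1 − A)^n. A small shell sketch with an illustrative per-server availability of 99%:

```shell
# Combined availability of n redundant servers, each 99% available,
# assuming independent failures: 1 - (1 - 0.99)^n
for n in 1 2 3; do
  awk -v n="$n" 'BEGIN { printf "n=%d -> %.6f%% available\n", n, (1 - 0.01 ^ n) * 100 }'
done
# n=1 -> 99.000000% available
# n=2 -> 99.990000% available
# n=3 -> 99.999900% available
```

Note how two ordinary "two nines" servers already reach four nines together, which is why redundancy, not individually perfect hardware, is the basis of HA design.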

  • Active-Passive Redundancy

An active-passive architectural pattern consists of at least two nodes. The passive (failover) server acts as a backup that remains on standby and takes over in the event the active server becomes unavailable for any reason. The primary, active server hosts the production, test and development applications.
The secondary, passive server remains essentially dormant during normal operation. A major disadvantage of this model is that there is no guarantee the production application will function as expected on the passive server. The model is also considered relatively wasteful because expensive hardware is left unused.
fig 1.1: active-passive high availability cluster

  • Active-Active Redundancy

The active-active model also contains at least two nodes; however, in this architectural pattern multiple nodes actively run the same services simultaneously. To fully utilize all the active nodes, an active-active cluster uses load balancing to distribute workloads across the nodes and prevent any single node from being overloaded. The distributed workload leads to a marked improvement in response times and throughput.
The load balancer uses a set of algorithms to assign clients to the nodes; assignments are typically based on performance metrics and health checks. To guarantee seamless operation, all nodes in the cluster must be configured for redundancy. A potential drawback of active-active redundancy is that if one of the nodes fails, client sessions may be dropped, forcing users to log in to the system again. This can be mitigated by keeping the configuration settings of each node virtually identical.
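One of the simplest assignment algorithms of this kind is source-address affinity: hash the client address onto a fixed node list so the same client always lands on the same node. A naive shell sketch (the node names and client address are hypothetical, and a real balancer would also remap on health-check failures):

```shell
# Pick a backend node by hashing the client's source IP
client_ip="203.0.113.7"                              # example source address
hash=$(printf '%s' "$client_ip" | cksum | cut -d' ' -f1)
case $(( hash % 3 )) in                              # three application nodes
  0) node="app1" ;;
  1) node="app2" ;;
  2) node="app3" ;;
esac
echo "$node"    # the same client address always maps to the same node
```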
fig 1.2: active-active high availability cluster with load balancer

  • N+1 redundancy

An N+1 redundancy pattern is a hybrid of active-active and active-passive, sometimes referred to as parallel redundancy. Although this model is mostly used in UPS configurations, it can also be applied for high availability. An N+1 architectural pattern introduces one standby (passive) component for N potential single points of failure in the system. The standby waits for a failure to occur in any of the N active parts and then takes over. The system is therefore able to handle the failure of one out of N components without compromising performance.
fig 2.1: N+1 redundancy

Data Center Redundancy

While a datacenter may contain redundant components, an organization may also benefit from having multiple datacenters. Factors such as weather, power failure, or simple equipment failure can shut down an entire datacenter, and in that scenario replication within the datacenter is of very little use. Such an unplanned outage can be significantly costly for an enterprise. When failures at the datacenter level are considered, the need for a high availability pattern spanning multiple datacenters becomes apparent.
It is important to note that establishing multiple datacenters in geographically distinct locations, and buying physical hardware to provide redundancy within them, is extremely costly. Setup is also time-consuming and may seem too difficult to sustain in the long run. However, high purchase, setup and maintenance costs can be mitigated by using IaaS (Infrastructure as a Service) providers.
fig 3.1: data center redundancy

Floating IP Address

A floating IP address can be worked into a high availability cluster that uses redundancy. The term "floating" is used because the IP address can be moved from one Droplet to another within the same cluster in an instant. This means the infrastructure can achieve high availability by immediately pointing an IP address at a redundant server. Floating IPs significantly reduce downtime by allowing customers to associate an IP address with a different Droplet. A design pattern that provides for floating IPs makes it possible to establish a standby Droplet that can receive production traffic at a moment's notice.
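Outside a cloud provider's floating-IP API, the same effect is often achieved on plain Linux hosts by moving a virtual IP between machines. A hypothetical sketch (the address and interface are placeholders, and in practice a tool such as keepalived automates this):

```
# On the node taking over the virtual IP
ip addr add 203.0.113.50/24 dev eth0
arping -c 3 -U -I eth0 203.0.113.50   # gratuitous ARP so upstream caches update

# On the node being retired (once it is reachable again)
ip addr del 203.0.113.50/24 dev eth0
```

The gratuitous ARP step matters: without it, neighboring routers may keep sending traffic to the failed node's MAC address until their ARP caches expire.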

Author: Gabriel Lando

 

fig 1.1 and fig 1.2 courtesy of hubspot.net

fig 2.1 courtesy of webworks.in

fig 3.1 courtesy of technet.com

The Hottest Mac Tools For The New-Age IT Manager


Handling IT needs, integrating your workspace and managing your Macs can be a breeze with the help of these essential tools.

Mactracker

Need to track all the Mac computers on your network? No problem! Just install Mactracker and let it do the job. It aggregates information on a variety of accessory units, such as mice, printers, Wi-Fi cards, and even scanners, while keeping track of the technical specifications of your Mac systems. Without it, it's safe to say that most IT managers would be at a loss as to how best to upgrade or utilize their machines.

Apple’s Disk Utility Tool and Software Restore

When it comes to monolithic cloning, the Apple Disk Utility tool and ASR (Apple Software Restore, accessible only from the command line) are the way to go. Between the Disk Utility GUI and the command-line diskutil tool, they help clone systems for easy configuration of multiple computers on the same network, while allowing system administration as well. Both can be complemented with accessory tools such as Carbon Copy Cloner or Blast Image Config (for image capture, deployment, and ASR session setup).

Property List Editor

Property list editors such as PlistEdit Pro for Mac are an essential tool for network admins on any IT panel, particularly in situations that demand editing system or application preferences. This GUI tool, which edits XML .plist preference files, is available for both Mac and Windows servers. However, if you would rather make these modifications from within an app and then transfer the resulting .plist files, you could choose an application like Preference Setter for Mac, geared toward viewing and editing preference files on OS X.

NetInstall and NetRestore

These features of the Mac OS X Server were conceptualized on the basis of a “NetBoot” system, which allows servers to host boot volumes and direct booting from the network. NetInstall, configured as a utility and admin tool for booting OS X installers, can perform pre- as well as post-installation tasks (binding, installation, partitioning, etc.). NetRestore, on the other hand, works a little like ASR, deploying specific images or offering image selections from an available database. As an aside, AutoCasperNBI is a great tool that can help you automate the process of creating NetBoot images.

WiFi Explorer

This inexpensive tool by Adrian Granados functions as a wireless network scanner, helping diagnose and troubleshoot connectivity or performance related issues. Not only can it detect channel conflicts, it can also take care of configuration or signal-overlap problems. WiFi Explorer, with its clean and simple UI, can be a great tool to have in the arsenal for sorting out network related issues on your systems. For additional help in the network department, you could also install Angry IP Scanner, which can resolve IP address hostnames, determine MAC addresses, scan ports, and report back on existing subnet IP connections.

TextWrangler

Config files are a headache for most IT managers, so is it any wonder that there are some amazing tools to take care of them for you? TextWrangler, for example, is a great free application that highlights the lines relevant to you and integrates a search option for Find and Replace. Whether you need to make changes to UNIX configuration files on the Mac, or have corrupted processing files that your systems can no longer read, TextWrangler is a one-stop solution for simplifying tasks that traditionally take a very long time and a lot of patience.
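The core of that chore (find a directive, replace its value, touch nothing else) is also easy to script. Below is a small Python sketch that rewrites one directive in a sample sshd-style config written to a temporary file; the file name and directive values are illustrative only:

```python
import os
import re
import tempfile

# Write a sample UNIX-style config file (contents are illustrative).
path = os.path.join(tempfile.gettempdir(), "sshd_config.sample")
with open(path, "w") as f:
    f.write("PermitRootLogin yes\nPort 22\nUseDNS yes\n")

# Find and Replace: flip one directive, leave every other line untouched.
with open(path) as f:
    text = f.read()
text = re.sub(r"^PermitRootLogin\s+\S+$", "PermitRootLogin no",
              text, flags=re.MULTILINE)
with open(path, "w") as f:
    f.write(text)
```

Anchoring the pattern to the start of a line (re.MULTILINE) is what keeps the replacement from matching the directive name inside comments or other values.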

Apple Remote Desktop

The remote desktop option from Apple can be a little pricey (at $299 for a single license), but it’s a worthy investment for IT admins. It can report on, identify, and virtually track minute details including application usage, hardware inventories, and user access. Unsurprisingly, it is one of the most essential tools on the list of must-haves in this category. With the ability to monitor remote Mac computers including overall status, access troubleshooting shares, control systems remotely (and completely hidden), and even send global alert messages, it has a great feature set for you to consider. Of course, you could also use other applications such as CoRD for remote access, but ARD is the most stable OS X desktop management system out there, not just for remote assistance, but for software distribution and management of systems.

AutoDMG

This is a useful tool for administrators looking to produce a fresh, clean boot image for OS X. It works by taking an OS X installer, building a clean system image from it, and leaving deployment to additional software like DeployStudio. All you need is an installer to create a system image for all the computers under your control, which makes it a useful tool for system managers and admins.

Active Directory Suites

OS X comes with a built-in Active Directory client for joining Active Directory domains, allowing single sign-on. Of course, you can also use a dual-directory setup from Apple, joining Open Directory and Active Directory for secure access to resources. With Apple’s Profile Manager feature you have the choice of both iOS device management and Mac client management without the hassle of opting for directory services. Apple’s Active Directory tools are good, but not perfect: the lack of client management support beyond basic passwords, the absence of DFS browsing, and similar gaps can be severely crippling.

To expand your access to and management of your devices, you might consider other options such as PowerBroker Identity Services Open Edition and Centrify Express, which offer broader authentication and access abilities. If you want to integrate client management capabilities without having to go through complex dual-directory setups or schema extensions, their DirectControl and Enterprise editions respectively are a good option.

Author: Rahul Sharma

Image Courtesy: KROMKRATHOG, freedigitalphotos.net

Top Ten Monitoring Tools for System Admins

Local area network administrators, network admins, or system administrators are the individuals within a company who oversee the performance of the organization’s networks. These professionals are expected to gather and assess information from network users so that they can identify and fix problems. Fortunately, admins do not have to struggle alone; there are some extremely valuable tools available today that can make the lives of system admins far easier. Here are the top ten.

  1. Microsoft Network Monitor

This is a packet analyzer that provides admins with the means to capture, view, and assess network traffic. It’s especially handy for troubleshooting application and network issues. Some of the main features include support for over 100 Microsoft and public proprietary protocols, capture sessions, and more. Moreover, Microsoft Network Monitor is surprisingly easy to use: simply choose which adapter to bind to from the main window, then click on New Capture to initiate a new capture tab.

  2. Pandora FMS

Pandora is a network monitoring, performance monitoring and availability management service that can be used to watch your communications, applications and servers. It has a detailed correlation system for events that allows users to design alerts that are based on events taken from different sources. It also ensures that administrators are alerted before any issues begin to escalate too far.

  3. Splunk

Splunk is a data analysis and collection platform that allows system admins to gather, monitor and analyze data that has been taken from a number of different sources within your network, such as your devices, services, and event logs. You can create alerts that will notify you when something goes wrong, or use the extensive search function to make the most of any data that you do collect. Additionally, Splunk also supports the installation of apps to extend functionality within the system.

  4. Nagios

Nagios is a great tool for network monitoring that helps ensure all critical systems, applications, and services are consistently up and running. It comes with features such as event handling, reporting, and alerting. With Nagios Core, you can implement plugins that allow you to monitor further metrics, applications, and services, as well as add-ons for graphs, data visualization, load distribution, and database support. The free version of Nagios XI is generally a good option for smaller organizations and can monitor as many as seven nodes at once.

  5. BandwidthD

BandwidthD oversees TCP/IP network usage in your business and displays the data it has gathered in various forms, such as tables and graphs, over different time periods. Each protocol, such as UDP or HTTP, is color-coded to ensure easy reading. What’s more, this service can run discreetly in the background without disrupting your normal activities. It is easy to download and install; once the program is up and running, give it a few moments to monitor your network traffic.

  6. EasyNetMonitor

This tool is incredibly lightweight and simple for those who want to monitor remote and local hosts to determine whether they are active or not. It is especially useful when it comes to monitoring critical servers from a desktop, and provides immediate notifications via popups and log files if a specific host does not respond to a ping. Once you’ve added the machines that you wish to monitor, remember to configure your notification settings and the ping delay time.
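The notify-on-change behavior at the heart of such a monitor fits in a few lines. The Python sketch below is a simplified stand-in, not EasyNetMonitor itself: the probe is injected as a function (a real version would ping the host), and the host names are invented.

```python
import datetime

class HostMonitor:
    """Track host up/down state and emit a notification on each transition.

    `checker` stands in for the actual ping probe, so any reachability
    function can be plugged in; here we feed it canned responses.
    """
    def __init__(self, hosts, checker):
        self.checker = checker
        self.state = {h: True for h in hosts}   # assume up until a probe fails
        self.log = []

    def poll(self):
        for host, was_up in self.state.items():
            up = self.checker(host)
            if up != was_up:                     # notify only on transitions
                stamp = datetime.datetime.now().isoformat(timespec="seconds")
                self.log.append(f"{stamp} {host} is {'UP' if up else 'DOWN'}")
            self.state[host] = up

# Simulate two polling rounds: the file server drops between them.
responses = {"fileserver": [True, False], "gateway": [True, True]}
mon = HostMonitor(["fileserver", "gateway"], lambda h: responses[h].pop(0))
mon.poll()
mon.poll()
```

Logging only on transitions, rather than on every failed probe, is what keeps a monitor like this from flooding the admin with duplicate alerts.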

  7. Fiddler

Fiddler is a tool for web debugging that can capture HTTP traffic as it moves between specific computers and through the internet. This tool allows you to carefully evaluate any outgoing and incoming data, as well as giving you the means to modify responses and requests before they hit your browser. The service also gives you detailed information regarding your HTTP traffic, meaning that it can be used to test your website performance and your web application security.

  8. Angry IP Scanner

IP scanners are important tools, and Angry IP Scanner is a free standalone application that lets you scan ports and IP addresses. It is used to find out which hosts are active and to obtain information about them, including their host names, ping times, MAC addresses, and so on. In the Tools tab, you can decide which information you want to collect from each scan.
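A minimal version of that kind of scan takes only a few lines of Python. This sketch probes TCP ports concurrently with a plain connect, which is only a rough approximation of what a full scanner like Angry IP Scanner does:

```python
import socket
from concurrent.futures import ThreadPoolExecutor

def scan_port(host, port, timeout=0.5):
    """Return the port number if a TCP connect succeeds, else None."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return port
    except OSError:
        return None

def scan(host, ports):
    """Probe many ports concurrently and return the open ones, sorted."""
    with ThreadPoolExecutor(max_workers=32) as pool:
        hits = pool.map(lambda p: scan_port(host, p), ports)
    return sorted(p for p in hits if p is not None)

# Resolve the host name the way a scanner's results pane would show it.
target = "127.0.0.1"
hostname = socket.getfqdn(target)
```

The thread pool matters: probing ports one at a time would spend most of its wall-clock time waiting on connect timeouts for closed ports.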

  9. NetXMS

NetXMS is a multiplatform network monitoring and management system that provides performance monitoring, event management, alerting, reporting, and graphing for your complete IT infrastructure. The main features of this service include support for multiple database engines and operating systems, distributed network monitoring, business analysis tools, and auto-discovery. The program also lets you run a management console or a web-based interface if necessary. Once you’ve downloaded NetXMS and logged in, go into the Server Configuration window and change the settings according to the requirements of your network. Then you’ll be able to run the network discovery option, which will cause NetXMS to automatically find devices that exist on your network.

  10. Xirrus Wi-Fi Inspector

The Xirrus Wi-Fi Inspector can be used to search for networks in your area, as well as to control, troubleshoot, and manage various connections. Xirrus verifies Wi-Fi coverage, locates Wi-Fi enabled devices, and identifies any rogue access points around your business. What’s more, the Xirrus program comes equipped with quality tests, speed tests, and tests for connection efficiency. Once you have launched the inspector and chosen an adapter, a list of Wi-Fi connections should be displayed in the Networks pane.

 Author: Rahul Sharma

10 Important steps after deploying Windows Server 2012


Here’s something that you probably know: Microsoft has discontinued support for 32-bit processors on its Windows servers. For Windows Server 2012 to function smoothly, the hardware must include a 64-bit processor; a faster processor and additional memory will also improve performance. After picking the right hardware and installing the 2012 edition of Windows Server, do you know the most important steps to take next? Here are the top ten.

  1. Change Computer Name: Post deployment, you’ll be logged in as the administrator by default. On the Server Manager box that will already be open, click on the left side of the pane and choose the Local Server category. Next, on the right side of the pane under the Properties column, select the name displayed next to the Computer Name option. On the System Properties box that appears, make sure the Computer Name tab is selected, then click the Change button and enter the new name in the Computer Name field. Click OK once done, and click OK again on the information box that pops up.
  2. In the System Properties Section, Select the Remote Tab: Then choose the ‘Allow Remote Connections to this Computer’ radio button. A warning box will be displayed; click OK. Additionally, you can control which remote desktop clients are accepted: unselect the checkbox if you want Windows Server 2012 to accept remote connections from computers running any version of Remote Desktop. Once this is completed, click Close to save. A confirmation box will pop up; click Restart Later in that box.
  3. Integrate into the Network: Next, from the Server Manager section under the Properties window, select the designated IPv4 address. In the Network Connections section that opens, right-click the NIC that will be connected to the network and go to Properties from the list that appears on screen. On the Properties box that opens there will be a list of options; double click the IP Version 4 option. In the box that comes up, select the ‘Use the following IP address’ radio button, then fill in the assigned IP address.
  4. Enter DNS Specific Address: If you are running a DNS server, populate the DNS field with its IP address. Generally, if this is the initial domain installation in the network, then no DNS servers will be available yet. However, if you intend to introduce a new DNS server on the Active Directory domain controller itself, then populate the field with the same IP address used before. Once this stage is complete, click OK.
  5. Disable the TCP/IPv6 Option to Save Resources: Back in the NIC Properties section, you can unselect the TCP/IPv6 option to avoid extra processing and memory usage, then save the modifications by clicking OK. Now you can exit the Network Connections section.
  6. Modify Time and Date: In the Server Manager section, check whether the time zone is accurate and click on it to make any necessary changes. Open the Date and Time section and, under the Time Zone section, click the Change Time Zone button. From the dropdown list that appears, select the correct time zone for your geographical region. Select OK on all the open boxes to return to the Server Manager section.
  7. Windows Update Configuration: Configuring the Windows Update settings is critical to protect your server. To begin, click on ‘Not Configured’ next to Windows Update. A screen will appear; click Turn on Automatic Updates. This ensures that Windows will look for updates that are yet to be applied to the system and install them automatically. You can also customize when such updates are applied, since some updates require the entire system to be restarted; to change the settings, go to the left side of the window pane and click Change Settings.
  8. Update Servers: Admins often need integration tools to help the servers match the virtualization environment they run in. Windows Server 2012 has the necessary tools already installed, but for other variants of the servers they need to be downloaded.
  9. Firewall: If you prefer to disable host-based firewalls, change the settings by opening the Domain profile on the Windows Firewall Configuration page. On the left hand side of the window, click Turn Windows Firewall On/Off, then select the radio button next to Turn off Windows Firewall for each network to disable it across all networks.
  10. Anti-Virus Installation: This is an important step in ensuring the security of your servers. If you don’t have a preferred anti-virus solution, you can download a free trial version to begin with.

Once the entire process is completed, close the Server Manager window and restart the computer to allow the changes to take effect. Wait for the system to restart; once it does, you’ll see that Windows Server 2012 is up and ready to function on your computer. This might not be the full list of steps, but it gives you some basics to get started.

Author: Rahul Sharma

Image Courtesy:  Jumpe, Freedigitalphotos.net

A Dozen Windows Server 2012 Tricks To Make You A Great Admin


Windows Server 2012 is far more advanced than its predecessors. Understanding its capabilities can make every admin’s job a lot easier and make them look like a true wizard. Here are some tricks to help you learn more about Windows Server 2012.

Take advantage of new & improved server management system

One of the most noticeable things about the 2012 edition is the Server Manager. Microsoft combined the server roles installer and the features installer, saving users the hassle of having to run them separately. New server roles can be assigned simply by clicking Manage and then Add Roles and Features. The new Server Manager is intelligent enough to group server roles by the appropriate server, and it displays the management tools for each server and the tools for editing them, all in the same window.

Team up the network adapters

Teaming network adapters together is an effective way either to increase the availability of a server or to increase speed and performance, and this functionality has been incorporated into the 2012 server core. It allows combining Ethernet connections via compatible network cards without the hassle of adding any special tools. To team network adapters, simply enter Server Manager, find the Local Server menu, locate the compatible adapters you can team together, and link them by right clicking and entering the features menu.

Do a lot more with your iSCSI protocol

Windows Server 2012 allows users to assign roles to virtual hard drives and set them up as internet Small Computer System Interface (iSCSI) targets over the network. To enable the feature, assign the roles in Server Manager under the File and iSCSI Services tab, and set up the size, configuration, and access properties of the virtual hard disk.

Enable Remote Server Administration 

In addition to the all-powerful Server Manager, Windows Server 2012 also offers Remote Server Administration Tools (RSAT), which can be used to control the connected servers from a Windows 8 operating system. Activate the function by downloading the .msu tool files from Microsoft support and adding the local server through your Server Manager control menu.

Pick right interface environment

The new graphical interface of Windows Server 2012 lacks many of the features available in previous editions; even applications such as IE are missing from it. But the server core retains all the command-line management tools, ensuring complete functionality. You can optimize the graphical interface from Server Manager and choose which functionalities to retain and which to remove. Essentially you have three options: the Desktop Experience feature, the graphical management tools and infrastructure, and the graphical shell. You can choose to keep all of them or only a select few.

Set up the basic configurations

Many of the basic configurations that needed to be settled during installation can now be handled easily, thanks to Server Manager. You can change most of the configuration simply by navigating to the Local Server link in the Server Manager menu. You can also change the security options of Internet Explorer for hassle-free downloads right from Server Manager.

Get complete control using virtual domain controller

Needing a physical domain controller to host the Hyper-V cluster connection, and USN rollback caused by cloning a domain controller’s VM, used to be huge headaches, but in the 2012 server edition these problems have been effectively eliminated and pose no threat to the Active Directory. There are still a few restrictions on cloning a DC, such as the requirement for two DCs, with one holding the PDC emulator role, and so on. Simply add the DCCloneConfig.xml file to the source domain controller and make sure the appropriate DC is switched off. Then create the virtual machine and import the server using PowerShell to initiate the cloning process.

Activate Replication of Hyper V

Windows Server 2012 allows users to replicate virtual hard disks and domain controllers without requiring clusters. To configure replication, ensure the option is enabled in the Hyper-V Settings panel and define the servers from which replicas will be accepted. Then simply activate replication for the virtual server by right clicking it and selecting the enable option. You can also choose to schedule the replication for a later time.

Start from Hyper V failover

Though not frequent, failovers can occur when operating with Hyper-V replicas, so it’s better to create a scheduled failover and restore point so that the manager starts replication from a known target source VM. To schedule a failover for a replica, navigate to the appropriate replica in the Hyper-V Manager, click Failover in the pop-up menu, and enable the action by choosing a restore point.

Configure virtual server backup

Backing up is always a good idea, and Windows Server 2012 with Veeam offers an easy way to do it. Admins can back up both virtual and physical servers, integrate with Hyper-V clusters, and even back up individual items instantly with no downtime using the Veeam backup tool.

Familiarize yourself with PowerShell to control servers

PowerShell gains an all-new functionality upgrade, allowing admins to list, control, and manipulate replication of servers. Discover the replication status of all the active servers with the commandlet Get-ADReplicationUpToDatenessVectorTable * | sort Partner,Server | ft Partner,Server,UsnFilter. Check the status of Active Directory replication, view individual sites and domain controllers, and much more using the PowerShell commandlets.

Configure the DHCP failover

Similar to the Hyper-V failover setup, DHCP can also be configured for failover, allowing two IPv4 DHCP servers to be linked. To enable the failover, navigate to the DHCP server console, locate the DHCP server you want to configure, and enable the failover properties. Enter the details of the available fail-safe server and create the link between the two. Once configured, the properties will be displayed on the Failover Properties tab.

Author: Rahul Sharma

Image Courtesy: David Castillo Dominici, FreeDigitalPhotos.net

The State of the Containers-as-a-Service Market


Containers-as-a-Service took off in 2014 and steadily carved out a place in the cloud market to become one of the biggest buzzwords in the industry. Enterprises that have leveraged this new technology are consistently heaping praise on it, prompting other organizations to consider implementing it in their overall cloud architecture. It has proven to be exactly what many cloud users have been waiting for: a solution that effectively abstracts operating systems to enable servers to execute different applications simultaneously by distributing resources accordingly.

While many tech experts and CIOs are of the opinion that CaaS is just getting started and will subsequently grow exponentially to supplement what organizations already get from VMs, others believe that the market is not so promising and will possibly stagnate once VM users have adopted CaaS for its isolation capabilities.

So, what is the actual state of the market? Will the excitement continue translating to CaaS migrations? Who are the dominant CaaS providers, and what features do they have over other service providers?

Statistical Data

Infonetics Research, now part of IHS Inc., recently conducted in-depth research on cloud computing by surveying a wide range of service providers on their current services and future projections. They subsequently published “Cloud Service Strategies: Global Service Provider Survey”, a study which identifies hybrid cloud and CaaS as two of the fastest growing cloud solutions: 82% of service providers are already drawing up plans to implement them in the future.

The researchers further analyzed the top challenges facing CaaS service providers and found that customers are particularly wary of security; solving technical security challenges is therefore expected to improve CaaS adoption considerably. Other strategies which will predictably continue improving the numbers include:

  • Providing off-premise CaaS by bundling it with network connectivity
  • Providing comprehensive packages containing CaaS and other critical cloud services according to overall user preferences.

CaaS Providers and Solutions

CaaS, unlike most other cloud offerings, is relatively new and distributed by just a small fraction of service providers, who currently enjoy absolute market dominance.

Currently, containers are widely leveraged as a framework for hosting web applications developed on top of Java, .NET, or other technology stacks, using these eight infrastructure components:

  • Deployment automation service
  • Database service
  • Containers/Computational services for API, Web and application services
  • Load Balancer service
  • Static Content and Resource service (for static pages, JS, images, audio and video)
  • Content Delivery Network
  • Firewall and Security
  • DNS and Discovery service

While some users adopt these services from container frameworks or cloud platforms, others simply build them entirely from the ground up on proprietary or open source software. Some of the providers who distribute these services along with CaaS include:

  • Google Cloud Platform
  • Microsoft Azure
  • Digital Ocean
  • Joyent
  • Rackspace
  • Amazon Web Services

Although a significant number of cloud providers have already tried out containers, they have yet to develop a strategy for production-grade use. As a result, they shy away from CaaS, leaving the entire market to just these providers. Even though these few providers distribute and manage CaaS effectively, they have one major drawback (which also applies to the two leading ones, Amazon Web Services and Google Cloud Platform): they have yet to fully integrate with the Docker API, which cripples their security and their management of network connections between containers on a single host machine.

Docker

If you’ve been a keen follower of container news, you’ve probably heard the word “Docker” mentioned several times in the context of containers. So what is it, and how does it affect the market?

As an open source container technology, Docker offers its users unparalleled features which have proven to be better and faster than hypervisors. As a platform for developing, shipping, and running applications, it significantly shortens the cycle between writing code and running it by facilitating faster deployment and testing. This is achieved through a combination of application management and deployment tools, workflows, and lightweight container virtualization. With Docker, users can also separate applications from their infrastructure and treat their infrastructure like a managed application.

Developers largely prefer it because it supports the development of applications in all languages and toolchains. Additionally, it delivers superior applications which are portable and compatible with a wide range of systems including QA servers, Windows and OS X servers.

Docker Hub also gives developers access to more than 13,000 applications to kick-start and manage their app development process. As development proceeds, developers can collaborate with their counterparts through private and public repositories, and take advantage of automation features by automating their build pipelines.

Sysadmins, on the other hand, like Docker not only for its ability to track dependencies and changes, but also for its standard environment for app development. They abstract away differences in underlying infrastructure and OS distributions by “dockerizing” the application platform.

Docker has also proven to give sysadmins convenient flexibility. By standardizing on it as the deployment unit, sysadmins benefit greatly from workload elasticity; additionally, its light weight facilitates quick and simple scale-ups and scale-downs in response to demand changes. Overall, Docker has significantly helped sysadmins run and deploy a wide range of applications efficiently and rapidly on many different types of infrastructure. It is therefore considered one of the prime pillars of CaaS.

With such developments in a new technology, the CaaS market is considered fairly promising, particularly for cloud service providers who are currently weighing their chances of joining the CaaS bandwagon. It could be a worthwhile endeavor for a strategic and calculating company looking to take advantage of the expansive market and gain early dominance.

Author: Davis Porter

Image Courtesy: