Archive for the ‘IAAS’ Category

When Does AWS GovCloud Make Sense?

 

With GovCloud, AWS has managed to change the game by providing an extensive, dependable way to not only implement but also manage business technology infrastructure. By providing services based on its own back-end technology infrastructure, which it has spent over a decade perfecting, AWS delivers one of the most reliable, cost-efficient and scalable web infrastructures. GovCloud was launched in 2011 to satisfy stringent regulatory requirements for local, state and federal governments. AWS's efforts to meet regulatory standards and increase feature consistency between its public-sector and commercial solutions have led to the addition of dozens of new services and nine new regions worldwide. This enables IT departments within agencies to reap the same cloud computing benefits enjoyed by all other AWS users, such as improved scalability and agility and better alignment of costs.

Amazon explains that GovCloud addresses specific regulatory and compliance requirements, such as the International Traffic in Arms Regulations (ITAR), which govern how defense-related data is stored and managed. To guarantee that only designated individuals within the United States have access, GovCloud segregates data both physically and logically. AWS GovCloud is not limited to government agencies; the region is also available to vetted organizations and contractors operating in regulated industries, such as government contractors who must secure sensitive information.

When Does AWS GovCloud Make Sense?

I. High Availability Is Important to Mission Critical Applications

Building a highly available, reliable infrastructure in an on-premise data center is a costly endeavor. AWS offers services and infrastructure to build fault-tolerant, highly available systems. By migrating applications and services to AWS GovCloud, agencies not only benefit from the multiple features of cloud computing but also instantly reap improvements in the availability of their applications and services. With the right architecture, agencies get a production environment with a higher availability level, without any additional processes or complexity.

Some of the services GovCloud users can access to get this out-of-the-box redundancy, durability and availability include:

  • EC2 with Auto Scaling – scalable compute capacity
  • VPC – provision private, isolated AWS sections
  • Elastic Load Balancing (ELB) – automatically distribute incoming application traffic across multiple EC2 instances
  • Direct Connect – establish a private connection between an AWS GovCloud region and your data center
  • Elastic Beanstalk – deploy and scale web apps and services
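As a toy illustration of what ELB does for you, the sketch below distributes requests across a pool of instances round-robin. The instance IDs and the strategy are illustrative only; real ELB routing is managed entirely by AWS.

```python
from itertools import cycle

# Hypothetical instance IDs; a real ELB target group would reference
# actual EC2 instances registered with the load balancer.
instances = ["i-0aaa", "i-0bbb", "i-0ccc"]

def round_robin(pool):
    """Yield targets in round-robin order, the simplest ELB-style strategy."""
    return cycle(pool)

targets = round_robin(instances)
first_six = [next(targets) for _ in range(6)]  # each instance served twice
```

Spreading six requests over three instances this way sends exactly two requests to each, which is the even-distribution behavior the blog paragraph describes.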

II. Big Data Requires High-Performance Computing

User productivity and experience are key considerations, and both hinge on the performance of applications in the cloud. Government agencies typically amass huge data sets that carry crucial insights. AWS GovCloud allows you to spin up large clusters of compute resources on demand, paying only for what you use, to obtain the business intelligence required to fulfill your missions and serve your citizens. Additionally, GovCloud provides low-cost, flexible IT resources, so you can quickly scale any big data application, including serverless computing, Internet of Things (IoT) processing, fraud detection, and data warehousing. You can also easily provision the right size and type of resources to power your big data analytics applications.

III. High Data Volume Means Higher Storage and Backup Needs

A major consideration when migrating to the cloud is secure, scalable storage. For government organizations, this need is amplified, not only because of the volume of data that needs to be stored, but also because of the sensitive nature of said data. AWS provides scalable capacity and direct access to durable and cost-effective cloud storage managed by U.S. persons, while satisfying all security requirements. GovCloud users have access to multiple storage options, ranging from high-performance object storage to file systems attached to an EC2 instance. AWS also offers a native scale-out shared file storage service, Amazon EFS, which gives users a file system interface and file system semantics. Amazon Glacier and S3 provide low-cost options for the long-term storage of huge data sets.
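The choice between S3 tiers and Glacier can be sketched as a simple tiering rule; the thresholds below are illustrative, not AWS lifecycle-policy defaults.

```python
def pick_storage_class(days_since_last_access: int, retrieval_hours_ok: bool) -> str:
    """Toy tiering rule: hot data stays in S3 Standard, cooler data moves to
    Standard-IA, and cold archives that can tolerate hours-long retrieval go
    to Glacier. The 30/90-day cutoffs are hypothetical."""
    if days_since_last_access < 30:
        return "S3 Standard"
    if days_since_last_access < 90 or not retrieval_hours_ok:
        return "S3 Standard-IA"
    return "Glacier"
```

In practice the same idea is expressed declaratively as an S3 lifecycle policy rather than application code; the sketch just makes the trade-off concrete.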

Customers can have information stored in Redshift, Glacier, S3 and RDS automatically encrypted with 256-bit Advanced Encryption Standard (AES-256) keys. Additionally, IT systems can be backed up and restored at a moment's notice using very simple approaches.

IV. Critical Applications Should Scale With User Demand

Predictable workloads can be handled with reserved instances, while unexpected spikes call for on-demand resources. AWS utilizes advanced networking technology built for scalability, high availability, security and reduced costs. Using features such as Elastic Load Balancing and Auto Scaling, GovCloud users can easily scale on demand. Auto Scaling enables government agencies to maintain application availability by dynamically scaling their EC2 capacity up or down depending on specified conditions. Amazon Elastic Compute Cloud (EC2) provides re-sizable, secure compute capacity in the cloud. It is built to make web-scale computing simpler, enabling users to quickly and efficiently scale capacity as computing requirements change.
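The scale-up/scale-down behavior described above can be sketched as a simple threshold policy. The CPU thresholds and capacity bounds below are hypothetical choices, not AWS Auto Scaling defaults.

```python
def desired_capacity(current: int, cpu_utilization: float,
                     scale_up_at: float = 70.0, scale_down_at: float = 30.0,
                     minimum: int = 2, maximum: int = 10) -> int:
    """Return a new instance count for a simple threshold policy:
    add an instance above the upper CPU threshold, remove one below the
    lower threshold, and always stay within [minimum, maximum]."""
    if cpu_utilization > scale_up_at:
        return min(current + 1, maximum)
    if cpu_utilization < scale_down_at:
        return max(current - 1, minimum)
    return current
```

In AWS this logic lives in an Auto Scaling policy driven by CloudWatch alarms rather than in your own code; the sketch only shows the decision being made.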

In Closing

As the number of government organizations moving to the cloud continues to rise, these organizations will require a platform for compliance and risk management – a place where confidential, sensitive or even classified data and assets remain secure. GovCloud provides a quick way for government agencies to host and update cloud data and applications so that contractors and employees can focus on service delivery rather than managing server infrastructure.

Government organizations can take full advantage of GovCloud and all that it has to offer via content collaboration software. FileCloud on AWS GovCloud is an ideal solution for government agencies that want complete control and security of their files.
Click here to learn more about FileCloud on AWS GovCloud.

 

Author: Gabriel Lando

FileCloud Empowers Government Agencies with Customizable EFSS on AWS GovCloud (U.S.) Region

FileCloud, a cloud-agnostic Enterprise File Sharing and Sync platform, today announced availability on AWS GovCloud (U.S.) Region. FileCloud is one of the first full-featured enterprise file sharing and sync solutions available on AWS GovCloud (U.S.), offering advanced file sharing, synchronization across OSs and endpoint backup. With this new offering, customers will experience the control, flexibility and privacy of FileCloud, as well as the scalability, security and reliability of Amazon Web Services (AWS). This solution allows federal, state and city agencies to run their own customized file sharing, sync and backup solutions on AWS GovCloud (U.S.).

“Having FileCloud available on AWS GovCloud (U.S.) provides the control, flexibility, data separation and customization of FileCloud at the same time as the scalability and resiliency of AWS,” said Madhan Kanagavel, CEO of FileCloud. “With these solutions, government agencies can create their own enterprise file service platform that offers total control.”

Government agency and defense contractors are required to adhere to strict government regulations, including the International Traffic in Arms Regulations (ITAR) and the Federal Risk and Authorization Management Program (FedRAMP). AWS GovCloud (U.S.) is designed specifically for government agencies to meet these requirements.

By using FileCloud and AWS GovCloud (U.S.), agencies can create their own branded file sharing, sync and backup solution, customized with their logo and running under their URL. FileCloud on AWS GovCloud offers the required compliance and reliability and delivers options that allow customers to pick tailored cloud solutions. FileCloud is a cloud-agnostic solution that works on-premises or on the cloud.

“FileCloud allows us to set up a secure file service, on servers that meet our clients’ security requirements,” said Ryan Stevenson, Designer at defense contractor McCormmick Stevenson. “The easy-to-use interfaces and extensive support resources allowed us to customize who can access what files, inside or outside our organization.”

Try FileCloud for free!

Alternative to WatchDox – Why FileCloud is better for Business File Sharing?


FileCloud competes with WatchDox for business in the Enterprise File Sync and Share (EFSS) space. Before we get into the details, I believe an ideal EFSS system should work across all the popular desktop OSes (Windows, Mac and Linux) and offer native mobile applications for iOS, Android, Blackberry and Windows Phone. In addition, the system should offer all the basics expected of EFSS: Unlimited File Versioning, Remote Wipe, Audit Logs, Desktop Sync Client, Desktop Map Drive and User Management.

The feature comparisons are as follows:

Features compared for both products: On-Premise deployment, File Sharing, Access and Monitoring Controls, Secure Access, Document Preview, Document Edit, Outlook Integration, Role-Based Administration, Data Loss Prevention, WebDAV, Endpoint Backup, Amazon S3/OpenStack Support, Public File Sharing, Customization and Branding, SAML Integration, Anti-Virus, NTFS Support, Active Directory/LDAP Support, Multi-Tenancy, API Support, Application Integration via API, Large File Support, Network Share Support, Mobile Device Management, Encryption at Rest, Two-Factor Authentication, and File Locking.

Where the two products differ:

Feature | FileCloud | WatchDox
Network Share Support | Supported | Requires an additional product
Desktop Sync | Windows, Mac, Linux | Windows, Mac
Native Mobile Apps | iOS, Android, Windows Phone | iOS, Android
Pricing for 20 users/year | $999 | $3,600

From the outside looking in, the offerings all look similar. However, the approaches differ fundamentally in satisfying enterprises' primary need: easy access to their files without compromising privacy, security and control. The fundamental areas of difference are as follows:

Feature benefits of FileCloud over WatchDox

Unified Device Management Console – FileCloud's unified device management console simplifies management of the mobile devices that access enterprise data, regardless of whether a device is enterprise-owned or employee-owned, and regardless of mobile platform or device type. Administrators can manage and control thousands of iOS and Android devices from FileCloud's secure, browser-based dashboard. The administrator console is intuitive and requires no training or dedicated staff. FileCloud's MDM works on any vendor's network, even when the managed devices are on the road, at a café, or used at home.

Amazon S3/OpenStack Support – Enterprises wanting to use Amazon S3 or OpenStack storage can easily set it up with FileCloud. This not only gives enterprises the flexibility to switch storage back ends but also makes the switch very easy.

Embedded File Upload Website Form – FileCloud’s Embedded File Upload Website Form enables users to embed a small FileCloud interface onto any website, blog, social networking service, intranet, or any public URL that supports HTML embed code. Using the Embedded File Upload Website Form, you can easily allow file uploads to a specific folder within your account. This feature is similar to File Drop Box that allows your customers or associates to send any type of file without requiring them to log in or to create an account.

Multi-Tenancy Support – The multi-tenancy feature allows Managed Service Providers (MSPs) to serve multiple customers using a single instance of FileCloud. The key value proposition of FileCloud's multi-tenant architecture is that, while providing multi-tenancy, data separation among tenants is maintained. Moreover, every tenant can apply customized branding.

NTFS Shares Support – Many organizations use NTFS permissions to manage and control access to internal file shares. It is very hard to duplicate those access permissions in other systems and keep them in sync. FileCloud enables access to internal file shares via web and mobile while honoring the existing NTFS file permissions. This functionality is a great time saver for system administrators and provides a single point of management.

Conclusion

Based on our experience, enterprises looking for an EFSS solution want two main things: one, easy integration with their existing storage systems without any disruption to access permissions or network home folders; and two, the ability to easily expand into highly available storage systems such as OpenStack or Amazon S3.

WatchDox provides neither OpenStack/Amazon S3 storage integration nor NTFS share support. FileCloud, on the other hand, provides easy integration with Amazon S3/OpenStack and honors NTFS permissions on local storage.

With FileCloud, enterprises get one simple solution with all features bundled. For the same 20-user package, the cost is $999/year, almost a quarter of WatchDox's price.

Here’s a comprehensive comparison that shows why FileCloud stands out as the best EFSS solution.

Try FileCloud For Free & Receive 5% Discount

Take a tour of FileCloud

Alternative to Varonis Datanywhere – Why FileCloud is better for Business File Sharing?


FileCloud competes with Varonis Datanywhere for business in the Enterprise File Sync and Share (EFSS) space. Before we get into the details, I believe an ideal EFSS system should work across all the popular desktop OSes (Windows, Mac and Linux) and offer native mobile applications for iOS, Android, Blackberry and Windows Phone. In addition, the system should offer all the basics expected of EFSS: Unlimited File Versioning, Remote Wipe, Audit Logs, Desktop Sync Client, Desktop Map Drive and User Management. Let's look at how FileCloud is a better alternative to Varonis Datanywhere for business file sharing.

The feature comparisons are as follows:

Features compared for both products: On-Premise deployment, File Sharing, Access and Monitoring Controls, Secure Access, Document Preview, Document Edit, Outlook Integration, Role-Based Administration, Data Loss Prevention, WebDAV, Endpoint Backup, Amazon S3/OpenStack Support, Public File Sharing, Customization and Branding, SAML Integration, Anti-Virus, NTFS Support, Active Directory/LDAP Support, Multi-Tenancy, API Support, Application Integration via API, Large File Support, Network Share Support, Mobile Device Management, Encryption at Rest, Two-Factor Authentication, and File Locking.

Key rows:

Feature | FileCloud | Varonis Datanywhere
Desktop Sync | Windows, Mac, Linux | Windows, Mac, Linux
Native Mobile Apps | iOS, Android, Windows Phone | iOS, Android, Windows Phone
Pricing for 750 users/year | ~$30,199 | ~$39,000

From the outside looking in, the offerings all look similar. However, the approaches differ fundamentally in satisfying enterprises' primary need: easy access to their files without compromising privacy, security and control. The fundamental areas of difference are as follows:

Feature benefits of FileCloud over Varonis Datanywhere

Document Quick Edit – FileCloud's Quick Edit feature supports extensive edits of files such as Microsoft® Word, Excel®, Publisher®, Project® and PowerPoint® right from your desktop. Simply select a document from the FileCloud web UI, edit it in Microsoft Office, and save; FileCloud takes care of the background details, such as uploading the new version, syncing, sending notifications, and sharing updates.

Embedded File Upload Website Form – FileCloud’s Embedded File Upload Website Form enables users to embed a small FileCloud interface onto any website, blog, social networking service, intranet, or any public URL that supports HTML embed code. Using the Embedded File Upload Website Form, you can easily allow file uploads to a specific folder within your account. This feature is similar to File Drop Box that allows your customers or associates to send any type of file without requiring them to log in or to create an account.

Unified Device Management Console – FileCloud's unified device management console simplifies management of the mobile devices that access enterprise data, regardless of whether a device is enterprise-owned or employee-owned, and regardless of mobile platform or device type. Administrators can manage and control thousands of iOS and Android devices from FileCloud's secure, browser-based dashboard. The administrator console is intuitive and requires no training or dedicated staff. FileCloud's MDM works on any vendor's network, even when the managed devices are on the road, at a café, or used at home.

Device Commands and Messaging – The ability to send on-demand messages to any device connecting to FileCloud gives administrators a powerful tool for interacting with the enterprise workforce. Any information on security threats or access violations can easily be conveyed to mobile users. Best of all, these messages carry no SMS cost.

Amazon S3/OpenStack Support – Enterprises wanting to use Amazon S3 or OpenStack storage can easily set it up with FileCloud. This not only gives enterprises the flexibility to switch storage back ends but also makes the switch very easy.

SAML Integration – FileCloud supports SAML (Security Assertion Markup Language)-based web browser Single Sign-On (SSO), which provides full control over the authorization and authentication of hosted user accounts that access the FileCloud web interface.

Multi-Tenancy Support – The multi-tenancy feature allows Managed Service Providers (MSPs) to serve multiple customers using a single instance of FileCloud. The key value proposition of FileCloud's multi-tenant architecture is that, while providing multi-tenancy, data separation among tenants is maintained. Moreover, every tenant can apply customized branding.

Endpoint Backup – FileCloud provides the ability to back up user data from any computer running Windows, Mac or Linux to FileCloud. Users can schedule a backup, and FileCloud automatically backs up the selected folders at the scheduled time.

Conclusion

It's a no-brainer. FileCloud beats Varonis Datanywhere hands down on feature set and value, at a significantly lower price.

Here’s a comprehensive comparison that shows why FileCloud stands out as the best EFSS solution.

Try FileCloud For Free & Receive 5% Discount

Take a tour of FileCloud

How to Monitor and Control AWS Costs

As the leading cloud service provider, AWS has attracted a significant number of organizations prioritizing cost efficiency through the cloud. Although the move has largely been effective, some companies still operate on a higher budget than they anticipated. They may have successfully reduced their overall costs, but they have yet to develop strategies to minimize them further. Such organizations have not fully realized the cost benefits of adopting cloud solutions.

If you’re part of such a company, you may be pleased to learn that all hope is not lost. AWS offers several strategies that can reduce your computing costs and keep them under control while maintaining optimal computing performance. Here are some of the most effective ways to monitor and control AWS costs:

Keeping Up With New Instance Types

Amazon is famous for the attention it gives its users. It consistently reviews user feedback and improves its products, releasing new instance types to match customer needs. Of course, it's inadvisable to blindly adopt new instance types, but doing so is worthwhile when the potential cost savings outweigh the cost of adopting a new solution. It's therefore prudent to review each new instance type for its potential cost savings and overall computing benefits before incorporating it into your cloud strategy.

Although some don’t impact costs, a significant number of new instance types have the potential to further reduce your cloud spend. In 2014, for example, Amazon released T2 instances, which accrue “CPU Credits” during low-processing periods and automatically spend them during busy periods. This low-cost, stable processing approach has proven particularly well suited to applications with rare spikes, such as development tools and small databases.
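The CPU-credit mechanism can be sketched with a toy simulation. The earn rate, baseline and credit cap below loosely echo a t2.micro but are illustrative only; real T2 accounting is done by AWS per vCPU-minute.

```python
def simulate_t2_credits(loads, earn_rate=6.0, baseline=0.10,
                        start_credits=0.0, max_credits=144.0):
    """Toy model of T2 CPU credits: each hour the instance earns a fixed
    number of credits and spends one credit per minute of bursting above
    its baseline utilization.

    loads: per-hour CPU utilization as fractions (0.0-1.0).
    Returns the credit balance after each hour."""
    credits = start_credits
    history = []
    for load in loads:
        burst = max(load - baseline, 0.0)   # fraction of the hour spent above baseline
        spent = burst * 60.0                # one credit per burst minute
        credits = min(credits + earn_rate - spent, max_credits)
        credits = max(credits, 0.0)         # balance cannot go negative
        history.append(round(credits, 1))
    return history
```

Two quiet hours build a balance that a single busy hour then drains, which is exactly the "save during idle, spend during spikes" pattern the paragraph describes.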

Leveraging Spot Instances

To cheaply operate and run background processes, you should consider leveraging spot instances. They allow you to dictate the maximum amount you’d be comfortable spending on the instances, consequently facilitating effective price control particularly during off-peak hours.

The main disadvantage of this feature, however, is that your instances may be terminated prematurely when the spot price exceeds your bid. Your processes could be stopped even when jobs are 90% complete, wasting cost and resources, since you'll be compelled to restart from scratch. The best way to prevent this is to build an architecture with dynamically changing bid prices that never exceed on-demand prices: set up a spot-instance-only Auto Scaling group, and let CloudWatch regulate the process by scaling the group up within request parameters when the price meets a bid. Combine this with a second, on-demand Auto Scaling group, with an ELB serving as a link between the two; requests can then be shifted efficiently between the on-demand and spot groups.
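A minimal sketch of the bid-capping idea, with hypothetical prices (real bids are placed through the EC2 spot request API): track recent spot prices, bid a little above them for headroom, but never bid above a fraction of the on-demand price so the spot group always stays cheaper than falling back to on-demand.

```python
def spot_bid(on_demand_price: float, recent_spot_prices, margin: float = 0.8):
    """Pick a bid slightly above recent spot peaks but capped below the
    on-demand price. The 1.1 headroom factor and 0.8 cap are illustrative
    policy choices, not AWS recommendations."""
    target = max(recent_spot_prices) * 1.1   # small headroom over recent peaks
    return round(min(target, on_demand_price * margin), 4)
```

When the spot market is calm the bid tracks it closely; when spot prices approach on-demand, the cap kicks in and the ELB-linked on-demand group absorbs the traffic instead.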

Reserved over On-Demand Instances

In a bid to reduce their overall cloud computing costs, many businesses prefer reserved instances over on-demand instances. Reserved instances are acquired for a period of one to three years through an upfront or partial payment. In some cases, users have reduced their cloud computing costs by more than 50% by shifting from on-demand to reserved instances.

Although it may sound promising, it's not all a bed of roses; a few complications come with this option. For instance, it's fairly difficult to predict the number of instances you'll need over the next few years, especially if you haven't been an avid AWS user. Only the most experienced cloud users can do so accurately.
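To see why prediction matters, a quick break-even calculation shows how many usage hours justify a reservation. The prices below are hypothetical, not current AWS rates.

```python
def breakeven_hours(upfront: float, reserved_hourly: float,
                    on_demand_hourly: float) -> float:
    """Hours of usage at which total reserved cost (upfront + hourly)
    drops below the pure on-demand cost for the same usage."""
    saving_per_hour = on_demand_hourly - reserved_hourly
    if saving_per_hour <= 0:
        raise ValueError("reservation never pays off at these rates")
    return upfront / saving_per_hour
```

With a hypothetical $1,000 upfront fee and a $0.05/hour saving over on-demand, the reservation only pays for itself after 20,000 hours, more than two years of continuous use, which is why over-reserving for workloads you cannot forecast is expensive.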

Adopting Monitoring Tools

The most fundamental step in cloud cost assessment and control is collecting accurate information on your regular costs vis-à-vis resource usage. Informed strategies can only be drawn from comprehensive facts about regular cloud usage. Amazon simplifies this process by granting access to a wide range of tools for collecting historical and current information on your cloud usage. Some third-party applications additionally provide cost-saving tips based on user behavior and cloud architecture. Two of the most prominent tools include:

  • Trusted Advisor: Although it also provides fault-tolerance, security and availability recommendations, Trusted Advisor's main function is to recommend tools and opportunities that can significantly reduce your cloud spending. It monitors CPU usage and can help reduce your cloud costs by 20-50%.
  • CloudCheckr: This is the ideal tool for tracking any unusual costs that appear on your overall AWS bill. It assesses your cloud environment and reports the results in detailed billing analyses and cost maps, giving you a vivid picture of the underlying cloud operations and related costs. Standard policies can then be enforced, with developers alerted when resources run outside the approved configuration.
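The kind of unusual-cost check such tools run can be sketched in a few lines. The two-standard-deviation threshold below is an illustrative choice, not CloudCheckr's actual algorithm.

```python
from statistics import mean, stdev

def flag_unusual_costs(daily_costs, threshold_sigmas: float = 2.0):
    """Flag indexes of days whose spend sits more than `threshold_sigmas`
    standard deviations above the mean of the series - a simple version
    of the anomaly checks cost-monitoring tools run over an AWS bill."""
    mu, sigma = mean(daily_costs), stdev(daily_costs)
    return [i for i, cost in enumerate(daily_costs)
            if cost > mu + threshold_sigmas * sigma]
```

A week of roughly flat spend with one runaway day gets exactly that day flagged, which is the alert a developer would then investigate.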

In 2014, Amazon added Cost Explorer to this list to help users further monitor and manage their cloud computing costs. It tracks and displays regular reports on daily and monthly spend, monthly cost by linked account, and monthly cost by service.
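The "monthly cost by service" view can be sketched as a simple roll-up over daily line items; the dates, services and dollar figures below are hypothetical.

```python
from collections import defaultdict

# Hypothetical daily billing line items: (date, service, cost in USD).
line_items = [
    ("2014-06-01", "EC2", 120.0),
    ("2014-06-01", "S3", 30.0),
    ("2014-06-15", "EC2", 125.0),
    ("2014-07-02", "EC2", 110.0),
]

def monthly_cost_by_service(items):
    """Roll daily spend up into (month, service) totals - the same cut of
    the data Cost Explorer presents as 'monthly cost by service'."""
    totals = defaultdict(float)
    for date, service, cost in items:
        totals[(date[:7], service)] += cost   # "YYYY-MM" month key
    return dict(totals)
```

The same grouping by a different key (linked account instead of service) yields Cost Explorer's other standard report.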

By combining these features to assess, monitor and control your AWS cloud costs, you'll be in a better position to make significant savings, potentially running into millions of dollars depending on your company size. Those amounts can instead be invested in other business departments to improve overall operations and service delivery and, ultimately, trigger exponential business growth.

Author: Davis Porter

Image Courtesy: Natara, Freedigitalphotos.net and Amazon

How to Use AWS for Disaster Recovery

A disaster is undoubtedly one of the most significant risks of running a business. The term refers to any event that interrupts normal business processes or finances. In most cases, disasters are triggered by human error, physical damage from natural events, power outages, network outages, or software or hardware failure.

To prevent such occurrences and minimize potential damage, many organizations invest a significant amount of time and resources in strategizing and preparing their organizations. In addition to training employees to handle disasters, companies must implement adequate restoration measures in case of complete system failure. If your company has a typical traditional physical environment, the most effective way to protect it is to duplicate the infrastructure on a secondary platform to ensure spare capacity in case of a disaster; that's where cloud disaster recovery comes in. According to a 2014 Forrester Research report, about 19% of organizations have already adopted cloud disaster recovery to cushion themselves against potential damage, and a significant majority of the respondents who hadn't yet implemented it said they were already drawing up plans to do so.

As the most widely used cloud service, AWS has invested heavily in disaster recovery as a strategy for improving user experience and staying ahead of its competitors. With consistently maintained infrastructure, AWS is always ready to kick in and support your operations in case of a disaster. Additionally, it's highly scalable with a pay-as-you-go plan, which opens it up to all types of businesses regardless of their disaster-management budgets. To help you understand how to use AWS for disaster recovery, here are some of the main features and their relevance:

Deployment Orchestration

As an organization, you can significantly boost your recovery capability by investing in post-startup software installation/configuration and deployment automation processes. Some of the tools that you could use include:

  • AWS OpsWorks: Built as an application management service, AWS OpsWorks facilitates operation of different types of applications and considerably eases deployment processes in case of a disaster. The service grants users tools necessary for creating an environment based on a series of layers which are configured as application tiers.
  • AWS Elastic Beanstalk: This is a flexible service for deploying and scaling a wide range of services and applications built on Docker, Ruby, Python, Node.js, PHP, .NET, and Java.
  • AWS CloudFormation: This allows you to easily build and provision a set of related AWS resources in a predictable and orderly fashion.
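A CloudFormation template is just a declarative JSON (or YAML) document. The minimal sketch below builds one in Python describing a single hypothetical recovery server; the AMI ID is a placeholder, and a real disaster-recovery stack would declare many more resources.

```python
import json

# Minimal CloudFormation template sketch: one EC2 instance for a
# recovery environment. ImageId is a placeholder, not a real AMI.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Disaster-recovery stack: single app server",
    "Resources": {
        "RecoveryServer": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "ImageId": "ami-00000000",
                "InstanceType": "t2.micro",
            },
        }
    },
}

template_json = json.dumps(template, indent=2)
```

Because the whole environment is captured as data, the same template can be replayed in another region during recovery, which is the "predictable and orderly" provisioning the bullet above describes.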

Database

Just like Deployment Orchestration, there are three AWS database services which could be leveraged as you create a sustainable disaster recovery framework:

  • Amazon Redshift: This cost-effective, fully managed, fast, petabyte-scale data warehouse service is particularly suited to the preparation phase of your disaster recovery strategy. It excels at data analysis and can be used to duplicate your entire data warehouse and store the copy in Amazon S3.
  • Amazon DynamoDB: This fully managed NoSQL database service can likewise be leveraged in the preparation phase to duplicate data to Amazon S3 or to DynamoDB in another region. It's fast and offers single-digit millisecond latency.
  • Amazon Relational Database Service: As its name suggests, this is a user-friendly service for setting up, scaling and operating relational cloud databases. It can be used in the recovery phase to run the production database, or in the preparation phase to keep vital data in a running database.

Networking

Managing and modifying network settings is imperative if you need to smoothly shift to a secondary system in case of a disaster. Some of the primary AWS networking features and services that are effectual in this include:

  • AWS Direct Connect: This service eases the process of building a dedicated network connection between your organization and Amazon Web Services. In case of a disaster, it increases bandwidth throughput, reduces network costs and provides a more consistent network experience than internet-based connections.
  • Amazon Virtual Private Cloud: This service lets you create an isolated, private section of the AWS cloud where you can manage and operate resources within a defined virtual network. In case of a disaster, you can use it to extend your existing network topology to the cloud.
  • Elastic Load Balancing: ELB distributes incoming application traffic across multiple EC2 instances. It can simplify the implementation of your disaster recovery plan by pre-allocating the load balancer and exposing its DNS name.

Regional Bases

To safeguard their data, many organizations choose to store their primary backups on sites located far away from their main physical environments. If an earthquake or a large-scale malware outbreak hit the United States, for example, businesses with secondary servers positioned outside the country would have a better chance of recovering than those without.

Amazon Web Services has servers spread across the globe to cater to such clientele, so you can place your disaster recovery data in a separate region from your primary system. The regions include the Americas, EMEA and Asia Pacific. Due to the sensitivity of government data, there are also special regions available only to government organizations, as well as separate regions for China.
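Region selection for disaster recovery can be sketched as picking a candidate outside the primary's geography. The region names and the region-to-geography mapping below are illustrative simplifications, not exact AWS region codes.

```python
def pick_dr_region(primary: str, candidates):
    """Choose a disaster-recovery region in a different geography than
    the primary, so one regional event cannot take out both copies."""
    # Illustrative mapping; real AWS region codes are more granular.
    geography = {
        "us-east": "Americas", "us-west": "Americas",
        "eu-west": "EMEA", "ap-southeast": "Asia Pacific",
    }
    home = geography[primary]
    for region in candidates:
        if geography.get(region) != home:
            return region
    raise LookupError("no candidate region outside the primary geography")
```

Note that for a primary in us-east, us-west is rejected (same geography) while eu-west qualifies, mirroring the cross-continent placement the paragraph recommends.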

With these features, AWS has proven to be one of the most capable disaster recovery providers on the market. This list, however, is not comprehensive; many other features can be brought in depending on a user's disaster recovery strategy. For a fully optimized disaster recovery framework, an organization should consult an expert to analyze its risks and draft a comprehensive disaster recovery plan using the requisite AWS features.

Author: Davis Porter

Self-Hosted Enterprise File Share and Sync Solution FileCloud Now Available Via the Microsoft Azure Marketplace


AUSTIN, TEXAS – Aug 05, 2015 – CodeLathe, a leading provider of private and customized cloud solutions, announced today that FileCloud is now available through the Azure Marketplace. The availability of FileCloud for Azure will allow businesses to host FileCloud in the Azure infrastructure. Customers will experience FileCloud’s superior control over enterprise data and its industry leading customization capabilities bolstered by Azure’s scalability, and resiliency.

FileCloud empowers corporate IT departments, and managed service providers to create their own secure file sharing and sync platform. Since FileCloud is a pure-play software solution, businesses can choose to deploy it on premise or on public cloud IaaS services depending on their needs.

“FileCloud is the ideal file sharing, and sync platform for both private companies and public sector organizations who deal with sensitive data,” said Madhan Kanagavel, CEO of CodeLathe. “By bringing FileCloud to the Azure Marketplace, now our enterprise customers get the best of both worlds: more control, security-enhanced, and customization of FileCloud; and the scalability, and resiliency of the Azure infrastructure.”

“Customers have told us that data security and control are critical for an enterprise file sharing platform,” said Nicole Herskowitz, Senior Director of Product Marketing, Microsoft Azure. “The addition of FileCloud to the Azure Marketplace offers our customers a security-enhanced file sharing solution that gives enterprises more control over their business data and how it is stored and shared.”

The important differentiators between FileCloud and other file sharing and sync solutions include the ability to self-host on-premises or in the cloud, and its unique data security implementation that monitors and fixes data leakage, a critical requirement for clients in the financial, government, and health sectors. FileCloud is trusted by Global 2000 companies across all industries in 55 countries around the world.

Pricing for FileCloud starts at US$999/year for 20 licenses. Additional user licenses cost just US$25 per user per year.

For more information or to try FileCloud on Azure, please visit https://www.getfilecloud.com/filecloud_azure/ or call 1-888-571-6480.

8 Free Windows Server Admin Tools As Good As Their Paid Alternatives

free win server tools

With Windows Server, administrators have access to a wide range of tools that can help them manage, configure and troubleshoot Windows Servers and domains securely. Even the most seasoned admins sometimes don’t know how helpful and effective some of these tools can be. Not only do these tools allow admins to manage a Windows Server more effectively, they can also save hours of downtime.

Most so-called free applications are either outdated or simply useless, especially when it comes to enterprise IT needs. Any IT admin who has worked with packages like Hyper-V, Exchange Server, SharePoint, and SQL Server understands that convenience and features come at a price. However, here’s a list of free or free-to-try Windows Server admin tools that are worth their weight in gold as competent alternatives to the paid versions of Microsoft’s server products.

  1. Hyper-V Server 2008 R2

The Hyper-V role for Windows Server 2008 has proven to be a great way to give admins server virtualization within their environment. But all this functionality doesn’t come cheap, unless you’re talking about Microsoft’s free version of Hyper-V: Hyper-V Server 2008 R2. This free version is a stand-alone product featuring only the hypervisor, virtualization components and the Windows Server driver model, making for a compact, no-nonsense package.

Don’t let the “free” tag fool you, because even in its most basic state, Hyper-V Server 2008 R2 offers admins all the critical features needed for virtualization, including live migration, host clustering, flexible memory support and octa-core processor support, among others. Being free, though, the tool has its limitations: it lacks two notable features, namely application failover and guest virtualization rights. Still, there’s no need to jump to the Enterprise or Standard versions just yet, because this free version of Hyper-V can add a lot of value to your existing IT environment.

  2. Sysinternals Suite

The Sysinternals Suite ranks among the top free Windows Server admin tools, yet many IT admins who are new to the Windows Server platform aren’t even aware of it, or haven’t had enough exposure to it. This free suite features an impressive range of tools that help with tasks like managing open files, monitoring active TCP network connections, and managing running processes. The best part about Sysinternals is that you can run most of its tools directly from Microsoft’s website, without having to install anything. All these features, coupled with no out-of-pocket expense, make the suite a must-have for any server admin.

  3. EasyBCD

Prior to the launch of Windows Vista and Windows Server 2008, when Microsoft shifted to the Boot Configuration Data (BCD) boot environment, editing the old boot.ini files was a fairly simple procedure. The problem is that BCD, despite making booting far more secure, is a headache to manage through the cryptic command-line BCDEdit tool. The EasyBCD admin tool from NeoSmart Technologies gives admins an easy-to-use graphical editor for their Windows BCD boot files.

  4. Remote Desktop Enabler

Windows Remote Desktop is a vital, nearly indispensable tool that lets admins troubleshoot machines remotely. The catch is that remote desktop must already be enabled on the computer you wish to access, and that is easier said than done. The Remote Desktop Enabler from IntelliAdmin solves exactly this problem by letting administrators switch on RDP remotely.

  5. Wake-On-LAN

Another great free Windows Server admin tool is the Wake-On-LAN tool from SolarWinds. As the name implies, it lets admins send data packets to networked computers that have Wake-On-LAN enabled in their BIOS. The tool makes your networked PCs boot up just as they would if you had pressed the power button. For it to work, you need to input the MAC and IP addresses of the system you want to boot.
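The “magic packet” such tools broadcast is simple enough to construct yourself: 6 bytes of 0xFF followed by the target’s MAC address repeated 16 times, typically sent as a UDP broadcast. Here is a minimal sketch in Python (the MAC address in the example is a placeholder, not a real device):

```python
import socket

def make_magic_packet(mac: str) -> bytes:
    """Build a Wake-on-LAN magic packet: 6 bytes of 0xFF
    followed by the target MAC address repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16

def send_wol(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast the magic packet over UDP (port 9 is the convention)."""
    packet = make_magic_packet(mac)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(packet, (broadcast, port))

# Hypothetical example:
# send_wol("00:11:22:33:44:55")
```

The resulting packet is always 102 bytes; the target machine’s NIC watches for its own MAC repeated 16 times and powers the system on when it sees it.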

  6. Exchange Remote Connectivity Analyzer

This is the perfect free tool for anyone who has installed Exchange Server and needs a way to test remote connectivity. It’s ideal for checking your server’s ability to send and receive email, or for more comprehensive tests such as mobile connectivity, RPC over HTTP connectivity, and even Autodiscover. Just choose the desired test, enter a few details, and you’ll be testing remote connectivity on the fly, with no software installation required. The Exchange Remote Connectivity Analyzer (ExRCA) reports back if something has failed, and tells you why.

  7. SharePoint Foundation 2010

SharePoint Foundation 2010, the successor to Windows SharePoint Services, is the latest free edition of SharePoint. Despite the free tag, this software is absolutely feature-packed. Although the Enterprise and Standard versions obviously offer much more, SharePoint Foundation’s features might be good enough to satisfy your needs. It includes all of SharePoint’s key elements: document libraries, workspaces, wikis, blogs, and so on.

  8. Microsoft Assessment and Planning Toolkit

The Microsoft Assessment and Planning Toolkit is by far the best agentless tool for reaching out into your server environment and taking inventory of all your systems through network-wide automated discovery. It’s also a great assessment tool, letting admins gauge the readiness of a Windows 2000 Server migration or extract SQL Server user information. This free tool’s strength lies in its ability to inventory computer systems and evaluate Windows 7 and Office 2010 deployment options within your server environment.

Author: Rahul Sharma

image courtesy: Stuart Miles/ freedigitalphotos.net

Containers vs Hypervisors

hypervisor vs containers

Ever since the introduction of CaaS and its subsequent widespread adoption by enterprises, there has been a debate over containers vs. hypervisors. While some view containers as a revolution that is gradually phasing out hypervisors, others believe hypervisor technology is here to stay and cannot be replaced by containers. So, what is the truth? Are containers an improvement on hypervisors? What are the merits and demerits of both?

To get answers, we need to define each technology separately and comprehend its benefits and weaknesses:

Hypervisors

Since a hypervisor is a form of virtualization, it’s important to first define virtualization before digging into the details of hypervisors.

Virtualization was introduced primarily to optimize hardware utilization by running one operating system on top of another, with the systems sharing hardware resources to support their underlying processes.

Hypervisor-based virtualization achieves the same goal through a different strategy: instead of simply sharing the underlying hardware resources, hypervisors emulate them on top of the existing physical hardware. A guest operating system then runs on these emulated resources, which makes the approach OS-agnostic. In other words, with a hypervisor running on a Windows host, you can create a virtual machine and install Linux on it, and vice versa.

The hypervisor achieves this by mapping the underlying physical hardware resources to the processing requirements of the guest operating system, and it manages the process by controlling the amount of resources allocated to each guest. Since they sit between the actual physical hardware and the guest operating systems, hypervisors are also referred to as virtual machine monitors, or VMMs.

Benefits

First, hypervisors are a favorite of enterprises that need to put idle resources to full use. Imagine an organization using a physical server with a 1 Gb NIC, an 8-core processor and 10 GB of RAM to support an FTP server for its agents and an internal website. Such resources would be excessive for these workloads, which could be satisfied by far smaller servers. As a result, the hardware is underutilized and sits idle a significant amount of the time.

The most effective solution to such a problem is to virtualize the physical resources with a hypervisor and dedicate them accordingly. A fraction of the virtualized resources supports the FTP server and internal website, while the rest is freed up for other workloads, thereby optimizing resource utilization.

Secondly, installing both host and guest operating systems under a hypervisor is easy and doesn’t require extensive expertise. Some hypervisors, however, like Xen, do not run on a host operating system but on bare metal, using a host operating system only as a control interface. Others, like QEMU, achieve platform-level virtualization by simulating different machine architectures, unlike hypervisors such as VirtualBox, which don’t employ this strategy.

Finally, hypervisors are considerably more secure than containers and can host additional operating systems, which of course require more resources.

Drawbacks

Although hypervisors are intended to optimize resource utilization, their emulation slows down the server, because CPU and memory are managed twice: once by the guest operating system and once by the host. The best way to boost performance in such cases is paravirtualized hardware, where a new driver and virtual device are built for the guest.

Hypervisors also fall short of complete process isolation: all of a VM’s resources are dedicated to whatever runs inside it. This makes them unsuitable for extensive app testing, which requires isolating individual processes so that bugs cannot spread to other processes.

Containers

Although both are forms of virtualization, hypervisors virtualize at the hardware level while containers do so at the operating system level, by sharing the base operating system’s kernel. Rather than emulating hardware, containers isolate resources so that different processes can run concurrently without interfering with each other. You can, for instance, run Arch in one container and Debian in another at the same time, as long as both share the same kernel.

Benefits

Since containers sit on the same operating system kernel, they are lighter and smaller than hypervisor-based virtual machines. A base operating system can therefore support containers more efficiently and effectively than full VMs. This means containers can run on lower-spec hardware than hypervisors, which often require extensive, high-performance supporting hardware.

By isolating application environments, containers achieve better resource utilization than hypervisors. Each application uses its own set of resources without affecting the overall performance of the server. They are therefore ideal for enterprises which concurrently run multiple processes on single servers.

Drawbacks

Although they are widely considered a revolution in cloud computing, containers have their own drawbacks. First, they depend on namespaces and cgroups, both of which are Linux kernel features, making them incompatible with operating systems like Windows and Mac OS. Because of this significant limitation, both Microsoft and Apple have reportedly been developing ways to integrate containers into their platforms.
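On a Linux host you can see this dependency directly: every process records its cgroup memberships in /proc/<pid>/cgroup, where each line has the form hierarchy-ID:controller-list:path. A small illustrative sketch in Python (the sample Docker path in the comment is hypothetical):

```python
def parse_cgroup_line(line: str) -> dict:
    """Parse one line of /proc/<pid>/cgroup.
    Line format: hierarchy-ID:controller-list:cgroup-path"""
    hierarchy, controllers, path = line.strip().split(":", 2)
    return {
        "hierarchy": int(hierarchy),
        "controllers": controllers.split(",") if controllers else [],
        "path": path,
    }

def read_own_cgroups(proc_file: str = "/proc/self/cgroup") -> list:
    """Return the cgroup memberships of the current process (Linux only)."""
    with open(proc_file) as f:
        return [parse_cgroup_line(line) for line in f if line.strip()]

# A process inside a Docker container would show a line such as
# (illustrative): 4:memory:/docker/abc123...
```

A process running inside a container shows container-scoped paths here, while one on the bare host shows “/”, which is precisely the kernel-level isolation that Windows and Mac OS lacked.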

Secondly, containers are less secure and more vulnerable than hypervisors. Because containers share the host kernel, and the namespaces exposed through libcontainer cover only part of the kernel’s subsystems, a compromised container gives attackers a larger surface through which to reach the underlying system.

Conclusion

Since both containers and hypervisors have their own sets of benefits and drawbacks, the most sustainable architectures include both in their framework. By leveraging each according to its features and application suitability, you stand to benefit more than an organization that focuses on just one. Containers, therefore, are not replacing hypervisors but complementing their capabilities.


Author: Davis Porter

Image Courtesy: twobee, freedigitalphotos.net

How Containers-as-a-Service Increases Efficiency

CaaS  efficiency

First it was Software-as-a-Service, then came Infrastructure-as-a-Service, and now Containers-as-a-Service is proving to be a significant phenomenon in today’s cloud services market. So, what exactly is it, and how is it increasing efficiency in small, medium and large enterprises?

To fully appreciate its efficiency, we first have to understand how resources interacted in the pre-CaaS era:

Pre-CaaS Era

Before Containers-as-a-Service was firmly established as a type of IaaS, enterprises principally used virtual machines to run applications in the cloud. Although fairly effective, VMs had one significant disadvantage: it took an entire VM’s resources just to run a single application. Enterprises that still rely entirely on virtual machines dedicate whole operating systems to executing and managing simple applications that require just a fraction of their virtual machines’ resources.

How CaaS Solves This Problem

Most enterprises rely on a set of networked applications aligned toward central objectives. With some of them running simultaneously, an organization that relies entirely on virtual machines must dedicate considerable resources just to support their operations, since each application instance requires its own operating system. Consequently, such enterprises end up leveraging several VMs to run even the simplest applications, which hurts system efficiency and overall performance.

CaaS solves this problem by simplifying such complex architectures. Instead of networking numerous virtual machines to support simultaneous applications, containers use operating-system-level virtualization that allocates resources according to each application’s size and requirements.
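In practice, this per-application allocation is expressed when a container is launched. With Docker, for instance, memory and CPU limits are passed as flags to `docker run`. A minimal sketch of assembling such an invocation in Python (the image name and limit values are hypothetical examples, not recommendations):

```python
def docker_run_command(image: str, name: str,
                       mem_limit: str, cpu_shares: str) -> list:
    """Assemble a `docker run` invocation that caps a container's
    resources using Docker's --memory and --cpu-shares flags."""
    return [
        "docker", "run", "-d",
        "--name", name,
        "--memory", mem_limit,       # hard memory cap, e.g. "256m"
        "--cpu-shares", cpu_shares,  # relative CPU weight, e.g. "512"
        image,
    ]

# Hypothetical example: cap a small FTP service at 256 MB of RAM
cmd = docker_run_command("example/ftp-server", "ftp", "256m", "512")
# On a host with Docker installed, pass `cmd` to subprocess.run(cmd).
```

Each application thus gets only the slice of the host it needs, instead of a whole virtual machine.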

Although traditional hypervisors support several operating systems, each OS acts as if it entirely controls its machine. CaaS brings this down to the application level by facilitating virtualization on Solaris, BSD and Linux, which lets apps act as if they entirely control their operating systems. Containers therefore come with their own configuration settings, system libraries, applications, files, processes, memory and IP addresses, and can be rebooted, granting enterprises the ability to run multiple applications on a single operating system.

How CaaS Has Improved Operations

CaaS has dramatically improved resource utilization by allocating memory, CPU and drive space according to individual application requirements. Organizations have consequently reduced resource redundancy and made significant budget savings that would otherwise have been spent acquiring additional machines and cloud resources.

The introduction of CaaS has also made a significant impact on hardware virtualization. Enterprises can now run far more containers on their hardware than virtual machines, allowing them to split their processes further even without relying on virtualized cloud resources.

By further abstracting operating systems and virtualized resources, CaaS has also virtually eliminated boot time, considerably boosting application speed. Enterprises now execute their processes faster and more efficiently, ultimately improving overall operations and service delivery.

The isolation of applications additionally contributes to improved enterprise operations by enabling smaller app sandboxes, which are critical for testing applications. New applications and processes can be introduced and executed in containers without spreading risk to other applications on the same server; this is particularly important given the rising cloud security threats posed by hackers.

The Ramifications

According to one performance study, Linux is widely preferred over Windows as a cloud operating system, largely because it boots up to nine times faster. Since it runs in small configurations that use fewer resources, including disk space and memory, it’s also considered a cheaper alternative to its main competitors. With CaaS in the picture, Linux is expected to continue dominating and ultimately cement itself as the market leader.

Deploying cloud applications in virtual machines has always been relatively cumbersome due to long start-up processes, which usually take a couple of minutes. Fortunately, CaaS has virtually eliminated this problem through quick booting, making containers the new base unit of distributed applications, as an alternative to threads. This is facilitated by containers’ loose coupling and isolation capabilities, which have proven significantly more efficient than threads. One of the most prominent services employing this approach is Google Chrome, which dropped threads in favor of process isolation to improve overall reliability, a critical factor for a robust cloud application.

Like other cloud elements, CaaS is not static but dynamic and continually evolving. It’s expected to develop much as Java bytecode-based sandbox applications evolved out of the Java virtual machine. Enterprise JavaBeans and J2EE web containers have already evolved into improved container forms, and a similar evolution of the isolation concept is expected to reach other containers.

The evolution has also spread to operating systems, as CaaS is being developed for compatibility with a host of other platforms. Growing interest in CaaS has sparked new implementations, including Cloud Foundry’s Warden, Heroku’s Dynos and Google’s lmctfy, joining the likes of Solaris Zones, BSD jails, OpenVZ, LXC and Docker. Windows and Mac OS are also gradually making their way into the league after developing application sandbox concepts that share several similarities with containers.

By improving overall application efficiency, CaaS has significantly boosted the acceptance and adoption of cloud solutions within enterprises. It has proven to be one of the major milestones of cloud evolution and is expected to make further impacts that will eventually cement it as one of the prime cloud elements.

Author: Davis Porter

Image Courtesy: vectorolie, freedigitalphotos.net