Archive for the ‘PAAS’ Category

Alternative to WatchDox – Why FileCloud is better for Business File Sharing?

WatchDoxVsFileCloud

FileCloud competes with WatchDox for business in the Enterprise File Sync and Share (EFSS) space. Before we get into the details, I believe an ideal EFSS system should work across all the popular desktop OSes (Windows, Mac and Linux) and offer native mobile applications for iOS, Android, Blackberry and Windows Phone. In addition, the system should offer all the basics expected of an EFSS product: Unlimited File Versioning, Remote Wipe, Audit Logs, Desktop Sync Client, Desktop Map Drive and User Management.

The feature comparisons are as follows:

Features FileCloud WatchDox
On Premise
File Sharing
Access and Monitoring Controls
Secure Access
Document Preview
Document Edit
Outlook Integration
Role Based Administration
Data Loss Prevention
Web DAV
Endpoint Backup
Amazon S3/OpenStack Support
Public File Sharing
Customization, Branding
SAML Integration
Anti-Virus
NTFS Support
Active Directory/LDAP Support
Multi-Tenancy
API Support
Application Integration via API
Large File Support
Network Share Support Buy Additional Product
Mobile Device Management
Desktop Sync Windows, Mac, Linux Windows, Mac
Native Mobile Apps iOS, Android, Windows Phone iOS, Android
Encryption at Rest
Two-Factor Authentication
File Locking
Pricing for 20 users/year $999 $3600

From the outside looking in, the offerings all look similar. However, the approaches differ completely in how they satisfy an enterprise's primary need: easy access to its files without compromising privacy, security and control. The fundamental areas of difference are as follows:

Feature benefits of FileCloud over WatchDox

Unified Device Management Console – FileCloud's unified device management console simplifies the management of mobile devices that are enabled to access enterprise data, irrespective of whether a device is enterprise owned or employee owned, and regardless of mobile platform or device type. Manage and control thousands of iOS and Android devices from FileCloud's secure, browser-based dashboard. FileCloud's administrator console is intuitive and requires no training or dedicated staff. FileCloud's MDM works on any vendor's network — even if the managed devices are on the road, at a café, or used at home.

Amazon S3/OpenStack Support – Enterprises wanting to use Amazon S3 or OpenStack storage can easily set it up with FileCloud. This feature not only gives enterprises the flexibility to switch storage backends, but also makes the switch very easy.

Embedded File Upload Website Form – FileCloud's Embedded File Upload Website Form enables users to embed a small FileCloud interface onto any website, blog, social networking service, intranet, or any public URL that supports HTML embed code. Using the Embedded File Upload Website Form, you can easily allow file uploads to a specific folder within your account. This feature works like a file drop box: it allows your customers or associates to send any type of file without requiring them to log in or create an account.

Multi-Tenancy Support – The multi-tenancy feature allows Managed Service Providers (MSPs) to serve multiple customers using a single instance of FileCloud. The key value proposition of FileCloud's multi-tenant architecture is that it provides multi-tenancy while maintaining data separation among tenants. Moreover, every tenant has the flexibility of customized branding.

NTFS Shares Support – Many organizations use NTFS permissions to manage and control access to internal file shares. It is very hard to duplicate those access permissions in other systems and keep them in sync. FileCloud enables access to internal file shares via web and mobile while honoring the existing NTFS file permissions. This functionality is a great time saver for system administrators and provides a single point of management.

Conclusion

Based on our experience, enterprises that look for an EFSS solution want two main things. One, easy integration with their existing storage system without any disruption to access permissions or network home folders. Two, the ability to easily expand integration into highly available storage systems such as OpenStack or Amazon S3.

WatchDox neither provides OpenStack/Amazon S3 storage integration support nor NTFS share support. On the other hand, FileCloud provides easy integration support into Amazon S3/OpenStack and honors NTFS permissions on local storage.

With FileCloud, enterprises get one simple solution with all features bundled. For the same 20-user package, the cost is $999/year, almost one-fourth the cost of WatchDox.

Here’s a comprehensive comparison that shows why FileCloud stands out as the best EFSS solution.

Try FileCloud For Free & Receive 5% Discount

Take a tour of FileCloud

8 Free Windows Server Admin Tools As Good As Their Paid Alternatives

free win server tools

With Windows Server, administrators have access to a wide range of tools that can help them manage, configure and troubleshoot Windows Servers and domains securely. Even the most seasoned admins sometimes don't know how helpful and effective some of these tools can be. Not only can these tools allow admins to manage a Windows Server more effectively, they can also save hours of downtime.

Most so-called free applications are either outdated or just useless, especially when it comes to enterprise IT needs. Any IT admin who has worked with packages like Hyper-V, Exchange Server, SharePoint, and SQL Server understands that convenience and features come at a price. However, here's a list of some free or free-to-try Windows Server admin tools that are worth their weight in gold as competent alternatives to the paid versions of Microsoft's server products.

  1. Hyper-V Server 2008 R2

The Hyper-V add-on for Windows Server 2008 has proven itself to be a great way to provide admins with server virtualization within their system environment. But all this functionality doesn't come cheap, unless you're talking about Microsoft's free version of Hyper-V, the Hyper-V Server 2008 R2. This free version is a stand-alone product that features only the famed hypervisor, the virtualization components and the Windows Server driver model, making for a compact, no-nonsense package.

Don't let the "free" tag fool you, because even in its most basic state, Hyper-V Server 2008 R2 offers admins all the critical features they need to perform virtualization, including live migration, host clustering, flexible memory support and octa-core processor support, among others. But being free, the tool has its limitations: this version lacks two notable features, namely application failover and guest virtualization rights. Still, there's no need to jump to the Enterprise or Standard versions just yet, because this free version of Hyper-V can add a lot of value to your existing IT environment.

  2. Sysinternals Suite

The Sysinternals Suite ranks among the top free Windows Server admin tools, yet many IT admins who are new to the Windows Server platform might not even be aware of it, or might not have had enough exposure to it. This free suite features an impressive range of tools that aid in tasks like managing open files, monitoring active TCP network connections, and managing your active processes. The best part about Sysinternals is that you can run most of its tools directly from Microsoft's website, without having to install anything. All these features, coupled with no out-of-pocket expense, make this suite a must-have for any server admin.

  3. EasyBCD

Prior to the launch of the Windows Vista operating system and Windows Server 2008, when Microsoft shifted to the Boot Configuration Data (BCD) boot environment, working with old boot.ini files was a fairly simple procedure. The problem was that BCD, despite making booting much more secure, also made it a literal headache to manage because of its cryptic command-line BCDEdit tool. The EasyBCD admin tool from NeoSmart Technologies gives admins an easy-to-use graphical editor for their Windows BCD boot files.

  4. Remote Desktop Enabler

Windows Remote Desktop is a vital and nearly indispensable tool that allows admins to manage and troubleshoot systems remotely. The catch is that Remote Desktop must already be enabled on the computer you wish to access, which is easier said than done when you are not in front of it. The Remote Desktop Enabler tool by IntelliAdmin solves this by letting administrators enable RDP remotely.

  5. Wake-On-LAN

Another great free Windows Server admin tool is the Wake-On-LAN tool from SolarWinds. As the name implies, it allows admins to send data packets to networked computers that have Wake-On-LAN enabled in their BIOS. This useful tool makes your networked PCs boot up just as they would if you had pressed the power button. For the application to work properly, you need to input the MAC and IP addresses of the system you want to boot.
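Under the hood, any Wake-On-LAN tool sends a standard "magic packet": six 0xFF bytes followed by the target's 48-bit MAC address repeated 16 times, broadcast over UDP (port 9 by convention). A minimal Python sketch of that protocol — the MAC address shown is a placeholder, not a real device:

```python
import socket

def build_magic_packet(mac: str) -> bytes:
    """Build a Wake-On-LAN magic packet: 6 x 0xFF, then the MAC repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16

def send_magic_packet(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast the magic packet on the local network over UDP."""
    packet = build_magic_packet(mac)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(packet, (broadcast, port))

# Example with a placeholder MAC:
# send_magic_packet("00:11:22:33:44:55")
```

The resulting packet is always 102 bytes (6 + 16 × 6), which is also a quick sanity check when debugging why a machine refuses to wake.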

  6. Exchange Remote Connectivity Analyzer

This is the perfect free tool for anyone who has installed Exchange server and needs a way to test their remote connectivity. This tool is ideal if you want to test your server’s ability to send and receive emails, or perform more comprehensive tests like those for mobile connectivity, RPC over HTTP connectivity, and even auto-discovery. To use this tool, all you need to do is choose the desired test, input some vital statistics, and you’ll be testing remote connectivity on the fly, without needing any software installation. The Exchange Remote Connectivity Analyzer (ExRCA) will perform its function and will report back in case something has failed, and will also tell you why.

  7. SharePoint Foundation 2010

SharePoint Foundation 2010, the successor to Windows SharePoint Services, is the latest free version of SharePoint and brings an array of fresh features to your SharePoint deployment. Despite the free tag, this software is absolutely feature-packed. Although the Enterprise and Standard versions obviously offer much more, SharePoint Foundation's features might be good enough to satisfy your needs. It includes all of SharePoint's key elements, including document libraries, workspaces, wikis, blogs, and so on.

  8. Microsoft Assessment and Planning Toolkit

The Microsoft Assessment and Planning Toolkit is by far the best agentless tool for branching out into your server environment and taking inventory of all your systems through a network-wide automated discovery option. It's also a great testing tool, allowing admins to test the success of a Windows 2000 Server migration or extract SQL Server user information. This free tool's strength lies in its ability to inventory computer systems and evaluate Windows 7 and Office 2010 readiness within your server environment.

Author: Rahul Sharma

image courtesy: Stuart Miles/ freedigitalphotos.net

Containers vs Hypervisors

hypervisor vs containers

Ever since the introduction of CaaS and its subsequent extensive adoption by enterprises, there has been a debate regarding containers vs. hypervisors. While some view containers as a revolution that is gradually phasing out hypervisors, others believe that the latter technology is here to stay and cannot be replaced by containers. So, what is the truth? Are containers an improvement on hypervisors? What are the merits and demerits of both?

To get answers, we need to define each technology separately and comprehend its benefits and weaknesses:

Hypervisors

Since a hypervisor is a form of virtualization, it's important to first define virtualization before digging further into the details of hypervisors.

Virtualization was introduced to primarily optimize hardware utilization by overlaying one operating system on another. Each of the systems consequently shares hardware resources to support underlying processes.

Hypervisor-based virtualization tries to achieve the same goals through a different strategy: instead of simply sharing the underlying hardware resources, hypervisors emulate them on top of existing virtual and physical hardware. A guest operating system is then installed to manage these emulated resources, making the approach OS-agnostic. In other words, with a Windows-based hypervisor running on the underlying physical hardware, you can create another system running on virtual resources and install Linux on it. The reverse works just as well.

The base operating system achieves this by mapping the underlying physical hardware resources to the processing requirements of the guest operating system. Hypervisors manage the process by controlling the amount of resources allocated to guest operating systems. Since they sit between the actual physical hardware and the guest operating systems, hypervisors are also referred to as virtual machine monitors, or VMMs.

Benefits

First, hypervisors are a favorite of enterprises that need to put idle resources to full use. Imagine an organization using a physical server with a 1 Gb NIC, an 8-core processor and 10 GB of RAM to support an FTP server for its agents and an internal website. Such resources would be excessive for these processes, which require far smaller servers. As a result, the hardware is underutilized and remains idle for a significant amount of time.

The most effective way to solve such a problem is to use a hypervisor to virtualize the physical resources and dedicate them accordingly. A fraction of the virtualized resources is dedicated to the FTP server and internal website, with the rest freed up for other processes, thus optimizing resource utilization.

Secondly, installing both host and guest operating systems under a hypervisor is easy and doesn't require extensive expertise. Some hypervisors, however, like Xen, do not run on a host operating system but rather on bare metal, using the host operating system only as a control interface. Others, like QEMU, achieve platform-level virtualization by simulating different machine architectures, contrary to hypervisors like VirtualBox, which don't employ this strategy.

Finally, they are considerably more secure than containers and can manage additional operating systems, which of course require more resources.

Drawbacks

Although hypervisors are intended to optimize resource utilization, they can noticeably slow down their servers. This occurs because duplicate CPU and memory managers run within both the guest and host operating systems. The best way to boost performance in such cases is through paravirtualized hardware, where a new driver and virtual device are built for the guest.

Hypervisors also fail to provide complete process isolation: all of a VM's resources are shared by the processes running inside it. They are therefore unsuitable for extensive app testing, which requires isolating individual processes to prevent the transmission of bugs between them.

Containers

Although both are forms of virtualization, hypervisors virtualize at the hardware level, while containers achieve this at the operating system level by sharing the base operating system's kernel. They further abstract away VMs to facilitate the isolation of resources supporting different processes concurrently. You can, for instance, run Arch in one container and Debian in another at the same time without the two interfering with each other.

Benefits

Since containers sit on the same operating system kernel, they are lighter and smaller compared to hypervisors. A base operating system can therefore support containers more efficiently and effectively than hypervisors. This means that they can run on lower spec hardware than hypervisors, which often require extensive, high performance supporting hardware.

By isolating application environments, containers achieve better resource utilization than hypervisors. Each application uses its own set of resources without affecting the overall performance of the server. They are therefore ideal for enterprises which concurrently run multiple processes on single servers.

Drawbacks

Although they are widely considered a revolution in cloud computing, containers have their own set of drawbacks. First, they depend on namespaces and cgroups, both of which are Linux kernel features. That makes them incompatible with other operating systems like Windows and Mac OS. Due to this significant disadvantage, both Microsoft and Apple are reportedly working on ways to integrate containers into their platforms.
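That Linux dependency is easy to observe first-hand: the kernel exposes each process's namespaces as entries under /proc/&lt;pid&gt;/ns. A small Python sketch (Linux-only by nature; it simply returns an empty list on systems without a /proc filesystem):

```python
import os

def current_namespaces(pid: str = "self") -> list[str]:
    """List the namespace types (mnt, uts, net, pid, ...) of a process.

    Reads /proc/<pid>/ns, which exists only on Linux; returns [] elsewhere.
    """
    ns_dir = f"/proc/{pid}/ns"
    if not os.path.isdir(ns_dir):
        return []
    return sorted(os.listdir(ns_dir))

# On a typical Linux host this prints entries such as
# ['cgroup', 'ipc', 'mnt', 'net', 'pid', 'user', 'uts', ...]
print(current_namespaces())
```

Each entry is a symlink naming the namespace a process belongs to; container runtimes create new ones for the mount, UTS, IPC, network and PID types so that a containerized process sees its own private view of each.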

Secondly, containers are less secure and more vulnerable than hypervisors. Because libcontainer isolates only a handful of namespaces and leaves the rest of the kernel subsystems shared, containers give attackers a much larger surface through which to reach the host operating system.

Conclusion

Since both containers and hypervisors have their set of benefits and drawbacks, the most sustainable architectures include both systems in their framework. By leveraging both according to their features and application suitability, you stand to benefit more compared to an organization that focuses on just one of them. Containers are therefore not replacing hypervisors, but rather complementing their capabilities.


Author: Davis Porter

Image Courtesy: twobee, freedigitalphotos.net

Alternative to Novell Filr – Why FileCloud is better for Business File Sharing?

FileCloudVsNovellFilr

FileCloud competes with Novell Filr for business in the Enterprise File Sync and Share (EFSS) space. Before we get into the details, I believe an ideal EFSS system should work across all the popular desktop OSes (Windows, Mac and Linux) and offer native mobile applications for iOS, Android, Blackberry and Windows Phone. In addition, the system should offer all the basics expected of an EFSS product: Unlimited File Versioning, Remote Wipe, Audit Logs, Desktop Sync Client, Desktop Map Drive and User Management.

The feature comparisons are as follows:

Features FileCloud Novell Filr
On Premise
File Sharing
Access and Monitoring Controls
Secure Access
Document Preview
Document Edit
Outlook Integration
Role Based Administration
Data Loss Prevention
Web DAV
Endpoint Backup
Amazon S3/OpenStack Support
Public File Sharing
Customization, Branding Limited
SAML Integration Under Development
Anti-Virus
NTFS Support
Active Directory/LDAP Support
Multi-Tenancy
API Support
Application Integration via API
Large File Support
Network Share Support
Mobile Device Management
Desktop Sync Windows, Mac, Linux Windows, Mac
Mobile OS Compatibility iOS, Android, Windows Phone iOS, Android, Windows Phone
Pricing for 100 users/year $3000 $4500

From the outside looking in, the offerings all look similar. However, the approaches differ completely in how they satisfy an enterprise's primary need: easy access to its files without compromising privacy, security and control. The fundamental areas of difference are as follows:

Feature benefits of FileCloud over Novell Filr

Embedded File Upload Website Form – FileCloud's Embedded File Upload Website Form enables users to embed a small FileCloud interface onto any website, blog, social networking service, intranet, or any public URL that supports HTML embed code. Using the Embedded File Upload Website Form, you can easily allow file uploads to a specific folder within your account. This feature works like a file drop box: it allows your customers or associates to send any type of file without requiring them to log in or create an account.

Document Quick Edit – FileCloud's Quick Edit feature supports extensive edits of files such as Microsoft® Word, Excel®, Publisher®, Project® and PowerPoint® — right from your desktop. It's as simple as selecting a document to edit from the FileCloud Web UI, editing it using Microsoft Office, and saving; FileCloud takes care of the uninteresting details in the background, such as uploading the new version, syncing, sending notifications and sharing updates.

Unified Device Management Console – FileCloud's unified device management console simplifies the management of mobile devices that are enabled to access enterprise data, irrespective of whether a device is enterprise owned or employee owned, and regardless of mobile platform or device type. Manage and control thousands of iOS and Android devices from FileCloud's secure, browser-based dashboard. FileCloud's administrator console is intuitive and requires no training or dedicated staff. FileCloud's MDM works on any vendor's network — even if the managed devices are on the road, at a café, or used at home.

Device Commands and Messaging – The ability to send on-demand messages to any device connecting to FileCloud gives administrators a powerful tool to interact with the enterprise workforce. Any information on security threats or access violations can easily be conveyed to mobile users. And, above all, the messages carry no SMS cost.

Multi-Tenancy Support – The multi-tenancy feature allows Managed Service Providers (MSPs) to serve multiple customers using a single instance of FileCloud. The key value proposition of FileCloud's multi-tenant architecture is that it provides multi-tenancy while maintaining data separation among tenants. Moreover, every tenant has the flexibility of customized branding. MSPs interested in becoming FileCloud partners can click here.

Customization & Branding – FileCloud can be customized extensively to reflect your brand. Some of the customizations include logos, labels, email templates, UI messages and terms of service. Novell Filr's customization, by contrast, is very limited.

Amazon S3/OpenStack Support – Enterprises wanting to use Amazon S3 or OpenStack storage can easily set it up with FileCloud. This feature not only gives enterprises the flexibility to switch storage backends, but also makes the switch very easy.

Conclusion

Based on our experience, enterprises that look for an EFSS solution want three main things. One, easy integration with their existing storage system without any disruption to access permissions or network home folders. Two, the ability to easily expand integration into highly available storage systems such as OpenStack or Amazon S3. Three, the ability to truly customize their self-hosted EFSS solution with their company branding.

Novell Filr provides neither OpenStack/Amazon S3 storage integration support nor extensive customization/branding capability. On the other hand, FileCloud provides easy integration with Amazon S3/OpenStack and extensive customization/branding capabilities.

Here’s a comprehensive comparison that shows why FileCloud stands out as the best EFSS solution.

Try FileCloud For Free & Receive 5% Discount

Take a tour of FileCloud

Virtual Machines vs Containers: Are Containers Replacing Virtual Machines?

container as service vs. VM


Are virtual machine users actually shifting to container technology? Will containers eventually replace virtual machines?

To definitively answer this question, it’s critical to first comprehend how both technologies affect servers.

Virtual Machines: Just as its name suggests, a virtual machine is a physical hardware abstraction with a complete server hardware stack, from virtualized CPU to virtualized storage, network adapters and BIOS. Of course all these virtualized resources are managed by an operating system, which generally boots faster compared to a standard physical server.

Containers: Containers work on a much smaller scale compared to virtual machines. The abstraction is done on the operating system, contrary to the entire hardware stack abstraction on virtual machines. They consequently utilize fewer resources compared to virtual machines and allow users to pack and run multiple applications on a single server. Of course they may seem like much improved server abstraction technologies, but are they actually an alternative to virtual machines?

Why Containers are Seemingly Overtaking Virtual Machines

Containers are largely considered more effective than virtual machines because of their system resource efficiency. While virtual machines dedicate a full server's worth of resources to running even simple processes, containers zero in only on the resources a process needs. A small portion of the server's resources is dedicated to running a single process, and the rest is freed up to handle other applications. This not only boosts system efficiency, but also allows users to make significant savings on the cost of additional servers for running multiple processes.

Due to the abstraction of the operating system, containers facilitate faster boot up processes compared to virtual machines. A standard virtual machine may take about a minute to boot and verify its resources, while a container achieves this in just a fraction of a second. This makes them particularly ideal for sensitive processes which depend on speed and efficiency.

Contrary to virtual machines, containers allow users to package applications as single-command, registry-stored, singularly addressable components. This simplifies app deployment and makes it less error-prone than deployment on virtual machines.

If you purely compared the two just by these factors, you’d probably be convinced that containers are indeed taking over. However, although significant, such advantages are only a drop in the ocean of cloud computing. There are other additional factors which if assessed critically, potentially place virtual machines higher than containers.

Why Virtual Machines May Be Here to Stay

Security is undoubtedly one of the prime cloud computing concerns. Containers, unfortunately, are significantly disadvantaged compared to virtual machines when it comes to this. As a Red Hat senior security engineer puts it, "containers simply do not contain". Their technology is severely vulnerable compared to virtual machines. Take the example of the most prominent container technology, Docker, which primarily utilizes libcontainer. To work with Linux, libcontainer accesses five namespaces (shared memory, host name, mount, network and process) but leaves out a significant number of vital Linux kernel subsystems, including file systems under /sys, cgroups and SELinux.

With such a vulnerability, any user with SuperUser privileges could easily crack the host operating system. All a hacker needs to do is break into an account with such privileges, or escalate an account to SuperUser level.

Fortunately for container users, all is not lost. Although you'll break a sweat, there are ways of getting around such vulnerabilities to secure your system. Among the top remedies: configure the network namespace to connect only with particular private intranets, restrict container processes to writing only container-specific file systems, mount /sys file systems as read-only, and so on. Overall, these measures treat containers as server applications in order to secure them.

The fact that many containerized applications are available online introduces another security risk. A significant number of these come embedded with malware which launches immediately after installation, potentially harming your entire system.

The Actual State of Affairs

Since each technology has its own set of advantages and disadvantages, it’s safe to conclude that both are here to stay. Although people are fairly excited about containers, they’ll never fully replace virtual machines particularly because each has distinct purposes. If you’d want to run several applications and consequently need increased flexibility, you’d rather leverage a virtual machine. If, on the other hand, you plan to execute several copies of an application, you’d be better off with containers.

Additionally, if you're comfortable with being locked to a single operating system, you should consider using containers; they usually restrict users to specific operating system versions. If you'd prefer flexibility in terms of your operating system, you'd be better served by a virtual machine, which is compatible with any operating system.

If you need a little of both, you'd do well to use a hybrid system of containers and virtual machines. This setup is particularly popular in organizations since it grants the benefits of both. The only challenge is managing both architectures within a single infrastructure; fortunately, there are solutions like Stratoscale which allow enterprises to achieve this efficiently.

With containers-VM collaborations producing promising results, experts predict that they’ll progressively grow and combine to form a cloud portability nirvana. Containers are therefore not replacing virtual machines, but rather complementing them.

Author: Davis Porter

 Image Courtesy: Master isolated images, freedigitalphotos.net

Introduction to Containers-as-a-Service

container as service

Recently, there has been a lot of buzz regarding containers in the cloud world. If you’re in the IT world, you’ve probably witnessed all the excitement regarding this new technology. A significant number of people feel that it will entirely revolutionize how operating systems interact with both hardware and software in the cloud. Others feel that it’s exactly what the cloud has needed all along to unlock its full potential.

So, what exactly is Containers-as-a-Service? How is it being leveraged in the cloud? Do its benefits warrant the buzz and excitement?

To comprehend the whole concept, you need to first define its roots…

Where it All Started

Over the years, the IT industry has enjoyed a lot of pivotal breakthroughs, all aimed at improving performance and service delivery. A significant number of these breakthroughs in the last 10 years have been in virtualization, with each new technology geared at reducing time to value and boosting overall resource utilization. The public cloud, along with API-based administration and multi-tenancy, fueled the improvement of these core goals, and with time, users were able to effectively utilize single cores out of physical machines in their processes. As much as this was largely perceived as 'efficient', it created one significant problem: entire servers were virtualized even when executing simple processes. Could virtualization be broken down further to grant users exactly the resources they needed without virtualizing entire machines?

Fortunately, motivated to come up with cheaper, faster software that could execute tasks at a much smaller scale, Google took up the challenge. They rallied their teams to abstract further and enable finer-grained control. To implement this, they built cgroups, added them to the Linux kernel, and used them to develop smaller, separate execution environments called containers. These were basically simplified, virtualized operating system environments, which Google used to power all of its applications.

In a couple of years, the technology was picked up by Docker, who additionally developed an interoperable format for containerized applications. Google is therefore the brains behind CaaS, while Docker developed it into a much more adaptable format.

The Linkage with PaaS and IaaS

CaaS has introduced a whole new perspective by forming an intermediate layer between PaaS and IaaS, consequently changing the historical order of, and interaction between, the two.

Infrastructure as a Service has primarily been aimed at granting users access to flexible raw assets. Platform as a Service, on the other hand, gives locked-down experiences optimized for special use cases. Together with the operating system, they form the three logical server layers. While the former represents hardware assets, both physical and virtual, the latter delivers the application runtime. In simple terms, IaaS users get NICs, hard drives, CPUs and RAM, while PaaS is centered on managed environments for Python, Ruby, Java, etc.

So, what do you do when you need a generic framework to efficiently handle processes at different scales? That's where CaaS comes in. Where PaaS delivers the process runtime and IaaS provides the underlying hardware, CaaS merges the two to grant you a flexible platform.

The Prime Benefits

CaaS has generated buzz because of its significant benefits, especially its increased efficiency compared to hypervisors in system-resource terms. It achieves this efficiency by stripping away unnecessary overhead, leaving just the small portion of resources your application actually needs to run comfortably. The remaining hardware resources are freed up for other simultaneous processes. Consequently, users utilize their servers more efficiently, running 4-6 times the number of applications compared to virtual machines.
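The density claim above can be sketched with some back-of-the-envelope arithmetic. All figures below are illustrative assumptions, not measurements: the idea is simply that a hypervisor VM carries a full guest OS in memory, while a container shares the host kernel and adds only a sliver of overhead.

```python
# Back-of-the-envelope sketch of VM vs. container density.
# Every number here is an illustrative assumption, not a measurement.
SERVER_RAM_GB = 64.0

def instances_per_server(app_ram_gb, per_instance_overhead_gb):
    """How many instances of one app fit in the server's RAM."""
    return int(SERVER_RAM_GB // (app_ram_gb + per_instance_overhead_gb))

# Assumed figures: a 0.5 GB app, ~2 GB guest-OS overhead per VM,
# ~0.1 GB runtime overhead per container.
vm_count = instances_per_server(0.5, 2.0)
container_count = instances_per_server(0.5, 0.1)

print(vm_count, container_count)  # container density is several times higher
```

With these assumed overheads, the same server hosts roughly four times as many containerized instances as VM-based ones, which is consistent with the 4-6x range cited above.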

Secondly, CaaS has greatly simplified the deployment of apps by packaging them as registry-stored, singularly addressable components that can be deployed with a single command. What makes this even better is that deployment can be executed remotely from anywhere.
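A minimal sketch of what "one-command, registry-stored" deployment looks like in practice: the registry host, image name and tag below are hypothetical placeholders, and the function only composes the command rather than running it, leaving remote execution (for example, pointing the Docker CLI at another machine) to the operator.

```python
# Sketch: a container app is addressed by a single registry-stored
# reference, so deployment reduces to composing one command.
def deploy_command(registry, image, tag, host_port, container_port):
    ref = f"{registry}/{image}:{tag}"  # singularly addressable component
    return ["docker", "run", "-d",
            "-p", f"{host_port}:{container_port}", ref]

# Hypothetical registry/image/tag for illustration only.
cmd = deploy_command("registry.example.com", "acme/webapp", "1.4.2", 8080, 80)
print(" ".join(cmd))
```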

The abstraction of the operating system through CaaS has also considerably shortened the boot process. Instead of waiting a minute or so for an entire machine to boot, your resources are available in roughly a twentieth of a second. This fundamentally improves process efficiency and speed.

Implications

CaaS has materially impacted open source software by improving composability. It eliminates a considerable amount of boilerplate, specialized, error-prone work, lowering risk by housing compact, scripted applications inside containers. Without them, developers would have to dedicate significant time and resources to installing and configuring nginx, node.js, RabbitMQ, GlusterFS, Hadoop, MongoDB, memcached, MySQL and the like on individual boxes to provide platforms for their applications.

Another core implication is cost savings during testing. On a standard virtual machine, a test is usually billed for a minimum of ten minutes to an hour of compute time. For a simple, single test this translates to a very low cost. The problem comes when you regularly run hundreds or even thousands of tests, since costs then shoot up severely. With containers, by contrast, you can run thousands of tests simultaneously on the same server, so the cost of multiple tests remains roughly equal to that of a single one.
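The cost argument above can be made concrete with a toy model. All rates here are hypothetical assumptions: a VM-based runner bills a minimum block of compute time per test, while a container host is billed once and can run many tests concurrently.

```python
import math

# Illustrative cost model; every rate below is a hypothetical assumption.
VM_MIN_BILLED_MINUTES = 10     # assumed minimum billed block per VM test
VM_RATE_PER_MINUTE = 0.01      # assumed $/minute for a small VM
HOST_HOURLY_RATE = 0.60        # assumed $/hour for one container host

def vm_test_cost(num_tests):
    # Each test pays for at least the minimum billed block.
    return num_tests * VM_MIN_BILLED_MINUTES * VM_RATE_PER_MINUTE

def container_test_cost(num_tests, tests_per_host=1000):
    # Tests share hosts; cost scales with hosts, not with tests.
    hosts = math.ceil(num_tests / tests_per_host)
    return hosts * HOST_HOURLY_RATE

print(vm_test_cost(1), container_test_cost(1))        # both negligible
print(vm_test_cost(1000), container_test_cost(1000))  # VM cost diverges at scale
```

Under these assumed rates a single test is cheap either way, but a thousand VM-billed tests cost orders of magnitude more than the same thousand tests packed onto one container host.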

Finally, CaaS has powered faster, more efficient development by letting users run several containers on one computer. Although it is possible to maintain several virtual machines on one computer, their number is always just a fraction of the number of containers the same machine can handle.

 

Author: Davis Porter

Image courtesy: Stuart Miles, freedigitalphotos.net

 

Learn more about FileCloud

5 Elements You Should Be Keen On When Selecting a Managed Service Provider

Managed services are increasingly being implemented to support critical operations in businesses and organizations. Businesses in the United States alone currently spend more than $13 billion of their IT budgets on these strategies to reduce overall costs and improve service delivery. The benefits of managed services are widely acknowledged by business owners, IT architects and CIOs, fueling continued growth which has seen the rate climb to 22% over the past couple of years. In fact, the growth rate is expected to escalate further, to 58%, by 2018.

Because of the excitement and buzz surrounding managed services, many organizations are now rushing to jump on the bandwagon and implement them in their operations. This, unfortunately, leads to many errors which could potentially reduce the efficacy of the entire managed services framework. To prevent such a scenario, you should implement your services strategically, executing your migration plan in stages.

The first stage, of course, is selecting a stable managed service provider. With a market of thousands of vendors managing different services, you have a wide pool to pick from. To discern the most effective from the rest, here are five key elements you should critically assess:

Comprehensive Solutions

Businesses usually depend on a set of collaborative processes to sustain workflow and maintain normal operations. You should therefore focus on holistic vendors with a wide range of services to support each of your business IT processes. A standard comprehensive suite of solutions includes application management, end-user computing, co-location, storage, virtual infrastructure capabilities, and more. These should also be deliverable both as flexible multiple systems and as a single uniform platform, to ensure compatibility with various business IT architectures.

Choosing a comprehensive solutions provider not only grants you access to a wide range of services, but also gives your business the flexibility to scale your services according to emerging trends and needs. You could even experiment with various services to evaluate their suitability within your information technology framework.

Customer Support

Each business is unique and requires special attention to efficiently sustain its processes and service delivery. It is not advisable to go for a large-scale service provider who may not deliver the requisite flexible, customer-centric support. A small-scale MSP, on the other hand, may lack the technology and expertise to offer comprehensive support. A medium-sized service provider is therefore the most widely preferred choice for businesses seeking an MSP with the relevant technology and expertise to deliver timely, personalized services.

Your MSP should be easily reachable and fittingly responsive, especially on major technological issues affecting your business. The professional relationship built through such interactions should ultimately create a platform for you to receive additional advice on various IT issues, including implementation strategies for suitable solutions. MSPs interested in becoming FileCloud partners can click here.

Security

Security is the single biggest concern for most managed service users. According to IDG’s publication titled ComputerWorld Forecast Study 2015, CIOs are expected to increase their security spending by 46% in a bid to boost their technologies against increasing security threats. Managed services, among other IT architectures, are a primary focus because of their multi-dimensional accessibility, which makes them particularly vulnerable to hackers and malware.

To avoid becoming a security statistic and potentially risking the well-being of your business, you should keenly evaluate the security features installed by various vendors. A good system is supported by a robust security framework with dedicated competency and proactive intrusion detection across all of the system's components. For maximum protection, the vendor should additionally have a dedicated team of security experts monitoring all channels and optimizing system security both virtually and physically. In case of a data breach, disaster management should automatically kick in to restore critical operations and track lost data to avert further damage.

Globalization

Apart from increased convenience and reduced operating costs, one of the primary reasons for adopting managed services is flexibility, and that includes the expansion and growth of a business. As your business grows, you may need to outsource some operations beyond borders or establish overseas branches. The possibility of this, of course, depends on the globalization features of your managed service provider.

Your vendor should have an operational fabric and expansive infrastructure that operates effectively across borders. This grants you unlimited access to your applications across geographies with stable internet connectivity. Additionally, the vendor's infrastructure should be self-sufficient enough to allow switches between servers in case of system failures, giving you uninterrupted access to your services regardless of any technical difficulties experienced by your provider.

Customization

Finally, you should only consider vendors who can deliver customized solutions tailored to your business goals, needs and best practices. Since customization is best effected from the design phase, your vendor should have the requisite expertise and technology to implement custom features across your entire IT architecture according to your individual business needs.

Great customization capabilities also come in handy when migrating your processes from your internal system. An efficient vendor should execute the process seamlessly by combining excellent onboarding features with customer support to evaluate your physical machines and subsequently virtualize them. This will not only save you the headache of migrating an already expansive system, but also the costs of hiring an external team of IT architects.

As you make your final choice of MSP to entrust your business with, go through a couple of user reviews to understand what each company offers its customers. A simple statement from a past customer may ultimately save you the headaches and disappointments of working with an ineffective provider.

 

Author: Davis Porter

Image courtesy: vectorolie, freedigitalphotos.net

Future Predicted Trends in Managed Services

MSP Trends

To many observers and IT professionals, managed services appear to be peaking in 2015. Already, more than 60% of both large and small businesses have integrated some type of managed service into their overall IT strategies. 30% of service providers have experienced increased service use over the last couple of years in payment processing, mobile device management, managed communications, Office 365, Software-as-a-Service, Hardware-as-a-Service, managed network security, remote monitoring and disaster management. This has subsequently increased their profits by 25-100%.

Although many businesses are already adopting managed service solutions, the most intriguing fact is that they are barely leveraging them. According to a CompTIA report titled Trends in Managed Services Operations, the primary reason for this is "uncertainty" surrounding the cloud. Most users choose to advance sparingly by adopting hybrid cloud strategies, with external services constituting just a small portion. A lot more is expected to unfold in the managed services market in the future. Here are the top predicted trends:

Managed Services Will Move Beyond Price Wars

The managed services market is currently dominated largely by price: providers are consistently engaged in price wars, since consumers use price as a critical judgment factor when choosing services. In fact, many consumers have chosen a provider solely because of "affordable" or "cheap" packages compared to the rest.

Of course, this trend is negatively affecting the market because some consumers are blindly attracted to "free" but substandard packages. On the flip side, however, it is encouraging aggressive competition between service providers, consequently reducing the cost of adopting managed services.

Although it’s expected to play out for some time, consumers will soon start breaking away from the price battles and start focusing more on quality. They’ll keenly assess the actual specs, features and benefits of individual packages rather than the price. Ultimately, service providers will be forced to shift from price to quality-centered battles to attract more consumers.

Increased Application Portability

Today, the average managed service user has a computer at work, a laptop at home, a smartphone and probably a tablet. With managed services basking in the glory of portability benefits, providers are expected to continue developing products compatible with all these devices. In addition to addressing portability issues, providers will concentrate on synchronizing their services and application state across a wide range of devices at the same time.

Over time, applications will not only be compatible with multiple devices, but will also run simultaneously across different platforms. Through this enhanced experience, users will switch freely between devices and continue utilizing more managed resources.

Optimized IoT

According to Gartner Research, the Internet of Things is already gaining momentum and will be a significant device feature in the near future. Currently, there are more than 4.9 billion Internet-of-Things-enabled devices, and the number is expected to grow by 30% yearly. The biggest beneficiaries, of course, are consumers, thanks to the ability to network their managed services across their devices.

Unfortunately, despite this excitement, there is a downside: security. According to Experian Data Breach Resolution, hackers are expected to pounce on IoT network vulnerabilities once the technology goes mainstream, as it offers new, unexploited channels for hacking into managed service architectures.

To reduce the risk, consumers will of course depend on MSPs who offer innovative, secure services in an industry which is consistently facing increasing data security threats. Therefore, to reap the rewards of an expanding market base, MSPs will have to invest heavily in security, data protection and disaster recovery.

Use of APIs and Integration Tools

Over the years, MSPs have been doing extensive research and development on their products to boost their features and extend their platforms to accommodate additional customers. This has proven fairly effective at attracting customers, but also quite expensive and cumbersome: it takes a lot of research and development to review architectures and adjust infrastructures to suit different customer needs.

Fortunately, MSPs have now found an easier way: skipping much of the R&D hard work and opting for APIs. This trend is expected to pick up momentum in the near future, with MSPs pursuing third-party integration tools as a better, more effective and less costly method of expanding their platforms. Consequently, vendors, ISVs and customers will simply integrate the services into their architectures without extensive development. This will see managed services expand faster and grow in popularity, particularly among small and medium-sized businesses.

With many more exciting trends expected in the future, these are just the tip of the iceberg. Predictably, according to the report "Managed Services Market - Global Forecast to 2019", these trends and technologies will drive the global market to a value of $193.34 billion by 2019, with North America being the biggest consumer, followed by Asia Pacific. From an estimated 2014 global value of $107.17 billion, activity in the managed services market over the subsequent five years will see it grow at a compound rate of 12.5% per annum. With such a promising growth rate, managed service providers should buckle up for the future and stay on the lookout for developing technologies and strategies to attract the ever-expanding customer base.
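The forecast figures quoted above hang together arithmetically, which is easy to verify: a $107.17 billion market compounding at 12.5% per year for five years lands very close to the forecast's $193.34 billion.

```python
# Sanity check of the forecast figures: $107.17B in 2014 compounding
# at a 12.5% CAGR through 2019.
base_2014 = 107.17   # $ billions (2014 estimate)
cagr = 0.125
years = 5

projected_2019 = base_2014 * (1 + cagr) ** years
print(round(projected_2019, 2))  # within a fraction of a percent of $193.34B
```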

Author: Davis Porter

Image Courtesy: 1shots, freedigitalphotos.net

A Game of Stacks : OpenStack vs. CloudStack

Several organizations are now investing in cloud computing because they have realized it can promote rapid growth while reducing the cost and time of application deployment. Enterprises no longer need to carry the heavy burden of maintaining computing resources that are used periodically and left idle most of the time. However, even as the hype around cloud computing continues to grow, numerous cloud-related issues remain a source of debate and controversy, especially at the enterprise level. A good example is the CloudStack vs. OpenStack debate.

Open source is usually praised amongst IT professionals mainly because it provides an IT environment with a large community of support. Consumers also love it because it frees them from licensing costs while providing both flexibility and customization. When it comes to open source Infrastructure as a Service (IaaS), there are two key players: OpenStack and CloudStack.

Both CloudStack and OpenStack are open source software platforms for IaaS that offer cloud orchestration architectures used to make the management of cloud computing easier and more efficient. This open source cloud squabble began when Citrix, a former OpenStack supporter, announced that it was going to re-establish its own cloud stack under the Apache Foundation. The ensuing battle between the two is of a strategic nature, with both trying to become the open source IaaS stack most used for building enterprise private clouds.

One thing remains certain: open source cloud platforms are popular for the same reasons Linux took hold, namely a low cost of entry and the prospect of application portability. The only way to gauge which platform is likely to win this game of stacks is to take a closer look at both.

 


 

CloudStack

CloudStack is quickly gaining momentum among organizations. Initially developed by Cloud.com, CloudStack was purchased by Citrix and later released into the Apache Incubator program. It is now governed by the Apache Software Foundation and supported by Citrix. Since the Apache transition, other vendors have joined the effort, enhancing and adding capabilities to the core software. The first stable version of CloudStack was released in 2013.

The Good 

  1. Unique Features: The latest version of CloudStack includes commendable features such as storage-independent compute and new security features that let admins create security zones across different regions. Its features are built for day-to-day usability and resource availability.
  2. Smooth Deployment: The installation of CloudStack is quite streamlined. In a normal setup, one VM runs the CloudStack management server while another VM acts as the de facto cloud infrastructure. From a deployment and testing perspective, the whole platform can be run on a single physical host.
  3. Scalability: CloudStack has been designed for centralized management and massive scalability; enabling the effective management of numerous geographically distributed servers from a single portal.
  4. Multi-Hypervisor Support: The CloudStack software supports multiple hypervisors, including Citrix XenServer, Oracle VM, KVM and VMware vSphere. On top of that, CloudStack also supports a variety of networking models, such as flat networks, VLANs and OpenFlow.
  5. Detailed Documentation: The CloudStack documentation is well structured and one can easily follow it and eventually get something that works.
  6. Interactive Web UI: CloudStack has a polished, advanced web interface that makes it more user-friendly.

The Bad

  1. Rigid Installation Process and Architecture: CloudStack's monolithic architecture has posed some challenges, one of them being reduced installation flexibility. In some cases, additional knowledge may be required to install it.

The Ugly

  1. Community Support: Since CloudStack is relatively new in the open source IaaS space, it lacks a large community support base and is not backed as heavily by the industry. However, this is likely to change, considering that CloudStack is a refined product with heavy user adoption.


OpenStack

OpenStack is an open source IaaS initiative for creating and managing huge groups of virtual private servers in a cloud computing environment. It was initially developed by Rackspace and NASA. With upwards of 200 companies adopting the platform, it is definitely one of the most popular cloud models out there. OpenStack's main goal is to support interoperability between cloud services while enabling enterprises to create Amazon-like cloud services in their own data centers.

It is currently under the management of the OpenStack Foundation and is freely available under the Apache 2.0 license. OpenStack consists of a variety of interrelated parts that are all tied together to create the OpenStack delivery model. Its popularity has earned OpenStack the title of "the Linux of the cloud".

The Good

  1. Hypervisor Support: OpenStack provides support for Xen and KVM, with limited support for VMware ESX, Citrix XenServer and Microsoft Hyper-V. It does not support bare-metal servers or Oracle VM.
  2. Wide Integration with Storage and Compute Technologies: Persistent storage is provided using OpenStack Object Storage to manage the local disks on compute node clusters. A variety of machine image types, such as OVF, VMDK, VDI, VHD and Raw, are managed via the OpenStack Image Service.
  3. Enhanced Networking Capabilities: OpenStack has a networking component (Neutron) that integrates directly with OpenFlow and allows higher levels of cloud scaling and multi-tenancy by bringing a variety of software-defined networking technologies into the cloud. Additionally, the OpenStack networking framework contains services like load balancing, intrusion detection (IDS) and firewall technologies. All of these features make OpenStack a platform capable of great failover and resilience.
  4. Large Community Support: OpenStack is without doubt the most mature stack-based cloud control model. It has the backing of large industry players like Dell, HP, and IBM alongside a long list of contributors.

The Bad

  1. Difficult to Configure and Deploy: Since OpenStack is deployed through several distinct incubator projects, expertise and time are required to get it up and running. Admins have noted that several key components have to be managed from different command-line consoles. OpenStack has eight modular components: Compute, Image service, Identity service, Dashboard, Networking, Block storage, Object storage and Amazon Web Services compatibility. To some, this amounts to a slightly fragmented architecture; however, the upside of having several modular components is that users can choose which features and projects they require.
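For readers mapping the modular components above onto OpenStack's familiar project code names, a rough lookup (reflecting the project names of the era this post describes) might read as follows; the "minimal" selection at the end is just one hypothetical example of picking only the projects a deployment needs.

```python
# OpenStack's modular components, keyed to their well-known project
# code names (the AWS entry refers to the EC2-compatibility API).
components = {
    "Compute": "Nova",
    "Image service": "Glance",
    "Identity service": "Keystone",
    "Dashboard": "Horizon",
    "Networking": "Neutron",
    "Block storage": "Cinder",
    "Object storage": "Swift",
    "AWS compatibility": "EC2 API",
}

# Example: a minimal deployment might take only identity, images and compute.
minimal = [components[k] for k in ("Identity service", "Image service", "Compute")]
print(minimal)
```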

The Ugly

  1. Not Enterprise-Ready: One of the major downsides of OpenStack is that it has not been packaged for the enterprise; however, the situation is likely to change considering its large number of contributors.

Author: Gabriel Lando