Archive for the ‘Admin Tools and Tips’ Category

Working from Home and the Threat to Business Continuity


In January 2020 the World Health Organization declared the outbreak of a new coronavirus disease to be a Public Health Emergency of International Concern. As the epidemic’s full implications became apparent, governments across the world began issuing stay-at-home orders, lockdowns, and travel restrictions in a bid to halt the spread of the COVID-19 coronavirus and prevent public health systems from being overloaded with patients affected by the disease. This forced many organizations and institutions to rethink their mode of operations. The result was remote work becoming a requirement for many, sometimes overnight.

With the ubiquity of mobile devices in this digital age and most business applications being available via cloud services, migrating workloads from the office to home should be a relatively simple process. However, very few organizations were prepared for large-scale remote work.

For most IT teams, the challenge lies in ensuring their IT infrastructure can handle most, if not all their employees working remotely. But for the sake of the health and safety of their workforce, it’s a challenge they have to rise to while ensuring the continuity of their business operations during an unprecedented pandemic situation that has affected everyone across the globe.

Remote Working is Not a New Phenomenon

Remote work is an already attractive option for employees who prefer greater flexibility. It completely eliminates commuting time, a major benefit for workers with familial obligations; and as the workforce segment that supports aging family members continues to grow, the demand for flexible working arrangements will only rise. However, while remote work is increasingly demanded by workers and facilitated by technology, according to Gartner, most enterprises (93%) defer to supervisors to decide who works from home and when they do. But due in part to an innate lack of trust, only 56 percent of supervisors actually allow their employees to work from home, even if there are supporting company policies in place to facilitate it.

Enterprises that have already invested in the cloud from an infrastructure perspective, or largely rely on Software-as-a-Service (SaaS) apps are naturally at lower risk of experiencing technical difficulties during this time. The inescapable use of remote work for business continuity should signal to all enterprises that it is time to revisit their remote work policies and redesign them for robust use.

While the shift to remote work has accelerated, we believe that employees and organizations will begin to see the advantages of remote work and become better adapted to it. Though the coronavirus disease has certainly affected work patterns in the short term, there will almost inarguably be far-reaching global technology implications, including increased demand for solutions such as Virtual Desktop Infrastructure.

Infrastructure is Key to WFH Productivity

A lot of the technology currently used to facilitate remote working has been around for decades. For most organizations, the underlying software, hardware, and support infrastructure have been designed to accommodate a small subset of the workforce. Performance, reliability, security, and the availability of applications and data are crucial. With the number of new at-home students and workers in the tens of millions, organizations are rushing to shore up their infrastructure to support new users while ensuring basic performance expectations are met.

Apart from the clear-cut tasks of procuring the correct tools and licenses and deploying them quickly across a broad workforce, IT teams must also make sure that their colleagues can reliably utilize those resources. This includes access to basic apps like email, file sync, and sharing, via remote desktop access and other virtual desktop infrastructure.

Naturally, unanticipated stress has been put on remote working technologies, leading to security and bandwidth concerns. Whether or not existing on-premise enterprise setups are able to cope with the sudden, yet prolonged increase in users trying to access the organization’s applications remotely, solely rests on the quality of the network connections available.

Right off the bat, most businesses tried to assess how much capacity they would require by running one-day tests. Agencies like NASA ran remote networking stress tests to understand the impact that adding thousands of new remote workers would have on their networking capacity. Teleworkers have to connect to their data repositories, applications, and other offerings to maintain business continuity. The effective transmission reliability and throughput of a VPN in combination with the internet can easily become a bottleneck.

A VPN is Not the Answer to All Your Problems

Software-as-a-service (SaaS) cloud is becoming mainstream among enterprises. Several applications now run in the cloud, making it easier than ever to leverage and acquire those apps to make enterprises more efficient and agile. The cloud has given rise to a world of mobility where workers can be productive from anywhere since access to applications is no longer tied to physically being in the office. With the coronavirus outbreak triggering a sudden influx in the need for work-from-home precautions, the need for easily accessible applications has never been more apparent.

As it stands, typical enterprise infrastructure involves applications that are hosted and installed within the office, and are only accessible from the confines of the office network. Most file servers are hosted on either Windows or Linux, with the two main types of file systems being NFS (Network File System) on UNIX/Linux and CIFS (Common Internet File System) on Windows. The vast majority of organizations that have been in operation for a while typically use CIFS or NFS file systems. NFS allows remote hosting of resources over a network by mounting file systems, facilitating interaction as if they were local.

CIFS is also cross-platform, and folders can be shared over a local network or across the internet. Accessing files on a CIFS network over a VPN via a mobile network is possible, but the connection can be patchy, and access to client applications will likely be limited and extremely slow. Whenever a remote worker needs to access an enterprise application or document, they must first turn on the VPN application, which in turn grants them access to information sitting within the company network. For VPN applications to work optimally, they require adequate network throughput and VPN hardware capacity. While this is easy to achieve on systems built for about 20-30 percent of employees working remotely, it can easily be overloaded when that number skyrockets, regardless of how many licenses are available for the VPN application.
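To make the two share types concrete, here is a minimal sketch of how each is mounted on a Linux client. The hostnames, export paths, and usernames are hypothetical, and both commands require root privileges, a reachable file server, and (for CIFS) the cifs-utils package; this is an illustrative configuration fragment, not a ready-to-run script.

```shell
# Mount an NFS export from a (hypothetical) Linux/UNIX file server
sudo mount -t nfs fileserver.corp.example.com:/export/projects /mnt/projects

# Mount a CIFS/SMB share from a (hypothetical) Windows file server;
# the password is prompted for, or can be supplied via a credentials file
sudo mount -t cifs //winserver.corp.example.com/Projects /mnt/projects \
    -o username=jdoe,domain=CORP

# To make the NFS mount persist across reboots, add a line like this to /etc/fstab:
# fileserver.corp.example.com:/export/projects  /mnt/projects  nfs  defaults  0 0
```

Over a VPN, both protocols behave as described above: the mount itself works, but every file operation traverses the tunnel, which is where the latency and throughput problems appear.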

Overloaded VPN concentrators may require significant additional hardware and rule management. However, VPN hardware is not the type of thing you can easily pick up at your local Best Buy; it calls for a rigorous procurement and installation process that could end up taking weeks. The VPNs that organizations typically rely on to manage and access critical files have multiple limitations. With full office closures happening overnight and workers being asked to work from home, several of those workers are going to turn on their VPN to access business files and applications, only to discover that their connections are down. IT support staff should brace themselves for a flood of calls from frustrated colleagues trying to navigate the nuances of remote connectivity, and support issues will be hard to diagnose remotely because they involve third-party hardware and networks.

Luckily, several apps that once had to be hosted within an office network have since transitioned to the cloud. Productivity applications like Google G-Suite and Microsoft Office 365, and communication platforms like Slack, have seemingly eliminated the need for VPN applications. Sadly, VPNs are still central to how several enterprise workers access their files and applications. And an overnight migration to a public cloud architecture is not a practical solution for them.

Making the Most Out of What You Have

In the event user or performance issues arise, there are steps that IT teams can take to reduce the load on networks and various on-premise IT resources. Managing employees’ expectations is important, and making them aware of possible degradation in the performance of the services and applications they rely on to complete their tasks may lessen the amount of stress the IT department is under.

To ensure workers remain productive and business continuity is not stifled, IT teams must find a way of enabling mobile access and file sync for data that lives behind a corporate firewall, without the need for a VPN and without re-configuring permissions, whilst utilizing existing LDAP or Active Directory authentication. Several enterprises have heavily invested in scalable Network Attached Storage solutions like EMC Isilon and NetApp Filer. Solutions like NetApp provide low-latency access to files as network shares via a WLAN or LAN, but they can still be used as part of a remote-work infrastructure with virtual desktop infrastructure applications.

There is no one-size-fits-all software, infrastructure, or bandwidth purchase that an organization can make to solve this problem using legacy approaches, including VPNs. Fortunately, the solution is simple: one that allows enterprise data to sit securely on-premise at the office, yet still be accessible to users via the cloud rather than through a VPN. FileCloud can help remote workers collaborate, access, and share files securely with ease by mapping existing file servers on EMC Isilon and NetApp Filer as network folders and instantly making them available on networks outside the office. FileCloud easily integrates with existing Active Directory, NTFS file permissions, and network shares, giving employees low-latency access to large files without having to recreate complex permissions.


Author: Gabriel Lando

Are System Admins Obsolete as Everyone Is Moving to Serverless Infra?


With everything going to the cloud and serverless infrastructure, is the sysadmin occupation becoming obsolete? What can sysadmins do to stay relevant in IT?

System administration roles are diversifying into system engineers, application engineers, DevOps engineers, virtualization engineers, release engineers, cloud engineers, etc. Because of the scale of cloud computing and the additional layer of virtualization, infrastructure engineering is managed as code using automation tools such as Chef and Puppet. The rise of computing and analytics has brought tremendous elasticity and stress to back-end infrastructure through distributed computing frameworks such as Hadoop, Splunk, etc. Applications are scaling horizontally and vertically across data centers. The emergence of the cloud has shifted the traditional role of the system admin toward that of the cloud engineer, but infrastructure design and basic system services such as mail servers, DNS, and DHCP remain intact.


  • Learn Linux

If you want to make your career as a Linux system administrator, you need to learn the basics of Linux along with hands-on practice. I would recommend the full Red Hat Certified System Administrator course; the videos are available on YouTube and via torrents as well. RHCSA is an entry-level certification that focuses on actual competencies in system administration, including installing and configuring a Red Hat Enterprise Linux system and attaching it to a live network running network services.

  • Get comfortable with scripting language & automation tools

Use Bash for everyday scripting: putting things in cron, parsing logs. Don’t limit yourself to Bash by itself; you will want to learn a little sed and awk, and focus a lot on regular expressions. Regular expressions can be used in most languages.
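As a small, self-contained illustration of that everyday shell-plus-awk workflow, the snippet below builds a made-up log file, filters it with a regular expression, and summarizes it with awk (the log contents and path are invented for the example):

```shell
#!/bin/sh
# Create a tiny sample log to work against (contents are made up).
cat > /tmp/sample.log <<'EOF'
2020-03-01 10:00:01 INFO  vpn: user alice connected
2020-03-01 10:00:05 ERROR vpn: user bob authentication failed
2020-03-01 10:01:12 ERROR vpn: tunnel to 10.0.0.7 timed out
2020-03-01 10:02:30 INFO  vpn: user carol connected
EOF

# Regular expression with grep: keep only the ERROR lines
grep ' ERROR ' /tmp/sample.log

# awk pattern-action: count matching lines and report the total
awk '/ ERROR / { n++ } END { print "errors:", n }' /tmp/sample.log
```

A script like this is exactly the kind of thing you would then drop into cron (for example, a daily `0 6 * * * /usr/local/bin/logreport.sh` crontab entry) once it works interactively.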

After you have spent a few weeks or months with Bash, learn Python. After a few weeks with Python, you will easily see where it makes sense to use Bash versus Python.

Perl is a good general-purpose language to use if you deal with a lot of files or platform-independent sysadmin automation, including Solaris and AIX. It’s a bit hard to learn but easy to use.

Some important automation tools for system admins are:

  1. WPKG – An automated software deployment, upgrade, and removal program that allows you to build dependency trees of applications. The tool runs in the background and doesn’t need any user interaction. WPKG can be used to automate Windows 8 deployment tasks, so it’s good to have in any toolbox.
  2. AutoHotkey – An open-source scripting language for Microsoft Windows that allows you to create keyboard and mouse macros. One of its most advantageous features is the ability to create stand-alone, fully executable .exe files from any script that can run on other PCs.
  3. Puppet Open Source – Every IT professional has heard about Puppet and how it has captured the market over the last couple of years. This tool allows you to automate your IT infrastructure from acquisition through provisioning and management. The advantages? Scalability and scope!
  • Stay up to date with the current generation of infrastructure standards & practices


  1. Analytical skills: from designing networks and systems to evaluating their performance.
  2. People skills: a network and computer systems administrator interacts with people from all levels of the organization.
  3. Technical know-how: administrators have to work with many kinds of computers and network equipment, so they should be familiar with how to run them.
  4. Quick thinking: an administrator must be very responsive and able to quickly come up with solutions to every problem that pops up.
  5. The ability to multitask: administrators often deal with different kinds of problems on top of what they usually do.



Systems administration will live on under a different title like “Cloud Engineer” and be done differently, probably using automation tools and infrastructure-as-code management and deployment.

Coding, automation, and scripting are all very important skills to have now and for the future.

Ultimately, someone will need to administer the systems and deal with the operations of the tech stack. So yes, it has a future. The type of company varies tremendously; any company could use a sysadmin. It may be an unexciting job of maintaining a local file share and email server, or something challenging like keeping a thousand servers running.

VPN vs VDI vs RDS: Which Remote Access Is Best For You?

As the world slowly moves toward the inevitability of working from home, most organizations are actively exploring remote work options. While they understand the need of the hour considering the coronavirus pandemic, security happens to be a prime consideration. As important as ensuring business continuity is ensuring the safety of organizational data and processes. The goal is virtual digital workspaces, with employees spread across the globe, that manage seamless workflows among them and lead to consistently better user experiences.

However, hackers also thrive during such crises, as they know that many people may (willingly or unwillingly) compromise on safety aspects to meet business needs. But any breach of data can prove to be a costly affair, apart from the loss of reputation, which takes a long time to overcome, if at all. It is important, then, to understand and evaluate the remote work options and choose wisely. The most popular options considered are Virtual Private Network (VPN), Virtual Desktop Infrastructure (VDI), and Remote Desktop Services (RDS).

What is a VPN?

In an online world, a VPN is one of the best ways to ensure the security of your data and applications while working remotely. This is not just about logging in and working securely every day; it also protects you from cyberattacks like identity theft when you are browsing the Internet through it. It is simply an added layer of security through an application that secures your connection to the Internet in general (if using a personal VPN) or to a designated server (if using your organizational VPN).

So, when you try to connect to the Internet through a VPN, your traffic is taken through a virtual, private channel that others do not have access to. This virtual channel (usually a server hosting the application) accesses the Internet on behalf of your computer, masking your identity and location, especially from hackers on the prowl. Many VPN solution providers ensure military-grade encryption and security via this tunnel. The level of encryption usually differs, and individuals and organizations choose what works well for their needs.

VPNs came into being from this very concept: enterprises wanting to protect their data over public as well as private networks. Access to the VPN may be through authentication methods like passwords, certificates, etc. Simply put, it is a virtual point-to-point connection through which the user can access all the resources (for which they have the requisite permissions) of the server or network to which they are allowed to connect. One of the drawbacks is the loss in speed due to the encrypted, routed connections.

What is VDI?

VDI provides endpoint connections to users by creating virtual desktops hosted on a central server. Each user connecting to this server will have access to all resources hosted on the central server, based on the access permissions set for them. Each virtual desktop is configured for an individual user, who will feel as if they are working on a local machine. The endpoint through which the user accesses the VDI can be a desktop, laptop, or even a tablet or smartphone. This means that people can access what they want, even while on the go.

Technically, this is a form of desktop virtualization aimed at providing each user their own Windows-based system. Each user’s virtual desktop exists within a virtual machine (VM) on the central server. Each VM is allocated dedicated resources, which improves both the performance and the security of the connection. The VMs are host-based; hence, multiple instances can exist on the same server, or on a virtual server that is a cluster of multiple servers. Since everything is hosted on the server, there is far less chance of data or identity being stolen or misused. VDI also ensures a consistent user experience across various devices, resulting in a productivity boost.

What is RDS?

Microsoft introduced Terminal Services in Windows Server, and it later came to be known as Remote Desktop Services. What it means is that a user is allowed to connect to a server using a client device and can access the resources on the server. The client accessing the server through a network is a thin client, which need not have anything other than the client software installed. Everything resides on the server, and the user can use their assigned credentials to access, control, and work on the server as if they were working on a local machine. The user is shown the interface of the server and has to log off the ‘virtual machine’ once the work is over. All users connected to the same server share all the resources of the server. RDS can usually be accessed through any device, though working through a PC or laptop provides the best experience. The connections are secure, as the users are working on the server and nothing is local except the client software.

The Pros and Cons of each

When considering these three choices of VPN, VDI, and RDS, many factors come into play. A few of these that need to be taken into account are:

  1. User Experience/Server Interface – In VDI, each user works on their familiar Windows system interface, which increases the comfort factor. Some administrators even allow users to customize their desktop interface to some extent, giving the individual desktop feel most users are accustomed to. This is not the case in RDS, wherein each user of the server is given the same server interface, and resources are shared among them. There is very limited customization available, and mostly all users have the same experience. Users have to make do with the server flavor of Windows rather than the desktop flavor they are used to. The VPN differs from either of these in that it only provides an established point-to-point connection through a tunnel, and processing happens on the client system, as opposed to the other two options.
  2. Cost – If cost happens to be the only consideration, then VPN is a good choice, because users can continue to use their existing devices with minimal add-ons or installations. An employee would be able to securely connect to their corporate network and work safely, without any eavesdropping on the data being shared back and forth. The next option is RDS, the cost of which depends on a few other factors. However, RDS does save cost and time, with increased mobility, scalability, and ease of access, with no compromise on security. VDI is the costliest of the three solutions, as it needs an additional layer of software for implementation; examples of this software are VMware or Citrix, which help run the hosted virtual machines.
  3. Performance – When it comes to performance, VDI is a better solution, especially for those sectors that rely on speed and processing power like the graphics industry. Since the VDI provides dedicated, compartmentalized resources for each user, it is faster and makes for a better performance and user satisfaction. VPN connections, on the other hand, can slow down considerably, especially depending on the Client hardware, the amount of encryption being done, and the quantum of data transfer done. RDS performance falls in between these two options and can be considered satisfactory.
  4. Security – Since it came into being for the sake of ensuring the security of the corporate data when employees work outside the office, VPN does provide the best security in these three remote work options. With VDI and RDS, the onus on ensuring security lies with the administrators of the system, in how they configure and implement the same. But, it is possible to implement stringent measures to ensure reasonably good levels of security.
  5. End-user hardware – Where VDI and RDS are considered, end-user hardware is not of much consequence, except for establishing the connection. In these cases, it is the server hardware that matters, as all processing and storage happen on it. But for VPN connections, end-user hardware configurations are important, as all processing happens on the client after the secure connection is established. VDI offers clients for Windows, Mac, and at times even iPhone and Android. RDS offers clients for Windows and Mac; however, a better experience is delivered on Windows.
  6. Maintenance – VPN systems usually require the least maintenance once the initial setup is done. VDI, however, can prove challenging, as it requires all patches and updates to be reflected across all VMs. RDS needs less maintenance than VDI, but more than VPN systems; at most, RDS will require implementing and maintaining a few patches.

The Summary

Looking at the above inputs, it is obvious that there is no common solution that can be suggested for all businesses. Each enterprise will have to look at its existing setup, the number of employees, the business goals, the need for remote work, and the challenges therein, and then decide which factor needs to be given more weight. If the number of employees is small, perhaps VPN or RDS may be the better way to go. But if your need is better performance, owing to graphics-heavy work, then you must look at the VDI option. VDI may also be the way to go if you have a large number of employees.

System Admin Guide To Dev Ops


DevOps is another buzzword that has made waves in the IT industry in the last few years. Earlier, we used to have system administrators, and traditionally they only configured, monitored, and maintained systems. These included critical systems such as servers, and one of the key SLAs for system administrator performance would be system downtime and overall performance levels. This role was mostly removed from the development process, and there was clarity about its responsibilities.

However, when the nature of software development evolved from the traditional waterfall model to the current Agile models, all roles associated with it had to evolve. Thus came DevOps, which is in a way a complex role with overlapping responsibilities. It is a combination of development and operations, and it is a culture, not a technology. Due to the nature of Agile development, DevOps roles and responsibilities are intertwined closely with each phase of the Agile development model. So, there are no clear lines drawn as in the earlier case of system admin roles.

A DevOps person has to be active from the design phase of the software project. The responsibilities include the creation of a development pipeline and quality assurance, as well as traditional system admin activities. There are overlapping activities for the whole team, which improves coordination within the team and ensures that aggressive Agile delivery timelines are met. There is seamless coordination among the team members, and a developer may well have to perform a production activity, and so on. In smaller organizations, there may not be a separate DevOps person or role, and it may well be the developer, tester, or system administrator who does the same.

The Demarcations

So, in a way, while the traditional system administrator role is limited to systems and their configurations, a DevOps role is also involved in the deployment of the software and the operations therein. Unlike the clearly defined expectations of a system administrator role, here the challenges are more numerous and complex. The opportunities to learn and grow are also equally abundant.

Many developers move into this role when they get into deployment and operations. So do system administrators who can code and also understand the various development phases and nuances. System administrators may not necessarily have a holistic view of the technical environments they work in, as DevOps engineers do. For DevOps personnel, that view is a must, and they always have to stay up to date with the challenges thrown at them by upgrading their competencies to match.

The DevOps Role

As mentioned above, a DevOps role would need to have a good hold of the complete picture of a software project. Hence, they need to know what happens during the design phase, and what is planned in the pipeline at every scheduled sprint and so on. They may have to don multiple hats by collaborating with developers and other IT operations staff, to ensure efficient implementation of application development and delivery processes. This would mean a thorough understanding of software development and all associated processes. The other expected competencies could be knowledge of version control tools, application build, test and deployment processes including automation, server configuration, and monitoring, etc.

DevOps personnel will be involved in Continuous:

  • Planning
  • Code Inspection
  • Integration
  • Testing
  • Builds
  • Deployment
  • Monitoring
  • Improvement
  • Innovation

Hence, it is a given that they would be highly technical and aware of the emerging industry trends as these activities rely heavily on tools. An ability to quickly understand, analyze and manage the operational challenges arising across the multiple phases of software development and deployment, is a great asset. The mindset and approach of a DevOps person would be at a much higher level than that of a traditional System Administrator. Their roles are more complex, challenging, constantly evolving, and highly critical in software teams.

Key Skills

Considering that the role is a holistic one, it is obvious that DevOps people need to possess a broad set of skills, apart from being able to pick up new ones on the go. Typically, the skills required for a DevOps role are:

  • System/Application/Resource/Database Administration
  • Configuration Management
  • Scripting/Automation
  • A good understanding of software development processes
  • Continuous Integration/Continuous Deployment (CI/CD)
  • Cloud Computing

The first three in this list are skills that competent system administrators would have, depending on the nature of the assignments they have handled. The last three are the typical new-age DevOps-related skills, which they have to pick up if they want to make the transition to a DevOps role. There is a slight chance that certain system administrators may not have done any scripting at all; in such a scenario, scripting and automation will also have to be picked up.

Also, picking up configuration management tools like Puppet, Ansible, and Chef will help, as they configure and automate processes and applications at scale. At times, DevOps people may also have to learn to integrate build management tools like Gradle and Maven into CI platforms for better agility. Version control is a must, and this is where Git and hosting platforms like GitHub can help.
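The core idea behind configuration management tools is idempotency: you describe the desired state, and applying that description twice changes nothing. A rough shell sketch of the principle (the file path and config line are hypothetical; real tools like Puppet or Ansible express the same idea declaratively):

```shell
#!/bin/sh
# Idempotent "desired state" in plain shell: safe to run any number of times.
rm -rf /tmp/demo-etc   # start from a clean slate for this demo only

ensure_dir() {
    # Create the directory only if it is missing
    [ -d "$1" ] || mkdir -p "$1"
}

ensure_line() {
    # Append a config line only if it is not already present in the file
    grep -qxF "$2" "$1" 2>/dev/null || echo "$2" >> "$1"
}

ensure_dir /tmp/demo-etc
ensure_line /tmp/demo-etc/app.conf "max_connections=100"
ensure_line /tmp/demo-etc/app.conf "max_connections=100"   # second call is a no-op

wc -l < /tmp/demo-etc/app.conf   # still one line
```

Configuration management tools generalize this pattern across thousands of hosts, with reporting and dependency ordering on top.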

Why New Skills?

Since DevOps is not an isolated role and is deeply integrated into the software development and deployment process, a good understanding of the same is imperative. This helps them get involved, analyze and contribute productively at all stages including the deployment stage.

The CI/CD pipeline is a very important aspect of a DevOps role, and hence is a new skill that will necessarily have to be picked up. In the earlier waterfall model, software was deployed only after it was complete and final. Today, though, the Agile model works differently; developers continuously push software changes to a centralized repository. Automated builds and tests are enabled in this repository, making it possible to deploy the various versions of the software.
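At its heart, a CI pipeline is a scripted gate: every push triggers build and test stages, and deployment happens only if the earlier stages succeed. A toy sketch of that flow (the stage bodies are placeholders; a real pipeline would call `make`, a test suite, and a release tool):

```shell
#!/bin/sh
# Toy CI pipeline: each stage runs only if the previous one succeeded.
set -e   # abort the whole pipeline on the first failing stage

build()  { echo "building...";  }   # placeholder for e.g. 'make'
test_()  { echo "testing...";   }   # placeholder for the test suite
                                    # (named test_ to avoid the shell builtin)
deploy() { echo "deploying..."; }   # placeholder for the release step

build
test_
deploy
echo "pipeline: success"
```

CI servers like Jenkins or GitLab CI are essentially robust, distributed versions of this loop, with triggers, history, and parallelism layered on top.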

Many tools allow this continuous integration and deployment, and a few of the CI tools that are popular are Jenkins, Buddy, Bamboo, TeamCity, Travis CI, GitLab CI, Codeship, etc. These tools are designed to help minimize the application downtime during the continuous integration and deployment of the application.

There are continuous monitoring tools as well that can help with tracking the performance of various aspects of the ecosystem, with logs, alerts, and notifications. Lansweeper, CloudWatch, Stackdriver, Snort, SolarWinds, AppDynamics, New Relic, BigPanda, PagerDuty, and Nagios are a few examples.

Cloud Computing

Most organizations have already moved to the cloud or are considering doing so. The cloud ensures high availability and convenient access from anywhere in the world, at all times. So, with more and more apps being deployed to the cloud, a good understanding of cloud computing is a must for a DevOps person. They will also need to configure servers and services on these cloud hosts. Understanding and being able to efficiently implement Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS) is a key competency. This may involve learning how applications are containerized, deployed, and supported on cloud infrastructure.

Cloud Computing, in combination with the DevOps culture, provides enterprises with an unbeatable advantage in terms of agility, scaling, and consistency in code quality and application performance. The various Cloud models, Public, Private, and Hybrid, make immense cost-effective resources available to organizations, making the cultural change to DevOps possible. Cloud resources also enable better collaboration and communication among team members, making it possible to work seamlessly across global teams. Many of the capabilities needed for CI/CD, such as high availability, security, fault tolerance, on-demand infrastructure, measurement and monitoring, resource management and configuration, provisioning, and data governance, are built into most Cloud solutions.

So, the combination of Cloud Computing and DevOps is a potent one, with the power to help organizations beat the competition on time to market while ensuring quality products. The improvement in productivity is just another bonus gained in the bargain.

Retention in Record Management Software

Records management software comprises computer programs designed to systematically control records within an organization. Such software can help manage records in any format, and many programs have advanced capabilities for managing electronic records. However, evaluating, selecting, and purchasing records management software depends on several factors. No one system is right for all users, and the system you choose should fit the size and complexity of your organization.

But records management systems also serve a more general function: they greatly simplify the many workflow processes required to create, distribute, and maintain accurate records. They have this in common with the more general-purpose document management software, which is why there are many similarities between the two.

As mentioned above, records are a very specific type of document that can serve as legal proof or evidence. Think of it like squares and rectangles: a record is a type of document, but not all documents are records. As such, records are often necessary in order to prove compliance with regulations and laws.

In some industries, compliance must be shown at periodic intervals. For example, a food distributor uses records to demonstrate compliance with food safety regulations and may need to do so every year or every quarter, as mandated by local regulations.

There are many general functions you should look for in records management software, including:

  • Help features: These can include user-friendly online tutorials, easy-to-understand error messages, and support and training by the vendor
  • Menus and commands: These should be easy to understand and should be organized in a logical way
  • Speed and accuracy: You should be able to enter and retrieve data quickly and accurately
  • Generation of standard reports: The software should generate reports easily and print them out as they are seen on the screen
  • Ease of use: The software itself should be user-friendly; you should be able to use it the day it is installed
  • Customization: You should be able to customize the software to meet the specific needs of your organization without sacrificing the benefits of standard practices
  • Ability to manage records throughout their life cycle: Many organizations purchase software that manages records throughout their life cycle—from creation and active use to inactive use and disposition. Be sure to purchase software that meets your records life cycle requirements.
  • Ability to manage records in all formats: The software should help you manage records in any format, if necessary, including paper, electronic, micrographic, and audio-visual records
  • Free-text searching across fields: Any text maintained by the software should allow free-text or keyword searching across fields
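As a rough illustration of the last point, free-text searching across fields means a keyword can match any text the software stores about a record, not just its title. A minimal sketch, with made-up field names:

```python
# Sketch of free-text searching across fields: a keyword matches any text
# field of a record, not just the title (field names are illustrative).
records = [
    {"id": 1, "title": "Vendor contract", "notes": "signed 2019, renewal clause"},
    {"id": 2, "title": "Payroll summary", "notes": "Q3 wage records"},
]

def free_text_search(records, keyword):
    """Return records where the keyword appears in any field, case-insensitively."""
    keyword = keyword.lower()
    return [r for r in records
            if any(keyword in str(value).lower() for value in r.values())]

print([r["id"] for r in free_text_search(records, "wage")])  # found via 'notes'
```

Production systems typically back this with a full-text index rather than a linear scan, but the user-facing behavior is the same: every field is searchable.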


Retention in Record Management

Companies face legal and regulatory requirements for retaining records. Each company’s specific requirements should be detailed in a Record Retention Policy and accompanying Record Retention Schedule. These provide both a legal basis for records compliance and a consensus on which records should be saved, how long they must be kept, and, equally important, when they should be deleted (destroyed).

Records retention policies and procedures should include:

  • Archiving policies for various document types. In other words, how long these different document types stay “active” before they can be sent to the library archive.
  • A destruction policy for setting how long information is retained before it is destroyed. This avoids destroying vital documents prematurely, or not at all. Retaining everything indefinitely is not good records management.
  • Procedure for capturing the proper information for regulatory reporting. It is critical for organizations to assess their current state of preparedness to determine how well they can safely and efficiently respond to an e-discovery request or governmental inquiry.


Some of the major players in records management and document handling include:

  1. Alfresco Content Services – Alfresco Content Services provides open, flexible, scalable Enterprise Content Management (ECM) capabilities. Content is accessible wherever and however you work and integrates easily with other business applications. Alfresco puts sophisticated technology to work to automate the records management process, making end-to-end processes happen automatically and invisibly. Using the system is easy, and compliance “just happens,” with little or no user intervention.
  • Easy-to-set rules and metadata automatically drive what needs to be declared as a record and where it should be filed in the File Plan, while records remain easily accessible by sanctioned users
  • Powerful rules and business processes help file, find, review and audit records, saving time for users and administrators
  • Configurable File Plans provide effortless control over retention schedules for review, hold, transfer, archive and the destruction of records
  • Supports records declaration directly from your desktop
  2. OpenText Records Management (formerly Livelink ECM – Records Management) delivers records management functions and capabilities to provide full lifecycle document and records management for the entire organization. This product allows your organization to file all corporate holdings according to organizational policies, thereby ensuring regulatory compliance and reducing the risks associated with audit and litigation. OpenText Records Management provides options for classifying information quickly and easily: classify information interactively with a single click; automatically inherit retention schedules and classifications by moving records en masse into folders; classify records based on roles or business processes; or auto-classify content. Further, increase efficiency through the automatic import of retention policies and other data into OpenText Records Management. OpenText Records Management maps record classifications to retention schedules, which fully automates the process of ensuring records are kept as long as legally required and assuredly destroyed when that time elapses. When a retention schedule expires, final decisions can be made to destroy the object, retain it for a period of time, or keep it indefinitely.
  3. FileCloud – FileCloud retention policies deliver control and compliance for files and their folder groupings in the Cloud. Retention policies allow administrators to automate some of the processing related to protecting data, helping to secure digital content for compliance and enhance the management of digital content for other internal reasons. Without the right systems within your cloud solution to discover and preserve sensitive content, the time and costs spent on litigation and handling legal cases can quickly spiral out of control. FileCloud retention policies are created and attached to stored files and folders. These special policies allow administrators to define conditions that enforce a set of restrictions on how each file or folder can be manipulated. For example, administrators can create a retention policy that disables a user’s ability to delete or edit any of the files and folders named in the policy. To resolve the issue of conflicting policies, FileCloud ranks retention policies by what best protects and retains the digital content.
  • Admin Hold: Outranks all other policies and prevents any update or deletion of digital content for an indefinite period of time.
  • Legal Hold: Freezes digital content to aid discovery or legal challenges. During a legal hold, file modifications are not allowed.
  • Retention: Identifies digital content to be kept for a specified amount of time before being deleted or released.
  • Archival: Moves and stores old organizational content for the long term. No deletion is allowed until a specified time period is reached. After this time, content gets moved to a specific folder.
  • Trash Retention: Can be configured for automatic and permanent deletion of all files in the Trash bins or to expire with no action.
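The ranking above can be illustrated with a small sketch of conflict resolution. This is an illustration of the precedence idea only, not FileCloud's actual implementation:

```python
# Sketch of rank-based conflict resolution between retention policies
# (an illustration of the precedence idea, not FileCloud's implementation).
POLICY_RANK = ["admin_hold", "legal_hold", "retention", "archival", "trash_retention"]

def effective_policy(attached):
    """The winning policy is the one earliest in POLICY_RANK."""
    return min(attached, key=POLICY_RANK.index)

print(effective_policy(["archival", "legal_hold"]))  # legal_hold wins
```

So a file under both an archival policy and a legal hold is treated as held: modifications stay blocked until the higher-ranked policy is released.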
  4. Box – Box allows customers to configure automated policies, known as retention policies, to control the preservation and deletion schedules of their enterprise documents. Retention policies enable the business to maintain certain types of content in Box for a specific period of time and to remove content from Box that is no longer relevant or in use after a specific period.

Box Governance offers support for metadata-driven retention policies, where retention policies can be applied to individual files based on custom metadata. This also enables customers to configure retention policies at the file level in addition to the global and folder levels.

Admins and Co-Admins who have permission to manage policies can create any type of retention policy, including a metadata-driven retention policy.

  5. Microsoft SharePoint Online – Records management is one of the key components of an Enterprise Content Management (ECM) system such as Microsoft SharePoint. There are a couple of ways in which you can manage records in SharePoint and SharePoint Online. You can create a dedicated Records Center site that serves as an archive, to which documents are copied based on the retention policy. Another option is to manage records “in place”: you leave a document in its current location on a site, declare it as a record, and apply the appropriate security and retention properties. SharePoint Online can be used to create a complete records management system using its off-the-shelf capabilities. Planning the structure of the record center and the libraries it will contain is a key consideration, but fundamentally the process is, at its core, quite simple:
  • Content Types for the documents.
  • Policy Rules for moving documents to the Records Center and the Exceeds Retention Records Center.
  • Content Organizer Rules for distribution to the correct Library in each Record Center.
  • Library List Views for automated approval notifications.
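The routing step above, where content organizer rules send each incoming document to the correct library, can be sketched as follows (illustrative rules and names, not SharePoint's API):

```python
# Sketch of content organizer rules: route each incoming document to the
# correct library based on its content type (illustrative rules and names,
# not SharePoint's API).
ROUTING_RULES = {
    "Invoice": "Finance Records",
    "Contract": "Legal Records",
}

def route(document):
    """Pick a target library; unmatched documents land in a holding library."""
    return ROUTING_RULES.get(document["content_type"], "Drop Off Library")

print(route({"name": "po-1042.pdf", "content_type": "Invoice"}))
```

The key design point is that content types, not users, decide where a record is filed, which keeps the file plan consistent with little or no user intervention.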

Top 10 Virtual Data Room Providers

Business today is increasingly virtual. Companies have been able to shift a large amount of their workload to decentralized locations, from remote work teams to crowd-sourced marketing tasks.

For companies handling large amounts of confidential or third-party information, managing that data is no exception to the virtual revolution. This is where virtual data rooms come in for enterprises, or for any company dealing with sensitive data.

What Are Virtual Data Rooms and Why Do Organizations Need Them?

Virtual data rooms are neutral environments in which multiple parties can interact and collaborate. You can think of a VDR as a secure online repository for document storage and distribution. It is typically utilized during the due diligence process preceding a merger or acquisition to review, share, and disclose company documentation.

Virtual data rooms have increasingly replaced physical data rooms traditionally used to disclose and share documents. With the globalization of businesses and increased scrutiny to reduce costs, virtual data rooms are an attractive alternative to physical data rooms. Virtual data rooms are widely accessible, immediately available and more secure. As security concerns grow and incidents with breaches increase, VDR providers are developing more sophisticated and reliable databases.

The greatest advantage of a VDR is the peace of mind that comes from knowing that your confidential information, such as financial and HR data, intellectual property, and your clients’ legal documents, will not be seen by third parties unless you’ve permitted them to do so. A VDR’s capabilities to upload large volumes of documents, track and audit user and document activity, and set specific user permissions are vital for facilitating efficient and secure document sharing.

Tips for Choosing a VDR

  • Consider the security ratings and certifications
  • Evaluate the security of the promised third-party integrations
  • Review the user interface
  • Test the platform and customer support to make sure they are convenient for you
  • Flexibility – the provider should give you a platform that can be accessed via a desktop or mobile device. It should also be flexible enough to be deployed in the cloud or on-premises, depending on the individual needs of your business.

Benefits of VDR

  • Security: Email, FTP, and other unsecured cloud storage are no longer safe avenues for data sharing. A VDR is the best method of ensuring your information is kept secure and out of reach of pervasive hackers.
  • Environment-Friendly: It’s paperless!
  • Efficiency: A VDR gives you control over your documents and helps you monitor who has access to them. All communications can be encrypted, recorded, and kept in a single place.
  • Cost-Effective: Reduced travel costs and no delays waiting on courier services.
  • Transparency and Appearance: VDRs help you understand your clients and investors better. When granting access to documents in a VDR, you gain visibility into all of the activity regarding those documents.

Top Vendors of VDR

1. Ansarada is an AI-powered VDR solution that specializes in providing a material information platform for material events. You also get features such as its own deal assistant app, AiQ bidder engagement, and due diligence Q&A. Its interactive assistant is designed to provide immediate answers to your questions anytime, anywhere.

2. Box is a virtual data room built on the company’s simple file-sharing platform for individual users. However, the VDR service on the Box platform provides a more sophisticated set of permissions and security measures. This makes Box VDR better suited to sharing sensitive business documents and collaborating on projects with team members located around the globe.

The key features of this data room provider include full customization with logos and company colors, easy installation, access expiry and limitations on previewing, uploading, and printing documents, as well as detailed tracking possibilities.

3. Citrix ShareFile is a file-sharing service built for business-class, real-time collaboration. It automates workflows for providing feedback, requesting approvals, co-editing, and getting legally binding e-signatures to accelerate productivity. It is developed for personal and professional use. This tool is packed into an easy-to-use platform for all types of users, even those who aren’t tech-savvy.

One of the tool’s strongest points is its robust third-party integration. Data sharing is secure and flexible. It offers a variety of security capabilities such as granular permissions, multi-level access authentication, and SSL/TLS 256-bit encryption. You can manage your security settings on your own through extended customization with an internally monitored and password-protected document archive.

4. CodeLathe’s FileCloud can create a Virtual Data Room (VDR) that makes document management and collaboration as effective as you need it to be. FileCloud’s easy, secure, and regulated access to your documents online, with functionality to share, review, download, comment on, and archive them, can prove to be a major success driver. You can securely store all kinds of documentation related to licenses, permits, contracts, intellectual property, proposals, plans, blueprints, and financial statements.

With FileCloud, setting up the VDR is simple, quick, and convenient. Once it’s set up, you can start adding people, uploading documents, tracking activity, and auditing.

A FileCloud-powered Virtual Data Room has the potential to become the collaboration engine that drives your mega projects. The availability of critical documents with the requisite levels of access privileges makes due diligence quicker and more efficient. FileCloud delivers wholesome data security and privacy, enabling you to manage complex and grand projects without worrying about documents. File encryption and access control ensure there is no unauthorized access to any document.

5. Digify is designed with a focus on protecting and tracking sensitive files, giving you control and peace of mind over confidential documents that you share online. It scales to needs ranging from small business to enterprise. Your data room is secured with features such as rights management, watermarking, file tracking, and encryption.

If you have different sets of users, Digify helps you manage access permission at varying degrees. You can restrict forwarding, revoke access or make your files private or public. You can also set rules for downloading, so you can decide which users can save locally or print a confidential document.

6. Egnyte is a VDR designed for content collaboration, data protection, and infrastructure modernization. Beyond being a file-sharing solution, it enables seamless collaboration across your business by letting you access the system on any device, in any app you prefer. It ensures compliant data governance by enabling you to monitor usage patterns and access controls, so you stay on top of private and sensitive information in your organization.

7. Firmex is an enterprise virtual data room widely used for a variety of processes including compliance, litigation, and diligence. It is highly secure, utilizing document control and DRM features such as custom permissions, dynamic watermarks, file lock-down, and document expiry. This makes it an ideal platform for exchanging, transmitting, and collaborating on sensitive documents. The system has specific solutions for the investment banking, biotech, government, energy, and legal industries.

Designed for non-technical users, it has drag-and-drop ease to set up your projects quickly or move documents in bulk. It helps you organize files easily with a directory listing and a data room index, which can be exported to PDF or Excel. Large organizations with tons of documents will appreciate this feature most.

8. Google Drive is one of the most popular cloud storage systems, with robust office-suite collaboration functionality, including a word processor, presentation, and spreadsheet programs. At its core, it is an online storage service with generous storage space, especially useful if you’re using Google’s productivity apps and storing files such as images, videos, and documents. Individual users get 15 GB of free storage and basic sharing capacity, which you can use for regular files or office documents. Its collaborative office suite is also a good cloud-based solution, with programs called Docs (word documents), Slides (presentations), Sheets (spreadsheets), Forms, and Drawings.

9. iDeals Virtual Data Room is an amazing VDR option, with all the functionality you could expect from an industry-leading virtual data room provider. The emphasis is on facilitating due diligence in any industry, including M&A, real estate, biotech, and technologies. While many competing vendors require users to download plug-ins or other software, such as Flash or Java, iDeals works consistently across all web browsers, operating systems, and devices with no plugins required. Full mobile compatibility is ensured by dedicated apps for iOS and Windows.

Secure, powerful, and flexible yet intuitively accessible, with a great user experience and multi-language 24/7 customer support, the iDeals VDR platform is available either as a web service or integrated with the enterprise’s existing infrastructure.

10. V-Rooms has operated in the virtual data room market for more than ten years and has repeatedly been awarded for the high-quality services it provides. Its VDRs are designed exclusively to simplify the life of deal-makers: data management, storage, and exchange occur in a highly protected and functional environment. The data undergoes strong 256-bit encryption and is accessible only to those who have passed the two-step authentication process. Digital rights management allows room administrators to protect documents and prevent their misuse: they control document saving and printing, and can withdraw access to a document even after it has been downloaded. Regular activity reports keep administrators updated about recent user activity and help them identify the most active and interested visitors. While being secure, VDRs by V-Rooms remain simple and convenient.


■ Consider VDRs when looking for temporary or permanent mechanisms to support highly controlled sharing of proprietary documents in highly regulated environments, in external-facing processes, or for collaboration with third parties.

■ Consider VDRs when the extended collaboration use case involves a joint venture or a consortium of partners, or requires centralized governance, privacy, compliance, audit trails, and e-discovery.

■ Ensure IT is involved in the VDR selection process. Do not work around them, as their help is key in VDR selection and configuration.

■ In other cases, especially when cost is constrained, it is worth considering less costly content collaboration platform (CCP) alternatives, recognizing that a general-purpose tool will need customization to support specific UIs and capabilities.


Top 10 Board Portal Vendors

A board portal is a tool that facilitates secure digital communication between members of a board of directors. Board portals are made for boards and for good governance, and typically include messaging features, voting tools, meeting minutes, agenda features, and other tools to help make communication as seamless as possible.

There were two factors that propelled the market development of the board portal. The first was a cadre of progressive directors who, enthusiastic about technology and tired of bulky board books, advocated electronic access to meeting materials. The second driver was the passage of the 2002 Sarbanes-Oxley Act. This major piece of legislation, written in response to the accounting scandals of that time, threw a spotlight on board portals as a vehicle to drive governance. The “electronic board book” that emerged provided rudimentary online access — but not much more.

The true challenge of any board collaboration solution is adoption. The system you implement can only be useful if your board members like to use it. Adoption should be considered when selecting and reviewing tools. When implementing a solution, allow time for board members to get familiar with the system and to develop the habit of using it.

What to Look for in Board Portal Technology

With most board portal software performing similar functions, it’s very important to evaluate service, pricing and track record with organizations similar to yours.

  1. Adoption: If the app you choose isn’t adopted, it’s a wasted investment that may leave you less secure than you were before. For an application to be embraced by your board, it needs to be accessible from a multitude of devices, easy to use, and accompanied by live support at any time. Having the ability to read and annotate board materials as you would with a hard copy is comfortable and functional; an electronic board book should be as simple to flip through as paper, but with many additional advantages.
  2. Pricing: When comparing, the most glaring difference you will find among board portals is in the price. From the top-of-the-line to the very basic, there are meeting solutions available at varying degrees of affordability. After all, for most organizations, the solution of choice will still have to fit within their allotted budget.
  3. Features: Beyond the basic features such as event scheduling and online board books, board portals can differ in other aspects such as the availability of tools for document management, reporting, presentations, or collaboration. Will the nature of your meetings require complex permission and privacy settings? Multiple presenter roles? As each board portal will have its strengths over others, consider whether these strengths are in the areas that are most critical to your company’s needs.
  4. User Interface: Another aspect to take into consideration is the user interface of the board portal. This is particularly important if your company has executives and board directors who are not so tech-savvy and may have difficulty adapting to a digital solution. Before any savings or productivity gains are made, the board portal’s design should first and foremost be intuitive enough to make navigating around easy for users.
  5. Security: Each board portal will have its own approach to securing data. Study the security safeguards that each board portal utilizes, and find the one that you are most comfortable with. Evaluate whether their security measures and data storage practices are compliant with your organization’s standards. For example, if your company deals with sensitive information and does not want to risk storing data on the cloud, you should consider on-premise hosting. However, not all board portals can provide you with that option.

Top Board Portals

  1. Admincontrol: Admincontrol offers web-based solutions as well as separate iPad and iPhone apps for secure collaboration and easy sharing of documents in business processes such as board and management work, due diligence, capital injections, and stock exchange listings.
  2. Aprio: From package prep to online collaboration, Aprio board portal software is easy-to-use, affordable, and backed by the friendliest support in the business. Aprio is an all-in-one board portal that helps organizations manage board meetings and board communication, with top security and service. Aprio serves credit unions, financial institutions, crown corporations, non-profits, and government organizations across North America. All board members, management and staff can view and access documents and materials immediately, from anywhere, and at the same time.
  3. Boardable: The software is meeting-centric, helping groups schedule meetings, build and share agendas, draft and finalize minutes, vote digitally, and securely store and share documents. Boardable also centralizes all board content and communications, integrates with existing email, file-sharing, and calendar platforms, and streamlines routine tasks. A convenient mobile app improves the board member experience with on-the-go access to meeting details, documents, discussion, and tasks.
  4. BoardPAC: BoardPAC is award-winning paperless board meeting governance automation software that provides an easy-to-use and convenient platform for conducting board meetings and other high-level senior management meetings via iPads and tablets, with both on-premise and cloud storage options, supported across all platforms. BoardPAC holds the ISO 27001 data security management certification and comes with unlimited 24×7 support and training.
  5. FileCloud by CodeLathe: FileCloud offers a secure content sharing portal for boards. Strong content security and segregation of data from the rest of the organization's systems are important for board portals. Further, board members and top executives should be able to access content securely from tablets and mobile phones before, during, and after the meeting. FileCloud provides exactly that. FileCloud can be integrated with enterprise data repositories, file servers, or cloud storage systems like AWS S3, and can be deployed on the infrastructure of your choice: on-premise, public cloud, or as a SaaS service.
  6. Confluence: Confluence is an open and shared workspace that connects people to the ideas and information they need to build momentum and do their best work. Unlike document and file-sharing tools, Confluence is open and collaborative, helping you create, manage, and collaborate on anything from product launch plans to marketing campaigns. Find work easily with dedicated and organized spaces, connect across teams, and integrate seamlessly with the Atlassian suite or customize with apps from our Marketplace.
  7. Convene: Convene is board management software used to compile and distribute papers, generate minutes, and carry forward action items for meetings. Award-winning usability makes it the choice for listed companies, SMEs, banks, governments, healthcare organizations, and non-profits in more than 80 countries. It is developed by Azeus Systems, which has over 25 years of experience in IT development. Convene’s network and document encryption, provided by a SOC 2-compliant security system, ensures your users’ safety on any device.
  8. Diligent: Diligent Boards is designed to be a flexible tool, with a robust set of features that reflect how board members, executives, and administrative teams actually work. Diligent also offers several capabilities that are not found in other tools, including Preview to Updates, Note Saver, Smart Sync, and Quick Build. Award-winning service is provided to all Diligent Boards users, with unlimited training and support.
  9. eShare’s BoardPacks: BoardPacks is a board portal that fully digitizes an established system of agendas, meeting packs, surveys, and decision-making that was previously paper-based. It helps clients improve their governance and the effectiveness of their board by putting the information they need at their fingertips, including board papers, training information, risks, and decisions.
  10. OnBoard: OnBoard is a cloud-based board meeting management solution designed for board member collaboration before, during and after meetings.

Users can compile and coauthor board books and materials. Updates to materials save automatically and are instantly synced across users’ devices—including desktops, laptops, tablets (Android, iPad, Kindle Fire, and Windows Surface) and smartphones (Android and iPhone). Notes and annotations can be typed directly into materials and are also searchable. OnBoard includes functionality for remote data swipe, surveys, and multi-board support.


What is a Record? What is Records Management?

A record is a document or content that an organization needs to keep as evidence of an important transaction, activity, or business decision for regulatory, compliance, and governance purposes. Not all documents are records; only the subset of documents that an organization needs to preserve as evidence are called records.

What is Records Management?

The ISO 15489-1:2016 standard defines records management as “the field of management responsible for the efficient and systematic control of the creation, receipt, maintenance, use and disposition of records, including the processes for capturing and maintaining evidence of and information about business activities and transactions in the form of records”.

What is Records Retention Schedule?

A records retention schedule is a document that identifies and describes an organization’s records and the length of time each type of record must be retained. To give an idea, the following section shows the general record-keeping requirements of the State of Texas for statutory purposes. Every organization can have its own set of record-keeping requirements and its own records retention schedule, dictated by industry and government compliance requirements.

      Wage and hour laws (FLSA) – while some payroll records need be kept only two years, most must be kept for at least three years under the federal law (FLSA); to be safe, keep all payroll records for at least three years after the date of the last payroll check.
      Unemployment compensation – keep all records relating to employees’ wages and other compensation, as well as all unemployment tax records, for at least four years.
      Family and Medical Leave (FMLA) – keep all payroll, benefit, and leave-related documentation for at least three years after conclusion of the leave event.
      I-9 records – keep all I-9 records for at least three years following the date of hire, or for one year following the employee’s date of last work, whichever point is reached last.
      New Hire reporting – report all new hire information within 20 days of hire.
      Hiring documentation – under EEOC rules, all records relating to the hiring process must be kept for at least one year following the date the employee was hired for the position in question; if a claim or lawsuit is filed, the records must be kept while the action is pending.
      Disability-related records (ADA) – keep all ADA-related accommodation documentation for at least one year following the date the document was created or the personnel action was taken, whichever comes last.
      Benefit-related information (ERISA and HIPAA) – generally, keep ERISA- and HIPAA-related documents for at least six years following the creation of the documents.
      Age-discrimination documentation (ADEA) – keep payroll records for at least three years, and any other documents relating to personnel actions for at least one year, or during the pendency of a claim or lawsuit.
      OSHA records – keep OSHA-related records for at least five years.
      Hazardous materials records – keep these for at least thirty years following the date of an employee’s separation from employment, due to the long latency period for some types of illnesses caused by exposure to hazardous materials.
      State discrimination laws – keep all personnel records for at least one year following an employee’s last day of work.
      IRS payroll tax-related records – keep these records for at least four years following the period covered by the records.
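Several of the rules above combine more than one clock; the I-9 rule, for instance, keeps records until three years after hire or one year after the last day of work, whichever point is reached last. As a rough illustration of how such a schedule can be turned into concrete disposal dates, here is a minimal Python sketch. The function names are ours, and the logic is a simplified illustration, not legal advice:

```python
from datetime import date

def add_years(d, years):
    """Same calendar date `years` later (Feb 29 falls back to Feb 28)."""
    try:
        return d.replace(year=d.year + years)
    except ValueError:
        return d.replace(year=d.year + years, day=28)

def i9_disposal_date(hire_date, last_work_date):
    """I-9 rule: keep 3 years after hire OR 1 year after last work,
    whichever point is reached last."""
    return max(add_years(hire_date, 3), add_years(last_work_date, 1))

# Hired 2015-06-01, left 2019-03-15: one year after separation wins.
print(i9_disposal_date(date(2015, 6, 1), date(2019, 3, 15)))  # 2020-03-15
```

Real records management systems apply rules like this per record class, and typically add a review step before anything is actually destroyed.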

In the financial industry, registered broker-dealers are subject to a variety of record-keeping requirements enforced by the U.S. Securities and Exchange Commission and by self-regulatory organizations such as the Financial Industry Regulatory Authority (FINRA). SEA Rules 17a-3 and 17a-4 specify minimum requirements for the records that broker-dealers must make, how long those records and other documents relating to a broker-dealer’s business must be kept, and in what format they may be kept. FileCloud’s Financial Services Compliance white paper shows the retention periods for the various records that registered broker-dealers must preserve under FINRA rules.

How Can FileCloud Help You Manage Your Enterprise Records?

FileCloud offers powerful records management and governance features that allow organizations to create flexible retention and archival policies to meet any compliance requirement. The following screenshots show how to create retention schedules for different types of records.

Retention Policies

Create Retention Policies


A System Administrator’s Guide to Containers

Anybody who is part of the IT industry will have come across the word “container” in the course of their work. After all, it is one of the most overused terms right now, and it means different things to different people depending on the context. Standard Linux containers are nothing more than regular processes running on a Linux-based system. What sets these processes apart from other process groups is a combination of Linux security constraints, resource limits, and namespaces.

Identifying the Right Processes

When you boot one of the current crop of Linux systems and view a process with cat /proc/PID/cgroup, you will see that the process belongs to a cgroup. A closer look at /proc/PID/status reveals the process’s capabilities, /proc/self/attr/current shows its SELinux label, and /proc/PID/ns lists the namespaces the process is currently in.
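As a quick illustration of these inspection steps, the following Python sketch parses the /proc/PID/cgroup format and, on a live Linux system, lists the current process’s namespaces. The live reads are guarded so the sketch degrades gracefully elsewhere, and the helper name is ours:

```python
import os

def parse_cgroups(cgroup_text):
    """Parse /proc/PID/cgroup lines of the form 'hierarchy-id:controllers:path'."""
    groups = {}
    for line in cgroup_text.strip().splitlines():
        _, controllers, path = line.split(":", 2)
        groups[controllers] = path
    return groups

print(parse_cgroups("12:pids:/user.slice\n0::/init.scope\n"))
# {'pids': '/user.slice', '': '/init.scope'}

# On a live Linux system, inspect the current process directly:
if os.path.exists("/proc/self/cgroup"):
    with open("/proc/self/cgroup") as f:
        print(parse_cgroups(f.read()))   # cgroups this process belongs to
    print(os.listdir("/proc/self/ns"))   # namespaces this process is in
```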

Thus, if a container is defined as a process with resource constraints, namespaces, and Linux security constraints, it can be argued that every process on a Linux system is in a container. This is precisely why it is often said that Linux is containers and containers are Linux.

Container Components

The term “container runtime” refers to the tooling used to modify resource limits, namespaces, and security settings, and to launch the container. The idea of a “container image” was introduced by Docker, and refers to a regular TAR file comprising two parts:

  • JSON file: This determines how the rootfs should be run, including the entrypoint or command to run in the rootfs once the container starts, the container’s working directory, environment variables to set for that particular container, and a few other settings.
  • Rootfs: This is the container root filesystem, a directory on the system that resembles the regular root (/) of the OS.

Docker “tars up” the rootfs, and together with the JSON file this forms the base image. A user can then install extra content into the rootfs, create a fresh JSON file, and tar up the difference between the original image and the new one along with the updated JSON file. This is how a layered image is created.
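The layout described above can be sketched with Python’s standard tarfile and json modules: a config JSON plus a rootfs/ tree packed into a single TAR. The file names and config keys here are illustrative, not the exact Docker or OCI layout:

```python
import io
import json
import tarfile

def build_image(config, rootfs_files):
    """Pack a config JSON plus a rootfs/ tree into one TAR archive."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        def add_bytes(name, data):
            info = tarfile.TarInfo(name)
            info.size = len(data)
            tar.addfile(info, io.BytesIO(data))
        add_bytes("config.json", json.dumps(config).encode())
        for path, content in rootfs_files.items():
            add_bytes("rootfs/" + path, content)
    return buf.getvalue()

image = build_image(
    {"Entrypoint": ["/bin/sh"], "Env": ["PATH=/usr/bin"], "WorkingDir": "/"},
    {"etc/hostname": b"demo\n"},
)
with tarfile.open(fileobj=io.BytesIO(image)) as tar:
    print(tar.getnames())  # ['config.json', 'rootfs/etc/hostname']
```

A layered image would repeat this step, with each new tarball containing only the files that changed relative to the layer below it.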

Building Blocks of Containers

The tools commonly used to build container images are known as container image builders. In some cases container engines perform this task, but numerous standalone tools for creating container images also exist. Docker takes these container images (tarballs) and pushes them to a web service from which they can later be pulled; it also defines the protocol for pulling them, and calls the web service a container registry.

Container engines are programs that can pull container images from container registries and reassemble them onto container storage. Container engines are also responsible for launching container runtimes.

Container storage is generally a copy-on-write (COW) layered filesystem. Once a container image has been pulled down from the container registry, its rootfs must first be untarred and placed on disk. If the image has multiple layers, each layer is downloaded and stored on a separate layer of the COW filesystem. Storing each layer separately increases sharing between layered images. Container engines tend to support multiple kinds of container storage, such as zfs, btrfs, overlay, aufs, and devicemapper.
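The sharing benefit of per-layer storage can be shown with a toy union lookup in the spirit of overlay filesystems: the topmost layer that contains a path wins, and untouched files continue to be served from the shared base layer. This is a conceptual sketch, not real overlay code:

```python
def union_lookup(path, layers):
    """Resolve a path against an ordered stack of layers (topmost first),
    the way a COW/overlay filesystem presents one unified view."""
    for layer in layers:
        if path in layer:
            return layer[path]  # first (topmost) hit shadows lower layers
    return None

# Two layers sharing a base: the app layer stores only what it changed.
base = {"/etc/os-release": b"ID=base\n", "/bin/sh": b"#!...\n"}
app_layer = {"/etc/os-release": b"ID=app\n"}  # shadows the base copy

print(union_lookup("/etc/os-release", [app_layer, base]))  # b'ID=app\n'
print(union_lookup("/bin/sh", [app_layer, base]))          # b'#!...\n'
```

Because the base layer is never modified, any number of images can share it on disk.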

Once the container engine has finished downloading the container image to container storage, it must create a container runtime configuration. This runtime configuration combines input from the user or caller with the content of the container image specification. The layout of the container runtime configuration and of the exploded rootfs is standardized by the OCI standards body.
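Conceptually, building the runtime configuration is a merge of the image’s defaults with the caller’s overrides, with the caller winning. A minimal sketch (the field names mimic common image-config keys but are illustrative):

```python
def build_runtime_config(image_config, user_overrides):
    """Merge the image's default settings with caller-supplied overrides;
    the caller wins wherever both specify a value."""
    merged = dict(image_config)
    merged.update({k: v for k, v in user_overrides.items() if v is not None})
    return merged

image_defaults = {"Entrypoint": ["/bin/sh"], "WorkingDir": "/", "Env": ["PATH=/usr/bin"]}
runtime = build_runtime_config(image_defaults, {"WorkingDir": "/app", "Env": None})
print(runtime["WorkingDir"])  # /app  (caller override wins)
print(runtime["Env"])         # ['PATH=/usr/bin']  (image default kept)
```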

The container engine then launches a container runtime, which reads the container runtime specification and modifies the Linux cgroups, namespaces, and security constraints accordingly. Afterward, the container command is launched to become PID 1 of the container. At this point, the container engine can relay stdin/stdout back to the caller while retaining control over the container.

Keep in mind that several container runtimes have been introduced that use different parts of Linux to isolate containers. Users can run containers with KVM separation, or apply other hypervisor strategies. Because there is a standard runtime specification, all of these tools can be launched by a single container engine. Even Windows can use the OCI Runtime Specification to launch Windows containers.

Container orchestrators sit at a higher level. These tools coordinate the execution of containers across multiple nodes. They interact with container engines to manage containers: orchestrators tell container engines to start containers and wire their networks together, and they can monitor containers and launch additional ones as load grows.
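The scale-up decision an orchestrator makes can be caricatured in a few lines: if average load per container exceeds a target, add replicas until it does not. This is a toy rule of thumb, not any real orchestrator’s algorithm:

```python
import math

def desired_replicas(current, load_per_container, target_load=0.7):
    """Naive scale-up rule: grow the replica count until the average
    load per container falls back to the target utilization."""
    if load_per_container <= target_load:
        return current  # already within target; no scaling needed
    return math.ceil(current * load_per_container / target_load)

print(desired_replicas(3, 0.9))  # 4  (total load 2.7 / 0.7 target ~= 3.86 -> 4)
print(desired_replicas(3, 0.5))  # 3  (under target; unchanged)
```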

Benefits of Containers

Containers provide numerous benefits that enable DevOps workflows, including:

  • Simpler updates
  • A simple solution for consistent development, testing and production environments
  • Support for numerous frameworks

When the user writes, tests and deploys an application within the containers, the environment stays the same at various parts of the delivery chain. This means collaboration between separate teams becomes easier since they all work in the same containerized environment.

Continuous delivery requires application updates to roll out on a constant, streamlined schedule. Containers make this possible because applying updates becomes easier. Once the app is split into numerous microservices, each one is hosted in a different container; part of the app can be updated by restarting its container while the rest keeps running uninterrupted.

When practicing DevOps, it helps to have the agility to switch conveniently between deployment platforms or programming frameworks. Containers provide this agility because they are comparatively agnostic toward deployment platforms and programming languages. Nearly any kind of app can run inside a container, irrespective of the language it is written in, and containers can be moved easily between different kinds of host systems.

Concluding Remarks

There are plenty of reasons why containers simplify DevOps. Once system administrators understand the basic nature of containers, they can put that knowledge to use when planning a migration at their organization.

Why It’s Time For Businesses to Acknowledge the Hidden Costs of BI

Many companies are implementing business intelligence (BI) software for smarter data use, but few are achieving the desired ROI. Why? Because organizations rarely understand the actual costs of BI. When comparing BI analytics tool and platform vendors, they focus primarily on license metrics, not realizing that licensing is only a fraction of the total cost of ownership.

In other words, failure comes from evaluating the “sticker price” of the BI solution and comparing it with the direct returns from data analysis, while underestimating both the full cost of the BI tool and the cost of the status quo. To learn more about the hidden expenses of business intelligence, read on:

Scaling Costs



Businesses cut corners wherever they can, often buying low-end data tools that work fine for a year or two but then fall short as data volumes grow. Moreover, IoT devices and other new technologies keep increasing the complexity and number of data sources.

Thus, the cheap tool you bought earlier is a short-term fix, not a permanent answer to your data requirements. By the time you realize this, however, you have already spent considerable resources purchasing licenses, allotting training hours, and making your employees reliant on the tool.

No wonder they have zero interest in learning yet another piece of software. As a result, companies must grudgingly spend extra resources implementing a new, full-fledged business intelligence tool; the alternative is to keep working with a subpar tool that does not fulfil all their requirements.

Buying Into the Hype

A big mistake that organizations make early on is buying into the business intelligence hype: investing in the latest, most advanced BI tool just because they have been told they need one, rather than choosing one on the basis of its problem-solving capabilities. There is no doubt that business intelligence is valuable to companies, but spending money on a BI tool arbitrarily does no good. You must first identify the issue you are hoping to solve; otherwise, you will never achieve the desired results.

First, think about your business problem along with what your company hopes to achieve. Have a clear idea of the capabilities required to solve those issues or fulfil that goal. Select a BI tool that adheres to those requirements. This way, you will avoid spending money on something that has no place in your organization.

Integration Expenses

One of the key considerations in a cost-benefit estimation of BI tools is whether your current ecosystem can support the software. Another is whether the tool can serve as a standalone solution for data analytics, or must be combined with a cluster of other programs to deliver real value to the company. What matters is understanding how many moving parts the analytical value chain contains: first you connect the raw data sources, then you perform ETL and data cleansing, and only then comes analysis.

Most BI tools on the market perform only that last stage, using flashy dashboards and graphics to hide the fact that every important backend task is delegated to an IT professional or a separate tool. You therefore need to understand exactly what functionality you will get from the desired software. Tools of both the single-purpose and full-stack varieties can serve as platforms for a range of activities, from data modelling and preparation to the development and sharing of dashboards.

While using proprietary database tools and ETL with data visualization software is not wrong, you must figure out how all this changes the final price, and whether you’re willing to foot the bill for the perfect analytic solution.



You must also think about the price your company pays as valuable employees spend more time preparing reports instead of focusing on mission-critical activities. This clearly applies to the spreadsheet scenario, where everything is done by hand, but even when you buy a modern business intelligence tool, you need to ensure it upholds the self-service standards business users expect.

Enterprises have traditionally relied on qualified data analysts and IT professionals to build BI: someone who could code and script each new query. Modern tools claim to remove this dependency, but when back-end functionality is absent, the coding and scripting simply shift to the initial data preparation phase. Business users within the company may no longer struggle with countless spreadsheets, but they end up assigning the work to technical staff who must operate IT-focused systems to produce reports. To avoid this, business users in your company should be able to use the tool to answer their own data questions, rather than relying constantly on internal or external tech support.

Opportunity Costs

When you are unable to do something because you decided on something else, the cost your company incurs is the opportunity cost. It is one of the most hidden and overlooked BI expenses, and you need to consider what you could have done during that time with those resources. Although opportunity cost is difficult to measure for a single project, it becomes easier if you find similarities between projects and assign a value to each missed opportunity.

Upfront Payments

Companies often buy business intelligence upfront due to a combination of factors, from high-pressure sales strategies and promises of big discounts to decision makers failing to identify the best course of action. This can significantly add to your BI program expenses and can also lead to shelfware if your employees never receive proper training, remain unclear on the tool’s benefits, and so never implement or use it within the organization. To avoid wasting company money, seek out a BI vendor that lets you begin small, prove the concept’s value to your business, and then scale as required.

Unanswered Questions



Successful BI depends on people in various roles, and even after the project is deployed, many of those roles continue to play a vital part. However, end users will only engage in the process if they think the question they want answered is worth their effort and time. We automatically assume that a new BI solution will be better, which is precisely why we invest resources and time into implementing it. But does it answer end users’ questions? Think about the questions that go unasked because they were not worth the effort. Or worse, questions that were worth it, but the user could not afford to wait and so made a gut decision instead.

Businesses thinking about BI will rarely have the necessary visibility into these missed questions until they discuss the situation with end users. The price of an unasked question may be considerable, for example when you’re deciding if you want to pull or extend a specific marketing campaign.

Workflow Expenses


Rather than look at BI only from a tech perspective, you need to consider it from the end user’s workflow perspective as well. These costs are recurring and directly affect the other indirect expenses discussed earlier. We covered the price of not asking a specific question above; but what price do you pay when you do ask and receive a response? If you are lucky, you will not face another business intelligence platform decision for several years, but the workflow for getting new answers will keep repeating itself.


Concluding Remarks

It becomes evident, then, that several sources of cost exist beyond the upfront expense of procuring and managing a solution. If you believe that delivering data-driven insights is a valuable function, you must try to appreciate the actual cost of your company’s BI solution.



How to Include BI in Your 2019 Budget

Avoiding the Hidden Costs of Business Intelligence