Archive for the ‘Advanced Computer Administration and Architecture’ Category

FileCloud Aurora – All About DRM Capabilities


In November 2020, FileCloud released update 20.2 – a complete overhaul of our Sync, Mobile, and browser UI and functionality. We at FileCloud have been working on this for a very, very long time, and we're incredibly proud to present to you: FileCloud Aurora.

Today, we’re going to be covering one of the most important security functions that Aurora introduces: DRM Capabilities.

For a comprehensive overview of all of FileCloud Aurora's new features, please visit our previous blog post, Introducing FileCloud Aurora!

Secure Document Viewer

If the new UI was the biggest change in terms of appearance, FileCloud Aurora’s new Digital Rights Management (DRM) capabilities are unquestionably the most significant change in terms of functionality. 

Your data security has always been FileCloud’s number one priority. We’ve got all the files you’re storing with us safe and sound, but what happens when you need to send out or distribute important documents, such as external contracts, reports, or training materials? Our new DRM solution ensures that nothing you send out gets used in a malicious or abusive manner, even after it’s left your system and entered others. 

Our secure document viewer helps you protect confidential files from unsolicited viewing with FileCloud's restricted viewing mode. Show only selected parts of the document and hide the rest of it — or choose to reveal sections only as the user scrolls, minimizing the risk of over-the-shoulder compromise.

For more details, read about the FileCloud DRM solution here.

Screenshot Protection

Utilize the Screenshot Protection feature to prevent recipients from taking screenshots of secure information and documents.

This is an option that can be selected when you create your DRM Document or Document Container, and it prevents recipients from taking screenshots of the document. Not only that, the recipient also won't be able to share their screen or screen-record to distribute the documents, nullifying any chance of your documents being shared without your permission or consent.

Document Container 

Easily and securely export multiple documents in an encrypted document container (AES 256 encryption), and share it via FileCloud or third-party email. 

DRM Protection

Support for Multiple File Formats

Protect your Microsoft Office (Word, PowerPoint, Excel), PDF, and image (JPEG, PNG) files, and include multiple types of files in a single encrypted document container! FileCloud's DRM solution doesn't discriminate, ensuring your most regularly used file, folder, and document formats can all be easily handled by our containers and viewer. 

Anytime Restriction of Access to Your Files

Remove the risk of accidentally transmitting confidential files and enforce your policy controls even after distribution. You can revoke file access or change view options (screenshot protection, secure view, and max access count) anytime, via the FileCloud portal.

Thanks for Reading!

We at FileCloud thank you for being a part of our journey to create the most revolutionary user interface and experience on the market. We'd love to know what you think about these changes. For full information about all of them, release notes can be found on our website here.

We hope that you’re as excited about these new changes as we are. Stay safe, and happy sharing, everyone!

VDI vs VPN vs RDS (Remote Desktop Services) | FileCloud

As the world inevitably shifts toward working from home, most organizations have begun actively exploring remote work options, and security has become one of their prime considerations. After all, ensuring the safety of your organizational data and processes is just as important as ensuring business continuity. Virtual digital workspaces that manage seamless workflows among employees spread across the globe must, of course, aim to consistently improve the user experience.

However, hackers also thrive during such crises as they know that many people may willingly or unknowingly compromise on safety aspects to meet their business needs. Any breach of data can prove to be a costly affair, especially when taking into account the loss of reputation, which takes a long time to overcome, if at all. It is important then, to understand and evaluate the remote work options, and choose wisely. The most popular options considered are Virtual Private Network (VPN), Virtual Desktop Infrastructure (VDI) and Remote Desktop Services (RDS).

What is a VPN?

In an online world, a VPN is one of the best ways to ensure the security of your data and applications while working remotely. This is not just about logging in and working securely every day; it also protects you from cyber attacks like identity theft when you are browsing the internet through it. It is simply an added layer of security through an application that secures your connection to the Internet in general if using a personal VPN, or to a designated server if using your organizational VPN.

When you connect to the Internet through a VPN, your traffic is taken through a virtual, private channel that others do not have access to. This virtual channel (usually a server hosting the application) then accesses the Internet on behalf of your computer, masking your identity and location, especially from hackers on the prowl. Many VPN solution providers ensure military-grade encryption and security via this tunnel. The level of encryption usually differs based on need, and individuals and organizations choose what works best for them.

VPNs came into being from this very need of enterprises to protect their data over public as well as private networks. Access to the VPN may be through authentication methods like passwords, certificates, etc. Simply put, it is a virtual point-to-point connection that lets the user access all the resources (for which they have the requisite permissions) of the server or network to which they are allowed to connect. One drawback is the potential loss of speed due to the encrypted, routed connection.

What is VDI?

VDI provides endpoint connections to users by creating virtual desktops hosted on a central server. Each user connecting to this server has access to all resources hosted on it, based on the access permissions set for them. Each VDI is configured for a single user, and it feels as if they are working on a local machine. The endpoint through which the user accesses the VDI can be a desktop, laptop, or even a tablet or smartphone, which means people can access what they need even while on the go.

Technically, this is a form of desktop virtualization aimed at providing each user their own Windows-based system. Each user's virtual desktop exists within a virtual machine (VM) on the central server. Each VM is allocated dedicated resources, which improves both the performance and the security of the connection. The VMs are host-based; hence, multiple instances can exist on the same server or on a virtual server, which is a cluster of multiple servers. Since everything is hosted on the server, there is little chance of the data or identity being stolen or misused. Also, VDI ensures a consistent user experience across various devices and results in a productivity boost.

What is RDS?

Microsoft shipped Windows Terminal Services with Windows Server 2008, and this later came to be known as Remote Desktop Services. What it means is that a user is allowed to connect to a server using a client device and can access the resources on that server. The client accessing the server through a network is a thin client, which needs nothing installed other than the client software. Everything resides on the server, and the user can use their assigned credentials to access, control, and work on the server as if working on a local machine. The user is shown the server's interface and has to log off the 'virtual machine' once the work is over. All users connected to the same server share all of its resources. RDS can usually be accessed through any device, though working through a PC or laptop provides the best experience. The connections are secure because the users are working on the server, and nothing is local except the client software.

The Pros and Cons of each

When considering these three choices of VPN, VDI, and RDS, many factors come into play. A few of these that need to be taken into account are:

  1. User Experience/Server Interface – In VDI, each user can work on their familiar Windows system interface, which increases the comfort factor. Some administrators even allow users to customize their desktop interface to some extent, giving that individual desktop feel most users are accustomed to. This is not the case in RDS, wherein each user of the server is given the same server interface, and resources are shared among them. There is very limited customization available, and mostly all users have the same experience; users have to make do with the server flavor of Windows rather than the desktop flavor they are used to. The VPN differs from either of these in that it only provides an established point-to-point connection through a tunnel, and processing happens on the client system, as opposed to the other two options.
  2. Cost – If cost happens to be the only consideration, then VPN is a good choice, because users can continue to use their existing devices with minimal add-ons or installations. An employee would be able to securely connect to their corporate network and work safely, without any eavesdropping on the data being shared back and forth. The next option is RDS, the cost of which depends on a few other factors. However, RDS does save time and money, with increased mobility, scalability, and ease of access, and no compromise on security. VDI is the costliest of the three solutions, as it needs an additional layer of software for implementation; examples of this software are VMware or Citrix, which help run the hosted virtual machines.
  3. Performance – When it comes to performance, VDI is the better solution, especially for sectors that rely on speed and processing power, like the graphics industry. Since VDI provides dedicated, compartmentalized resources for each user, it is faster and makes for better performance and user satisfaction. VPN connections, on the other hand, can slow down considerably, depending on the client hardware, the amount of encryption being done, and the quantum of data transferred. RDS performance falls in between these two options and can be considered satisfactory.
  4. Security – Since it came into being to ensure the security of corporate data when employees work outside the office, VPN provides the best security of these three remote work options. With VDI and RDS, the onus of ensuring security lies with the administrators of the system, in how they configure and implement it. But it is possible to implement stringent measures to ensure reasonably good levels of security.
  5. End-User Hardware – Where VDI and RDS are concerned, end-user hardware is of little consequence, except to establish the connection. In these cases, it is the server hardware that matters, as all processing and storage happen on it. But for VPN connections, end-user hardware configuration is important, as all processing happens on it after the secure connection is established. VDI offers clients for Windows, Mac, and at times even iPhone and Android. RDS offers clients for Windows and Mac; however, a better experience is delivered on Windows.
  6. Maintenance – VPN systems usually require the least maintenance once all the initial setup is done. VDI, however, can prove to be challenging, as it requires all patches and updates to be reflected across all VMs. RDS needs less maintenance than VDI, but more than VPN systems; at best, RDS will only have a few patches to implement and maintain.

The Summary

Looking at the above inputs, it is obvious that there is no single best solution for every business. Each enterprise will have to look at its existing setup, the number of employees, the business goals, the need for remote work, and the challenges therein, and then decide which factors should carry more weight. If the number of employees is small, perhaps VPN or RDS may be the better way to go. But if you need better performance for graphics-intensive work, then we highly recommend taking a look at the VDI option. VDI may also be the way to go if you have a large number of employees.

Are You Committing Any of These Super Common DevOps Mistakes?


A new venture is never easy. When you try something for the first time, you're bound to make mistakes. DevOps isn't an exception to the rule. Sure, you might have read up a lot on the subject, but nothing can prepare you for the real thing. Does that mean you give up trying to understand DevOps? Not at all! That's the first mistake you must overcome; if your knowledge of basic DevOps theory is weak, you will only hasten disaster, and before long your efforts will seem more disappointing than productive. So, keep at it and, in the meantime, check out this list of common mistakes that you can easily avoid:

 A Single Team to Handle the Whole DevOps Workload



Most organizations make this mistake – they rely on a single team to support all DevOps functions. Your overburdened development and operations crew already has to communicate and coordinate with the rest of the company; adding a dedicated team for this purpose only adds to the confusion.

The thing is, DevOps began with the idea of enhancing collaboration between the teams involved in software development. So, it is more than just development and operations; those teams must also handle security, management, quality control, and so on. Thus, the simpler and more straightforward you keep things within your company, the better.

Instead of adding a dedicated team for all DevOps functions, work on your company culture. Focus more on automation, stability, and quality. For example, start a dialogue with your company regarding architecture or the common issues plaguing production environments. This will inform the teams about how their work affects one another.  Developers must realize what goes on once they push code, and how operations often have a hard time maintaining an optimum environment. The operations team, on the other hand, should try to avoid becoming a blocker through the automation of repeatable tasks.


 Greater Attention to Culture Than Value

Though it’s a bit contrary to the last point, DevOps isn’t all about organizational culture. Sure, it requires involvement from the company leadership as well as a buy-in from every employee, but they don’t understand the benefits until they have an individual “aha” experience and discover the value. And that happens only when they have a point of comparison. Numbers help with this.



Start paying more attention to measurable aspects. When reading the DevOps report, check the four key metrics – lead time for changes, deployment frequency, change failure rate, and mean time to recover. Deploying minor changes more frequently helps minimize the risk of each release. Shorten the time needed to deliver value to consumers once the code is pushed. If you experience failures, decrease the recovery time and also reduce the rate of failure. The truth is, culture isn't something that can be measured, and in the end your customers will not have much interest in the inner workings of your company. They will, however, show an interest in visible and tangible things.
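As an illustration, those four metrics can be derived from nothing more than a log of deployment records. The sketch below is a toy calculation over made-up data; the field names and records are illustrative, not taken from any particular CI/CD tool:

```python
from datetime import datetime, timedelta

# Illustrative deployment records: commit time, deploy time, whether the deploy
# caused a failure, and (for failures) when service was restored.
deploys = [
    {"commit": datetime(2024, 1, 1, 9), "deploy": datetime(2024, 1, 1, 12), "failed": False},
    {"commit": datetime(2024, 1, 2, 9), "deploy": datetime(2024, 1, 2, 15),
     "failed": True, "restored": datetime(2024, 1, 2, 16)},
    {"commit": datetime(2024, 1, 3, 9), "deploy": datetime(2024, 1, 3, 11), "failed": False},
]

# Lead time for changes: average commit-to-deploy delay.
lead_time = sum(((d["deploy"] - d["commit"]) for d in deploys), timedelta()) / len(deploys)

# Deployment frequency: deploys per day over the observed window.
days = (deploys[-1]["deploy"] - deploys[0]["deploy"]).days + 1
frequency = len(deploys) / days

# Change failure rate: share of deploys that caused a failure.
failure_rate = sum(d["failed"] for d in deploys) / len(deploys)

# Mean time to recover: average deploy-to-restore delay for failed deploys.
failures = [d for d in deploys if d["failed"]]
mttr = sum(((d["restored"] - d["deploy"]) for d in failures), timedelta()) / len(failures)

print(lead_time, frequency, failure_rate, mttr)
```

Even a toy baseline like this gives teams the point of comparison the paragraph above calls for.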


 Select Architecture to Deter Changes



Software that cannot evolve or change easily presents some interesting challenges. If parts of your system cannot be deployed independently, changing the system becomes difficult. Architecture that isn't loosely coupled is hard to adapt. Users face this problem while deploying large systems: they don't spend much time considering the deployment of independent parts, so they have to deploy all the parts together, and you risk breaking the system if only a single part is deployed.

However, know that DevOps is more than simple automation; it tries to decrease the time you spend deploying apps. Even when automated, if deployment takes a long time, customers will fail to see the value in automation.

This mistake can be avoided by investing a bit of time in the architecture. Simply understand how the parts can be deployed independently. However, do not undertake the effort of defining every little detail, either. Rather, postpone a few of the decisions until a later, more opportune moment, when you know more. Allow the architecture to evolve.


Lack of Experimentation in Production


In the field of software, companies used to try to get everything right before releasing to production. Nowadays, though, thanks to automation and culture change, it's easier to get things into production. Thanks to unprecedented speed and consistency, new changes can easily be released numerous times a day. But people make the mistake of not harnessing the true power of DevOps tooling for experiments in production.

Reaching the production stage is always laudable, but that doesn't mean the company should stop experimenting and testing in production. Tools such as production monitoring, release automation, and feature flags let you carry out some useful functions: split tests can be run to verify which layout works best for a feature, or you can conduct a gradual rollout to gauge people's reactions to something new.
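A gradual rollout, for instance, is commonly implemented by hashing a stable user identifier into a bucket. Here is a minimal sketch of that idea; the flag name, user IDs, and percentage are purely illustrative:

```python
import hashlib

def in_rollout(user_id: str, flag: str, percent: int) -> bool:
    """Deterministically assign a user to a rollout bucket (0-99).

    The same user always lands in the same bucket for a given flag,
    so raising `percent` only adds users, never removes them.
    """
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < percent

# Roll a hypothetical "new-layout" feature out to 20% of users.
enabled = [u for u in ("alice", "bob", "carol", "dave")
           if in_rollout(u, "new-layout", 20)]
```

Because assignment is deterministic, a user who sees the new layout keeps seeing it as the percentage climbs toward 100.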

The best part is, you're capable of doing all of this without obstructing the pipeline for changes that are still on their way. Harnessing the full power of DevOps means letting actual production data affect the development process in a closed feedback loop.


Too Much Focus on Tooling

While some tools help with DevOps practice, using them doesn’t mean you’re doing DevOps. New tools are coming to the forefront all the time, which means you now have different tools for deployment, version control, continuous integration, orchestrators, and configuration management. A lot of vendors will say they have the perfect tool for DevOps implementation. However, no single tool can possibly cover all your requirements.


So, adopt an agnostic outlook towards tools. Know that there will always be a better method of doing things, and fresh tools get adopted once enough time has passed. Use tools to free up more and more time for the things that provide customers with real value. Develop a mindset of delivering value to end users at every moment, and think of your job as done only when your customers' expectations are met post-delivery.


Even the smallest DevOps issue can affect other functions of your company if you do not take the effort to correct the problems. Focus on the right aspects of DevOps and keep on perfecting the techniques for a smoother, faster deployment.

The Most Important Tech Trends to Track Throughout 2018

2017 was a roller coaster of a year; it’s breathtaking how the time to market for technologies to create observable impact is shrinking year after year. In the year that went by, several new technologies became mainstream, and several concepts emerged out of tech labs in the form of features within existing technologies.

In particular, the industrial IoT and AI-based personal assistance spaces expanded manifold in 2017, data as a service continued its rollicking growth, and connected living via smart devices appeared to be a strategic focus for most technology power players. Efforts to curb misinformation on the web also gained prominence.

Artificial intelligence, the blockchain, industrial IoT, mixed reality (AR and VR), cybersecurity– there’s no dearth of buzzwords, really. The bigger question here is – which of these technologies will continue to grow their scope and market base in 2018, and which new entrants will emerge?

Let’s try to find out the answers.


The Changing Dynamics of Tech Companies and Government Regulators

The indications are too prominent to ignore now: there is increasing pushback from governments, along with attempts to influence the scope of technological innovation. The power and control technology holds over human life is well acknowledged, and naturally, governments feel the need to stay in the mind space of tech giants as they innovate further. With concerns that smart home devices may be 'tapping' your conversations all the while, the end-user community has reason enough to be anxious.

GDPR will come into force in mid-2018, and the first six months after that will be pretty interesting to watch. The extent and intensity of penalties, the emergence of GDPR compliance services, the distinct possibility of similar regulations emerging in other geographies – all these will be important aspects for everyone to track. Also, the net neutrality debate will continue, and some of the impacts will be visible on the ground. Will it be for the better or for the worse of the World Wide Web? By the end of 2018, we might be in a good position to tell.

The ‘People’ Focused Tech Business

The debate around the downsides of technology in terms of altering core human behaviour is getting louder. Call it the aftermath of Netflix's original series Black Mirror, which explores the fabric of a future world where the best of technology and the worst of human behaviour fuse together. Expect the 'people' side of technology businesses to evolve more quickly throughout this year.

Community-based tech businesses, for instance, will get a lot of attention from tech investors. Take, for example, businesses such as co-working spaces focused on specific communities: women entrepreneurs, innovators dedicated to research in a specific technology, or people with special requirements and who are differently abled.

Also, AI algorithms that make humans more powerful instead of removing them from the equation will come to the fore. Take, for instance, Stitch Fix, an AI-powered personal shopping service that enables stylists to make more customized and suitable suggestions to customers.

Blockchain and IoT Meet

For almost 5 years now, IoT has featured on every list of potentially game-changing technologies, and for good reason. There are, however, two concerns.

How quickly will business organizations be able to translate innovation in IoT into tangible business use cases?

How confident can businesses be about the massive data that will be generated via their connected devices, every day?

Both these concerns can be addressed to a great extent by something that’s being termed BIoT (that’s blockchain Internet of Things).

BIoT is ready to usher in the new era of connected devices. Companies, for instance, will be able to track package shipments and take concrete steps towards building smart cities, where connected traffic lights and energy grids will make human lives more organized. When retailers, regulators, transporters, and analysts have access to shared data from millions of sensors, the collective and credible insights will help them do their jobs better. Of course, the blockchain concept will ensure the data is extremely difficult to tamper with.
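That tamper-resistance comes from each block committing to a hash of its predecessor, so editing any past record invalidates everything after it. A toy hash chain (not a real distributed ledger: no consensus, no network) illustrates the idea with a hypothetical shipment log:

```python
import hashlib
import json

def make_block(prev_hash: str, payload: dict) -> dict:
    """Create a block whose hash commits to both its payload and its predecessor."""
    body = json.dumps({"prev": prev_hash, "payload": payload}, sort_keys=True)
    return {"prev": prev_hash, "payload": payload,
            "hash": hashlib.sha256(body.encode()).hexdigest()}

def verify(chain: list) -> bool:
    """Recompute every hash; any edited payload breaks the chain from that point on."""
    prev = "0" * 64
    for block in chain:
        body = json.dumps({"prev": prev, "payload": block["payload"]}, sort_keys=True)
        if block["prev"] != prev or block["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev = block["hash"]
    return True

# Sensor readings from a hypothetical cold-chain shipment, each extending the chain.
chain, prev = [], "0" * 64
for reading in [{"temp": 4.1}, {"temp": 4.3}, {"temp": 9.9}]:
    block = make_block(prev, reading)
    chain.append(block)
    prev = block["hash"]

assert verify(chain)
chain[1]["payload"]["temp"] = 4.0   # tampering with history is detected
assert not verify(chain)
```

Real BIoT systems add distribution and consensus on top, but the detection mechanism is the same linked hashing shown here.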



Bots Get Smarter

Yes, bots. We've almost become used to bots answering our customer service calls. Why, then, is this technology a potential game-changer for the times to come? Well, that's because of the tremendous potential for growth that bots have.

Bots are the outcome of two key technologies coming together – natural language processing (NLP) and machine learning (ML). Individually, there's a lot of growth happening in both these technologies, which means that bots are growing alongside them.

Because of the noteworthy traction of chatbots in 2017, businesses are very likely to put their money in chatbots over apps in 2018. From chatbots that give you tailor-made financial advice to those that tell you which wine would go well with your chosen pizza, the months to follow will bring a lot of exciting value adds from this space.

Quantum Computing: From Sci-Fi to Proof of Concept

Let's face it; quantum computing has always been a thing from science fiction movies, and not really anything tangible. The research activity in this space, however, hasn't slackened a bit. Innovators are, in fact, at a stage where quantum computing is no longer just a concept. The promise of outperforming traditional supercomputers might not be an empty promise after all. Tech giants are working hard to improve their qubit computing powers while keeping error probability at a minimum. 2018 has every reason to be the year when quantum computing emerges as a business-technology buzzword.


Concluding Remarks

The pace of disruption of a tech trend is moderated by government regulations, the price war among competing tech giants, and cybersecurity, among other factors. Eventually, we all have to agree that the only thing we can say for certain is that by the time this year draws to an end, we will all be living in ways different from today. It’s very likely that at the core of these changes will be one or more of the technology trends we discussed in this guide.



Author – Rahul Sharma

Top 5 Use Cases For Machine Learning in The Enterprise


Artificial intelligence can be loosely defined as the science of mimicking human behavior. Machine learning is the specific subset of AI that trains a machine to learn. The concept emerged from pattern recognition and the theory that computers can learn without being explicitly programmed to complete certain tasks. Cheaper, more powerful computational processing, growing volumes of data, and affordable storage have taken deep learning from research papers and labs to real-life applications. However, all the media hype surrounding AI has made it extremely difficult to separate exciting futuristic predictions from pragmatic real-world enterprise applications. To avoid being caught up in the hype of technical implementation, CIOs and other tech decision makers have to build a conceptual lens and look at the various areas of their company that can be improved by applying machine learning. This article explores some of the practical use cases of machine learning in the enterprise.

1. Process Automation

Intelligent process automation (IPA) combines artificial intelligence and automation, and involves diverse uses of machine learning, from automating manual data entry to more complex use cases like automating insurance risk assessments. ML is suited for any scenario where human judgment is applied within set constraints, boundaries, or patterns. Thanks to cognitive technology like natural language processing, machine vision, and deep learning, machines can augment traditional rule-based automation and, over time, learn to do these tasks better as they adapt to change. Most IPA solutions already utilize ML-powered capabilities beyond simple rule-based automation. The business benefits are much more extensive than cost savings and include better use of costly equipment or highly skilled employees, faster decisions and actions, service and product innovations, and overall better outcomes. By taking over rote tasks, machine learning in the enterprise frees up human workers to focus on product innovation and service improvement, allowing the company to transcend conventional performance trade-offs and achieve unparalleled levels of quality and efficiency.
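The "human judgment within set constraints" pattern can be made concrete with a toy triage rule: clear-cut cases are decided automatically, and everything outside the constraints is routed to a person. The thresholds and field names below are invented for illustration and are far simpler than any real IPA product:

```python
def triage_claim(claim: dict) -> str:
    """Route an insurance claim: automate the easy calls, escalate the rest."""
    if claim["amount"] <= 500 and claim["customer_years"] >= 2:
        return "auto-approve"          # small claim, long-standing customer
    if claim["flags"] > 0:
        return "auto-reject"           # failed a fraud-screening rule
    return "human-review"              # outside the constraints, a person decides

claims = [
    {"amount": 120, "customer_years": 5, "flags": 0},
    {"amount": 9000, "customer_years": 1, "flags": 2},
    {"amount": 2000, "customer_years": 3, "flags": 0},
]
decisions = [triage_claim(c) for c in claims]
# decisions == ["auto-approve", "auto-reject", "human-review"]
```

An ML layer would then learn to widen the auto-decided band over time, shrinking the "human-review" bucket as it adapts.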

2. Sales Optimization

Sales typically generates a lot of unstructured data that can ideally be used to train machine learning algorithms. This comes as good news to enterprises that have been saving consumer data for years, because sales is also the place with the most potential for immediate financial impact from implementing machine learning. Enterprises eager to gain a competitive edge are applying ML to both marketing and sales challenges in order to accomplish strategic goals. Some popular marketing techniques that rely on machine learning models include intelligent content and ad placement, and predictive lead scoring. By adopting machine learning in the enterprise, companies can rapidly evolve and personalize content to meet the ever-changing needs of prospective customers. ML models are also being used for customer sentiment analysis, sales forecasting analysis, and customer churn predictions. With these solutions, sales managers are alerted in advance to specific deals or customers that are at risk.
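Predictive lead scoring, for example, often reduces to a logistic model over engagement features. The hand-rolled sketch below uses made-up feature names and weights; a real system would learn the weights from historical wins and losses rather than hard-code them:

```python
import math

# Illustrative weights a real model would learn from past deals.
WEIGHTS = {"email_opens": 0.4, "demo_requested": 2.0, "days_inactive": -0.1}
BIAS = -1.5

def lead_score(lead: dict) -> float:
    """Logistic score in (0, 1): the modeled probability the lead converts."""
    z = BIAS + sum(WEIGHTS[k] * lead[k] for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))

hot = {"email_opens": 6, "demo_requested": 1, "days_inactive": 2}
cold = {"email_opens": 1, "demo_requested": 0, "days_inactive": 30}
assert lead_score(hot) > lead_score(cold)
```

The same shape of model, pointed at usage data instead of engagement data, gives the churn predictions mentioned above.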

3. Customer Service

Chatbots and virtual digital assistants are taking over the world of customer service. Due to the high volume of customer interactions, the massive amounts of data captured and analyzed are the ideal teaching material for fine-tuning ML algorithms. Artificial intelligence agents are now capable of recognizing a customer query and suggesting the appropriate article for a swift resolution, freeing up human agents to focus on more complex issues and subsequently improving the efficiency and speed of decisions. Adopting machine learning in the enterprise can have an immediate impact when it comes to customer service-related routine tasks. Juniper Research maintains that chatbots will create $8 billion in annual cost savings by 2022. According to a 2017 PwC report, 31 percent of enterprise decision makers believe that virtual personal assistants will significantly impact their business, more than any other AI-powered solution. The same report found that 34 percent of executives say the time saved by using virtual assistants allowed them to channel their focus towards deep thinking and creativity.

4. Security

Machine learning can help enterprises improve their threat analysis and how they respond to attacks and security incidents. ABI Research analysts estimate that machine learning in data security will drive spending on analytics, big data, and artificial intelligence to $96 billion by 2021. Predictive analytics enables the early detection of infections and threats, while behavioral analytics ensures that anomalies within the system do not go unnoticed. ML also makes it easy to monitor millions of data logs from mobile and other IoT-capable devices and to generate profiles for varying behavioral patterns within your IoT ecosystem. This way, previously stretched security teams can easily detect even the slightest irregularities. Organizations that embrace a risk-aware mindset are better positioned to capture a leading position in their industry, navigate regulatory requirements, and disrupt their industries through innovation.
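Behavioral anomaly detection can start with something as simple as a standard-deviation test over per-device event counts. A minimal sketch on invented data (real systems use richer models, but the idea of flagging departures from a learned baseline is the same):

```python
from statistics import mean, stdev

def anomalies(counts, threshold=2.0):
    """Return indices of counts more than `threshold` standard deviations from the mean."""
    mu, sigma = mean(counts), stdev(counts)
    return [i for i, c in enumerate(counts) if abs(c - mu) > threshold * sigma]

# Hourly login counts for one device; hour 6 spikes far above the baseline.
logins = [12, 9, 11, 10, 13, 8, 97, 11]
print(anomalies(logins))  # [6]
```

In production, the baseline would be learned per device and per behavior profile, which is exactly the profiling the paragraph above describes.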

5. Collaboration

The key to getting the most out of machine learning in the enterprise lies in tapping into the capabilities of both machine learning and human intelligence. ML-enhanced collaboration tools have the potential to boost efficiency, quicken the discovery of new ideas, and lead to improved outcomes for teams that collaborate from disparate locations. Nemertes' 2018 UC and collaboration study concluded that about 41 percent of enterprises plan to use AI in their unified communications and collaboration applications. Some use cases in the collaboration space include:
• Video, audio, and image intelligence can add context to content being shared, making it simpler for customers to find the files they require. Image intelligence coupled with object detection and text and handwriting recognition helps improve metadata indexing for enhanced search.
• Real time language translation, facilitates communication and collaboration between global workgroups in their native languages.
• Integrating chatbots into team applications enables native language capabilities, like alerting team members or polling them for status updates.
That is just the tip of the iceberg; machine learning offers significant potential benefits for companies adopting it as part of their communications strategy to enhance data access, collaboration, and control of communication endpoints.


How to Deploy A Software Defined Network

Software-defined networking (SDN) was a bit of a buzzword through the early and middle years of this decade. The potential for optimal network utilization promised by SDN quickly captured the interest and imagination of information technology companies. However, progress was slow because the general understanding of software-defined networking wasn't up to the mark, which caused enterprises to make wrong choices and unsustainable strategic decisions upfront.


Where Does SDN Come Into the Picture?

SDN is still a nascent concept for several companies. The network virtualization potential offered by SDN calls on IT leaders to improve their understanding of this software-heavy approach to network resource management. We hope this guide helps.

What Is Software-Defined Networking, After All?

You already know and appreciate how software-managed virtual servers and storage make computing resource management more agile and dynamic for enterprises. Imagine the benefits enterprises could enjoy if the same capabilities were extended to the company's network hardware. That's what software-defined networking offers.

SDN adds a software control layer on top of the hardware layer in your company's network infrastructure. This allows network administrators to route network traffic according to sophisticated business rules. These rules can then be pushed out to the network routers, so administrators don't have to depend solely on hardware configuration to manage network traffic.
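As a rough illustration of rule-driven traffic management, the sketch below matches flow metadata against ordered business rules, much like entries in a flow table. The rule set, field names, and actions are hypothetical and do not reflect any particular SDN controller's API:

```python
# Ordered (predicate, action) rules: first match wins, like a flow table.
RULES = [
    (lambda f: f["app"] == "voip",        "queue:priority"),
    (lambda f: f["dst_port"] == 443,      "forward:wan-1"),
    (lambda f: f["bytes"] > 10_000_000,   "rate-limit:10mbps"),
]

def route(flow, default="forward:wan-2"):
    """Return the forwarding action for a flow, falling back to a default."""
    for predicate, action in RULES:
        if predicate(flow):
            return action
    return default

print(route({"app": "voip", "dst_port": 5060, "bytes": 400}))   # queue:priority
print(route({"app": "crm", "dst_port": 8080, "bytes": 1200}))   # forward:wan-2
```

The point of the software layer is exactly this: the rules live in one place and can be changed centrally, instead of being re-entered into each router's hardware configuration.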

This sounds easy in principle. Ask any network administrator, though, and they will tell you that it's really difficult to implement, particularly in companies with mature, stabilized networking infrastructure and processes.




SDN Implementations Demand Upgrades in Network Management Practices

An almost immediate outcome of SDN implementation will be your enterprise's ability to quickly serve network resource demands through software. To maintain transparency, the networking team needs to evaluate the corresponding changes required in, say, day-end network allocation and utilization reports. This is just one of many situations where every SDN-linked process improvement will need to be matched by equivalent adjustments in related and linked processes.



Managing De-provisioning Along the Way

At the core of SDN implementations is the enterprise focus on optimizing network usage and managing on-demand network resource requests with agility. While SDN implementations help companies achieve these goals fairly quickly, they often also cause unintended network capacity issues. Among the most common reasons is that SDN engineers forget to implement rules for de-provisioning network resources once the sudden surge in demand has been met. By building de-provisioning in as the last logical step of every on-demand resource allocation request, networking teams can make sure that SDN doesn't become an unintentional cause of network congestion.
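One way to guarantee that de-provisioning really is the last step of every allocation is to tie it to the allocation's lifetime. This hypothetical Python sketch uses a context manager so the capacity is released even if the workload fails partway through; the names and the bandwidth bookkeeping are illustrative only:

```python
from contextlib import contextmanager

@contextmanager
def network_slice(name, bandwidth_mbps, allocations):
    """Provision on-demand capacity and guarantee de-provisioning
    as the last logical step, even if the workload raises an error."""
    allocations[name] = bandwidth_mbps      # provision
    try:
        yield name
    finally:
        del allocations[name]               # de-provision, always

allocations = {}
with network_slice("batch-transfer", 500, allocations):
    print(allocations)   # {'batch-transfer': 500} while the demand lasts
print(allocations)       # {} (released automatically)
```

The same pattern, expressed in whatever orchestration language the SDN platform uses, prevents forgotten allocations from accumulating into congestion.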


Pursue 360-Degree Network Performance Visibility

It’s unlikely that your company will go for a complete overhaul of its network management systems and processes. So, it’s very likely that the SDN implementation will be carried out in a phased manner. Some of the key aspects of managing this well are:

  • Always evaluate how easily SDN can plug into your existing network performance monitoring tools.
  • Look for tools whose APIs allow convenient integration with SDN platforms.
  • Evaluate how well your current network performance management tools can manage and integrate data from SDN and non-SDN sources.

Note – because hybrid SDN (a mix of traditional and software-defined networking) is the practical approach for most enterprises, implementations must accommodate the baseline performance monitoring goals of the enterprise. In fact, the introduction of SDN often requires networking teams to improve performance monitoring and reporting practices so that concrete, business process-specific improvements can be measured and reported.



Is SDN an Enterprise Priority Already?

The basic reason SDN is making its way into IT strategy discussions, even for SMBs, is that the nature of business traffic has changed tremendously. Systems have moved to the cloud-computing model, and there's a lot of focus on mobile accessibility of these systems.

In times when systems operated mostly in the client-server configuration, the basic tree structure of Ethernet switches worked well. Enterprise network requirements today, however, demand more. SDN is particularly beneficial in enabling access to public and private cloud-based services.

SDN also augurs well for another very strong enterprise movement: the one towards mobility. With SDN, network administrators can easily provision resources for new mobile endpoints while taking care of security considerations. Also, enterprise data volumes and information needs will only grow. Managing network optimization with many virtual machines and servers in play would traditionally require tremendous investments. SDN makes it more manageable, even from a financial perspective.


Understand and Acknowledge Security Aspects of SDN

Make no assumptions. SDN is a major change in the way your company’s network works. There are specific known risks of SDN implementations that consultants and vendors from this sphere will help you prepare for.

Protocol weaknesses are right at the top. A crucial question for the application security and network security teams to work together on is – do our application security routines accommodate the needs of protocols used in the SDN platform? Another key security-related aspect is to devise measures to prevent SDN switch impersonation.


Choosing External Vendors

The success of an SDN implementation is measured in terms of the positive impact it has on business use cases. If and when you initiate discussions with external consultancies and vendors for your enterprise SDN implementation, make sure you evaluate them not only on the basis of their SDN knowledge but also on their ability to understand your business application ecosystem. This helps them implement SDN platforms that accommodate complex and highly sophisticated business rules for network resource allocation, which in turn significantly improves the project's probability of meeting all its goals.


Concluding Remarks

If SDN is on the strategic roadmap being followed by your enterprise, there’s a lot you can help with. Start with the tips and suggestions shared in this guide.



Author: Rahul Sharma

Personal Data Breach Response Under GDPR


Data security is at the heart of the upcoming General Data Protection Regulation (GDPR). It sets strict obligations on data controllers and processors in matters pertaining to data security while concurrently providing guidance on best data security practices. And for the first time, the GDPR will introduce specific breach notification guidelines. With only a few months to go until the new regulations come into effect, businesses should begin focusing on data security, not just because of the costs and reputational damage a personal data breach can lead to, but also because under the GDPR a new breach notification regime will require the reporting of certain data breaches to affected individuals and data protection authorities.

What Constitutes a Personal Data Breach Under GDPR?

The GDPR defines a personal data breach as a security breach that leads to the unlawful or accidental loss, destruction, alteration, or unauthorized disclosure of personal data stored, processed, or transmitted. A personal data breach is by all means a security incident; however, not all security incidents are subject to the same strict reporting regulations as a personal data breach. This distinction is not unusual in data security laws that require breach reporting; HIPAA, for example, makes the same distinction at the federal level for medical data. It aims to prevent data protection regulators from being overwhelmed with breach reports.

By limiting breach notifications to personal data (EU speak for personally identifiable information, or PII), incidents that solely involve the loss of company data or intellectual property will not have to be reported. The threshold for whether an incident has to be reported to a data protection authority depends on the risk it poses to the individuals involved. High-risk situations are those that can potentially lead to significant detriment, for example financial loss, discrimination, damage to reputation, or any other significant social or economic disadvantage.

…it should be quickly established whether a personal data breach has occurred and to promptly notify the supervisory authority and the data subject.

– Recital 87, GDPR

If an organization is uncertain about who has been affected, the data protection authority can advise and, in certain situations, instruct it to immediately contact the affected individuals if the security breach is deemed high risk.

What Does The GDPR Require You to Do?

Under the GDPR, the roles and responsibilities of processors and data controllers have been separated. Controllers are obliged to engage only processors capable of providing sufficient assurances that they implement appropriate organizational and technical measures to protect the rights of data subjects. In the event of a data breach that affects the rights and freedoms of said data subjects, the organization should report it without undue delay and, where feasible, within 72 hours of becoming aware of it.

The data processor is mandated to notify the controller the moment a breach is discovered, but it has no other reporting or notification obligation under the GDPR. Note, however, that the 72-hour deadline begins the moment the processor becomes aware of the data breach, not when the controller is notified of the breach. A breach notification to a data protection authority has to at least:

  1. Have a description of the nature of the breach, which includes the categories and number of data subjects affected.
  2. Contain the data protection officer’s (DPO) contact information.
  3. Have a description of the possible ramifications of the breach.
  4. Have a description of steps the controller will take to mitigate the effect of the breach.

The information can be provided in phases if it is not available all at once.
If the controller determines that the personal data breach can potentially put the rights and freedoms of individuals at risk, it has to communicate the breach to the data subjects without undue delay. The communication should plainly and clearly describe the nature of the personal data breach and at least:

  1. Contain the DPO’s contact details or a relevant contact point.
  2. Have a description of the possible ramifications of the breach.
  3. Have a description of measures proposed or taken to mitigate or address the effects of the breach.

The only exception is when the personal data has been encrypted and the decryption key has not been compromised; in that case, there is no need for the controller to notify the data subjects.

The ideal way for companies to handle this GDPR obligation is not only to minimize breaches, but also to establish policies that facilitate risk assessment and demonstrate compliance.

The GDPR stipulates that records be kept of every personal data breach, regardless of whether the breach needs to be reported or not. Said records have to contain the details of the breach, its consequences and effects, and the follow-up actions taken to remedy the situation.
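The record-keeping and 72-hour reporting obligations lend themselves to an internal breach register. The sketch below is a hypothetical Python structure; the field names and sample entry are illustrative and not prescribed by the regulation:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class BreachRecord:
    """One entry in an internal personal-data-breach register."""
    nature: str              # what happened, categories/number of subjects
    dpo_contact: str         # data protection officer contact details
    likely_consequences: str
    mitigation_steps: str
    detected_at: datetime

    def notification_deadline(self) -> datetime:
        # Report to the supervisory authority within 72 hours of awareness
        return self.detected_at + timedelta(hours=72)

record = BreachRecord(
    nature="Lost laptop holding unencrypted HR records (approx. 1,200 subjects)",
    dpo_contact="dpo@example.com",
    likely_consequences="Possible identity theft and financial loss",
    mitigation_steps="Remote wipe issued; credentials rotated",
    detected_at=datetime(2018, 3, 1, 9, 0),
)
print(record.notification_deadline())  # 2018-03-04 09:00:00
```

Keeping such records for every incident, reportable or not, is what lets an organization demonstrate compliance after the fact.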

Should Ransomware Attacks Be Reported?

Ransomware typically involves the 'hijacking' of corporate data via encryption; payment is then demanded in order to decrypt the ransomed data. Under the GDPR, a ransomware attack may be categorized as a security incident, but it does not necessarily cross the threshold of a personal data breach. A ransomware attack would only be considered a personal data breach if there is a backup but the outage directly impacts users' rights and freedoms, or if there is no backup at all. Ideally, a ransomware attack where the ransomed data can be quickly recovered does not have to be reported.

What Are the Consequences of Non-Compliance?

A failure to comply with the GDPR's breach reporting requirements will not only result in negative PR, constant scrutiny, and possibly loss of business; it will also attract an administrative fine of up to €10 million or up to two percent of the total global annual turnover of the preceding financial year. Additionally, failure to notify the supervising authority may be indicative of systematic security failures, which would constitute an additional breach of the GDPR and attract more fines. The GDPR does have a list of factors the supervising authority should consider when imposing fines, chief among them being the degree of cooperation by the data controller with the protection authority.

In Closing

Data breach notification laws have already been firmly established in the U.S. These laws are designed to push organizations to improve their efforts in the detection and deterrence of data breaches. The regulators' intention is not to punish but to establish a trustworthy business environment by equipping organizations to deal with security issues.

Author: Gabriel Lando

image courtesy of freepik

FileCloud High Availability Architecture

Enterprise Cloud Infrastructure is a Critical Service

The availability of enterprise-hosted cloud services has opened huge potential for companies to effectively manage files. Files can be stored, shared, and exchanged within the enterprise and with partners efficiently while keeping existing security and audit controls in place. The service provides the power and flexibility of a public cloud while maintaining control of the data.

The main challenge of enterprise-hosted cloud services is to guarantee high uptime (on the order of five nines) while maintaining high quality of service. The dependency on such services means that any disruption can have significant productivity impacts. Enterprise cloud services typically consist of multiple services working together, and any high availability architecture must ensure that all critical services have redundancies built into them to be effective. Moreover, detection and handling of failures must be reasonably quick and must not require any user interaction.

FileCloud Enterprise Cloud

FileCloud enables enterprises to seamlessly access their data using a variety of external agents. The agents can be browsers, mobile devices, or client applications, while the data that FileCloud makes accessible can be stored locally, on internal NAS devices, or in public cloud locations such as AWS S3 or OpenStack Swift.

Depending on the specific enterprise requirements, the FileCloud solution may include multiple software services, such as the FileCloud Helper service, the Solr service, a virus scanner service, and the Open Office service. Moreover, FileCloud may use enterprise identity services such as Active Directory, LDAP, or ADFS. A failure in any of these services can impact the end user experience.

High Availability Architecture

The FileCloud solution can be implemented using the classic three-tier high availability architecture. Tier 1 is a web tier made up of load balancers and access control services. Tier 2 consists of stateless application servers; in a FileCloud implementation, this layer consists of Apache nodes and helper services. Tier 3 is the database layer. Other dependencies such as Active Directory or data servers are not addressed here. The advantage of this architecture is the separation of stateless components from stateful components, allowing great flexibility in deploying the solution.

Tier 1 – Web Tier

Tier 1 is the front end of the deployment and acts as the entry point for all external clients. The components in Tier 1 are stateless and primarily forward requests to the web servers in Tier 2. Since the load balancers are stateless, the web tier can be scaled by adding or removing load balancer instances, and each web server node is capable of handling any request. This layer can also be configured to do SSL offloading, allowing lighter-weight communication between Tier 1 and Tier 2, and to provide simple affinity based on source and destination addresses. Traffic is forwarded to healthy application server nodes; this layer monitors the available application servers and automatically distributes traffic depending on the load.
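Address-based affinity of the kind described above can be sketched as a hash over the source/destination pair. The node names are hypothetical, and a real load balancer also weighs node health and current load:

```python
import hashlib

BACKENDS = ["app-node-1", "app-node-2", "app-node-3"]  # healthy Tier 2 nodes

def pick_backend(src_ip, dst_ip, backends=BACKENDS):
    """Map a source/destination address pair to the same backend every time."""
    key = f"{src_ip}->{dst_ip}".encode()
    digest = int(hashlib.sha256(key).hexdigest(), 16)
    return backends[digest % len(backends)]

first = pick_backend("10.0.0.5", "192.168.1.10")
second = pick_backend("10.0.0.5", "192.168.1.10")
print(first == second)  # True: repeated requests keep their affinity
```

Because the mapping is deterministic, every load balancer instance computes the same answer, so affinity survives even when the Tier 1 instances themselves are added or removed.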

Tier 2 – Application Servers

Tier 2 in a FileCloud deployment consists of the following services:

  • Apache servers
  • FileCloud helper
  • Antivirus service
  • Memcache service
  • Open Office service

The Apache servers in FileCloud do not store any state information and are therefore stateless; all state-specific data is stored in database tables. They do, however, cache data for faster performance (for example, converting and caching documents for display). They primarily execute application code to service requests. If an application server node fails, the request can be handled by a different application server node (provided the client retries the failing request). Capacity can be increased or reduced, automatically or manually, by adding or removing Apache server nodes.
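The retry behavior that statelessness enables can be sketched as follows; the node names and the `handle` stand-in are hypothetical, and a real client would retry through the load balancer rather than iterate over nodes itself:

```python
NODES = ["app-1", "app-2", "app-3"]  # hypothetical application server nodes

def handle(node, request):
    """Stand-in for dispatching a request; a down node raises an error."""
    if node in request.get("down", set()):
        raise ConnectionError(node)
    return f"{node} served {request['path']}"

def request_with_retry(request, nodes=NODES):
    """Because the app servers are stateless, a failed request can simply
    be retried against another node with no session to recover."""
    last_error = None
    for node in nodes:
        try:
            return handle(node, request)
        except ConnectionError as err:
            last_error = err    # try the next node
    raise RuntimeError("all nodes failed") from last_error

print(request_with_retry({"path": "/files", "down": {"app-1"}}))  # app-2 served /files
```

If the servers held session state, this retry would instead require session replication or failover logic; keeping state in the database tier is what makes the simple retry safe.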

The FileCloud Helper service provides additional capabilities such as indexed search and NTFS permission retrieval. FileCloud Helper is a stateless service, so instances can be added or removed as needed.

Similar to the FileCloud Helper service, the antivirus service is also stateless and provides antivirus capability to FileCloud. Any file uploaded to FileCloud is scanned using this service.

The Memcache service is an optional, stateless service that is required only if local storage encryption is used. It is started on the same node as the Apache service.

The Open Office service is an optional service required for creating document file previews in the browser. This service is stateless and is started on the same node as the Apache server.

Tier 3 – Database Nodes

Tier 3 consists of stateful services:

  • MongoDB servers
  • Solr Servers

The high availability approach for each of these servers varies with the complexity of the deployment. The failure of these services can have limited or system-wide impact. For example, a MongoDB server failure will result in a FileCloud solution-wide outage and is critical, while a Solr server failure will only impact a portion of the functionality, such as indexed search.

MongoDB Server High Availability

MongoDB servers store all application data in FileCloud and provide high availability using replica sets. The MongoDB replica set configuration provides redundancy and increases data availability by keeping multiple copies of data on different database servers. Replication also provides fault tolerance against the loss of a single database server, and MongoDB can be configured to increase read capacity. The minimum needed for MongoDB high availability is a three-node member set (it is also possible to use two nodes plus one arbiter). In case of a primary MongoDB node failure, one of the secondary nodes will take over and become primary.

The heartbeat time frame can be tuned depending on system latency. It is also possible to set up the MongoDB replica set to allow reads from secondaries to improve read capacity.
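For illustration, a three-member replica set of the kind described above is initiated with a configuration document like the following. The hostnames and set name are hypothetical; this is the structure that rs.initiate() receives in the mongo shell, shown here as a Python dictionary:

```python
# Configuration document for a three-member MongoDB replica set,
# as it would be passed to rs.initiate() from the mongo shell.
replset_config = {
    "_id": "filecloudrs",
    "members": [
        {"_id": 0, "host": "db1.example.com:27017", "priority": 2},  # preferred primary
        {"_id": 1, "host": "db2.example.com:27017", "priority": 1},
        {"_id": 2, "host": "db3.example.com:27017", "priority": 1},
    ],
    "settings": {
        "heartbeatTimeoutSecs": 10,  # heartbeat tuning, in seconds
    },
}

# Three data-bearing members are the minimum for automatic failover
# without resorting to an arbiter.
print(len(replset_config["members"]))  # 3
```

Giving the intended primary a higher priority steers elections toward it after a failover, while the two equal-priority secondaries keep full copies of the data.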

Putting It All Together

The three-tier structure for the FileCloud components is shown below; the actual configuration information is available from FileCloud support. This provides a robust FileCloud implementation with high availability and extensibility. As new services are added to extend functionality, they can be assigned to a tier depending on whether they are stateless or stateful. The stateless (Tier 2) nodes can be added or removed without disrupting service, while Tier 3 nodes store state and require specific implementation depending on the type of service.

Alternative to WatchDox – Why FileCloud is better for Business File Sharing?


FileCloud competes with WatchDox for business in the Enterprise File Sync and Share (EFSS) space. Before we get into the details, I believe an ideal EFSS system should work across all the popular desktop OSes (Windows, Mac, and Linux) and offer native mobile applications for iOS, Android, Blackberry, and Windows Phone. In addition, the system should offer all the basics expected of EFSS: unlimited file versioning, remote wipe, audit logs, a desktop sync client, a desktop map drive, and user management.

The feature comparisons are as follows:

Both products are compared on the following capabilities: On Premise, File Sharing, Access and Monitoring Controls, Secure Access, Document Preview, Document Edit, Outlook Integration, Role Based Administration, Data Loss Prevention, Endpoint Backup, Amazon S3/OpenStack Support, Public File Sharing, Customization and Branding, SAML Integration, NTFS Support, Active Directory/LDAP Support, API Support, Application Integration via API, Large File Support, Mobile Device Management, Encryption at Rest, Two-Factor Authentication, and File Locking.

The rows where the two products differ are:

Feature | FileCloud | WatchDox
Network Share Support | Supported | Buy Additional Product
Desktop Sync | Windows, Mac, Linux | Windows, Mac
Native Mobile Apps | iOS, Android, Windows Phone | iOS, Android
Pricing for 20 users/year | $999 | $3600

From the outside looking in, the offerings all look similar. However, the approaches differ completely in how they satisfy enterprises' primary need: easy access to their files without compromising privacy, security, and control. The fundamental areas of difference are as follows:

Feature benefits of FileCloud over WatchDox

Unified Device Management Console – FileCloud's unified device management console provides simplified management of the mobile devices enabled to access enterprise data, irrespective of whether the device is enterprise-owned or employee-owned, and regardless of mobile platform or device type. Manage and control thousands of iOS and Android devices in FileCloud's secure, browser-based dashboard. FileCloud's administrator console is intuitive and requires no training or dedicated staff. FileCloud's MDM works on any vendor's network, even if the managed devices are on the road, at a café, or used at home.

Amazon S3/OpenStack Support – Enterprises wanting to use Amazon S3 or OpenStack storage can easily set it up with FileCloud. This feature not only gives enterprises the flexibility to switch storage backends but also makes the switch very easy.

Embedded File Upload Website Form – FileCloud’s Embedded File Upload Website Form enables users to embed a small FileCloud interface onto any website, blog, social networking service, intranet, or any public URL that supports HTML embed code. Using the Embedded File Upload Website Form, you can easily allow file uploads to a specific folder within your account. This feature is similar to File Drop Box that allows your customers or associates to send any type of file without requiring them to log in or to create an account.

Multi-Tenancy Support – The multi-tenancy feature allows Managed Service Providers (MSPs) to serve multiple customers using a single instance of FileCloud. The key value proposition of FileCloud's multi-tenant architecture is that data separation among the different tenants is maintained while providing multi-tenancy. Moreover, every tenant has the flexibility of customized branding.

NTFS Shares Support – Many organizations use NTFS permissions to manage and control access to internal file shares. It is very hard to duplicate these access permissions in other systems and keep them in sync. FileCloud enables access to internal file shares via web and mobile while honoring the existing NTFS file permissions. This functionality is a great time saver for system administrators and provides a single point of management.


Based on our experience, enterprises that look for an EFSS solution want two main things: first, easy integration with their existing storage system without any disruption to access permissions or network home folders; second, the ability to easily expand integration into highly available storage systems such as OpenStack or Amazon S3.

WatchDox neither provides OpenStack/Amazon S3 storage integration support nor NTFS share support. On the other hand, FileCloud provides easy integration support into Amazon S3/OpenStack and honors NTFS permissions on local storage.

With FileCloud, enterprises get one simple solution with all features bundled. For the same 20-user package, the cost is $999/year, almost one quarter of WatchDox's price.

Here’s a comprehensive comparison that shows why FileCloud stands out as the best EFSS solution.

Try FileCloud For Free & Receive 5% Discount

Take a tour of FileCloud

A Primer on Windows Servers Disaster Recovery

In this primer, we're going to explore some of the best ways to restore your Windows Server with minimal impact. Though basic, the following technical pointers will help you achieve faster Windows Server disaster recovery.

1. RAM and Hard Disk Check

Blue screens are Windows' way of telling you about a hardware failure, such as faulty RAM. Before taking any immediate action, such as a software repair, it is important to run a thorough RAM and hard disk check. To analyze blue screens, you can use the BlueScreenView tool, which supports loading automatically from USB. If you are experiencing blue screens, define the restart behavior for Windows under Control Panel -> System and Security -> System -> Advanced System Settings. Go to Startup and Recovery -> Settings and disable the Automatically Restart option under System Failure. Choose Automatic memory dump or Small memory dump so that BlueScreenView can parse the generated memory.dmp file. Further hard disk errors can be checked in Event Viewer under Windows Logs -> System.

2. Boot Manager Failure

Boot manager failure leads to server loading failures. A Windows Server DVD or a repair technician can help here. Another solution is to access the boot manager through the command prompt and take the necessary steps to reactivate it. To overwrite the master boot record (at the beginning of the disk), use the command bootrec /fixmbr. To view OS installations not currently listed, use bootrec /scanos. To reinstate systems in the boot manager, use bootrec /rebuildbcd, which re-registers earlier installations with the boot manager. After this, run bootrec /fixboot to write a new boot sector. Beyond this, run bootsect /nt60 SYS followed by bootsect /nt60 ALL at the command line to repair the boot manager further.

3. Windows Startup Failure

Startup failures result from system files being displaced after a crash, which leads to the server booting up but Windows not launching. One option is to do a system restore and select an earlier restore point. Another option is to open an elevated command prompt, input sfc /scannow, and allow Windows to scan and restore accordingly.

4. Restoring Server Backup

A server backed up to an external drive can be restored completely if Windows Server Backup was installed through Server Manager. Server Manager offers the Windows Server Backup feature, launched from the Tools menu or by searching for wbadmin.msc in the Start menu. Block-based backups are generated as a result, although it is possible to select particular partitions from the Backup Schedule wizard as well. To start a full backup (restorable via the computer repair option on the installation DVD), use wbadmin start sysrecovery or wbadmin start systemstatebackup from the command line, or use wbadmin start backup -allCritical -backupTarget:<insert_disk_of_choice> -quiet. This backup can then be used to restore from in case of system failures: boot Windows Server from the DVD, then select the Repair Your Computer option followed by Troubleshoot -> System Image Recovery.

5. Hardware Restore

Windows Server 2008 and Windows Server 2012 have options to restore system backups to different hardware if you select the Bare Metal Recovery option. Here you need to use the Exclude disks option, which lets you deselect a disk that is not required during restore operations; for example, a disk holding data rather than OS files is suitable for exclusion. Select Install Drivers if you wish to back up drivers within your recovery data file so they can be reinstalled during a complete system restore from the initial point of backup. Advanced options are also available, such as automatic system restart after disk defect verification and server restore.

6. Active Directory Backup & Restore

The native backup program within the Windows Server OS is sufficiently useful for backing up Active Directory services and restoring them. It can not only create a backup of the directory but also save all associated data necessary for it to function. To run a backup, enable the System State and System Reserved options and then back up all the data. To restore your Active Directory, start the domain controller and press F8 until the boot menu appears (this may vary depending on the make and model of the computer in use). In the boot options, select Directory Services Restore Mode, log in with the Directory Services Restore Mode credentials, then complete the restore. To boot the domain controller into restore mode from the command line, input bcdedit /set safeboot dsrepair. When in Directory Services Restore Mode, run bcdedit /deletevalue safeboot to boot normally again. Input shutdown -t 0 -r to reboot.

7. Active Directory Cleanup

In DNS Manager, look in Properties for the Name Servers tab, then remove the server entry, being careful not to remove the host entry. Ensure the domain controller is not explicitly registered as such, then remove dependent AD services (e.g. VPN). If a global catalog exists on the server, configure a different one with the same details from the AD Sites and Services snap-in, then go to Sites -> Servers -> right-click NTDS Settings -> Properties and uncheck Global Catalog on the General tab. To demote the domain controller, use the PowerShell Uninstall-ADDSDomainController cmdlet; use -Force if you wish to remove it completely. Metadata can be cleaned up from ntdsutil -> metadata cleanup -> connections. After cleanup, delete the domain controller from its assigned site: go to the snap-in -> Domain Controller -> select Delete. Check the NTDS settings in AD to confirm it is no longer registered with a replication partner (remove it if required).

8. Active Directory Database Rescue

Boot into Directory Services Restore Mode, call ntdsutil, select activate instance ntds, then choose files. Input integrity, then quit to leave file maintenance. A semantic database analysis, launched with the semantic database analysis command, can give a detailed report if you keep verbose on. Enter go fixup to start the diagnostic tool and repair the database. Quit ntdsutil with the quit command and restart.

9. Backup for Win Exchange

Begin with Select Application under Select Recovery Type, navigate to the Exchange option, and choose View Details to see the available backups. The backup is current if the Do Not Perform a Roll-Forward checkbox appears at this stage. For a roll-forward recovery, the transaction logs created during backup are required, as Exchange uses them to write to the database and complete the recovery. Enabling the Recover to Original Location option lets you restore all databases to their original locations. Beyond the system restore, the backup is integrated with the database and can also be moved back manually.

Author: Rahul Sharma

Image courtesy: Salvatore Vuono