Archive for the ‘data governance’ Category

Technical Data Under ITAR


The International Traffic in Arms Regulations (ITAR) are controls established by the U.S. State Department to regulate the temporary import and export of defense articles. While most defense contractors understand how ITAR applies to physical objects, its application to data remains unclear to many. The first step to properly identifying technical data and how it is controlled for export purposes is a clear understanding of what technical data is and what it encompasses.

Technical data refers to the unique information required for the development, production and subsequent use of defense articles.

  • Development – includes all information created or gathered before production, which may include but is not limited to: layouts, pilot production schemes, testing and assembly prototypes, design research, integration design, configuration design, design concepts, design analysis, and other forms of design data.
  • Production – comprises all information generated or gathered during the production stages, which may include but is not limited to: engineering, manufacture, assembly, integration, testing, inspection and quality assurance.
  • Use – encompasses any information that relates to the installation, operation, maintenance, testing or repair of defense articles.

Technical data also refers to classified data that relates to defense services and defense articles.

Implications of Cloud Computing on Technical Data

The cloud facilitates access to information while expanding the delivery of services. On the other hand, ITAR aims to restrict the flow of information while limiting the provision of services and goods. The contrast between the two creates unique challenges as it relates to compliance for defense contractors who have operations in multiple countries and wish to adopt cloud computing. Some organizations have opted to avoid the cloud altogether and fall back to maintaining separate systems in order to meet ITAR requirements, which tends to be extremely inefficient and costly. In order to fully understand the possible implications of cloud computing on export controlled data, you must first understand what constitutes an export when it comes to technical data.

I. What is an Export?

In global trade, the term export is typically synonymous with large shipping crates being loaded onto ships or wheeled into a large transoceanic cargo plane. However, U.S. export control laws are not limited to the movement of hardware across borders; the regulations also extend to specific technical data. The type of control depends on the export control jurisdiction and classification. The Export Administration Regulations (EAR) define an export as the shipment or transmission of items out of the United States, or the release of software or technology to a foreign national within the U.S. The ITAR definition of export is analogous.

Technical data is regulated for reasons of foreign policy, non-proliferation and national security; the current law stipulates that technical data should be stored in the U.S. and that only authorized U.S. persons should have access to it. The existing definition of export was drafted at a time when cloud computing was not in the picture; therefore, the exact application of the term ‘export’ in this space remains unclear.

II. When Does an Export Occur?

When it comes to export control, transmitting data to a cloud platform for storage or manipulation is conceptually similar to carrying a hard copy of the data to another country or sending it via the mail. Transmitting data to the cloud for backup or processing mainly involves copying the data to a remote server. If the server is located outside the United States, then uploading export-controlled technical data to it is deemed an export, just as if the data had been printed on paper and carried outside the country. This creates an appreciable challenge since, with the cloud, the end user does not necessarily know where the data resides, and the locations of the cloud servers are subject to change.

It is important to note that export-controlled data doesn’t have to leave the U.S. to constitute an export. Under ITAR, technical data must not be disclosed to non-U.S. persons, regardless of where they are located, without authorization. Non-U.S. persons include any individual who is not a U.S. citizen or lawful permanent resident of the United States. When technology subject to ITAR is uploaded to a cloud server and a user from another country accesses it, an export has occurred, even if the provider has ensured that all servers are located within the U.S. and the data never left the United States.
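The two conditions above can be expressed as a minimal access-control sketch. This is an illustration only, not legal advice or FileCloud's implementation; the function names and the simplified "US person" flag are assumptions for the example.

```python
# Sketch: deciding whether granting access to ITAR technical data
# would constitute an export. An export occurs if the data sits on
# a server outside the U.S. OR is disclosed to a non-U.S. person.

def is_export(server_country: str, user_is_us_person: bool) -> bool:
    """Return True if granting access would constitute an export."""
    if server_country != "US":
        return True   # data stored outside the United States
    if not user_is_us_person:
        return True   # disclosure to a non-U.S. person, even domestically
    return False

def grant_access(server_country: str, user_is_us_person: bool,
                 has_export_authorization: bool) -> bool:
    """Allow access only if no export occurs, or the export is authorized."""
    return (not is_export(server_country, user_is_us_person)
            or has_export_authorization)
```

Note how `is_export("US", False)` still returns True: keeping the servers domestic is not enough if a non-U.S. person can log in.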

III. Who is the Exporter?

Users of cloud services interact with the cloud in many ways; in most cases, the operational specifics are intentionally abstracted by the service provider, and information about where the computations occur may not be available to the end user. However, in the United States, the cloud service provider is generally not considered the exporter of the data that its subscribers upload to its servers. Although the State Department hasn’t issued a formal directive on the matter, U.S. subscribers that upload technical data onto the hardware of a cloud service provider will generally be considered the exporters of that data in the event of foreign disclosures. Accordingly, if ITAR-controlled technical data is divulged to a non-U.S. IT administrator of the cloud service provider, it is the subscriber to the service, and not the service provider, that is deemed the exporter.

In Closing

The cloud has reshaped the landscape of government, business, and consumer information technology by delivering greater flexibility and better cost efficiency across a wide variety of services. But the nature of cloud computing also increases the chances of inadvertent export control violations. When it comes to ITAR-controlled technical data, users are vulnerable to unexpected and complex export requirements and, in the event of non-compliance, to drastic potential criminal and civil penalties, including weighty fines and possibly jail time. With that in mind, the next logical suggestion would be to forget cloud file sharing and sync altogether; however, that does not have to be the case. The Bureau of Industry and Security published a rule in the Federal Register that establishes a ‘carve out’ for the transmission of regulated data within a cloud service infrastructure, provided the data is encrypted. Encryption, coupled with a set of best practices, can enable you to adopt the cloud freely while remaining ITAR compliant.
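The key idea behind the carve-out is that the data must be encrypted end-to-end before it reaches the cloud, with the key held only by the data owner. The toy sketch below uses a one-time-pad XOR purely to illustrate that workflow; it is an assumption for the example, and a real deployment would use a FIPS 140-2 validated AES implementation, not this cipher.

```python
import os

def encrypt_before_upload(plaintext: bytes, key: bytes) -> bytes:
    """Toy one-time-pad XOR standing in for FIPS 140-2 validated AES.
    Illustrates the carve-out workflow only: data is encrypted
    BEFORE it reaches the cloud provider, and the provider never
    holds the key."""
    if len(key) != len(plaintext):
        raise ValueError("one-time pad key must match plaintext length")
    return bytes(p ^ k for p, k in zip(plaintext, key))

# XOR is its own inverse, so the same function decrypts.
decrypt_after_download = encrypt_before_upload

# Workflow: generate the key on-premises, upload only the ciphertext,
# and retain the key inside the U.S. under the data owner's control.
data = b"ITAR technical data"
key = os.urandom(len(data))
ciphertext = encrypt_before_upload(data, key)   # this is what the cloud sees
```

Because the cloud provider only ever stores ciphertext, moving or replicating that blob across regions does not disclose the technical data itself.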




Author: Gabriel Lando



Personal Data, PII and GDPR Compliance



The countdown to the European Union’s General Data Protection Regulation (GDPR), which goes into full effect in May 2018, is coming to a close. GDPR aims to solidify the data privacy rights of EU residents and the requirements on organizations that handle customer data. It introduces steep fines for data breaches and non-compliance while giving people a voice in matters that concern their data, and it will harmonize data protection rules throughout the EU. The current legislation, the EU Data Protection Directive, was enacted in 1995, before cloud technology developed innovative ways of exploiting data; GDPR aims to address that. By enacting strict regulations and stiffer penalties, the EU hopes to boost trust within a growing digital economy.

Despite the fact that GDPR came into force on 24 May 2016, organizations and enterprises have until the 25th of May 2018 to fully comply with the new regulation. A snap survey of 170 cybersecurity professionals by Imperva revealed that while a vast majority of IT security professionals are fully aware of GDPR, less than 50 percent of them are getting everything set for its arrival. It went on to conclude that only 43 percent are assessing the impact GDPR will have on their company and adjusting their practices to comply with data protection legislation. Even though most of the respondents were based in the United States, they are still likely to be affected by GDPR if they solicit and/or retain (even through a third party) EU residents’ personal data.

Remaining compliant with GDPR demands, among several other things, a good understanding of what constitutes ‘personal data’ and how it differs from ‘personally identifiable information’, or PII.

What is Personal Data In the GDPR Context?

The EU’s definition of personal data in GDPR is markedly broad, more so than in current or past data protection law. Personal data is defined as any information relating to an identified or identifiable individual, whether directly or indirectly. It now includes any information that relates to a specific person, whether the data is professional, public or private in nature. To mirror the various types of data organizations currently collect about users, online identifiers such as IP addresses are categorized as personal data. Other data such as transaction histories, lifestyle preferences, photographs and even social media posts are potentially classified as personal data under GDPR. Recital 26 states:

To determine whether a natural person is identifiable, account should be taken of all the means reasonably likely to be used, such as singling out, either by the controller or by another person to identify the natural person directly or indirectly. To ascertain whether means are reasonably likely to be used to identify the natural person, account should be taken of all objective factors, such as the costs of and the amount of time required for identification, taking into consideration the available technology at the time of the processing and technological developments.
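Recital 26’s “means reasonably likely to be used” test is why pseudonymization alone does not take data out of scope. The sketch below (an illustration, not a GDPR-mandated technique; the key and identifier are invented for the example) replaces a direct identifier with a keyed hash, but whoever holds the key can still re-identify the person, so the result remains personal data.

```python
import hmac
import hashlib

def pseudonymize(identifier: str, secret_key: bytes) -> str:
    """Replace a direct identifier (e.g. an email address) with a
    keyed HMAC-SHA256 token. Deterministic, so records can still be
    linked; NOT anonymization, because the key holder can map tokens
    back to individuals -- under GDPR this is still personal data."""
    return hmac.new(secret_key, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()
```

The design choice of a keyed hash (rather than plain SHA-256) matters: without the key, an attacker could re-identify people simply by hashing a list of known email addresses and comparing tokens.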

This definition of personal data applies directly across all member states of the European Economic Area (EEA).

Is Personally Identifiable Information (PII) the Same as Personal Data?

The term ‘personally identifiable information’ doesn’t appear anywhere in the GDPR; however, it does have a definite meaning in US privacy law, and the term itself is likely to cause confusion for anyone seeking to comply with GDPR. For a concept that has become ubiquitous in both technological and legal discourse, PII is surprisingly hard to define. In a nutshell, PII refers to any information that can be used to distinguish one individual from another, including any information that can be used to re-identify anonymous data. It can refer narrowly to data regularly used to authenticate or identify an individual, as opposed to information whose disclosure would violate an individual’s privacy, that is, reveal sensitive information about someone. The US interpretation of the term is undeniably incongruous with what is relevant for a proper GDPR assessment, since it pre-selects a set of identifying traits.

To put it bluntly, all PII can be considered personal data but not all personal data is Personally Identifiable Information. Developing a solid GDPR compliance program demands that IT architects and marketers move beyond the restricted scope of PII to examine the full spectrum of personal data as defined by the EU.

Handling Personal Data in Accordance With GDPR

The first step to GDPR compliance in matters pertaining to personal data is undoubtedly a risk assessment of how existing data is being stored and accessed, the level of risk attached to it, and whether it contains any PII. The data might be stored on server file systems, in databases or even on an end user’s physical storage or cache. Becoming GDPR compliant means not only protecting more data types in the future but also expending more effort identifying existing data that initially wasn’t considered personal data. It is important to note that you cannot limit your scope to the data you hold as if it were a closed system. Nowadays, people typically interact with interconnected systems, and GDPR mirrors that. In such scenarios, organizations should focus outward and consider who in their ecosystem can link one attribute to another, given the multiple paths to re-identification within their ecosystem.

Additionally, GDPR requires that documented ‘opt-in’ consent be provided by each individual. The consent has to explicitly pinpoint the data collected, how it is going to be used and how long it will be retained. Organizations also have to provide participants with an option to withdraw their consent at any given time and to request that their personal data be permanently deleted. Participants should be able to have factual errors amended, and even request their personal data for review and use.
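A consent record that satisfies those requirements has a fairly natural shape. The sketch below is one possible data model, not a prescribed GDPR schema; the field names are assumptions for the example.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import List, Optional

@dataclass
class ConsentRecord:
    subject_id: str                      # who gave consent
    purpose: str                         # how the data will be used
    data_collected: List[str]            # which data items are covered
    granted_at: datetime                 # when consent was given
    retention_days: int                  # how long the data is retained
    withdrawn_at: Optional[datetime] = None

    def withdraw(self, when: datetime) -> None:
        """Consent can be withdrawn at any time."""
        self.withdrawn_at = when

    def is_valid(self, now: datetime) -> bool:
        """Consent is valid only if not withdrawn and still
        within the stated retention period."""
        if self.withdrawn_at is not None:
            return False
        return now <= self.granted_at + timedelta(days=self.retention_days)
```

Storing the grant timestamp and scope explicitly is what makes the consent verifiable later, which GDPR requires.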

FileCloud Can Help You Comply With GDPR

The General Data Protection Regulation sets a new standard in the protection of personal data. Its efforts aim to grant data subjects more control over their data while ensuring the transparency of operations. FileCloud provides a set of simple features that can help organizations meet GDPR requirements.

Click here for more information.

Author: Gabriel Lando


FileCloud Empowers Government Agencies with Customizable EFSS on AWS GovCloud (U.S.) Region

FileCloud, a cloud-agnostic Enterprise File Sharing and Sync platform, today announced availability on AWS GovCloud (U.S.) Region. FileCloud is one of the first full-featured enterprise file sharing and sync solutions available on AWS GovCloud (U.S.), offering advanced file sharing, synchronization across OSs and endpoint backup. With this new offering, customers will experience the control, flexibility and privacy of FileCloud, as well as the scalability, security and reliability of Amazon Web Services (AWS). This solution allows federal, state and city agencies to run their own customized file sharing, sync and backup solutions on AWS GovCloud (U.S.).

“Having FileCloud available on AWS GovCloud (U.S.) provides the control, flexibility, data separation and customization of FileCloud at the same time as the scalability and resiliency of AWS,” said Madhan Kanagavel, CEO of FileCloud. “With these solutions, government agencies can create their own enterprise file service platform that offers total control.”

Government agencies and defense contractors are required to adhere to strict government regulations, including the International Traffic in Arms Regulations (ITAR) and the Federal Risk and Authorization Management Program (FedRAMP). AWS GovCloud (U.S.) is designed specifically to help government agencies meet these requirements.

By using FileCloud and AWS GovCloud (U.S.), agencies can create their own branded file sharing, sync and backup solution, customized with their logo and running under their URL. FileCloud on AWS GovCloud offers the required compliance and reliability and delivers options that allow customers to pick tailored cloud solutions. FileCloud is a cloud-agnostic solution that works on-premises or on the cloud.

“FileCloud allows us to set up a secure file service, on servers that meet our clients’ security requirements,” said Ryan Stevenson, Designer at defense contractor McCormmick Stevenson. “The easy-to-use interfaces and extensive support resources allowed us to customize who can access what files, inside or outside our organization.”

Try FileCloud for free!

FileCloud Unveils ‘Breach Intercept’ to Safeguard Organizations Against Ransomware

FileCloud, the cloud-agnostic EFSS platform, today announced FileCloud Breach Intercept. The newest version of FileCloud offers advanced ransomware protection to help customers handle every phase of a cyberattack: prevention, detection and recovery.

FileCloud is deployed across 90 countries and has more than 100 VARs and Managed Service Providers across the world. Deployed by Fortune 500 and Global 2000 firms, including the world’s leading law firms, government organizations, science and research organizations and world-class universities, FileCloud offers a set of unique features that help organizations build effective anti-ransomware strategies.

Global ransomware damage costs are expected to total more than $5 billion in 2017, compared to $325 million in 2015. Ransomware is growing at an estimated yearly rate of 350 percent, with business enterprises becoming the priority target for hackers. Enterprise File Sharing and Sync (EFSS) solutions have seen an increase in ransomware attacks, with 40 percent of spam emails containing links to ransomware. Whereas public cloud EFSS solutions such as Box and Dropbox offer centralized targets for ransomware attacks, FileCloud’s decentralized private cloud reduces your company’s exposure to potential attacks.

“Anyone with access to a computer is a potential threat, and the cloud their personal armory,” said Venkat Ramasamy, COO at FileCloud. “Why rummage through hundreds of houses when you can rob a bank? Hackers target centralized storage such as Dropbox or Box rather than self-hosted FileCloud solutions. The freedom to choose the cloud platform that best meets the unique dynamics of each business is our line in the sand of competitive differentiation.”

Breach Intercept

Cyberdefense via customization

The best defense against a phishing attack is to make sure your employees can differentiate genuine communication from malicious spoofing. Hackers can easily spoof email from public SaaS products, which have a standardized, easily falsifiable format. FileCloud offers unparalleled branding and customization tools, allowing you to set your own policies, and design your own emails and broadcast alerts. Customized emails and UX significantly reduce spoofing risk as hackers can’t run a mass spoofing unless they have an exact copy of an email from one of your employees.

Granular controlled folder access

With FileCloud Breach Intercept, you can set different levels of access between top-level folders and sub-folders. Administrators can set read/write/delete/share permissions for any user at any folder level, and permissions are not necessarily inherited according to folder structure, limiting propagation.
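The non-inheritance point can be made concrete with a small sketch. This is an illustration of the general idea, not FileCloud's actual permission model or API; the paths and user names are invented.

```python
# Permissions are stored per folder path; a sub-folder does NOT
# fall back to its parent's entry, so ransomware running under a
# user who can write to /projects cannot automatically encrypt
# files in /projects/secret.

acl = {
    "/projects":        {"alice": {"read", "write"}, "bob": {"read"}},
    "/projects/secret": {"alice": {"read"}},   # deliberately tighter
}

def allowed(user: str, folder: str, action: str) -> bool:
    """Check the folder's own ACL only -- no upward inheritance."""
    return action in acl.get(folder, {}).get(user, set())
```

Because each folder's entry stands alone, compromising one account limits the blast radius to exactly the folders where that account was explicitly granted write access.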

Real-time content / behavior heuristic engine

State-of-the-industry heuristic analysis detects threats in real time; suspicious content or user activity activates security protocols and prevents ransomware from taking hold. For example, if FileCloud detects a file posing as a Word document, the system halts the upload and sends an alert to the administrator, preventing propagation of an attack.
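One common way such a check works, sketched below, is to compare the file's claimed extension against its magic bytes. This is a simplified illustration of the technique, not FileCloud's engine; the signatures shown are the standard ones for Word formats.

```python
# A .docx file is a ZIP archive and starts with b"PK\x03\x04";
# legacy .doc files start with the OLE2 compound-file signature.
# A file whose name claims to be a Word document but whose header
# says otherwise (e.g. a Windows executable, which starts "MZ")
# is flagged before the upload completes.

WORD_SIGNATURES = (
    b"PK\x03\x04",                          # .docx (ZIP container)
    b"\xd0\xcf\x11\xe0\xa1\xb1\x1a\xe1",    # .doc (OLE2 container)
)

def looks_like_word_file(filename: str, header: bytes) -> bool:
    """Return False if the file claims a Word extension but its
    header doesn't match a known Word signature."""
    if not filename.lower().endswith((".doc", ".docx")):
        return True   # not claiming to be Word; out of scope here
    return header.startswith(WORD_SIGNATURES)
```

An upload handler would call this on the first bytes of the stream and halt the transfer (and alert the administrator) when it returns False.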

Unlimited versioning and backup to rollback

Unlimited versioning and server backup helps companies recover from any data loss accident, including ransomware. FileCloud can roll back not only employee files but also entire server files to any specific date and time before the attack.

FileCloud is available for immediate download from our customer portal. For more information or to download FileCloud Breach Intercept, please visit


Top 10 Predictions in Content Collaboration for 2018

Collaboration within the workplace is not a new concept. However, it has become increasingly crucial in this mobile world as we become more connected across the globe. The proliferation of cloud computing has given rise to a new set of content collaboration tools such as Dropbox, FileCloud, and Box. These tools enable employees to collaborate effectively, leading to a more skilled, engaged and educated workforce. Content collaboration solutions allow employees to easily share information with each other and work together on projects irrespective of geographic location, via a combination of networking capabilities, software solutions, and well-established collaborative processes. Content collaboration platforms are the evolution of Enterprise File Sharing and Sync (EFSS).
… You can read the full article at VMBlog.

GDPR – Top 10 Things That Organizations Must Do to Prepare

May 25, 2018 – that’s probably the biggest day of the decade for the universe of data on the Internet. On this date, Europe’s data protection rules – the European General Data Protection Regulation (GDPR) – become enforceable. In 2012, the initial conversations around GDPR began, followed by lengthy negotiations that ultimately culminated in the GDPR proposal. At the time of writing this guide (Sep 2017), most European businesses have either started making their first moves toward GDPR compliance or are all set to do so. Considering that GDPR is a stringent regulation with provisions for significant penalties and fines, it’s obvious how important a topic it has become for tech-powered businesses.

Now, every business uses technology to survive and thrive, and that’s why GDPR is relevant to most businesses. For any business owner, entrepreneur, enterprise IT leader, or IT consultant, GDPR is as urgent as it is critical. However, it’s much like the Y2K problem in that everybody is talking about it without really knowing much about it.

Most companies are finding it hard to understand the implications of GDPR, and what they need to do to be compliant. Now, all businesses handle customer data, and that makes them subject to Data Protection Act (DPA) regulations. If your business already complies with DPA, the good news is that you already have the most important bases covered. Of course, you will need to understand GDPR and make sure you cover the missing bases and stay safe, secure, reliable, and compliant in the data game. Here are 10 things businesses need to do to be ready for GDPR.

Top 10 things that organizations should do to prepare and comply with GDPR

1.      Learn, gain awareness

It is important to ensure that key people and decision makers in your organization are well aware that the prevailing law is going to change to GDPR. A thorough impact analysis needs to be done, and any areas that could cause compliance issues under GDPR need to be identified. It would be appropriate to start by examining the risk register at your organization, if one exists. GDPR implementation can have significant resource implications, particularly at large and complex organizations. Compliance could be a difficult ask if preparations are left until the last minute.

2.      Analyze information in hand

It is necessary to document what personal data is held, where it came from, and who it is being shared with. You may need to organize an organization-wide information audit; in some cases, an audit of specific business areas will suffice.

As per GDPR, there is a requirement to maintain records of all your activities related to data processing. The GDPR comes ready for a networked scenario. For instance, if you have shared incorrect personal data with another organization, you are required to inform the other organization about this so that it may fix its own records. This automatically requires you to know the personal data held by you, the source of the data and who it is being shared with. GDPR’s accountability principle requires organizations to be able to demonstrate their compliance with the principles of data protection imposed by the regulation.

3.      Privacy notices

It is important to review the privacy notices currently in place and put in a plan for making any required changes before GDPR implementation. When personal data is being collected, you currently need to provide specific sets of information such as information pertaining to your identity and how you propose to use that information. This is generally done with a privacy notice.

The GDPR requires you to provide some additional information in your privacy notices, such as the legal basis on which you are asking for the data and the retention periods for the data. You are also required to specifically state that people have a right to complain to the ICO if they believe there is a problem with the way their data is being handled. The GDPR requires the information in these notices to be provided in concise, clear, easy-to-understand language.

4.      Individual rights

You should review your procedures to confirm that they cover all the individual rights set forth in the GDPR. These are the rights provided by the GDPR.

  • To be informed
  • Of access
  • To rectification
  • To erasure
  • To restrict processing
  • To data portability
  • To object
  • To not be subject to automated profiling and other such decision-making

This is an excellent time to review your procedures and ensure that you will be able to handle various types of user requests related to their rights. The right to data portability is new with the GDPR. It applies:

  • To personal data provided by an individual;
  • When processing is based on individual consent or to perform a contract; and
  • Where processing is being done by automated methods.

5.      Requests for Subject access

You would need to plan how to handle requests in a manner compliant with the new rules. Wherever needed, your procedures will need to be updated.

  • In most cases, you will not be allowed to charge people for complying with a request
  • Instead of the current period of 40 days, you will have only a month to comply
  • You are permitted to charge for, or refuse, requests that are manifestly excessive or unfounded
  • If a request is refused, you are required to give the individual the reason. You must also inform them that they have the right to judicial remedy and to complain to the relevant supervisory authority. This has to be done, at the very latest, within a month.
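Computing that one-month clock is a small but easy-to-get-wrong detail, since not every month has the same day numbers. The sketch below is one reasonable interpretation (an assumption, not the regulation's own algorithm): the deadline falls on the same day number of the next month, clamped to that month's last day.

```python
from datetime import date
import calendar

def response_deadline(received: date) -> date:
    """Deadline one calendar month after receipt. When the same day
    number doesn't exist in the next month (e.g. received 31 Jan),
    clamp to the last day of that month (28/29 Feb)."""
    year = received.year + (received.month // 12)   # roll over December
    month = received.month % 12 + 1
    last_day = calendar.monthrange(year, month)[1]
    return date(year, month, min(received.day, last_day))
```

So a subject access request received on 31 January 2018 would be due by 28 February 2018 under this reading.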

6.      Consent

It is important to review how you seek, record and manage consent, and whether any changes are required. Existing consents need to be refreshed if they don’t meet the GDPR standard. Consent must be freely given, specific, informed, and unambiguous. A positive opt-in is required; consent cannot be implied by inactivity, pre-ticked boxes or silence. The consent section has to be separate from the rest of the terms and conditions, and simple methods must be provided for individuals to withdraw consent. Consent must also be verifiable. Existing DPA consents do not have to be refreshed in preparation for GDPR, provided they already meet the GDPR standard.

7.      Aspects related to children

It would be good to start considering whether systems need to be put in place to verify the ages of individuals and to obtain consent from parents or guardians for any data processing activity. GDPR introduces specific consent requirements for the personal data of children. If your company provides online services to children, you may need a guardian or parent’s consent in order to lawfully process children’s personal data. Under GDPR, the minimum age at which a child can give consent to this sort of processing is 16; in the UK, this may be lowered to 13.

8.      Aspects related to data breaches

You should ensure that you have the correct procedures in place to detect, investigate, and report any breaches of personal data. The GDPR imposes a duty on all companies to report specific types of data breaches to the ICO and, in some situations, to the individuals affected. The ICO has to be notified of a breach if it is likely to result in a risk to the rights and freedoms of individuals, such as damage to reputation, discrimination, financial loss, or loss of confidentiality. In most such cases, you will also have to inform the concerned parties directly. Failure to report a breach can result in a fine in addition to any fine for the breach itself.

9.      Requirements related to privacy by design

The GDPR turns privacy by design into a concrete legal requirement under the umbrella of “data protection by design and by default.” In some situations, it also makes “Privacy Impact Assessments” mandatory; the regulation refers to them as “Data Protection Impact Assessments.” A DPIA is required whenever data processing has the potential to pose a high level of risk to individuals, such as when:

  • New technology is being put in place
  • A profiling action is happening that can significantly affect people
  • Processing is happening on a large set of data

10.  Data protection officers

A specific individual needs to be designated to hold responsibility for data protection compliance. You must designate a data protection officer if:

  • You are a public authority (courts acting in normal capacity exempted)
  • You are an institution that carries out regular monitoring of individuals at scale
  • You are an institution that performs large-scale processing of special categories of data such as health records or criminal convictions

Many of GDPR’s important principles are the same as those defined in DPA; still, there are significant updates that companies will need to do in order to be on the right side of GDPR.

Author: Rahul Sharma




Types of Controls to Manage Your Business Data in an EFSS

EFSS Data Controls

In 2015, there were 38% more security incidents than in 2014, and the average cost per stolen record – containing sensitive and confidential data – was $154 (the healthcare industry paid the most, at $363 per record). Worse still, even though 52% of IT professionals felt that a successful cyber-attack against their network would take place within the year, only 29% of SMBs (fewer than in 2014) used standard tools like patching and configuration management to prevent these attacks.

The consequences of poor data security and data breaches in the cloud cannot be overstated; these statistics show that data insecurity in the cloud is a road no business wants to take. They also reveal a widespread lack of control over data in the cloud, so we will first look at who controls data in the cloud, and then at how to manage business data in an EFSS.

Who controls data in the Cloud?

Many IT departments do not know who controls data in the cloud, as revealed by a Perspecsys survey on data control in the cloud. According to the results, 48% of IT professionals don’t trust cloud providers to protect their data, and 57% are not certain where sensitive data is stored in the cloud.

This issue is also closely tied to data ownership: once data ownership changes, we expect a change in the level of control users have over their data. To quote Dan Gray on the concept of data ownership: “Ownership is dependent on the nature of data, and where it was created.” Data created by a user before uploading to the cloud may be subject to copyright laws, while data created in the cloud changes the whole concept of data ownership. It is no wonder that there is confusion on this matter.

Despite challenges such as partial or no control over data stored in the cloud, there are techniques we can use to control business data in an EFSS, thereby preventing unauthorized access and security breaches.

Types of data control for business data in an EFSS


Input validation controls

Validation control is important because it ensures that all data fed into a system or application is accurate, complete and reasonable. One essential area of validation control is supplier assessment: is a supplier well equipped to meet a client’s expectations with regard to data integrity, security and compliance with industry regulations and client policies? This assessment is best carried out as an offsite audit in the form of questionnaires. By determining the supplier’s system life-cycle processes, your team can decide whether the EFSS vendor is worthy of further consideration. Additionally, the questionnaire serves as a basis for deciding, based on the risk assessment, whether an on-site assessment should be carried out. If it is, the scope of the onsite audit will depend on the type of service the EFSS vendor provides.

Service level agreements should also be assessed and analyzed to define the expectations of both the EFSS vendor and the user. Usually, this is also done to ensure that the service rendered is in line with industry regulations. Additionally, ensure that the EFSS provider covers the following in its service level agreement:

  • Security
  • Backup and recovery
  • Incident management
  • Incident reporting
  • Testing
  • Quality of service rendered
  • Qualified personnel
  • Alert and escalation procedures
  • Clear documentation on data ownership as well as vendor and client responsibilities
  • Expectations with regards to performance monitoring

Processing controls

Processing control ensures that data is completely and accurately processed in an application, through regular monitoring of models and inspection of system results during processing. If this is not done, small changes in equipment caused by age or damage will result in a bad model, which will manifest as wrong control moves for the process.
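One lightweight processing control is to reconcile what comes out of a processing step against what went in. The sketch below is an illustrative approach, not any specific product's mechanism; it compares record counts and an order-independent content digest:

```python
import hashlib

def digest(records) -> int:
    """Order-independent digest of a batch of string records: XOR of
    truncated SHA-256 hashes, so reordering does not change the result."""
    h = 0
    for r in records:
        h ^= int.from_bytes(hashlib.sha256(r.encode()).digest()[:8], "big")
    return h

def reconcile(inputs, outputs) -> bool:
    """Processing control: confirm that counts and content match end to end."""
    return len(inputs) == len(outputs) and digest(inputs) == digest(outputs)
```

A check like this catches silently dropped, duplicated, or corrupted records between pipeline stages, which is exactly the class of error processing controls exist to surface.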

Include backup and recovery controls

Backup and recovery controls ensure that copies of business data are created on a defined schedule and that those copies can actually be restored when data is lost, corrupted, or held hostage. At a minimum, verify that backups are performed regularly, stored separately from the primary system, and tested through periodic restore drills; a backup that has never been restored is an assumption, not a control.

Identity and access management

Usually, Identity and Access Management (IAM) allows cloud administrators to authorize the personnel who can take action on specific resources, giving cloud users the control and visibility required to manage cloud resources. Although this sounds simple, advances in technology have complicated the processes of authentication, authorization, and access control in the cloud.

In previous years, IAM was easier to handle because employees had to log into a single desktop computer in the office to access any information on the internet. Today, Microsoft's Active Directory and the Lightweight Directory Access Protocol (LDAP) are no longer sufficient IAM tools on their own. User access and control have to be extended from desktop computers to personal mobile devices, posing a challenge to IT. For example, a Forrester Research report states that personal tablets and mobile devices are used in 65% of organizations, and that 30% of employees provision their own software on these devices for use at work without IT's approval. It is no wonder that Gartner predicted in 2013 that cloud-based Identity and Access Management would become one of the most sought-after services in cloud-based models within just two years.

With this understanding, it is important to create effective IAM without losing control of internally provisioned resources and applications. With threat-aware identity and access management capabilities, it should be clear who is doing what, what their role is, and what they are trying to access. Additionally, user identities, including external identities, must be tied to back-end directories, and single sign-on should be used, because multiple passwords tend to lead to insecure password management practices.
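The core "who can do what" question of IAM reduces to a mapping from roles to permissions. A minimal role-based sketch follows; the role names and permission sets are illustrative assumptions, not a real cloud provider's model:

```python
# Minimal role-based access check: a user's action is allowed if any of
# their roles grants the corresponding permission. Roles and permissions
# here are illustrative assumptions.
ROLE_PERMISSIONS = {
    "admin":  {"read", "write", "share", "delete"},
    "editor": {"read", "write"},
    "viewer": {"read"},
}

def is_allowed(user_roles, action: str) -> bool:
    """Grant the action if any of the user's roles permits it."""
    return any(action in ROLE_PERMISSIONS.get(role, set())
               for role in user_roles)
```

Production IAM layers authentication, directory integration, and audit logging around a check like this, but the authorization decision itself stays this shape.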


A simple assurance from an EFSS vendor that you control your business data in the cloud is not enough. Certain techniques should be employed to make sure you retain a significant level of data control. As discussed, ensure that you have an effective identity and access management system, processing and validation controls, and business data backup and recovery options in place. Other important controls that we have not discussed include file controls, data destruction controls, and change management controls.

Author:Davis Porter

Image Courtesy: jannoon028

Sharing Large Medical Images and Files – Factors to Consider


According to data collected by the HHS Office for Civil Rights, over 113 million individuals were affected by protected health information breaches in 2015. Ninety-nine percent of these individuals were victims of hacking, while the remaining 1 percent suffered from other forms of breach such as theft, loss, improper disposal, and unauthorized access/disclosure. A quick look at the trend from 2010 shows that health information data breaches are on the rise. An even deeper look at this report shows that network servers and electronic medical records are the leading sources of information breaches, at 107 million and 3 million, respectively.

Sadly, security is not the only issue that medics face when sharing medical records. A 2014 article in the New York Times explains the difficulty medics face when trying to send digital records containing patient information. While the intention is noble—to improve patient care coordination—doctors are facing problems with their existing large file sharing options.

To help doctors share files such as medical images in an easier and safer way, we will explore four factors that should be considered.

HIPAA Compliance

Medical records are sensitive and confidential in nature, so handling them should be guided by established industry policies, chief among them the Health Insurance Portability and Accountability Act (HIPAA). HIPAA is largely a response to security concerns surrounding the transfer and storage of medical records, including images.

HIPAA places responsibility on medics, and healthcare providers in general, to secure patient data and keep it confidential. As a result, non-compliance could lead to legal action, which can be costly. HIPAA covers all Personal Health Information (PHI) and outlines more stringent rules for electronic PHI, mainly because a security breach is more likely to affect a larger number of patients all at once.

It is a medic's responsibility to ensure that the selected EFSS solution is HIPAA-compliant in order to maintain patient trust, keep positive publicity, and avoid the steep HIPAA fines imposed after a breach. A first offense can cost approximately $50,000, a figure that escalates with each subsequent offense.


File Encryption

This is the second layer of security you should consider before settling on a large file sharing solution. Even if an EFSS service provider is HIPAA-compliant, you need to ensure that the measures outlined in HIPAA are actually implemented.

When you read about patients' rights as outlined in HIPAA, specifically in 'Privacy, Security and Electronic Health Records', you will notice that information security is emphasized. For this reason, medics should ensure that patient data is encrypted to prevent it from being accessed by rogue colleagues or professional hackers.

It is vital that all hospital departments, from cardiology to imaging centers and radiology, encrypt medical images and files to further protect patient privacy. Better still, encryption should be applied both at rest and in transit, and files should only be shared with authorized specialists and physicians as well as the patients themselves.

To further tighten security, these files should be encrypted with non-deterministic, per-file encryption keys rather than a fixed key that can be stolen or cracked. The benefit of this technique is that even in a security breach on the server side, attackers cannot recover the encryption keys. Additionally, you can opt for EFSS solutions that offer client-side encryption, barring the service provider and its employees from accessing this information.
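The per-file key idea can be illustrated as follows. This is a toy construction for exposition only: it derives a keystream from SHA-256 in counter mode, whereas a production system should use a vetted AEAD cipher such as AES-GCM from an audited library. The point it demonstrates is that each file gets a fresh random key, so no fixed key exists for an attacker to steal server-side.

```python
import hashlib
import secrets

def keystream(key: bytes, length: int) -> bytes:
    """Derive a keystream from the key via SHA-256 in counter mode.
    Toy construction for illustration only -- use a vetted AEAD cipher
    (e.g. AES-GCM) in production."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt_file_bytes(plaintext: bytes):
    """Encrypt with a fresh random per-file key, so no fixed key exists
    to be stolen server-side. Returns (key, ciphertext)."""
    key = secrets.token_bytes(32)  # non-deterministic per-file key
    ct = bytes(a ^ b for a, b in zip(plaintext, keystream(key, len(plaintext))))
    return key, ct

def decrypt_file_bytes(key: bytes, ct: bytes) -> bytes:
    """XOR with the same keystream inverts the encryption."""
    return bytes(a ^ b for a, b in zip(ct, keystream(key, len(ct))))
```

Because the key is random per file, encrypting the same image twice yields different ciphertexts, and holding the keys client-side keeps the provider out of the loop entirely.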

File Scalability

Compared to other medical records, medical images present a great challenge with regard to file size. Many imaging sequences and studies are reported to average around 300MB, and the average file sizes for a standard mammography image and a 3D tomography image are 19MB and 392MB, respectively. While these file sizes already seem large, the Austin Radiological Association (ARA) predicts that by 2024, annual data from its 3D breast imaging files will reach 3 petabytes. These facts expose the storage challenges that medics face.

A glance at the process of finding medical images for active cases, storing them, and archiving those of inactive cases shows the immense need for medics to find a reliable and convenient large file sharing solution that caters to these storage needs.

A weak server can become overwhelmed with data, growing progressively slower and less reliable as more files are uploaded into the system. The best way to solve this issue is to use cloud-based services that automatically scale storage to your needs. This way, you can store more files while significantly reducing hardware costs, by approximately 50 percent, compared with hosting in-house. In addition to these perks, the cloud allows you to share images faster and more conveniently, saving both time and storage.
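One common way cloud services cope with very large files is chunking: splitting an upload into fixed-size pieces that can be transferred in parallel and resumed after a failure. A minimal sketch follows; the 5MB chunk size is an arbitrary assumption, not any provider's requirement:

```python
def split_chunks(data: bytes, chunk_size: int = 5 * 1024 * 1024):
    """Split a large file into fixed-size chunks so uploads can be
    parallelized and resumed. Chunk size is an illustrative assumption."""
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

def reassemble(chunks) -> bytes:
    """Rejoin chunks in order to reconstruct the original file."""
    return b"".join(chunks)
```

Real multipart-upload APIs add per-chunk checksums and retries, but the split/reassemble contract is the essential mechanism that keeps a 392MB tomography file from tying up a single fragile connection.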

Technology Increases the Likelihood of Medical Errors

While technology helps solve issues such as security and storage, over-reliance could actually lead to medical errors, incidents that are dreadful to patients and medics as well. As reported by Eric McCann of Healthcare IT News, medical errors cost America a colossal $1 trillion each year, and 400,000 Americans die annually due to these preventable mistakes.

Even though the cloud has been paraded as a solution to reduce incidences of medical error, the need to be watchful and keen can never be overstated. Take, for example, the erroneous click of a mouse and mislabeling of data. A case study on Kenny Lin, MD, a family physician practicing in Washington, D.C., which is detailed in his 2010 piece in the U.S. News & World Report, shows us how easy it is to make a mistake with technology. Dr. Lin nearly made a wrong prescription by accidentally clicking on the wrong choice in his EMR system.

Now, what if you mislabeled a patient's radiology report? Wouldn't that set off a chain of misdiagnosis and mistreatment? Can you imagine the damage caused? It is for this reason that even when technology makes it easier to share large, sensitive files like medical images, you should double-check that each file is labeled correctly and sent to the intended, authorized recipient.

The Way Forward

The sensitivity of medical files is evident, and with data breaches on the rise, it is vital to ensure the privacy of all medical documents, including large medical images and files. To reduce the possibility of a data breach, any EFSS solution used to share these files should guarantee a reasonable level of file security and HIPAA compliance. In addition, its capacity to efficiently handle large file sizes and offer easy access to these files should not be ignored. Lastly, as you remain cautious when feeding data into the system, create a safe backup of your data in case of a breach. By taking such precautions, medical files can be shared between all necessary parties more easily and safely.

Author: Davis Porter

Image courtesy: stockdevil

Data Owner Responsibilities When Migrating to an EFSS

While it is easy to conclude that all data belongs to your organization, complications arise when the person accountable for data ownership has to be identified. Even though the IT department spearheads the processing, storing, and backing up of data, among other functions, it does not own business data. Outsourced service providers do not own this data any more than the IT department does.

Who Owns Data? What Are a Data Owner's Responsibilities?

In the cloud environment, a data owner is a business user who understands the business impact of a security breach that would cause a loss of data availability, integrity, or confidentiality. This responsibility makes the data owner very conscious of decisions made to mitigate and prevent such security incidents.

When migrating to an EFSS, business data owners should do the following:

Classify Data

Data classification has been widely described as a remedy for data breaches. In essence, data classification helps significantly reduce insider threats, which are reported to cause 43% of data breaches. Beyond malicious employees, many breaches are the result of simple human error. Additionally, the growing volume of data that businesses handle makes data difficult to track; hence, it is challenging to know where data is stored, who accesses it, and what they do with it. By making sure that only authorized employees can access certain information, the probability of a data breach is greatly reduced.

Clearly, the need for data classification has never been more evident. To properly classify data, a few measures should be taken by a business.

  • Have detailed “acceptable use” policies. All employees should internalize and sign these documents, which are then reviewed annually or as needed.
  • Make use of data classification technologies. When you train employees using a data classification technology, they will better understand the sensitivity of the data they are creating, storing, and sharing. Consequently, they will treat this data with the highest level of confidentiality and caution.
  • Understand industry regulations to classify data accordingly.
  • Once data is properly classified, apply appropriate access controls and continuously but randomly monitor data activity so that suspicious actions are caught as soon as they occur.
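The access-control side of the steps above can be sketched as a simple clearance check, where a user's clearance level must meet or exceed a document's classification label. The level names and their ordering below are illustrative assumptions, not a standard taxonomy:

```python
# Label-based access control sketch: each document carries a
# classification label, and a user's clearance must meet or exceed it.
# Level names and ordering are illustrative assumptions.
LEVELS = {"public": 0, "internal": 1, "confidential": 2, "restricted": 3}

def can_access(user_clearance: str, doc_label: str) -> bool:
    """Allow access only when clearance dominates the document's label."""
    return LEVELS[user_clearance] >= LEVELS[doc_label]
```

Pairing a check like this with the monitoring step means that both prevention (the comparison) and detection (the activity log) flow from the same classification labels.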

Monitor Data Life Cycle Activities

When migrating to an EFSS, issues such as data retention and disposal should constantly be monitored by a business data owner. Simply put, how long will the EFSS solution retain your data and how long will it take to dispose of your data completely after you have deleted it? What happens to your data once your contract with the provider ends?

Before a business data owner looks at an EFSS provider's life cycle, he or she needs to understand the typical seven phases of the data life cycle: data capture, data maintenance, data synthesis, data usage, data publication, data archival, and data purging. At each stage, how safe is the data? Who has access to it, and how long is it retained in the EFSS?

When this data is stored, is it used and accessed by third parties, who sadly cause 63% of all data breaches? Is the EFSS data retention and disposal policy compliant with the law? For example, organizations handling health records must meet the retention requirements of the Health Insurance Portability and Accountability Act (HIPAA), while organizations that accept credit cards must adhere to the Payment Card Industry Data Security Standard (PCI DSS) data retention and disposal policy.
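A retention policy ultimately reduces to a date comparison. The sketch below is illustrative; the record structure and retention window are assumptions, not taken from any regulation. It flags records that are past their retention window and due for secure disposal:

```python
from datetime import datetime, timedelta

def is_expired(created: datetime, retention_days: int, now=None) -> bool:
    """Retention check: a record is due for disposal once it is older
    than the policy's retention window (window length is illustrative)."""
    now = now or datetime.utcnow()
    return now - created > timedelta(days=retention_days)

def purge_due(records: dict, retention_days: int, now=None) -> list:
    """Return the IDs of records that should be securely disposed of.
    `records` maps record ID -> creation timestamp."""
    return [rid for rid, created in records.items()
            if is_expired(created, retention_days, now)]
```

A real disposal workflow would follow this with cryptographic erasure or verified deletion, and would log each purge for the auditors.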

Understand Enterprise File Sync-and-Share (EFSS) Deployment Models, As a Way of Assessing Risks

Despite the extensive advice available on the best EFSS solutions, business data owners need to gain some technical knowledge of their own. How many EFSS deployment models do you know, for example? Since this is a broad topic, we will briefly discuss three models.

Public Cloud EFSS

In addition to being fast and easy to set up, a public cloud can be cheaper in terms of both infrastructure and storage costs. However, public cloud EFSS might not be the best option for data protection and security, leaving your company exposed and vulnerable to regulatory non-compliance. It is, therefore, important to analyze the security measures a public cloud offers before settling on one.

Private Cloud EFSS

Although a private cloud is believed to be more expensive than the public cloud, the cost of ownership depends largely on the vendor and infrastructure choice (for example, FileCloud offers the lowest cost of ownership across public and private clouds). Private cloud EFSS is worthwhile for the services and security offered, and with an adoption rate of 77%, private cloud solutions such as FileCloud are strong options. This preference is attributed to the flexibility and control over where data is stored. Consequently, users can choose which regulations to comply with and have better control in the event of a breach, because the IT department can access, monitor, protect, and salvage all files, as opposed to a public cloud.

Hybrid Cloud EFSS

According to RightScale’s “Cloud Computing Trends: 2016 State of the Cloud Survey,” the hybrid cloud EFSS adoption rate is 71%. This success is believed to result from the ability to harness the strengths of both public and private clouds at once: in a hybrid environment, some components run on premises while others run in the cloud. One good example of a hybrid model is an EFSS application that runs as Software as a Service (SaaS) while data is stored on premises or at the discretion of the user company.

Closing remarks

It is the responsibility of a business data owner to ascertain that data will be kept safe and confidential before migrating to any EFSS solution. This person needs to be savvy about the advantages the chosen EFSS model offers, verify compliance with industry regulations and proper access and identity management, understand the EFSS data life cycle processes, and ensure that security measures such as data encryption and authentication are in place.

 Author: Davis Porter


Data ownership in the cloud – How does it affect you?

The future of the cloud seems bright: Cisco predicts that by 2018, 59% of cloud workloads will be created from Software as a Service (SaaS). While these statistics are optimistic, we cannot ignore a few concerns that stifle cloud adoption efforts, such as data ownership.

Most people would be inclined to say that they still own data in the cloud. While they may be right in some sense, this is not always the case. For instance, consider Facebook, which many people use as cloud storage for their photos. According to the Facebook end-user agreement, the company stores data for as long as it is necessary, which might not be as long as users want. This, sadly, means that users lose data ownership. Worse still, the servers are located in different places, both inside and outside the United States, subjecting the data to different laws.

According to Dan Gray, as discussed in ‘Data Ownership in the Cloud,’ the actual ownership of data in the cloud may depend on the nature of the data and where it was created. He distinguishes between data created by a user before uploading to the cloud and data created on the cloud platform itself, noting that data created prior to upload may be subject to copyright laws depending on the provider, while data created on the platform can have complicated ownership.

In addition to cloud provider policies, certain Acts of Congress, although created to enhance data security while upholding national security, have shown how data ownership issues affect businesses. Two of these, the Stored Communications Act (SCA) and the Patriot Act, illustrate the challenges of cloud data ownership and privacy with regard to government access to information stored in the cloud.

The Stored Communications Act (SCA)

Usually, when data resides in a cloud provider's infrastructure, users' ownership rights cannot be guaranteed. And even when users are assured that they own their data, that does not necessarily mean the information stored there is private. For example, U.S. law, through the Stored Communications Act (SCA), gives the government the right to seize data stored by an American company even if it is hosted elsewhere. This interpretation saw Microsoft and other technology giants take the government to court, claiming that it was illegal to use the SCA to obtain a search warrant to peruse and seize data stored beyond the territorial boundaries of the United States.

Microsoft suffered a blow when a district court judge in New York ruled that U.S. government search powers extend to data stored on foreign servers. Fortunately, these companies got a reprieve in mid-2016, when the Second Circuit ruled that a federal court may not issue a criminal warrant ordering a U.S. cloud provider to produce data held on servers in Ireland. It is, however, important to note that this ruling focused only on whether Congress intended the SCA to apply to data held beyond U.S. territory and did not touch on issues of Irish data privacy law.

The Patriot Act

The Patriot Act was put in place in 2001 as an effort by the George W. Bush administration to fight terrorism. The act allowed the Federal Bureau of Investigation (FBI) to search telephone, e-mail, and financial records without a court order, and expanded law enforcement agencies' access to business records, among other provisions. Although many provisions of the act were set to sunset four years later, the contrary happened. Fast-forward to 2011: President Barack Obama signed a four-year extension of three key provisions of the act, which expanded the discovery mechanisms law enforcement could use to gain third-party access. This development caused international uproar, especially in the European Union, prompting the Obama administration to hold a press conference to quell the concerns.

The situation was aggravated when a Microsoft UK director admitted that the Patriot Act could reach EU-based data, further disclosing that no cloud service was safe from the act and that the company could be forced to hand over data to the U.S. government. While these provisions expired on June 1, 2015, for lack of congressional approval to renew them, the government found a way to renew them through the USA Freedom Act.

The two acts show us that data held in the cloud, especially the public cloud, is effectively controlled by the cloud providers. This is why the law directs its demands at cloud providers, not cloud users.

What To Do In Light Of These Regulations

Even though the SCA can no longer be used to obtain warrants for data stored abroad, and the USA Freedom Act is purported by some to be a better version of the Patriot Act, we cannot ignore the need for cloud users to find a way to avoid such compulsions.

One idea is to escape the reach of these laws altogether, which is unfortunately impractical. To completely outrun the government, you would have to ensure that neither you nor the cloud service you use has operations in the United States. This is a great disadvantage, because most globally competitive cloud providers fall within United States jurisdiction. Even if you are lucky enough to find a suitable provider elsewhere, it may still be subject to a Mutual Legal Assistance Treaty (MLAT) request. Simply put, there is no easy way out.

Instead, understand the risks and let your clients know about them. For example, if the Patriot Act extension attempts had succeeded, financial institutions would have been obliged to share information with law enforcement agencies on suspicion of terrorist activities. In such a case, a good financial institution would warn its clients of these risks beforehand. Alternatively, you can find a way of storing data in-house, forcing the authorities to go through you rather than the cloud provider.


Truthfully, data ownership in the cloud is a complicated issue. Determined by both government and company policies, data ownership in the cloud is not always retained. Thankfully, depending on data policies and how they categorize data in the cloud, a user could be granted full ownership. In the event that this doesn't happen, prepare for instances of third-party access and incomplete privacy, and rethink your business strategy accordingly. In short, as a cloud services client, pay attention to the contract you sign with your provider and understand the laws under which the provider operates.

Author: Davis Porter