Archive for the ‘Security’ Category

IT Admin Guide to NTFS File and Folder Permissions

New Technology File System (NTFS)

One of the most important and often misunderstood pieces of functionality in Microsoft Windows is the file and folder security permissions framework. These permissions not only control access to all files and folders in the NTFS file system, they also protect the integrity of the operating system by preventing inadvertent and unauthorized changes by non-admin users as well as by malicious programs and applications.

Let’s begin at the very beginning: the NTFS file system can be thought of as a hierarchical tree structure, with the disk volume at the top level and each folder forming a branch of the tree. Each folder can contain any number of files, and these files are the leaf nodes, i.e., there can be no further branches off a leaf node. Folders are therefore referred to as Containers, i.e., objects that can contain other objects.

So how exactly is access to this hierarchy of objects controlled? That is what we will talk about next. When the NTFS file system was originally introduced in Windows NT, the security permissions framework had major shortcomings. It was revamped from Windows 2000 onwards, and that revision is the basis of almost all of the file permission security functionality present in modern Windows versions.

To begin, each object in the file hierarchy has a Security Descriptor associated with it. You can think of a Security Descriptor as an extended attribute of the file or folder. Note that Security Descriptors are not limited to files; they also apply to other OS-level objects such as processes, threads, and registry keys.
At the basic level, a security descriptor contains a set of control flags in its header, the Owner information and the Primary Group information, followed by two variable-length lists: a Discretionary Access Control List (DACL) and a System Access Control List (SACL).
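The layout can be sketched with a simplified model. This is purely illustrative (the field and type names are assumptions, not the actual Windows binary format):

```python
from dataclasses import dataclass, field
from typing import List, Set

@dataclass
class AccessControlEntry:
    """One entry in an ACL: who, what, allow or deny."""
    trustee: str                 # user or group name (a SID in real Windows)
    permissions: Set[str]        # e.g. {"READ", "WRITE"}
    ace_type: str                # "ALLOW" or "DENY"
    inherited: bool = False      # True if the ACE flowed down from a parent

@dataclass
class SecurityDescriptor:
    """Simplified model of an NTFS security descriptor."""
    owner: str                                               # always present
    primary_group: str                                       # POSIX compatibility only
    dacl: List[AccessControlEntry] = field(default_factory=list)  # access control
    sacl: List[AccessControlEntry] = field(default_factory=list)  # auditing
```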

Any file or folder always has an Owner, and no matter what, that Owner can always perform operations on it. The Primary Group exists only for compatibility with POSIX standards and can be ignored. The SACL specifies which users and groups get audited for which actions performed on the object. For the purposes of this discussion, let’s ignore that list as well.

The DACL is the most interesting section of any Security Descriptor. A DACL defines the list of users and groups that are allowed or denied access to the file or folder. To represent each user or group together with the specific allowed or denied actions, a DACL consists of one or more Access Control Entries (ACEs). An Access Control Entry specifies a user or group, the permissions being allowed or denied, and some additional attributes. Here’s an example of a simple DACL.
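A simple DACL can be sketched as an ordered list of ACEs; the user and group names here are illustrative. Note that in canonical order, explicit Deny entries come before explicit Allow entries:

```python
# Each tuple: (trustee, type, permissions). Illustrative names only.
dacl = [
    ("GUESTS",         "DENY",  {"READ", "WRITE"}),   # explicit Deny entries first
    ("Administrators", "ALLOW", {"FULL_CONTROL"}),
    ("JOHN",           "ALLOW", {"READ", "WRITE"}),
    ("Everyone",       "ALLOW", {"READ"}),
]
```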

If you have been following along so far, this seems pretty straightforward, and it mostly is. However, the practical way this is applied to folders introduces complexity, especially if you are unclear how the permissions interact with each other.

Inheritance of Security Descriptors

If every object had its own unique copy of the Security Descriptor, things would be conceptually simple but practically impossible to manage. Imagine a file system with thousands of folders used by hundreds of users. Trying to set the permissions on each and every folder individually would quickly break down: if you needed to add or modify the permissions on a set of folders, you would have to apply the change individually to each and every file and folder within them.

Thus was born the notion of inheritance. Now, not only is it possible to apply a permission (ACE) to a folder, it is also possible to indicate that the permission should “flow” to all child objects. So if it is applied to a folder, all subfolders and files inside that folder “inherit” the same permissions. See below for an example:

Here, when Folder 3 has some permissions set up, these permissions are by default inherited by its child objects, which include SubFolder1, SubFolder2 and so on. Note that this inheritance is automatic, i.e., if new objects are added to this section of the tree, those objects automatically pick up the inherited permissions.

The DACL of any subfolder item now looks like the following, assuming this was set as the permissions for Folder 3.
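As an illustrative sketch (plain Python dictionaries, not real Windows structures), the child’s DACL can be modeled as copies of the parent’s ACEs marked as inherited:

```python
# Folder 3's own (explicit) DACL, with illustrative names.
folder3_dacl = [
    {"trustee": "JOHN", "type": "ALLOW", "perms": {"READ", "WRITE"}, "inherited": False},
]

def inherit(parent_dacl):
    """Child objects receive copies of the parent's ACEs, flagged as inherited."""
    return [{**ace, "inherited": True} for ace in parent_dacl]

# SubFolder1's DACL carries the same entries, but flagged as inherited;
# these are the grayed-out entries you see in the security dialog.
subfolder1_dacl = inherit(folder3_dacl)
```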

You can recognize inherited permissions in any security dialog by their grayed-out options. To edit them, you have to traverse up the tree until you reach the object where the entries are actually defined (in this case Folder 3, where you can actually edit the permissions). Note that if you ever edit the permissions on Folder 3, the new permissions automatically re-flow to the child objects without you having to set them one by one.

So if inheritance is such a useful thing, why would you ever want to disable it? That’s a good question, and it brings us to setting up folder permissions for a large organization. In many organizations with groups and departments, it is common to organize folders by group and then grant permissions to the folders based on the groups the users belong to.

In most cases this kind of simple organization works fine. However, there will sometimes be folders belonging to a group or department that require strict confidentiality and should only be accessed by a select handful of people. For example, consider SubFolder1 as a highly sensitive folder that should be fully locked down.

In this case, this subset of folders should be set up without inheritance.

Disabling inheritance at SubFolder1 changes a few things. Permission changes made at parent folders like Folder 3 will never affect SubFolder1 under any conditions, so it becomes impossible to grant access to SubFolder1 by adding a user or group at Folder 3. This effectively isolates SubFolder1 into its own permission hierarchy, disconnected from the rest of the system, and IT admins can set up a small handful of specific permissions for SubFolder1 that apply to all of its contents.

Order of Permission Evaluation

Having understood Security Descriptors and inheritance (as well as when inheritance should be disabled), it is time to look at how it all comes together. What happens when mutually exclusive permissions apply to a file or folder? How do the security permissions remain consistent in that case?

For example, consider an object (File1) where, at a parent folder (Folder 3), JOHN is allowed to READ and WRITE, and these permissions are inherited by a child object in the hierarchy.

Now, if JOHN is not supposed to WRITE to this child item and you, as an IT admin, add DENY WRITE for JOHN on the File1 item, how do these conflicting permissions get reconciled and applied?

The rules are pretty simple in this case; the order of ACE evaluation is:
• Explicit Deny entries applied directly on the object
• Explicit Allow entries applied directly on the object
• Inherited Deny entries from parent objects
• Inherited Allow entries from parent objects

The Windows OS always evaluates permissions in this order, so any overrides placed directly (explicitly) on the object are considered before any inherited permissions. The first entry that denies a requested permission for the user under consideration stops evaluation with access denied; otherwise, evaluation continues until all requested permissions have been allowed, at which point it stops. Note that even among inherited entries, those from the nearest parent are evaluated before those from more distant parents, i.e., the distance from the child to the parent matters in the evaluation.
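The evaluation order above can be sketched as a simplified access-check function. This is an illustrative model only; the real Windows algorithm operates on SIDs and access masks:

```python
def check_access(aces, user_groups, requested):
    """aces: list of (trustee, type, perms, inherited, distance) tuples, where
    distance is 0 for explicit ACEs, 1 for the nearest parent, and so on.
    Evaluation order: explicit Deny, explicit Allow, then inherited entries
    from the nearest parent outward (Deny before Allow at each level)."""
    order = sorted(aces, key=lambda a: (a[4], a[1] != "DENY"))
    granted = set()
    for trustee, ace_type, perms, inherited, distance in order:
        if trustee not in user_groups:
            continue
        if ace_type == "DENY" and perms & requested:
            return False          # first matching Deny stops evaluation
        if ace_type == "ALLOW":
            granted |= perms
            if requested <= granted:
                return True       # all requested permissions accumulated
    return False                  # nothing granted means implicit deny

# JOHN inherits READ+WRITE from Folder 3, but File1 has an explicit DENY WRITE:
aces = [
    ("JOHN", "DENY",  {"WRITE"},         False, 0),  # explicit on File1
    ("JOHN", "ALLOW", {"READ", "WRITE"}, True,  1),  # inherited from Folder 3
]
```

Here the explicit DENY WRITE on File1 is evaluated before the inherited ALLOW, so JOHN can still READ but can no longer WRITE.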

Applying Folder Security inside the network

If you thought that setting up permissions on folders is all you need for a network share, you are mistaken: you also need to create a folder share and specify permissions for the share itself. The final permissions for a user are a combination of the permissions applied on the share and the NTFS security permissions applied to the folders; the most restrictive combination is always what applies.

It is therefore a common best practice to grant Everyone Full Control at the share level and manage all actual restrictions through the NTFS security permissions.
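This combination rule amounts to a set intersection, as this small sketch (with illustrative permission names) shows:

```python
def effective_permissions(share_perms, ntfs_perms):
    """Over the network, effective access is the intersection (the most
    restrictive combination) of share permissions and NTFS permissions."""
    return share_perms & ntfs_perms

share = {"READ", "WRITE", "FULL_CONTROL"}  # e.g. Everyone: Full Control on the share
ntfs  = {"READ"}                           # NTFS grants read only
# The user ends up with READ only, no matter how permissive the share is.
```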

Applying NTFS folder security outside the network

It is simple to provide folder access over the LAN and apply these folder permissions efficiently. However, if you want to allow users to access these files outside the LAN via a web browser, mobile apps, etc. while still enforcing NTFS file and folder permissions, consider FileCloud (our Enterprise File Sharing and Sync product), which can enforce these permissions while still providing seamless access.

Try FileCloud for Free!

Types of Controls to Manage Your Business Data in an EFSS

EFSS Data Controls

In 2015, there were 38% more security incidents than in 2014, with an average cost per stolen record (containing sensitive and confidential data) of $154; the healthcare industry paid the most, at $363 per record. Worse still, even though 52% of IT professionals expected a successful cyber-attack against their network within the year, only 29% of SMBs (fewer than in 2014) used standard tools like patching and configuration management to prevent these attacks.

The consequences of poor data security and data breaches in the cloud cannot be overstated; these statistics describe an outcome no business wants. They also point to a lack of control over data in the cloud, so we will first look at who controls data in the cloud, and then at how to manage business data in an EFSS.

Who controls data in the Cloud?

Many IT departments do not know who controls data in the cloud, as revealed by participants in a Perspecsys survey on data control in the cloud. According to the results, 48% of IT professionals don’t trust cloud providers to protect their data, and 57% are not certain where sensitive data is stored in the cloud.

This issue is also closely tied to data ownership. Once data ownership changes, we expect a change in the level of control users have over their data. To quote Dan Gray on the concept of data ownership: “Ownership is dependent on the nature of data, and where it was created”. Data created by a user before uploading to the cloud may be subject to copyright laws, while data created in the cloud changes the whole concept of data ownership. It is no wonder there is confusion on this matter.

Despite challenges such as partial or no control over data stored in the cloud, there are techniques we can use to control business data in an EFSS, thereby preventing unauthorized access and security breaches.

Types of data control for business data in an EFSS


Input validation controls

Validation control is important because it ensures that all data fed into a system or application is accurate, complete, and reasonable. One essential area of validation control is supplier assessment: is a supplier well equipped to meet a client’s expectations with regard to data integrity, security, and compliance with industry regulations as well as client policies? This assessment is best carried out as an offsite audit in the form of questionnaires. By examining the supplier’s system life-cycle processes, your team can decide whether the EFSS vendor is worthy of further consideration. The questionnaire also serves as a basis for deciding, based on the risk assessment, whether an on-site assessment should be carried out. If it is, the scope of the onsite audit will depend on the type of service the EFSS vendor provides.

Service level agreements should also be assessed and analyzed to define the expectations of both the EFSS vendor and the user. Usually, this is also done to ensure that the service rendered is in line with industry regulations. Additionally, ensure that the EFSS provider includes the following in the service level agreement:

  • Security
  • Backup and recovery
  • Incident management
  • Incident reporting
  • Testing
  • Quality of service rendered
  • Qualified personnel
  • Alert and escalation procedures
  • Clear documentation on data ownership as well as vendor and client responsibilities
  • Expectations with regards to performance monitoring

Processing controls

Processing control ensures that data is completely and accurately processed in an application, via regular monitoring of models and inspection of system results during processing. If this is not done, small changes in equipment caused by age or damage will result in a bad model, which will show up as wrong control moves for the process.

Backup and recovery controls

Backup and recovery controls ensure that business data can be restored completely and accurately after accidental deletion, corruption, or a security incident. Assess how frequently the EFSS provider takes backups, where backup copies are stored, and how quickly data can be recovered when needed.

Identity and access management

Identity and Access Management (IAM) allows cloud administrators to authorize which personnel can take action on specific resources, giving cloud users the control and visibility required to manage cloud resources. Although this seems simple, advances in technology have complicated the process of authentication, authorization, and access control in the cloud.

In previous years, IAM was easier to handle because employees had to log into one desktop computer in the office to access information on the corporate network. Today, Microsoft’s Active Directory and the Lightweight Directory Access Protocol (LDAP) are insufficient as IAM tools on their own. User access and control has to be extended from desktop computers to personal mobile devices, posing a challenge to IT. For example, as stated in a Forrester Research report, personal tablets and mobile devices are used in 65% of organizations, and 30% of employees provision their own software on these devices for use at work without IT’s approval. It is no wonder that Gartner predicted in 2013 that cloud-based Identity and Access Management would become one of the most sought-after services in cloud-based models within just two years.

With this understanding, it is important to create effective IAM without losing control of internally provisioned resources and applications. By using threat-aware identity and access management capabilities, it should be clear who is doing what, what their role is, and what they are trying to access. Additionally, user identities, including external identities, must be tied to back-end directories, and single sign-on should be used, because multiple passwords tend to lead to insecure password management practices.


Simple assurance from an EFSS vendor that you have control of business data in the cloud is not enough. Certain techniques should be employed to make sure you retain a significant level of data control. As discussed, ensure that you have an effective identity and access management system, processing and validation controls, and business data backup and recovery options in place. Other important controls that we have not discussed include file controls, data destruction controls, and change management controls.

Author:Davis Porter

Image Courtesy: jannoon028

Sharing Large Medical Images and Files – Factors to Consider


According to data collected by the HHS Office for Civil Rights, over 113 million individuals were affected by protected health information breaches in 2015. Ninety-nine percent of these individuals were victims of hacking, while the remaining 1 percent suffered from other forms of breach such as theft, loss, improper disposal, and unauthorized access/disclosure. A quick look at the trend from 2010 shows that health information data breaches are on the rise. An even deeper look at this report shows that network servers and electronic medical records are the leading sources of information breaches, at 107 million and 3 million, respectively.

Sadly, security is not the only issue that medics face when sharing medical records. A 2014 article in the New York Times explains the difficulty medics face when trying to send digital records containing patient information. While the intention is noble—to improve patient care coordination—doctors are facing problems with their existing large file sharing options.

To help doctors share files such as medical images in an easier and safer way, we will explore four factors that should be considered.

HIPAA Compliance

Medical records are sensitive and confidential in nature. This means their handling should be guided by established industry policies, such as the Health Insurance Portability and Accountability Act (HIPAA). HIPAA is in fact a response to security concerns surrounding the transfer and storage of medical records, in this case images.

HIPAA places responsibility on medics and healthcare providers in general to secure patient data and keep it confidential. As a result, non-compliance could lead to legal action, which can be costly. Usually, HIPAA makes sure that all Personal Health Information (PHI) is covered, outlining more stringent rules on electronic PHI, mainly because a security breach is more likely to affect a larger number of patients, all at once.

It is a medic’s responsibility to ensure that the selected EFSS solution is HIPAA-compliant in order to maintain patient trust, keep positive publicity, and avoid the steep HIPAA fines imposed after a breach. In fact, a first offense can draw a fine of approximately $50,000, a figure that escalates with each subsequent offense.


File Encryption

This is the second level of security you should consider before settling on a large file sharing solution. Even if an EFSS service provider is HIPAA-compliant, you need to ensure that the measures outlined in HIPAA are actually taken.

When you read about patients’ rights as outlined in HIPAA, specifically in ‘Privacy, Security and Electronic Health Records’, you will notice that information security is emphasized. For this reason, medics should ensure that patient data is encrypted in order to prevent it from being accessed by rogue colleagues or professional hackers.

It is vitally important that all hospital departments, ranging from cardiology to imaging centers and radiology, encrypt medical images and files to further protect patient privacy. Better still, encryption should happen both at rest and in transit, and files should only be shared with authorized specialists and physicians as well as the patients themselves.

To further tighten security, these files should be encrypted with non-deterministic (randomly generated) encryption keys rather than fixed, password-derived ones, whose passwords can be cracked. The advantage of this technique is that even in a security breach on the server side, hackers cannot recover the encryption keys. Additionally, you can opt for EFSS solutions that offer client-side encryption, barring the service provider and its employees from accessing this information.
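The difference is easy to sketch with Python’s standard library: each file gets a fresh random key rather than one fixed key shared by all files. This is purely illustrative; a real implementation would use a vetted cryptography library and an authenticated cipher such as AES-GCM.

```python
import secrets

def new_file_key() -> bytes:
    """Generate a fresh, non-deterministic 256-bit key for one file.
    Unlike a single fixed or password-derived key, compromising one
    file's key does not expose every other file."""
    return secrets.token_bytes(32)

key_a = new_file_key()
key_b = new_file_key()
# Each file gets an independent key; keys are never reused across files.
```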

File Scalability

Compared to other medical records, medical images present a great challenge with regard to file size. It is reported that many sequences and images average 300MB, while the average file sizes for a standard mammography image and a 3D tomography image are 19MB and 392MB, respectively. While these file sizes already seem large, Austin Radiological Association (ARA) predicts that by 2024, annual data from its 3D breast imaging files will reach 3 petabytes. These facts expose the storage challenges that medics face.

A glance at the process of finding medical images for active cases, storing them, and archiving those of inactive cases shows the immense need for medics to find a reliable and convenient large file sharing solution that caters to these storage needs.

A weak server can get overwhelmed with data, becoming progressively more inefficient as more files are uploaded into the system. The best way to solve this issue is to use cloud-based services that automatically scale storage according to your needs. This way, you can upload more files to the server while significantly reducing hardware costs, by approximately 50 percent when done in the cloud as opposed to in-house. In addition to these perks, the cloud allows you to share these images faster and more conveniently, saving both time and storage.

Technology Increases the Likelihood of Medical Errors

While technology helps solve issues such as security and storage, over-reliance could actually lead to medical errors, incidents that are dreadful to patients and medics as well. As reported by Eric McCann of Healthcare IT News, medical errors cost America a colossal $1 trillion each year, and 400,000 Americans die annually due to these preventable mistakes.

Even though the cloud has been paraded as a solution to reduce incidences of medical error, the need to be watchful and keen can never be overstated. Take, for example, the erroneous click of a mouse and mislabeling of data. A case study on Kenny Lin, MD, a family physician practicing in Washington, D.C., which is detailed in his 2010 piece in the U.S. News & World Report, shows us how easy it is to make a mistake with technology. Dr. Lin nearly made a wrong prescription by accidentally clicking on the wrong choice in his EMR system.

Now, what if you mislabeled a patient’s radiology report? Wouldn’t that start a chain of misdiagnosis and mistreatment? Can you imagine the damage caused? It is for this reason that, even when technology makes it easier to share large, sensitive files like medical images, you should double-check that each file is labeled correctly and sent to the intended, authorized recipient.

The Way Forward

The sensitivity of medical files is evident, and with data breaches on the rise, it is vital to ensure the privacy of all medical documents, including large medical images and files. To reduce the possibility of a data breach, any EFSS solution used to share these files should guarantee a reasonable level of file security and HIPAA compliance. In addition, its capacity to efficiently handle large file sizes and offer easy access to these files should not be ignored. Lastly, while remaining cautious when feeding data into the system, create a safe backup of your data in case of a breach. By taking such precautions, medical files can be shared among all necessary parties more easily and safely.

Author: Davis Porter

Image courtesy: stockdevil

Data Owner Responsibilities When Migrating to an EFSS

While it is easy to conclude that all data belongs to your organization, complications arise when the person accountable for data ownership has to be identified. Even though the IT department spearheads the processing, storing, and backing up of data, among other functions, it does not own business data. Outsourced service providers do not own this data any more than the IT department does.

Who Owns Data? What are Data Owner’s Responsibilities?

In the cloud environment, a data owner is a business user who understands the business impact of a security breach leading to loss of data, integrity, or confidentiality. This responsibility makes the data owner very conscious of the decisions made to mitigate and prevent such security incidents.

When migrating to an EFSS, business data owners should do the following:

Classify Data

Data classification has been extensively described as a remedy for data breaches. In essence, data classification helps to significantly reduce insider threats, which are reported to cause 43% of data breaches. Beyond malicious employees, many data breaches are the result of human error. Additionally, the growing volume of data experienced by businesses makes it difficult to track: it is challenging to know where data is stored, who accesses it, and what they do with it. By making sure that only authorized employees can access certain information, the probability of a data breach is reduced.

Clearly, the need for data classification has never been more evident. To properly classify data, a few measures should be taken by a business.

  • Have detailed “acceptable use” policies. All employees should internalize and sign these documents, which are then reviewed annually or as needed.
  • Make use of data classification technologies. When you train employees using a data classification technology, they will better understand the sensitivity of the data they are creating, storing, and sharing. Consequently, they will treat this data with the highest level of confidentiality and caution.
  • Understand industry regulations to classify data accordingly.
  • Once data is properly classified, apply appropriate access controls and continuously yet randomly monitor data activity to nab suspicious activities as soon as they are carried out.

Monitor Data Life Cycle Activities

When migrating to an EFSS, issues such as data retention and disposal should constantly be monitored by a business data owner. Simply put, how long will the EFSS solution retain your data and how long will it take to dispose of your data completely after you have deleted it? What happens to your data once your contract with the provider ends?

Before a business data owner looks at an EFSS provider’s life cycle, he needs to understand the typical seven phases of the data life cycle: data capture, data maintenance, data synthesis, data usage, data publication, data archival, and data purging. At each phase, how safe is the data? Who has access to it, and how long is it retained in the EFSS?

When this data is stored, is it used and accessed by third parties, who sadly cause 63% of all data breaches? Is the EFSS data retention and disposal policy compliant with the law? For example, the Health Insurance Portability and Accountability Act (HIPAA) stipulates retention requirements for health records, while organizations that accept credit cards must adhere to the Payment Card Industry Data Security Standard (PCI DSS) and its data retention and disposal requirements.

Understand Enterprise File Sync-and-Share (EFSS) Deployment Models, As a Way of Assessing Risks

Despite the existence of extensive advice on the best EFSS solutions, business data owners need to gain some technical knowledge. How many EFSS deployment models do you know, for example? Since this is a pretty broad topic, we will briefly discuss three models.

Public Cloud EFSS

In addition to being fast and easy to set up, a public cloud can be cheaper in terms of both infrastructure and storage costs. However, public cloud EFSS might not be the best in terms of data protection and security, leaving your company exposed and vulnerable to regulatory non-compliance. It is therefore important to analyze the security measures a public cloud offers before settling on one.

Private Cloud EFSS

Although a private cloud is believed to be more expensive than the public cloud, the cost of ownership depends largely on the vendor and infrastructure choice (for example, FileCloud offers the lowest cost of ownership across public and private clouds). Private cloud EFSS is worthwhile in terms of the services and security offered. With an adoption rate of 77%, private cloud solutions such as FileCloud are strong options. This preference is attributed to the flexibility and control over where data is stored: users can choose which regulations to comply with and have better control in the event of a breach, because the IT department can access all the files and monitor, protect, and salvage them, as opposed to a public cloud.

Hybrid Cloud EFSS

According to RightScale’s “Cloud Computing Trends: 2016 State of the Cloud Survey,” the hybrid cloud EFSS adoption rate is 71%. This success is believed to result from the ability to harness the positive attributes of both public and private clouds at once: in a hybrid environment, some components run on premises while others run in the cloud. One example of a hybrid model is an EFSS application that runs as Software as a Service (SaaS) while data is stored on premises or at the discretion of the user company.

Closing remarks

It is the responsibility of a business data owner to ascertain that data will be kept safe and confidential before migrating to any EFSS solution. This person needs to understand the advantages the chosen EFSS model offers, its compliance with industry regulations, proper access and identity management, the EFSS data life cycle processes, and whether security measures such as data encryption and authentication are in place.

Author: Davis Porter


HIPAA Prescribed Safeguards for File Sharing

health care data governance

The Health Insurance Portability & Accountability Act (HIPAA) sets standards for protecting sensitive patient data in the cloud. Any company dealing with PHI (protected health information) needs to ensure that all of the required network, physical, and process safety measures are properly followed.

This includes covered entities (CEs): anyone who provides treatment, operations, or payment in health care; and business associates (BAs): anyone with access to patient information stored in the cloud or who provides support for payment, operations, or treatment. Subcontractors and associates of associates need to be in compliance too.

Who needs HIPAA?

The HIPAA Privacy Rule addresses the saving, sharing, and accessing of individuals’ personal and medical data stored in the cloud, while the Security Rule more specifically outlines national security standards to protect health data that is received, maintained, transmitted, or created electronically, also known as e-PHI (electronic protected health information).

Technical and physical safeguards

If you’re hosting data with HIPAA-compliant hosting providers, they need to have particular administrative, technical, and physical safeguards in place, as per the US HHS (Department of Health & Human Services). The technical and physical safeguards most important for services provided by hosts are listed below:

  • Physical safeguards include limited facility access and control, with authorized access procedures. All covered entities need to have policies regarding the use of and access to electronic media and workstations, including the removal, transfer, reuse, and disposal of e-PHI and electronic media.
  • Technical safeguards should only allow authorized users to access e-PHI. Access control includes the use of unique user IDs, emergency access procedures, automatic logoff, and encryption and decryption.
  • Tracking logs or audit reports need to be implemented in order to keep a record of activity on hardware and software. This is very useful when it comes to pinpointing the source or cause of security violations.
  • Technical policies need to cover integrity controls, and measures should be in place to confirm that e-PHI has not been destroyed or altered. Offsite backup and IT disaster recovery are essential so that any electronic media errors or failures can be remedied quickly, and patient health information can be recovered intact and accurately.
  • Transmission or network security is the last safeguard required of HIPAA compliant hosts, protecting against unauthorized access to or use of e-PHI in transit. This covers all methods of transmitting data, whether over the internet, by email, or even on private networks, such as a private cloud.
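The automatic-logoff safeguard mentioned above is simple to reason about in code. The sketch below is a minimal illustration only, not FileCloud code, and the 15-minute timeout is an assumed policy value you would set from your own risk analysis:

```python
import time

SESSION_TIMEOUT_SECONDS = 15 * 60  # assumed policy value, not a HIPAA-mandated number

class Session:
    """Minimal sketch of the automatic-logoff technical safeguard."""

    def __init__(self, user_id):
        self.user_id = user_id                 # unique user ID safeguard
        self.last_activity = time.monotonic()

    def touch(self):
        # Record activity on every authenticated request.
        self.last_activity = time.monotonic()

    def is_expired(self, now=None):
        # True once the idle period exceeds the policy timeout.
        if now is None:
            now = time.monotonic()
        return (now - self.last_activity) > SESSION_TIMEOUT_SECONDS
```

A middleware would call `touch()` on each request and force re-authentication once `is_expired()` returns true.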

A supplemental act passed in 2009, the HITECH (Health Information Technology for Economic & Clinical Health) Act, supported the enforcement of HIPAA requirements by increasing the penalties imposed on organizations that violate the HIPAA Privacy or Security Rules. The HITECH Act was created in response to the development of health technology and the increased storage, transmission, and use of electronic health information.

HIPAA has driven a number of healthcare providers to search for solutions that can help them secure cloud data. Medical information is very private, and regulation keeps getting tighter, which means enforcement does too. A number of healthcare providers have chosen to move their whole EHRs onto a HIPAA compliant platform such as FileCloud in order to reduce their expenses and become more interoperable across various devices in a safe, HIPAA compliant fashion.


Author: Rahul Sharma

images courtesy: Stuart Miles

HIPAA Compliant File Sharing Requirements

HIPAA Compliant File Sharing


In this article, let us explore the origin of the HIPAA Privacy and Security Rules and their major parts: who is covered, what information is protected, and what safeguards need to be in place to protect electronic health information stored in the cloud, mainly in the context of HIPAA compliant file sharing.


HIPAA (the Health Insurance Portability & Accountability Act of 1996) required the Secretary of the US HHS to develop regulations that protect the security and privacy of health information. Accordingly, HHS published the HIPAA Privacy and Security Rules.

  • The Privacy Rule establishes national standards for the protection of certain health information.
  • The Security Rule establishes security standards to protect health information that is held or transferred in electronic form.

The Security Rule puts in motion the protections from the Privacy Rule, and addresses technical as well as non-technical safeguards which the organizations need to have in place for securing e-PHI.

Before HIPAA, there were no generally accepted security standards or requirements for protecting health information. New technologies kept evolving, and the industry moved away from paper, relying more and more on electronic systems for paying claims, proving eligibility, and providing and sharing information.

Today, providers use clinical applications like CPOE systems, EHR, pharmacy, radiology, etc. Health plans provide access to care and claim management and self-service applications. This may mean that the workforce is more efficient and mobile, but the potential security risk also increases at the same time.

One of the main goals of this rule is to protect individuals' privacy with regard to health information stored in the cloud while allowing entities to adopt new technologies that improve the efficiency and quality of patient care. The Security Rule is scalable and flexible, which means covered entities can implement procedures, policies, and technologies appropriate for their size and organizational structure.


Who’s covered?

This rule, just like all the administrative simplification rules, applies to health plans, health care clearinghouses, and any health care provider who transmits health information electronically.

What’s protected?

This rule protects individually identifiable information known as PHI (Protected Health Information). It protects the subset of the information covered by the Privacy Rule, namely all individually identifiable information created, received, maintained, or transmitted electronically by an entity. It doesn’t apply to PHI transmitted in writing or orally.
On a related note, here is a good article on What is PII and PHI? Why is it Important?

General Rules

The rule requires all covered entities to maintain reasonable and appropriate administrative, technical, and physical safeguards for e-PHI. Covered entities must:

  • Ensure confidentiality, availability, and integrity of e-PHI created, received, maintained or transmitted by them.
  • Identify and protect against reasonably anticipated threats to the security or integrity of the information.
  • Protect against reasonably anticipated, impermissible uses or disclosures.
  • Ensure workforce compliance.

Risk Management and Analysis

The provisions in the rule require entities to conduct risk analysis as part of security management. The risk analysis and management provisions of this rule are addressed separately here, since determining which security measures are appropriate for a given entity shapes the safeguard implementation for the rest of the rule.

Administrative Safeguards

  • Security Personnel: Covered entities have to designate security officials who are responsible for implementing and developing security procedures and policies.
  • Security Management Process: Covered entities need to identify and analyze any potential risks to e-PHI. They must implement security measures which will reduce the vulnerabilities and risks to appropriate and reasonable levels.
  • Information Access Management: The security rule tells covered entities to implement procedures and policies that authorize access to e-PHI only at appropriate times depending on the role of the user or recipient.
  • Workforce Management and Training: Covered entities need to provide for appropriate supervision and authorization of the workforce who use e-PHI. Covered entities need to train all members of the workforce about procedures and policies for security and need to have and apply relevant sanctions against any members who violate procedures and policies.
  • Evaluation: Covered entities need to perform periodic assessments on how well security procedures and policies are meeting the requirements of this rule.

Physical safeguards

  • Facility Access and Control: Covered entities need to limit physical access to facilities and ensure only authorized access is granted.
  • Device and Workstation Security: Covered entities need to implement procedures and policies which specify the correct use of electronic media and workstations. They must also have procedures and policies in place for the removal, disposal, transfer, and reuse of media.

Technical Safeguards

  • Access Control: Covered entities need to implement technical procedures and policies that allow only authorized persons to access e-PHI.
  • Audit Controls: Covered entities need to implement software, hardware and procedural mechanisms for examining and recording access and any other activity in information systems which use or contain e-PHI.
  • Integrity Controls: Covered entities need to implement procedures and policies which ensure e-PHI isn’t improperly destroyed or altered. Electronic measures have to be in place to confirm this as well.
  • Transmission Security: Covered entities need to implement security measures which protect against unsanctioned access to e-PHI which is being transmitted over electronic networks.
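To make the integrity-control idea concrete, a record can be sealed with an HMAC tag so that any later alteration is detectable. This is a hedged sketch, not a compliance implementation; the hard-coded key is a placeholder for a key that would come from a proper key-management system:

```python
import hashlib
import hmac
import json

SECRET_KEY = b"replace-with-a-managed-key"  # placeholder; use real key management

def seal_record(record):
    """Attach an HMAC tag so later alteration of the record is detectable."""
    payload = json.dumps(record, sort_keys=True).encode()
    sealed = dict(record)
    sealed["_hmac"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return sealed

def verify_record(sealed):
    """Recompute the HMAC over everything except the tag and compare."""
    body = {k: v for k, v in sealed.items() if k != "_hmac"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(sealed.get("_hmac", ""), expected)
```

The same pattern underlies audit controls: a log entry sealed this way cannot be silently edited after the fact.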

Organizational Requirements

  • Business Associate Contracts: HHS develops regulations relating to business associate obligations and contracts under the HITECH Act of 2009.
  • Covered Entity Responsibilities: If covered entities know of activities or practices of associates which constitute violation or breach of their obligation, the entity needs to take reasonable steps to end the violation and fix the breach.

Procedures, Policies and Documentation Requirements

Covered entities need to adopt appropriate and reasonable procedures and policies for complying with the provisions of the rule. They must maintain written procedures, policies, and records of required activities, actions, and assessments for six years after the date of their creation or last effective date, whichever is later.

Updates: Covered entities need to periodically update documentation as a response to organizational or environmental changes which affect the security of e-PHI.

Noncompliance Penalties and Enforcement

Compliance: The rule establishes a set of standards for the confidentiality, availability, and integrity of e-PHI. The HHS Office for Civil Rights (OCR) is responsible for enforcing and administering these standards, in connection with its enforcement of the Privacy Rule, and may conduct complaint investigations and compliance reviews.


Author: Rahul Sharma

FileCloud SSO Demystified

Single Sign On, or SSO, is a solution that gives one-click access to all of your applications with one password. According to Wikipedia, SSO is a property of access control across multiple related but independent software systems: a user logs in with a single ID and password and gains access to the connected systems without using different usernames or passwords.

The convenience of having a single username and password across multiple applications cannot be overstated. Users do not have to remember different login credentials for different sites; they log in to one application and have their credentials carried over to all the others. Administrators do not have to manage different sets of passwords for different applications, reducing the time, cost, and potential risk of password maintenance. Those are just a few of the advantages of SSO.

FileCloud supports SSO across a range of authentication sources, such as Active Directory, Active Directory Federation Services (ADFS), any on-premise identity provider that supports the SAML 2.0 protocol, or cloud-based identity providers such as Okta, OneLogin, Centrify, and more.


NT LAN Manager (NTLM) is a suite of Microsoft security protocols that provides authentication, integrity, and confidentiality to users. The objective is to use a web browser such as Internet Explorer or Google Chrome to log in to the FileCloud website automatically using Windows Active Directory authentication. When a user browses to the FileCloud site, they are seamlessly logged in using their AD credentials, without being asked to enter a username and password.

NTLM Authentication

In FileCloud, Active Directory authentication must be set up first; NTLM SSO can then be configured on top of it.


Active Directory Federation Services (ADFS) is a software component that runs on Windows servers to provide users with single sign-on access to applications across organizational boundaries. ADFS integrates with Active Directory Domain Services, using it as an identity provider. The objective is to have FileCloud users authenticate against ADFS; on a successful authentication response from ADFS, the users are logged into FileCloud.

ADFS Authentication

FileCloud integrates seamlessly with an ADFS server using the federation metadata. The FileCloud server acts as a Service Provider (SP) and ADFS acts as an Identity Provider (IdP). Login requests from the client web browser are redirected to the ADFS server. The ADFS server authenticates the user against the ADFS datastore (which can be a SQL database, an AD server, LDAP, etc.) and returns an authentication token that logs the user into FileCloud.

The following link explains the step-by-step details of setting up ADFS and integrating it with FileCloud.



Security Assertion Markup Language (SAML) is an XML-based open standard data format for exchanging authentication and authorization data between parties. As with ADFS, FileCloud acts as the Service Provider (SP) and the customer must run the Identity Provider (IdP) server.

FileCloud SAML

The following process explains how the user logs into a hosted FileCloud application through customer-operated SAML based SSO service.

  1. The user attempts to reach the hosted FileCloud application through its URL.
  2. FileCloud generates a SAML authentication request. The SAML request is embedded into the URL for the customer’s SSO service.
  3. FileCloud sends a redirect to the user’s browser. The redirect URL includes the SAML authentication request and is submitted to the customer’s SSO service.
  4. The customer’s SSO service authenticates the user based on valid login credentials.
  5. The customer’s SSO service generates a valid SAML response and returns the information to the user’s browser.
  6. The SAML response is redirected to FileCloud.
  7. The FileCloud authentication module verifies the SAML response.
  8. If verification succeeds, the user is logged into FileCloud.
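Steps 2 and 3 above (building the authentication request and embedding it in a redirect URL) follow the standard SAML 2.0 HTTP-Redirect binding. Below is a minimal sketch of that binding using only Python's standard library; the URLs and entity IDs are hypothetical, and this is an illustration of the protocol step, not FileCloud's actual implementation:

```python
import base64
import datetime
import urllib.parse
import uuid
import zlib

def build_sso_redirect_url(idp_sso_url, sp_entity_id, acs_url):
    """Build a redirect URL carrying a minimal SAML 2.0 AuthnRequest."""
    request_id = "_" + uuid.uuid4().hex
    issue_instant = datetime.datetime.utcnow().strftime("%Y-%m-%dT%H:%M:%SZ")
    authn_request = (
        '<samlp:AuthnRequest xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol" '
        'xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion" '
        f'ID="{request_id}" Version="2.0" IssueInstant="{issue_instant}" '
        f'AssertionConsumerServiceURL="{acs_url}">'
        f"<saml:Issuer>{sp_entity_id}</saml:Issuer>"
        "</samlp:AuthnRequest>"
    )
    # HTTP-Redirect binding: raw DEFLATE, then base64, then URL-encode.
    deflated = zlib.compress(authn_request.encode())[2:-4]  # strip zlib header/trailer
    saml_request = base64.b64encode(deflated).decode()
    return idp_sso_url + "?" + urllib.parse.urlencode({"SAMLRequest": saml_request})
```

On the IdP side, the `SAMLRequest` parameter is base64-decoded and inflated back into the XML request before the user is authenticated (step 4).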

Customers can run their own Identity Provider or use one of the cloud-based Identity Providers such as Okta, OneLogin, Centrify, etc. FileCloud can seamlessly integrate with any IdP as long as it supports the SAML 2.0 protocol.

The following link explains the steps involved in integrating an Identity Provider with FileCloud.

In conclusion, Single Sign On (SSO) provides the convenience of a one-click login into multiple applications and websites. FileCloud supports different SSO mechanisms and will seamlessly integrate with a number of SSO Identity providers and existing SSO infrastructure.


Practical Security Tips For Lawyers to Prevent Data Breaches

Security tips for lawyers need not be a list of complex technical IT setups; a simple set of actions can prevent some major breaches. With professional ethics requiring high confidentiality and information security for their clients, lawyers definitely need a secure, leak-proof way of storing their clients’ case details and other vital, delicate information.

To do this, common practices such as password-protecting files and the computer are employed. Despite this, legal firms have still experienced loss of client data and in extreme cases, leakage of damaging information. As a matter of fact, in 2014, a string of data breaches within law firms led the Information Commissioner’s Office (ICO) to issue a public warning to barristers and solicitors. The ICO insisted that more needed to be done to ensure that client information was kept as safe and confidential as possible.

A while back, in 2010, Rob Lee, an information security specialist who investigated data breaches for Mandiant, a security company, estimated that 10% of his time was spent investigating law firm data breaches. A year earlier, in 2009, the FBI actually issued an advisory warning to law firms stating that they were specifically being targeted by hackers.

As a result of this streak of data breaches, clients have gone as far as threatening not to pay for services in law firms whose data security ‘stinks’. To protect themselves, law firms should consider bolstering their security systems.

Usually, law firm owners are tempted to believe the word of their IT managers on matters regarding file security. While this is good, it is better to check the system yourself and also hire a third party to help you ascertain the security levels of your systems.

So, what are the top practical security tips to prevent data breaches?

Practical Security Tips For Lawyers to Prevent Data Breaches

Generally, here are a few tips on what you should do:

  • Improve The Quality Of Your Passwords

Normally, people are advised to have an 8-character alphanumeric password because it is supposedly strong and safe enough. However, it has been proven that 8 characters is no longer sufficient when it comes to password strength.

In a hacking experiment for Ars Technica, a tech website, a team of hackers managed to crack more than 14,800 cryptographically hashed passwords. In fact, a 16-character password (for example qeadzcwrsfxv1331) was hacked in less than an hour.

In a different study, by Trustwave Global Security, it was revealed that 88% of passwords can be hacked within 14 days. What this means for law firms, which, as indicated, are an attractive target, is that they need to strengthen their file security by improving the quality of their passwords.

Generally, passwords should not be obvious and should not be reused elsewhere. In addition, you should refrain from keeping a file named ‘passwords’ on your computer.
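To make the point concrete, here is a small sketch using Python's `secrets` module to generate a long, genuinely random password, plus the standard entropy formula showing why length and a large alphabet matter. The 20-character default is just an illustrative choice, not a recommendation from any standard:

```python
import math
import secrets
import string

def generate_password(length=20):
    """Generate a uniformly random password over letters, digits, punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

def entropy_bits(length, alphabet_size):
    """Theoretical entropy of a uniformly random password: length * log2(alphabet)."""
    return length * math.log2(alphabet_size)
```

An 8-character alphanumeric password carries roughly 48 bits of entropy, while 16 characters from the same alphabet carry about 95; each extra character multiplies the attacker's search space. Note that the 16-character password cracked in the Ars Technica experiment was a keyboard pattern, not a random string, which is exactly why randomness matters as much as length.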

  • Protect Your Computers With Whole Disk Encryption

A study by eSecurity Planet found that the leading cause of data breaches is the loss or theft of unencrypted laptops and USB devices. Unfortunately, this is a playing field for hackers, because penetrating a stolen laptop is easier than cracking a database.

To curb this, law firms are advised to protect their devices with whole disk encryption. This can be boosted with biometric access, such as fingerprint swipe.

  • Use The Cloud

A self-hosted or public cloud works as your backup medium and also as a safer place to store your most sensitive files. One of the leading self-hosted cloud solutions is FileCloud. Unlike public cloud-based file sharing solutions that run on third-party servers, FileCloud is an on-premises Enterprise File Sharing and Sync (EFSS) solution, self-hosted and managed by your own trusted administrators and running safely on your infrastructure. Better yet, it is regulated by your corporate IT security policies, ensuring that you receive the highest possible level of protection.


Author: Davis Porter

Breaking up with Windows Server 2003? 5 Essential Steps Before Migrating

Microsoft officially stopped supporting Windows Server 2003 in July 2015 (the announcement had been made well in advance). If your organization is still using Windows Server 2003 without understanding the serious risks of doing so, it’s probably time to have a tough talk with your CIO.

It’s Time to Move On

Just as organizations found it hard to move on from Windows XP (its end of life came in April 2014), IT departments are still holding on to Windows Server 2003, because they’ve spent years stabilizing server infrastructure built on the tight, power-packed Windows Server 2003 OS. However, it’s important to move on now, particularly several months after Windows Server 2003’s end of life, with the cyber sphere already buzzing with stories of server outages at organizations still clinging to it.

Some Key Facts to Absorb, Before You Migrate

  • Windows Server 2003’s replacements come aplenty – there’s Windows Server 2008, Windows Server 2008 R2, Windows Server 2012, and Windows Server 2012 R2.
  • Windows Server 2012 R2 offers substantial compatibility of applications with Windows Server 2003, which makes migration easier.
  • Integrated virtualization, extensive scalability, and better security are the major improvements that Windows Server 2012 R2 brings to the table for an IT department.
  • Continuing beyond Windows Server 2003’s end of life means exposing your organization’s databases to hackers and cybercriminals through unpatched vulnerabilities, and staying cut off from the latest developments in server operating systems.
  • With simplified licensing and deployments attuned to a virtualization-empowered IT environment, this migration offers the opportunity of long-term CAPEX savings.

Guide to Moving from Windows Server 2003 – Five Essential Steps

Step 1 – Step Back and Envisage the Broad IT Spectrum

Understand the current state of your organization’s servers, the ideal state your IT department envisages, and a reasonably attainable end state. You need to understand and express your IT goals in terms of server infrastructure.

Step 2 – Take Stock of the Current Server OS Utilization

  • Consult with your server administrator and get a good idea of the current state of the Windows Server 2003 environment in your server centre. Get a decent idea of the real workloads being tackled by Windows Server 2003. Need help mapping your datacenter workloads? Check out a few low-cost or no-cost applications –
    • Microsoft Assessment and Planning [MAP] Toolkit
    • OpenManage Server Administrator [OMSA] and ChangeBASE, from Dell
    • HP Asset Manager software
  • Find out the accurate number of Windows Server 2003 systems operating on your network. Spend time extracting details on the CPU, disk space, and memory profile of each system. Identify systems with less than 50% utilization (these are good candidates for virtualization). Take stock of systems that can be decommissioned; for instance, systems that have always sat unused or can be retired without business impact.
  • Plus, Microsoft has handled post-end-of-life consultancy particularly well, which means you will still be able to talk to Microsoft MVPs (Most Valuable Professionals) to get help on the best way forward.
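The rules of thumb in Step 2 can be expressed as a small filter over an inventory export. The records and thresholds below are hypothetical, standing in for data you would pull from a tool such as the MAP Toolkit:

```python
# Hypothetical inventory records, standing in for a real assessment-tool export.
servers = [
    {"name": "FS01", "cpu_util": 0.35, "mem_gb": 4},
    {"name": "SQL01", "cpu_util": 0.80, "mem_gb": 16},
    {"name": "OLD01", "cpu_util": 0.02, "mem_gb": 2},
]

def classify(server):
    """Apply the Step 2 rules of thumb to one inventory record."""
    if server["cpu_util"] < 0.05:
        return "decommission candidate"    # effectively unused
    if server["cpu_util"] < 0.50:
        return "virtualization candidate"  # under 50% utilization
    return "migrate as-is"                 # busy box: plan a dedicated migration

for s in servers:
    print(s["name"], "->", classify(s))
```

The 5% "unused" cutoff is an assumption for illustration; in practice you would set it after reviewing the utilization data with the business owners of each system.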

Step 3 – Prepare Inventory Lists

The third step is about parsing applications, services, and workloads into different buckets. Here’s an actionable guide –

  • Displaceable Workloads – Workloads that can be displaced by standardized service offerings, i.e., replaced on-premise using OS services.
  • Minimal Hassle Apps – Applications that can be migrated from Windows Server 2003 to a newer version such as Server 2008 or Server 2012 with minimal time investment need to be recorded.
  • Moderate Effort Apps – These are applications that will need moderate time investment to be migrated.
  • Problem Apps – These are the apps that might need deeper analyses in terms of migration risks, cutover efforts, and could even pose challenges as huge as complete re-architecting.
  • Apps to be Retired – Understand the target server OS features, and look to discard the old applications running on Windows Server 2003 that these features can replace.

Step 4 – Look to Virtualize What You Can

  • Internal applications need to be tackled on a ‘virtualization first’ philosophy, to maximize the benefits of the server OS migration.
  • Look to leverage virtualization as a tool for facilitating quicker, cost effective, and secure transition from Windows Server 2003 to a newer version.
  • Consider the strategy of deploying Windows Server 2012 hosts first and running your existing Windows Server 2003 instances as VMs on them as an intermediary step.
  • Look to move apps to PaaS (platform as a service) or IaaS (infrastructure as a service) cloud models. Generally, customer-facing and web-facing apps are good candidates for migration to the cloud. Check out Microsoft’s established partners covered under the Cloud OS Initiative of 2013, which offers several viable options for Windows IaaS migrations to existing users of Windows Server 2003.

Step 5 – Risk Assessment

  • Take stock of the mission-critical apps; the best way to do so is to envisage your IT infrastructure and business processes without the app under consideration. If there is no replacement and the impact on business is high, the app falls into the ‘critical’ list, and you need to be extra careful with its migration, with a backup plan in place.
  • To bring down the risks associated with the migration, it’s advisable to begin with P2V migrations and to ensure that all migration activity is recorded. Keep a snapshot version running during the migration so that the app’s functionality remains available in case the migration activities are disrupted.

Remember, your love for Windows Server 2003 can never outweigh your sense of duty to your organization. If you are the CIO or Chief Technology Officer of an SMB still using Windows Server 2003, ask the other senior leaders whether they would want the board of directors and shareholders to know that the company is running on a server OS that will never receive another security update, effectively putting the company’s intellectual property and digital assets at huge risk.

Author: Rahul Sharma
Image Courtesy: Microsoft

Alternative to Pydio – Why FileCloud is better for Business File Sharing?


FileCloud competes with Pydio for business in the Enterprise File Sync and Share (EFSS) space. Before we get into the details, I believe an ideal EFSS system should work across all the popular desktop OSes (Windows, Mac, and Linux) and offer native mobile applications for iOS, Android, Blackberry, and Windows Phone. In addition, the system should offer all the basics expected of EFSS: unlimited file versioning, remote wipe, audit logs, a desktop sync client, a desktop map drive, and user management.

The feature comparisons are as follows:

Features FileCloud Pydio
On Premise
File Sharing
Access and Monitoring Controls
Secure Access
Document Preview
Document Edit
Outlook Integration
Role Based Administration
Data Loss Prevention
Endpoint Backup
Amazon S3/OpenStack Support
Public File Sharing
Customization, Branding
SAML Integration
NTFS Support
Active Directory/LDAP Support
API Support
Application Integration via API
Large File Support
Network Share Support
Mobile Device Management
Desktop Sync Windows, Mac, Linux Windows, Mac, Linux
Native Mobile Apps iOS, Android, Windows Phone iOS, Android
Encryption at Rest
Two-Factor Authentication
File Locking
Pricing for 100 users/ year $4199 $1772

From the outside looking in, the offerings all look similar. However, the two products take completely different approaches to satisfying enterprises’ primary need: easy access to their files without compromising privacy, security, and control. The fundamental areas of difference are as follows:

Feature benefits of FileCloud over Pydio

Document Quick Edit – FileCloud’s Quick Edit feature supports extensive editing of files such as Microsoft® Word, Excel®, Publisher®, Project®, and PowerPoint® right from your desktop. It’s as simple as selecting a document to edit from the FileCloud web UI and editing it in Microsoft Office; on save, FileCloud takes care of the uninteresting details in the background, such as uploading the new version to FileCloud, syncing, sending notifications, and sharing updates.

Embedded File Upload Website Form – FileCloud’s Embedded File Upload Website Form enables users to embed a small FileCloud interface onto any website, blog, social networking service, intranet, or any public URL that supports HTML embed code. Using the Embedded File Upload Website Form, you can easily allow file uploads to a specific folder within your account. This feature is similar to File Drop Box that allows your customers or associates to send any type of file without requiring them to log in or to create an account.

Unified Device Management Console – FileCloud’s unified device management console provides simplified management of the mobile devices enabled to access enterprise data, irrespective of whether a device is enterprise-owned or employee-owned, and regardless of mobile platform or device type. Manage and control thousands of iOS and Android devices from FileCloud’s secure, browser-based dashboard. FileCloud’s administrator console is intuitive and requires no training or dedicated staff. FileCloud’s MDM works on any vendor’s network, even if the managed devices are on the road, at a café, or used at home.

Device Commands and Messaging – The ability to send on-demand messages to any device connecting to FileCloud gives administrators a powerful tool for interacting with the enterprise workforce. Any information on security threats or access violations can be easily conveyed to mobile users. And, above all, the messages carry no SMS cost.

Amazon S3/OpenStack Support – Enterprises wanting to use Amazon S3 or OpenStack storage can easily set it up with FileCloud. This feature not only gives enterprises the flexibility to switch storage but also makes that switch very easy.

Multi-Tenancy Support – The multi-tenancy feature allows Managed Service Providers (MSPs) to serve multiple customers using a single instance of FileCloud. The key value proposition of FileCloud’s multi-tenant architecture is that data separation among the different tenants is maintained while providing multi-tenancy. Moreover, every tenant has the flexibility of customized branding.

Endpoint Backup – FileCloud provides the ability to back up user data from any computer running Windows, Mac, or Linux to FileCloud. Users can schedule a backup, and FileCloud automatically backs up the selected folders at the scheduled time.


Which product an enterprise prefers will depend on whether it wants to rely on Pydio, whose focus is split between being open source and commercializing its enterprise offering, or on FileCloud, whose only focus is satisfying all of an enterprise’s EFSS needs, with unlimited product upgrades and support at a very affordable price.

Here’s a comprehensive comparison that shows why FileCloud stands out as the best EFSS solution.

Try FileCloud For Free & Receive 5% Discount

Take a tour of FileCloud