Archive for the ‘Security’ Category

Blockchain Beyond Crypto-currencies


On the 31st of October 2008, the still mysterious Satoshi Nakamoto (probably a pseudonym for an individual or group) published a white paper introducing the concept of a peer-to-peer digital cash system called Bitcoin. Bitcoin marked a radical shift in the finance industry. It offers enhanced security and transparency by authenticating the peers that share the virtual cash, hashing transaction records, and encrypting data. Analysts in the global financial industry predict that the market for security-based blockchain will grow to roughly $20 billion by 2020.

More Than Just Crypto-currencies

Blockchain is widely known for powering crypto-currencies; it is the data structure that enables Bitcoin (BTC) and newer digital currencies such as Ether (ETH) to flourish through a combination of decentralized encryption, immutability, anonymity, and global scale. However, its uses go far beyond that.

In a nutshell, blockchain refers to a continuously updated record of who holds what.

A blockchain is a distributed data repository or ledger that is decentralized and available for everyone to see and verify. To understand it in the context of a trust economy, you can equate it to the public ledgers that towns once used to record important things like the transfer of property deeds or election results. Blockchain simply uses advanced cryptography and distributed programming to achieve similar results. What you end up with is a system with trust inherently built into it: a transparent, secure, immutable repository of truth that has been built to be highly resistant to manipulation, outages, and unnecessary complexity. This consistent record of truth is made possible by the shared and cryptographic nature of the ledger.
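To make that idea concrete, here is a minimal Python sketch of a hash-chained ledger: each block commits to the hash of the previous one, so any attempt to rewrite history breaks the chain. The block fields and the record_transfer helper are invented for this illustration and do not correspond to any particular blockchain's format.

```python
import hashlib
import json
import time

def block_hash(block):
    """Hash the block's contents (a deterministic JSON form) with SHA-256."""
    payload = json.dumps(block, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

class Ledger:
    """A toy 'record of who holds what': each block commits to the previous one."""
    def __init__(self):
        genesis = {"index": 0, "timestamp": 0, "data": "genesis", "prev_hash": "0" * 64}
        self.chain = [genesis]

    def record_transfer(self, data):
        block = {
            "index": len(self.chain),
            "timestamp": time.time(),
            "data": data,
            "prev_hash": block_hash(self.chain[-1]),
        }
        self.chain.append(block)

    def verify(self):
        """Recompute every link; tampering with an earlier block breaks the chain."""
        return all(
            self.chain[i]["prev_hash"] == block_hash(self.chain[i - 1])
            for i in range(1, len(self.chain))
        )

ledger = Ledger()
ledger.record_transfer({"deed": "42 Main St", "from": "Alice", "to": "Bob"})
ledger.record_transfer({"deed": "7 Elm Ave", "from": "Carol", "to": "Dave"})
assert ledger.verify()
ledger.chain[1]["data"]["to"] = "Mallory"   # rewrite history
assert not ledger.verify()                  # later blocks no longer match
```

In a real blockchain the same check is performed independently by every node, which is what makes manipulation so difficult.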

Blockchain's public perception mainly revolves around crypto-currencies. Most people are put off by its perceived technological complexity and dismiss it as something for the intellectually tech-savvy, but its basic concept is universal and simple, and its potential is nothing short of revolutionary: from financial ledgers and contracts to monitoring and securing all manner of data in the next generation of distributed applications.

Blockchain in the Enterprise

Blockchain is creating waves in the enterprise software market, with companies like Microsoft and IBM leveraging it in developer environments, cloud platforms, Internet of Things (IoT) technology, and more. Ethereum's blockchain technology has largely been the gateway; nonetheless, the tech giants are now firmly in the blockchain business. The finance and banking industry is also adopting blockchain transactions in the form of smart contracts.

Blockchain was listed as one of the top trends in the Gartner hype cycle for 2017. The hype cycle takes a close look at technologies that have the potential to significantly increase a company's competitive edge, and according to Gartner, this technology will lead to the reformation of entire industries in the long term. Companies at the forefront of disruption view blockchain as the driving force behind it. This was the key takeaway from a study of 3,000 executives by IBM's Institute for Business Value, which examined the enterprise potential of blockchain. The survey concluded that 33% of the executives were considering or had already adopted it, and most were counting on it to provide a competitive advantage while creating a platform approach to innovation.

Potential In the Cloud

Cloud computing has been widely adopted in virtually every facet of IT, so one can't help but wonder how the decentralization and security features of blockchain technology could further enhance the cloud's appeal. Whenever CIOs begin discussing moving critical applications to the cloud, terms like security, compliance, accountability, reliability, auditability, and acceptance of liability are thrown around. The main point of contention is the demand for a secure supply chain in which each step is verifiable in real time, so that when things go wrong it is possible to find out what happened and hold someone accountable. Introducing blockchain into cloud computing creates a convenient service that offers enhanced security.

A Decentralized Cloud

A key characteristic of blockchain is that it was designed to be synchronized and distributed across networks. A blockchain-based decentralized cloud facilitates on-demand, low-cost, and secure access to some of the most competitive computing infrastructure, while protecting your files, both on the nodes and in transit, through encryption and cryptography. A major reservation organizations have when migrating to the cloud is trusting third parties to secure sensitive, private data.

For most cloud experts, the biggest draw of blockchain is the elimination of intermediaries, mainly because a well-designed and publicly accessible blockchain can replace most of the functions intermediaries perform to ensure a secure, fraud-free environment. On a decentralized 'blockcloud', where data is stored on multiple individual nodes intelligently distributed across the globe, it is virtually impossible to cause meaningful disruptions.

Blockchains like Ethereum provide a different approach to running distributed applications. Using Ethereum, developers can write smart contracts: code that is executed on the blockchain's virtual machine whenever a transaction fires. In effect, the Ethereum blockchain provides a distributed run-time environment with distributed consensus over the execution.
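As a rough illustration of how an application might talk to such a contract, here is a hedged sketch using the web3.py library. The node URL, contract address, and ABI are placeholders, and get() is a hypothetical read-only method; in practice you would substitute a real endpoint and the ABI produced when your contract was compiled.

```python
from web3 import Web3  # pip install web3

# Connect to an Ethereum node (placeholder endpoint; replace with a real one).
w3 = Web3(Web3.HTTPProvider("https://example-node.invalid"))

# Address and ABI of an already-deployed contract (both placeholders).
CONTRACT_ADDRESS = "0x0000000000000000000000000000000000000000"
CONTRACT_ABI = [
    {"name": "get", "type": "function", "inputs": [],
     "outputs": [{"name": "", "type": "uint256"}], "stateMutability": "view"},
]

contract = w3.eth.contract(address=CONTRACT_ADDRESS, abi=CONTRACT_ABI)

# A read-only call executes the contract code on a node without sending a
# transaction; a state-changing call would instead be submitted as a signed
# transaction and executed by every node as part of reaching consensus.
value = contract.functions.get().call()
print("current value stored in the contract:", value)
```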

In Closing

Blockchain's power doesn't lie only in its heavy encryption; its distributed nature is what makes it hard to manipulate. It is essentially a sequential storage scheme that can verify itself, making it an ideal solution for immutably recording transactions and much more. While everyone remains fixated on the AI buzz, blockchain is a dark horse running under the radar.

Author: Gabriel Lando

Image Courtesy of Freepik

Data Security Questions Every Enterprise Should Ask

Over the past decade, cloud computing has transitioned from being a buzzword to a staple technology for most enterprises, driven mainly by the cloud's accessibility, superior flexibility, and capacity compared to mainstream computing and storage techniques. However, just like mainstream data sharing and storage methods, cloud computing has its fair share of data security issues. Mitigating data security risks is essential to creating a level of comfort among CIOs about migrating data and applications to the cloud. The decision to transition to the cloud has to depend on how sensitive the data is and on the security guarantees the cloud vendor provides.

Is your data safe in the hands of a cloud service provider?

In today's exceedingly mobile world, enterprises rely heavily on cloud vendors and allow remote access to more devices than ever before. The end result is a complex network that requires higher levels of security. The only way organizations can maintain the availability, integrity, and confidentiality of these different applications and datasets is to ensure their security controls and detection-based tools have been updated to work with the cloud computing model. Whenever data is stored in the cloud, the focus is typically on the security of the cloud provider and its hosting facility, usually at the expense of how the data itself is handled. This begs the question: do you trust the cloud vendor's technology? Do you trust their employees? Do you trust their safeguards? Are you completely sure that, with their back against the wall, they would not sell or compromise any of your data?

The fact remains that once you move your data to a public cloud platform, you can no longer exercise your own security controls. Outsourcing also introduces a costly threat to intellectual property in the form of digital information like engineering drawings, source code, and so on. An organization has to give its cloud service provider access to important IP assets that are vital to its core business, and exposing invaluable information to third parties presents a serious security risk. In most cases, migrating to the cloud means you have no option but to trust the vigilance, knowledge, and judgment of your chosen vendor.

As cloud-based solutions like Dropbox and Google Drive become more popular in business settings, enterprises have to come to grips with the fact that loss of control over confidential data is a looming security threat. Even though cloud vendors implement several security measures to isolate tenant environments, the organization still loses some level of IT control, which equates to risk, because sensitive data and applications no longer reside in a private, physically isolated data center. Is the business value worth the risk?

Why is Metadata Security Important?

In a nutshell, metadata is data about data. The bigger question is whether metadata is personally identifiable: if enough of it is linked together, a detailed profile of an individual or organization can be created, enough to personally identify them. Most IT security experts agree that metadata typically contains sensitive information that is hidden from obvious view but easily extractable. Metadata poses a significant data leak risk because employees are often not even aware of its existence. Whenever a request is made to store or retrieve data from a cloud storage server, the request and the subsequent response contain metadata about both the request and the data itself. Since the organization has little to no control over this metadata, there is no way to guarantee its security.
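To make "easily extractable" concrete, here is a small Python sketch that pulls hidden metadata out of an ordinary photo using the Pillow library. The file name is a placeholder, and the exact tags present will vary from file to file.

```python
from PIL import Image, ExifTags  # pip install Pillow

def extract_exif(path):
    """Return the EXIF metadata embedded in an image as a readable dict."""
    with Image.open(path) as img:
        exif = img.getexif()
    # Map numeric EXIF tag IDs to human-readable names where known.
    return {ExifTags.TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

# Placeholder path; a phone photo often carries camera model, timestamps,
# software versions, and sometimes GPS coordinates.
for tag, value in extract_exif("vacation_photo.jpg").items():
    print(f"{tag}: {value}")
```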

What Happens in the event of a data breach?

As cloud adoption rates increase, cloud providers are becoming increasingly attractive targets for cybercriminals because of the huge amounts of data stored on their servers. Access to unencrypted metadata alone is enough to count as a full-fledged breach. The severity of a data breach depends on the sensitivity of the data exposed; breaches that involve trade secrets, health information, and intellectual property are usually the most damaging. It is worth noting that cloud vendors are not subject to the same data breach disclosure laws as federal agencies, banks, and other entities, so if a breach does occur, it may never be publicized or associated with the vendor.

Despite the numerous efforts of public cloud providers to implement stringent security measures that curb the risk of data breaches, the burden of responsibility for data security ultimately falls on the organization, and a breach will have critical financial and legal consequences.

Who Controls Your Data?

Ensuring that the data and applications residing in the cloud are kept safe becomes more crucial as high-value data, mission-critical applications, and intellectual property are transferred to the cloud. Although cloud computing in general can be perceived as less secure, the fear of cloud security is situational: the real question shouldn't be whether or not to migrate to the cloud, but which cloud to migrate to. From a security standpoint, many cloud service providers are not ready, and using unsecured cloud vendors can expose sensitive corporate data without your organization even realizing it. Enterprises commercially and legally have to maintain control over their data, while customers and employees need to be able to freely collaborate, share, and sync the files they require. The solution is simple: a private cloud.

Private Cloud Offers a Better Alternative

A private cloud computing model facilitates control and collaboration while protecting confidential data from unauthorized access. IT stakeholders need a detailed understanding of where and how data is stored and transferred. With a self-hosted cloud deployment for critical data, you retain maximum control over the integration and configuration of every layer of security.

  • Flexible Infrastructure

A cloud deployment is considered private when it is hosted on the organization's own servers; however, that does not necessarily mean the servers are hosted on-premises. By going the self-hosted route, companies can choose whether to house their files on-premises or in a remote data center. While on-premises infrastructure has the added advantage of more control and ownership, you are also responsible for capacity planning. Given the costs associated with operating a data center and the redundancy required to maintain 100 percent network and power uptime, organizations can opt to leverage a hosted private cloud in the form of Infrastructure as a Service (IaaS) or Platform as a Service (PaaS).

This model allows the organization to have a scalable, isolated computing environment that has been custom-designed to meet its specific workload requirements, in the jurisdiction of its choice. A good example is AWS' Virtual Private Cloud (VPC), which provides cloud hosting capabilities with enterprise-grade IT infrastructure through a virtualized network of interconnected virtual servers. AWS GovCloud likewise allows US government agencies to host private clouds in secure regions operated by U.S. citizens and accessible only to vetted U.S. entities.

In a nutshell, a private cloud allows organizations to develop a flexible infrastructure to deliver applications while retaining control and managing the risk of the services delivered to business partners, users, and customers.

  • Maximum Control

A private cloud deployment gives you control over security, privacy, and compliance. You can manage all your applications, IT services, and infrastructure in one place using powerful tools like application and performance monitoring, VM templates, and automated self-service deployment. Since you have control from the ground up, you will not be forced to adjust your security processes to fit the cloud; instead, you can bend the cloud to your will. A self-hosted cloud lets you leverage your current security infrastructure and procedures and integrates easily with existing tools. It simply works within your established framework, and when your data requirements scale, you have the ability to scale with them.

The physical location of the data center plays a crucial role in cloud adoption. A private cloud creates the opportunity to choose the region where data will be stored. By controlling your selection of hosting provider and data center, you know precisely where your servers are located and under which nation's data laws they are governed. Organizations may be obliged, or may simply prefer, to store data in a jurisdiction or country that a public cloud provider does not offer.

In Closing

A private cloud expands visibility into workloads and cloud operations, enabling IT administrators to design data storage, hardware, and networks in a way that guarantees the security of data and its associated metadata. When IT is fully aware of where the data is located and who has access to it at any given moment, the risks of compliance violations, data security vulnerabilities, and data leakage are greatly reduced.

Author: Gabriel Lando

FileCloud Unveils ‘Breach Intercept’ to Safeguard Organizations Against Ransomware

FileCloud, the cloud-agnostic EFSS platform, today announced FileCloud Breach Intercept. The newest version of FileCloud offers advanced ransomware protection to help customers handle every phase of a cyberattack: prevention, detection and recovery.

FileCloud is deployed across 90 countries and has more than 100 VARs and Managed Service Providers across the world. Deployed by Fortune 500 and Global 2000 firms, including the world’s leading law firms, government organizations, science and research organizations and world-class universities, FileCloud offers a set of unique features that help organizations build effective anti-ransomware strategies.

Global ransomware damage costs are expected to total more than $5 billion in 2017, compared to $325 million in 2015. Ransomware is growing at an estimated yearly rate of 350 percent, with business enterprises becoming the priority target for hackers. Enterprise File Sharing and Sync (EFSS) solutions have seen an increase in ransomware attacks, with 40 percent of spam emails containing links to ransomware. Whereas public cloud EFSS solutions such as Box and Dropbox offer centralized targets for ransomware attacks, FileCloud's decentralized private cloud reduces your company's exposure to potential attacks.

“Anyone with access to a computer is a potential threat, and the cloud their personal armory,” said Venkat Ramasamy, COO at FileCloud. “Why rummage through hundreds of houses when you can rob a bank? Hackers target centralized storage such as Dropbox or Box rather than self-hosted FileCloud solutions. The freedom to choose the cloud platform that best meets the unique dynamics of each business is our line in the sand of competitive differentiation.”

Breach Intercept

Cyberdefense via customization

The best defense against a phishing attack is to make sure your employees can differentiate genuine communication from malicious spoofing. Hackers can easily spoof email from public SaaS products, which have a standardized, easily falsifiable format. FileCloud offers unparalleled branding and customization tools, allowing you to set your own policies, and design your own emails and broadcast alerts. Customized emails and UX significantly reduce spoofing risk as hackers can’t run a mass spoofing unless they have an exact copy of an email from one of your employees.

Granular controlled folder access

With FileCloud Breach Intercept, you can set different levels of access for top-level folders and sub-folders. Administrators can set read/write/delete/share permissions for any user at any folder level, and permissions are not necessarily inherited down the folder structure, which limits how far an attack can propagate.

Real-time content / behavior heuristic engine

State-of-the-industry heuristic analysis detects threats in real time: suspicious content or user activity triggers security protocols and prevents ransomware from taking hold. For example, if FileCloud detects a file posing as a Word document, the system halts the upload and alerts the administrator, preventing the attack from propagating.
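As a simplified illustration of that kind of content check (not FileCloud's actual engine), the sketch below compares a file's claimed extension with its leading "magic bytes"; a mismatch, such as an executable renamed to .docx, is treated as suspicious.

```python
# A toy content check: does the file's signature match its claimed extension?
# Real heuristic engines look at far more signals; this only inspects magic bytes.
SIGNATURES = {
    ".docx": [b"PK\x03\x04"],                        # DOCX files are ZIP archives
    ".doc":  [b"\xd0\xcf\x11\xe0\xa1\xb1\x1a\xe1"],  # legacy OLE2 container
    ".pdf":  [b"%PDF"],
}

def looks_suspicious(path):
    """Return True when the leading bytes don't match the extension's known signature."""
    ext = "." + path.rsplit(".", 1)[-1].lower() if "." in path else ""
    expected = SIGNATURES.get(ext)
    if expected is None:
        return False  # no signature on record for this extension; defer to other checks
    with open(path, "rb") as f:
        head = f.read(16)
    return not any(head.startswith(sig) for sig in expected)

if looks_suspicious("quarterly_report.docx"):   # placeholder path
    print("Blocking upload and alerting the administrator")
```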

Unlimited versioning and backup to rollback

Unlimited versioning and server backup help companies recover from any data loss incident, including ransomware. FileCloud can roll back not only employee files but also entire server files to any specific date and time before the attack.
FileCloud is available for immediate download from our customer portal. For more information or to download FileCloud Breach Intercept, please visit https://www.getfilecloud.com/ransomware-protection-for-enterprise-file-share-sync.

 

IT Admin Guide to NTFS File and Folder Permissions

New Technology File System (NTFS)

One of the most important and often misunderstood pieces of functionality in Microsoft Windows is the file and folder security permissions framework. These permissions not only control access to all files and folders in the NTFS file system, they also ensure the integrity of the operating system and prevent inadvertent and unauthorized changes by non-admin users as well as by malicious programs and applications.

Let's begin at the very beginning. The NTFS file system can be considered a hierarchical tree structure, with the disk volume at the top level and each folder forming a branch off the tree. Each folder can hold any number of files, and these files can be considered leaf nodes, i.e. there can be no further branches off a leaf node. Folders are therefore referred to as containers: objects that can contain other objects.

So how exactly is access to this hierarchy of objects controlled? That is what we will look at next. When the NTFS file system was originally introduced in Windows NT, the security permissions framework had major shortcomings. It was revamped from Windows 2000 onwards, and that design is the basis of almost all the file permission security functionality present in modern Windows operating systems.

To begin, each object in the file hierarchy has a Security Descriptor associated with it. You can think of a Security Descriptor as an extended attribute of the file or folder. Note that Security Descriptors are not limited to files; they also apply to other OS-level objects like processes, threads, and registry keys.
At the most basic level, a security descriptor contains a set of flags in the header, along with the owner information and the primary group information, followed by two variable-length lists: a Discretionary Access Control List (DACL) and a System Access Control List (SACL).

Any file or folder always has an associated owner, and no matter what, that owner can always perform operations on it. The primary group exists only for compatibility with POSIX standards and can be ignored. The SACL specifies which users and groups get audited for which actions performed on the object; for the purposes of this discussion, let's ignore that list as well.

The DACL is the most interesting section of any Security Descriptor. You can think of a DACL as the list of users and groups that are allowed or denied access to that file or folder. To pair each user or group with the specific actions being allowed or denied, a DACL consists of one or more Access Control Entries (ACEs). An Access Control Entry specifies a user or group, the permissions being allowed or denied, and some additional attributes. Here's an example of a simple DACL.
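The original post showed that DACL as a screenshot; in its place, here is an illustrative Python sketch of the same idea, with the structures reduced to simple dataclasses. The field names are simplifications for this example, not the actual Windows structures.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ACE:
    """One Access Control Entry: who, allow or deny, and which permissions."""
    trustee: str                 # user or group, e.g. "JOHN" or "Marketing"
    ace_type: str                # "allow" or "deny"
    permissions: set             # e.g. {"read", "write"}
    inherited: bool = False      # True when the entry flowed down from a parent

@dataclass
class SecurityDescriptor:
    owner: str
    dacl: List[ACE] = field(default_factory=list)

# A simple DACL, similar in spirit to the screenshot in the original article:
folder3 = SecurityDescriptor(
    owner="Administrators",
    dacl=[
        ACE("Administrators", "allow", {"full_control"}),
        ACE("Marketing",      "allow", {"read", "write"}),
        ACE("Interns",        "deny",  {"write", "delete"}),
    ],
)
```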

So far, if you have been following along, this seems pretty straightforward, and it mostly is. The practical way it is applied to folders, however, introduces complexity, especially if you are unclear how the permissions interact with each other.

Inheritance of Security Descriptors

If every object had its own unique copy of the Security Descriptor, things would be conceptually simple but practically impossible to manage. Imagine a file system with thousands of folders used by hundreds of users: setting the permissions on each and every folder individually would quickly break down, and if you needed to add or modify the permissions on a set of folders, you would have to apply the change individually to every file and folder inside them.

Thus was born the notion of inheritance. Not only is it possible to apply a permission (ACE) to a folder, it is also possible to indicate that the permission should "flow" to all child objects: if the object is a folder, all subfolders and files inside it "inherit" the same permissions. Consider the following example.

Here, when Folder 3 has permissions set up, those permissions are by default inherited by its child objects, which include SubFolder1, SubFolder2, and so on. Note that this inheritance is automatic, i.e. if new objects are added to this part of the tree, they automatically pick up the inherited permissions.

Assuming the permissions were set on Folder 3, the DACL of any item inside its subfolders now contains the same entries, marked as inherited.

You can recognize inherited permissions on any security dialog by their grayed-out options. To edit them, you have to traverse up the tree until you reach the object where they are actually defined (in this case Folder 3, where you can actually edit the permissions). Note that if you ever edit the permissions on Folder 3, the new permissions automatically re-flow to the child objects without you having to set them one by one.

So if inheritance is such a useful thing, why would you ever want to disable it? That is a good question, and it brings us to setting up folder permissions for a large organization. In many organizations with groups and departments, it is pretty common to organize folders by group and then grant permissions to the folders based on the groups the users belong to.

In most cases this kind of simple organization works fine. In some cases, however, there will be folders belonging to a group or department that absolutely need complete security and should only be accessed by a select handful of people. For example, consider SubFolder1 as a highly sensitive folder that should be fully locked down.

In this case, this subset of folders should be set up without inheritance.

Disabling inheritance at SubFolder1 changes a few things. Permission changes made at parent folders like Folder 3 will never affect SubFolder1 under any conditions, and it becomes impossible to grant access to SubFolder1 by adding a user or group at Folder 3. This effectively isolates SubFolder1 into its own permission hierarchy, disconnected from the rest of the system, so IT admins can set up a small set of specific permissions for SubFolder1 that apply to all of its contents.

Order of Permission Evaluation

Having understood Security Descriptors and inheritance (as well as when inheritance should be disabled), it is time to look at how it all comes together. What happens when mutually exclusive permissions apply to a file or folder, and how do the security permissions remain consistent in that case?

For example, consider an object (File1) where, at a parent-level folder (Folder 3), JOHN is allowed to READ and WRITE, and these permissions are inherited by a child object further down the hierarchy.

Now, if JOHN is not supposed to WRITE to this child item and you, as an IT admin, add DENY WRITE for JOHN directly on File1, how do these conflicting permissions make sense and get applied?

The rules are pretty simple in this case; the order of ACE evaluation is:
• Deny permission entries applied directly (explicitly) on the object
• Allow permission entries applied directly (explicitly) on the object
• Deny permission entries inherited from parent objects
• Allow permission entries inherited from parent objects

The Windows OS always evaluates permissions in this order, so any overrides placed directly and explicitly on an object are considered before any inherited permissions. Evaluation stops at the first rule that denies a permission requested by the user under consideration; otherwise, evaluation continues until all the required permissions have been allowed, and then stops. Note that even among inherited permission entries, the entries from the nearest parent are evaluated before those from more distant parents, i.e. the distance from the child to the parent matters in the evaluation of the permissions.
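To make the evaluation order concrete, here is an illustrative Python sketch that sorts a list of ACEs into that order and decides whether a requested access is granted. It repeats the simplified ACE shape from the earlier sketch so it stands alone, and it glosses over details of the real Windows algorithm, such as nearest-parent ordering among inherited entries.

```python
from dataclasses import dataclass

@dataclass
class ACE:                       # repeated from the earlier sketch so this stands alone
    trustee: str
    ace_type: str                # "allow" or "deny"
    permissions: set
    inherited: bool = False

def access_allowed(aces, user_groups, requested):
    """Evaluate ACEs in order: explicit deny, explicit allow, inherited deny, inherited allow."""
    ordered = sorted(aces, key=lambda a: (a.inherited, a.ace_type != "deny"))
    granted = set()
    for ace in ordered:
        if ace.trustee not in user_groups:
            continue
        if ace.ace_type == "deny" and ace.permissions & requested:
            return False             # the first matching deny ends evaluation
        if ace.ace_type == "allow":
            granted |= ace.permissions & requested
            if requested <= granted:
                return True          # everything requested has been allowed
    return False                     # anything never explicitly allowed is denied

# JOHN inherits READ/WRITE from Folder 3, but File1 carries an explicit DENY WRITE:
file1_aces = [
    ACE("JOHN", "allow", {"read", "write"}, inherited=True),  # from Folder 3
    ACE("JOHN", "deny",  {"write"}),                          # explicit entry on File1
]
print(access_allowed(file1_aces, {"JOHN"}, {"read"}))            # True: read still allowed
print(access_allowed(file1_aces, {"JOHN"}, {"read", "write"}))   # False: explicit deny wins
```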

Applying Folder Security inside the network

If you thought that setting permissions on folders is all you need for a network share, you are mistaken: you also need to create a folder share and specify permissions for the share itself. The final permissions for a user are a combination of the permissions applied on the share and the security permissions applied to the folders, and the more restrictive of the two always wins.

So it is best practice to create the share, grant Everyone full access at the share level, and let all the permissions be managed by the NTFS security permissions.
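A tiny sketch of that combination rule: the effective permission set is the intersection of what the share grants and what NTFS grants, which is why a wide-open share combined with tight NTFS permissions stays safe. The permission names are simplified for illustration.

```python
def effective_permissions(share_perms, ntfs_perms):
    """Effective access over the network is what BOTH the share and NTFS allow."""
    return share_perms & ntfs_perms

share = {"read", "write", "delete"}          # share level: Everyone full access (simplified)
ntfs  = {"read"}                             # NTFS: this user may only read
print(effective_permissions(share, ntfs))    # {'read'}
```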

Applying NTFS folder security outside the network

It is simple to provide network folder access over the LAN and apply these folder permissions efficiently. However, if you want to allow users to access these files outside the LAN, via a web browser or mobile apps, while still enforcing NTFS file and folder permissions, consider using FileCloud (our enterprise file sharing and sync product), which can effortlessly enforce these permissions while still providing seamless access.

Try FileCloud for Free!

Types of Controls to Manage Your Business Data in an EFSS


In 2015, there were 38% more security incidents than in 2014, and the average cost per stolen record containing sensitive and confidential data was $154 (the healthcare industry paid the most, at $363 per record). Worse still, even though 52% of IT professionals felt that a successful cyber-attack against their network would take place within the year, only 29% of SMBs (fewer than in 2014) used standard tools like patching and configuration management to prevent these attacks.

The consequences of poor data security and data breaches in the cloud cannot be overstated; these statistics show that data insecurity and breaches in the cloud are a road no business wants to take. They also point to a lack of control over data in the cloud, so we will first look at who controls data in the cloud and then at how to manage business data in an EFSS.

Who controls data in the Cloud?

Many IT departments do not know who controls data in the cloud, as revealed by participants in a Perspecsys survey on data control in the cloud. According to the results, 48% of IT professionals don't trust that cloud providers will protect their data, and 57% are not certain where sensitive data is stored in the cloud.

This issue is also closely tied to data ownership: once data ownership changes, we expect a change in the level of control users have over their data. To quote Dan Gray on the concept of data ownership: "Ownership is dependent on the nature of data, and where it was created." Data created by a user before being uploaded to the cloud may be subject to copyright laws, while data created in the cloud changes the whole concept of data ownership. It is no wonder there is confusion on this matter.

Despite challenges such as partial or even no control over data stored in the cloud, there are techniques we can use to control business data in an EFSS and thereby prevent unauthorized access and security breaches.

Types of data control for business data in an EFSS

 

Input validation controls

Validation control is important because it ensures that all data fed into a system or application is accurate, complete, and reasonable. One essential area of validation control is supplier assessment: is a supplier well equipped to meet a client's expectations with regard to controls for data integrity, security, and compliance with industry regulations as well as client policies? This assessment is best carried out through an offsite audit in the form of questionnaires. By determining the supplier's system life-cycle processes, your team can decide whether the EFSS vendor is worthy of further consideration. The questionnaire also serves as the basis for deciding, based on the risk assessment, whether an on-site assessment should be carried out; if it is, the scope of the onsite audit will depend on the type of service the EFSS vendor provides.

Service level agreements should also be assessed and analyzed to define the expectations of both the EFSS vendor and the user, and to ensure that the service rendered is in line with industry regulations. Additionally, make sure the EFSS provider includes the following in the service level agreement:

  • Security
  • Backup and recovery
  • Incident management
  • Incident reporting
  • Testing
  • Quality of service rendered
  • Qualified personnel
  • Alert and escalation procedures
  • Clear documentation on data ownership as well as vendor and client responsibilities
  • Expectations with regards to performance monitoring

Processing controls

Processing control ensures that data is completely and accurately processed in an application, via regular monitoring of models and examination of system results during processing. If this is not done, small changes in equipment caused by age or damage will result in a bad model, which shows up as wrong control moves for the process.

Include backup and recovery controls

Backup and recovery controls ensure that business data is backed up regularly and can be restored completely and quickly after accidental deletion, corruption, or a security incident such as a ransomware attack. When assessing an EFSS vendor, confirm that backup and recovery commitments are spelled out in the service level agreement, as listed above.

Identity and access management

Usually, Identity and Access Management (IAM) allows cloud administrators to authorize the personnel who can take action on specific resources, giving cloud users the control and visibility required to manage cloud resources. Although this sounds simple, advances in technology have complicated the process of authentication, authorization, and access control in the cloud.

In previous years, IAM was easier to handle because employees had to log into one desktop computer in the office to access any information on the internet. Today, Microsoft's Active Directory and the Lightweight Directory Access Protocol (LDAP) are no longer sufficient on their own as IAM tools: user access and control have to extend from desktop computers to personal mobile devices, which poses a challenge to IT. For example, as stated in a Forrester Research report, personal tablets and mobile devices are used in 65% of organizations, and 30% of employees provision their own software on these devices for use at work without IT's approval. It is no wonder that Gartner predicted in 2013 that identity and access management in the cloud would become one of the most sought-after cloud-based services within just two years.

With this understanding, it is important to create effective IAM without losing control of internally provisioned resources and applications. By using threat-aware identity and access management capabilities, it should be clear who is doing what, what their role is, and what they are trying to access. Additionally, user identities, including external identities, must be tied to back-end directories, and sign-on should be single, because multiple passwords tend to lead to insecure password management practices.
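As a minimal illustration of "who is doing what, in which role, on what resource", here is a role-based access check in Python. It is purely illustrative: the roles, resources, and user names are made up, and a real IAM system adds directories, groups, policies, and auditing on top of this.

```python
# A minimal role-based access check: who is doing what, in which role, on what resource.
ROLE_PERMISSIONS = {
    "radiologist": {("medical_images", "read"), ("medical_images", "upload")},
    "billing":     {("invoices", "read"), ("invoices", "write")},
    "auditor":     {("invoices", "read"), ("audit_logs", "read")},
}

USER_ROLES = {
    "alice": {"radiologist"},
    "bob":   {"billing", "auditor"},
}

def is_allowed(user, resource, action):
    """Allow the action only if one of the user's roles grants it on that resource."""
    roles = USER_ROLES.get(user, set())
    return any((resource, action) in ROLE_PERMISSIONS.get(role, set()) for role in roles)

print(is_allowed("alice", "medical_images", "read"))   # True
print(is_allowed("alice", "invoices", "write"))        # False
```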

Conclusion

A simple assurance from an EFSS vendor that you control your business data in the cloud is not enough; certain techniques should be employed to make sure you retain a significant level of data control. As discussed, ensure that you have an effective identity and access management system, processing and validation controls, and business data backup and recovery options in place. Other important controls, not discussed here, include file controls, data destruction controls, and change management controls.

Author: Davis Porter

Image Courtesy: jannoon028,freedigitalphotos.net

Sharing Large Medical Images and Files – Factors to Consider


According to data collected by the HHS Office for Civil Rights, over 113 million individuals were affected by protected health information breaches in 2015. Ninety-nine percent of these individuals were victims of hacking, while the remaining 1 percent suffered from other forms of breach such as theft, loss, improper disposal, and unauthorized access or disclosure. A quick look at the trend since 2010 shows that health information data breaches are on the rise, and a deeper look at the report shows that network servers and electronic medical records are the leading sources of breached records, affecting 107 million and 3 million individuals, respectively.

Sadly, security is not the only issue medics face when sharing medical records. A 2014 article in the New York Times describes the difficulty doctors face when trying to send digital records containing patient information. While the intention, improving patient care coordination, is noble, doctors are running into problems with their existing large file sharing options.

To help doctors share files such as medical images in an easier and safer way, we will explore four factors that should be considered.

HIPAA Compliance

Medical records are sensitive and confidential in nature, which means that handling them should be guided by industry regulations, in this case the Health Insurance Portability and Accountability Act (HIPAA). HIPAA is, in fact, a response to the security concerns surrounding the transfer and storage of medical records, including images.

HIPAA places responsibility on medics, and healthcare providers in general, to secure patient data and keep it confidential; as a result, non-compliance can lead to costly legal action. HIPAA covers all Personal Health Information (PHI) and outlines more stringent rules for electronic PHI, mainly because a security breach there is likely to affect a larger number of patients all at once.

It is a medic's responsibility to ensure that the selected EFSS solution is HIPAA-compliant if you want to maintain patient trust, keep publicity positive, and avoid the steep HIPAA fines imposed after a breach. In fact, a first offense can draw a fine of approximately $50,000, a figure that escalates with each subsequent offense.

Encryption

This is the second level of security you should consider before settling on a large file sharing solution. Even if an EFSS service provider is HIPAA-compliant, you need to ensure that the measures outlined in HIPAA are actually applied.

When you read about patients' rights as outlined in HIPAA, specifically in 'Privacy, Security and Electronic Health Records', you will notice that information security is emphasized. For this reason, medics should ensure that patient data is encrypted in order to prevent it from being accessed by rogue colleagues or professional hackers.

It is vitally important that all hospital departments, from cardiology to imaging centers and radiology, encrypt medical images and files to further protect patient privacy. Better still, encryption should cover data both at rest and in transit, and files should only be shared with authorized specialists and physicians as well as the patients themselves.

To further tighten security, these files should be encrypted with non-deterministic encryption keys instead of fixed ones whose passwords can be cracked. The best thing about this technique is that even in a security breach on the server side, hackers cannot access the encryption keys. Additionally, you can opt for EFSS solutions that offer client-side encryption, barring the service provider and its employees from accessing this information.
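As a hedged illustration of non-deterministic encryption, the sketch below uses the Python cryptography library's AES-GCM: a fresh random nonce for every file means that encrypting the same image twice yields different ciphertexts, and the key is generated randomly rather than derived from a fixed password. The helper names and sample bytes are placeholders.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

def encrypt_file(key, plaintext, associated_data=b"patient-record"):
    """Encrypt with AES-256-GCM using a fresh random 96-bit nonce each time."""
    nonce = os.urandom(12)                    # never reuse a nonce with the same key
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, associated_data)
    return nonce + ciphertext                 # store the nonce alongside the ciphertext

def decrypt_file(key, blob, associated_data=b"patient-record"):
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, associated_data)

key = AESGCM.generate_key(bit_length=256)     # random key, not a fixed password
image_bytes = b"...DICOM bytes go here..."    # placeholder for a real medical image
blob1 = encrypt_file(key, image_bytes)
blob2 = encrypt_file(key, image_bytes)
assert blob1 != blob2                         # same plaintext, different ciphertexts
assert decrypt_file(key, blob1) == image_bytes
```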

File Scalability

Compared to other medical records, medical images present a great challenge with regard to file size. It is reported that a significant number of imaging sequences and studies average around 300MB, while the average file sizes for a standard mammography image and a 3D tomography image are 19MB and 392MB, respectively. While these file sizes already seem large, the Austin Radiological Association (ARA) predicts that by 2024, annual data from its 3D breast imaging files will reach 3 petabytes. These facts expose the storage challenges that medics face.

A glance at the process of finding medical images for active cases, storing them, and archiving those of inactive cases shows the immense need for medics to find a reliable and convenient large file sharing solution that caters to these storage needs.

A weak server can get overwhelmed with data, becoming progressively less efficient as more files are uploaded into the system. The best way to solve this issue is to use cloud-based services that automatically scale with your needs. This way, you can upload more files to the server and significantly reduce hardware costs, by approximately 50 percent, especially when this is done in the cloud as opposed to in-house. In addition to these perks, the cloud allows you to share these images faster and more conveniently, saving both time and storage.

Technology Increases the Likelihood of Medical Errors

While technology helps solve issues such as security and storage, over-reliance on it can actually lead to medical errors, incidents that are dreadful for patients and medics alike. As reported by Eric McCann of Healthcare IT News, medical errors cost America a colossal $1 trillion each year, and 400,000 Americans die annually from these preventable mistakes.

Even though the cloud has been promoted as a way to reduce the incidence of medical errors, the need to be watchful and keen can never be overstated. Take, for example, an erroneous mouse click or the mislabeling of data. A case study of Kenny Lin, MD, a family physician practicing in Washington, D.C., detailed in his 2010 piece in U.S. News & World Report, shows how easy it is to make a mistake with technology: Dr. Lin nearly made a wrong prescription by accidentally clicking on the wrong choice in his EMR system.

Now, what if you mislabeled a patient's radiology report? Wouldn't that start a chain of misdiagnosis and mistreatment? Can you imagine the damage caused? It is for this reason that, even when technology makes it easy to share large, sensitive files like medical images, you should double-check that each file is labeled correctly and sent to the intended, authorized recipient.

The Way Forward

The sensitivity of medical files is evident, and with data breaches on the rise, it is vital to ensure the privacy of all medical documents, including large medical images and files. To reduce the possibility of a data breach, any EFSS solution used to share these files should guarantee a reasonable level of file security and HIPAA compliance. In addition, its capacity to efficiently handle large file sizes and offer easy access to these files should not be ignored. Lastly, while remaining cautious when feeding data into the system, create a safe backup of your data in case of a breach. By taking such precautions, medical files can be shared between all necessary parties more easily and safely.

Author: Davis Porter

Image courtesy: freedigitalphotos.net, stockdevil

Data Owner Responsibilities When Migrating to an EFSS

While it is easy to conclude that all data belongs to your organization, complications arise when the person accountable for data ownership has to be identified. Even though the IT department spearheads the processing, storage, and backup of data, among other functions, it does not own business data. Outsourced service providers do not own this data any more than the IT department does.

Who Owns Data? What are Data Owner’s Responsibilities?

In the cloud environment, a data owner is a business user who understands the business impact of a security breach that would lead to loss of data, integrity, and confidentiality. This responsibility makes the data owner very conscious of decisions made to mitigate and prevent such security incidents.

When migrating to an EFSS, business data owners should do the following:

Classify Data

Data classification has been widely described as a remedy for data breaches. In essence, data classification helps to significantly reduce insider threats, which are reported to cause 43% of data breaches; beyond malicious employees, many breaches are simply the result of human error. Additionally, the growing volume of data that businesses handle makes it difficult to track, so it is challenging to know where data is stored, who accesses it, and what they do with that information. By making sure that only authorized employees can access certain information, the probability of a data breach drops considerably.

Clearly, the need for data classification has never been more evident. To properly classify data, a few measures should be taken by a business.

  • Have detailed “acceptable use” policies. All employees should internalize and sign these documents, which are then reviewed annually or as needed.
  • Make use of data classification technologies. When you train employees using a data classification technology, they will better understand the sensitivity of the data they are creating, storing, and sharing. Consequently, they will treat this data with the highest level of confidentiality and caution.
  • Understand industry regulations to classify data accordingly.
  • Once data is properly classified, apply appropriate access controls and continuously yet randomly monitor data activity to nab suspicious activities as soon as they are carried out.

Monitor Data Life Cycle Activities

When migrating to an EFSS, issues such as data retention and disposal should be monitored constantly by the business data owner. Simply put: how long will the EFSS solution retain your data, and how long will it take to dispose of your data completely after you have deleted it? What happens to your data once your contract with the provider ends?

Before a business data owner looks at an EFSS provider's data life cycle, they need to understand the typical seven phases of the data life cycle: data capture, data maintenance, data synthesis, data usage, data publication, data archival, and data purging. At each stage, how safe is the data? Who has access to it, and how long is it retained in the EFSS?

When this data is stored, is it used and accessed by third parties, who, sadly, cause 63% of all data breaches? Is the EFSS provider's data retention and disposal policy compliant with the law? For example, healthcare organizations must follow the data retention requirements stipulated in the Health Insurance Portability and Accountability Act (HIPAA), while organizations that accept credit cards must adhere to the Payment Card Industry Data Security Standard (PCI DSS) and its data retention and disposal requirements.

Understand Enterprise File Sync-and-Share (EFSS) Deployment Models as a Way of Assessing Risks

Despite the extensive advice available on the best EFSS solutions, business data owners need to gain some technical knowledge. For example, how many EFSS deployment models do you know? Since this is a fairly broad topic, we will briefly discuss three models.

Public Cloud EFSS

In addition to being fast and easy to set up, a public cloud can be cheaper in terms of both infrastructure and storage costs. However, public cloud EFSS might not be the best option for data protection and security, leaving your company exposed and vulnerable to regulatory non-compliance. It is therefore important to analyze the security measures a public cloud provider has to offer before settling on one.

Private Cloud EFSS

Although a private cloud is believed to be more expensive than a public cloud, the cost of ownership depends largely on the vendor and infrastructure choice (for example, FileCloud offers the lowest cost of ownership across public and private clouds). Private cloud EFSS is worthwhile for the services and security offered, and with an adoption rate of 77%, private cloud solutions such as FileCloud are strong options. This is attributed to the flexibility and control over where data is stored: users can choose which regulations to comply with and have better control during a breach, because the IT department can access, monitor, protect, and salvage all the files, as opposed to a public cloud.

Hybrid Cloud EFSS

According to RightScale's "Cloud Computing Trends: 2016 State of the Cloud Survey," the hybrid cloud EFSS adoption rate is 71%. This success is attributed to the ability to harness the positive attributes of both public and private clouds at once: in a hybrid environment, some components run on-premises while others run in the cloud. One good example of a hybrid model is an EFSS application that runs as Software as a Service (SaaS) while the data is stored on-premises or wherever the user company chooses.

Closing remarks

It is the responsibility of the business data owner to ascertain that data will be kept safe and confidential before migrating to any EFSS solution. This person needs to understand the advantages the chosen EFSS model offers, confirm compliance with industry regulations, insist on proper access and identity management, understand the EFSS data life cycle processes, and ensure that security measures such as data encryption and authentication are in place.

 Author: Davis Porter

 

HIPAA Prescribed Safeguards for File Sharing


The Health Insurance Portability & Accountability Act (HIPAA) sets standards for protecting sensitive patient data in the cloud. Any company dealing with PHI (protected health information) needs to ensure that all the required network, physical, and process security measures are properly followed. If you want to learn more about the requirements of HIPAA, click here.

This includes covered entities (CEs), meaning anyone who provides treatment, operations, or payment in health care, and business associates (BAs), meaning anyone with access to patient information stored in the cloud or who provides support for treatment, operations, or payment. Subcontractors and associates of associates need to be in compliance too.

Who needs HIPAA?

The HIPAA Privacy Rule addresses the saving, sharing, and accessing of personal and medical data stored in the cloud, while the Security Rule more specifically outlines national security standards to protect health data that is received, maintained, transmitted, or created electronically, also known as e-PHI (electronic protected health information).

Technical and physical safeguards

If you are hosting data with HIPAA-compliant hosting providers, they need to have particular administrative, technical, and physical safeguards in place, as required by the US Department of Health & Human Services (HHS). The technical and physical safeguards that matter most for services provided by hosts are listed below:

  • Physical safeguards include limited facility access and control, with authorized access procedures. All entities that need to be HIPAA compliant must have policies about the use of and access to workstations and electronic media, including the removal, transfer, reuse, and disposal of e-PHI and electronic media.
  • Technical safeguards should allow only authorized users to access e-PHI. Access controls include unique user IDs, emergency access procedures, automatic logoff, and encryption and decryption.
  • Tracking logs or audit reports need to be implemented to keep a record of activity on software and hardware. This is very useful when pinpointing the source or cause of a security violation (a minimal sketch follows this list).
  • Technical policies need to cover integrity controls, and measures should be in place to confirm that e-PHI has not been destroyed or altered. Offsite backup and IT disaster recovery are very important to ensure that electronic media errors or failures can be remedied quickly and that patient health information can be recovered accurately and intact.
  • Transmission or network security is the last safeguard required of HIPAA-compliant hosts, protecting against unauthorized access to or use of e-PHI. It covers every method of transmitting data, whether over the internet, by email, or on private networks such as a private cloud.
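Here is a minimal, illustrative Python sketch of the kind of audit trail that safeguard calls for: each entry records who did what to which record, when, and with what outcome. The field names and log path are assumptions for the example, not a prescribed HIPAA format.

```python
import json
import time

AUDIT_LOG = "audit_log.jsonl"   # illustrative path; real deployments use protected storage

def record_event(user, action, resource, outcome):
    """Append one audit entry per action so security violations can be traced later."""
    entry = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "user": user,
        "action": action,        # e.g. "view", "download", "delete"
        "resource": resource,    # e.g. a record or file identifier
        "outcome": outcome,      # "allowed" or "denied"
    }
    with open(AUDIT_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

record_event("dr_lin", "view", "patient/12345/mri.dcm", "allowed")
record_event("intern7", "download", "patient/12345/mri.dcm", "denied")
```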

A supplemental act passed in 2009, the HITECH (Health Information Technology for Economic & Clinical Health) Act, supported the enforcement of HIPAA requirements by increasing the penalties imposed on organizations that violate the HIPAA privacy or security rules. The HITECH Act was created in response to the growth of health technology and the increased storage, transmission, and use of electronic health information.

HIPAA has driven a number of healthcare providers to search for solutions that can help them secure cloud data. Medical information is very private, regulation keeps getting tighter, and enforcement is tightening along with it. Many healthcare providers have chosen to move their entire EHRs onto a HIPAA-compliant platform such as FileCloud to reduce their expenses and become more interoperable across devices in a safe, HIPAA-compliant fashion.

 

Author: Rahul Sharma

images courtesy: freedigitalphotos.net/ Stuart Miles

HIPAA Compliant File Sharing Requirements


 

In this article, let us explore the origin of the HIPAA privacy and security rules and their major parts: who is covered, what information is protected, and what safeguards need to be in place to protect electronic health information stored in the cloud, mainly in the context of HIPAA-compliant file sharing.

Introduction

HIPAA (the Health Insurance Portability & Accountability Act of 1996) required the Secretary of the US HHS to develop regulations protecting the security and privacy of health information stored in the cloud. Accordingly, HHS published the HIPAA privacy and security rules.

  • The Privacy Rule establishes nationwide standards for the protection of health information.
  • The Security Rule establishes a set of security standards to protect health information that is held or transferred in electronic form.

The Security Rule puts in motion the protections of the Privacy Rule and addresses the technical and non-technical safeguards that organizations need to have in place to secure e-PHI.

Before HIPAA, there were no generally accepted security standards or requirements for protecting health information. New technologies kept evolving, and the industry started moving away from paper, relying more and more on electronic systems for paying claims, proving eligibility, and providing and sharing information.

Today, providers use clinical applications like CPOE systems, EHRs, and pharmacy and radiology systems, while health plans provide access to care and claims management as well as self-service applications. This may make the workforce more efficient and mobile, but it also increases the potential security risk.

One of the main goals of this rule is to protect individual privacy while allowing entities to adopt new technology that improves the efficiency and quality of patient care. The Security Rule is scalable and flexible, which means covered entities can implement procedures, policies, and technologies appropriate to their size and organizational structure.

Coverage

This rule, just like all the administrative rules, applies to health plans, health care clearinghouses, and any health care provider that transmits health information electronically.

What’s protected?

This rule protects individually identifiable health information, known as PHI (Protected Health Information). More precisely, it protects the subset of information covered by the Privacy Rule that is created, received, maintained, or transmitted electronically by an entity; it does not apply to PHI transmitted orally or in writing.
On a related note, here is a good article on What is PII and PHI? Why is it Important?

General Rules

The rule requires all covered entities to maintain reasonable and appropriate technical, physical, and administrative safeguards for e-PHI. Covered entities must:

  • Ensure confidentiality, availability, and integrity of e-PHI created, received, maintained or transmitted by them.
  • Identify and protect against reasonably anticipated threats to the security or integrity of the information.
  • Protect against reasonably anticipated, impermissible uses or disclosures.
  • Ensure workforce compliance.

Risk Management and Analysis

The provisions in the rules require entities to conduct a risk analysis as part of their security management process. The risk analysis and management provisions of the rule are addressed separately here because determining which security measures are appropriate for an entity shapes how the rule's safeguards are implemented.

Administrative Safeguards

  • Security Personnel: Covered entities must designate a security official responsible for developing and implementing their security policies and procedures.
  • Security Management Process: Covered entities must identify and analyze potential risks to e-PHI, and implement security measures that reduce those risks and vulnerabilities to a reasonable and appropriate level.
  • Information Access Management: The Security Rule requires covered entities to implement policies and procedures that authorize access to e-PHI only when such access is appropriate for the role of the user or recipient.
  • Workforce Management and Training: Covered entities must provide for appropriate authorization and supervision of workforce members who work with e-PHI. They must train all workforce members on their security policies and procedures, and must have and apply appropriate sanctions against members who violate them.
  • Evaluation: Covered entities must perform periodic assessments of how well their security policies and procedures meet the requirements of the rule.

Physical Safeguards

  • Facility Access and Control: Covered entities must limit physical access to their facilities while ensuring that authorized access is allowed.
  • Workstation and Device Security: Covered entities must implement policies and procedures specifying the proper use of workstations and electronic media. They must also have policies and procedures in place for the transfer, removal, disposal, and reuse of electronic media.

Technical Safeguards

  • Access Control: Covered entities must implement technical policies and procedures that allow only authorized persons to access e-PHI.
  • Audit Controls: Covered entities must implement hardware, software, and procedural mechanisms to record and examine access and other activity in information systems that contain or use e-PHI.
  • Integrity Controls: Covered entities must implement policies and procedures to ensure that e-PHI is not improperly altered or destroyed, along with electronic measures to confirm this.
  • Transmission Security: Covered entities must implement technical security measures that guard against unauthorized access to e-PHI transmitted over an electronic network (a brief, illustrative sketch of these safeguards follows this list).
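
To make these safeguards a little more concrete, here is a minimal, illustrative Python sketch, not a compliance implementation and not part of the HIPAA text, showing role-based access control, a tamper-evident audit entry, and an integrity hash for an e-PHI record. All names here (AUTHORIZED_ROLES, read_ephi, the record layout) are hypothetical.

import hashlib
import hmac
import json
from datetime import datetime, timezone

# Hypothetical role list: only these roles may read e-PHI (access control).
AUTHORIZED_ROLES = {"physician", "nurse"}

# Secret used to seal audit entries; in practice this would live in a key store.
AUDIT_KEY = b"replace-with-a-managed-secret"
AUDIT_LOG = []  # in practice, an append-only, access-controlled store


def integrity_hash(record):
    """Integrity control: a SHA-256 digest makes improper alteration detectable."""
    canonical = json.dumps(record, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()


def audit(user, action, record_id):
    """Audit control: log who did what to which record, sealed with an HMAC."""
    entry = {
        "user": user,
        "action": action,
        "record_id": record_id,
        "at": datetime.now(timezone.utc).isoformat(),
    }
    payload = json.dumps(entry, sort_keys=True).encode("utf-8")
    entry["seal"] = hmac.new(AUDIT_KEY, payload, hashlib.sha256).hexdigest()
    AUDIT_LOG.append(entry)


def read_ephi(user, role, record_id, store):
    """Access control: refuse unauthorized roles before any e-PHI is returned."""
    if role not in AUTHORIZED_ROLES:
        audit(user, "denied", record_id)
        raise PermissionError(f"{user} ({role}) may not access e-PHI")
    audit(user, "read", record_id)
    return store[record_id]


if __name__ == "__main__":
    store = {"rec-1": {"patient": "Jane Doe", "dx": "hypertension"}}
    print("integrity:", integrity_hash(store["rec-1"]))
    print(read_ephi("alice", "physician", "rec-1", store))
    print(len(AUDIT_LOG), "audit entries")

Transmission security, by contrast, is mostly a deployment concern, for example enforcing TLS on every connection that carries e-PHI, rather than application code.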

Organizational Requirements

  • Business Associate Contracts: HHS develops regulations on business associate contracts and obligations under the HITECH Act of 2009.
  • Covered Entity Responsibilities: If a covered entity knows of an activity or practice of a business associate that constitutes a material breach or violation of the associate's obligations, the covered entity must take reasonable steps to cure the breach or end the violation.

Procedures, Policies and Documentation Requirements

Covered entities must adopt reasonable and appropriate policies and procedures to comply with the provisions of the rule. They must maintain, for six years after the date of their creation or the date they were last in effect (whichever is later), written security policies and procedures and written records of the required actions, activities, and assessments.

Updates: Covered entities must periodically review and update their documentation in response to environmental or organizational changes that affect the security of e-PHI.

Noncompliance Penalties and Enforcement

Compliance: The rule establishes a set of standards for the confidentiality, integrity, and availability of e-PHI. The HHS Office for Civil Rights (OCR) is responsible for administering and enforcing these standards, in conjunction with its enforcement of the Privacy Rule, and may conduct complaint investigations and compliance reviews.

 

Author: Rahul Sharma

FileCloud SSO Demystified

Single sign-on (SSO) gives users one-click access to all of their applications with a single set of credentials. As Wikipedia puts it, SSO is a property of access control across multiple related but independent software systems: a user logs in with one ID and password and gains access to the connected systems without being prompted for different usernames or passwords.

The convenience of having a single username and password across multiple applications cannot be overstated. Users do not have to remember different login credentials for different sites; they log in to one application and their credentials carry over to the others. Administrators do not have to manage separate sets of passwords for different applications, which reduces the time, cost, and risk of password maintenance. Those are just a few of the advantages of SSO.

FileCloud supports SSO across a range of authentication sources, including Active Directory, Active Directory Federation Services (ADFS), any on-premises identity provider that supports the SAML 2.0 protocol, and cloud-based identity providers such as Okta, OneLogin, Centrify, and more.

NTLM SSO

NT LAN Manager (NTLM) is a suite of Microsoft security protocols that provides authentication, integrity, and confidentiality to users. The objective is for a web browser such as Internet Explorer or Google Chrome to log the user in to the FileCloud website automatically using Windows Active Directory authentication. When a user browses to http://myfileclouddomain.com, he or she is seamlessly logged in to FileCloud with AD credentials, without being asked for a username and password.


NTLM Authentication

In FileCloud, Active Directory authentication must be set up first; NTLM SSO can then be configured by following https://www.getfilecloud.com/supportdocs/display/cloud/NTLM+Single+Sign+On+Support
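
For a rough feel of what NTLM authentication looks like outside the browser, the sketch below uses the third-party requests and requests_ntlm Python packages to make an NTLM-authenticated request with domain credentials. The URL and credentials are placeholders, and this is not taken from FileCloud's documentation; in the browser scenario above, the negotiation happens transparently once the site is trusted for integrated Windows authentication.

# Illustrative only: an NTLM-authenticated request using the requests_ntlm package.
# Install with: pip install requests requests_ntlm
import requests
from requests_ntlm import HttpNtlmAuth

# Placeholder values; substitute a real NTLM-protected URL and AD credentials.
url = "http://myfileclouddomain.com/"
auth = HttpNtlmAuth("MYDOMAIN\\jdoe", "s3cret-password")

response = requests.get(url, auth=auth)
print(response.status_code)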

ADFS SSO

Active Directory Federation Services (ADFS) is a software component that runs on Windows Server to provide users with single sign-on access to applications across organizational boundaries. ADFS integrates with Active Directory Domain Services, using it as an identity provider. The objective is to have FileCloud users authenticate against ADFS; on a successful authentication response from ADFS, the user is logged in to FileCloud.

ADFS Authentication

FileCloud integrates seamlessly with an ADFS server using its federation metadata. The FileCloud server acts as the Service Provider (SP) and ADFS acts as the Identity Provider (IdP). Login requests from the client's web browser are redirected to the ADFS server, which authenticates the user against its configured datastore (a SQL database, an AD server, an LDAP directory, etc.) and returns an authentication token that logs the user in to FileCloud.

The following link explains, step by step, how to set up ADFS and integrate it with FileCloud: https://www.getfilecloud.com/supportdocs/display/cloud/ADFS+Single+Sign+On+Support
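
As a generic illustration of what "integrating using the federation metadata" involves (this is not FileCloud's code), the sketch below parses an ADFS FederationMetadata.xml document with Python's standard library to discover the IdP's single sign-on endpoint and its signing certificate. The file path is a placeholder; ADFS normally publishes the document at /FederationMetadata/2007-06/FederationMetadata.xml.

# Generic sketch: extract the SSO endpoint and signing certificate from
# SAML 2.0 federation metadata (e.g. ADFS's FederationMetadata.xml).
import xml.etree.ElementTree as ET

NS = {
    "md": "urn:oasis:names:tc:SAML:2.0:metadata",
    "ds": "http://www.w3.org/2000/09/xmldsig#",
}

tree = ET.parse("FederationMetadata.xml")  # placeholder path
idp = tree.getroot().find(".//md:IDPSSODescriptor", NS)

# The endpoint the SP redirects login requests to (HTTP-Redirect binding).
for sso in idp.findall("md:SingleSignOnService", NS):
    if sso.get("Binding", "").endswith("HTTP-Redirect"):
        print("SSO endpoint:", sso.get("Location"))

# The certificate used to validate signatures on the IdP's responses.
cert = idp.find(
    "md:KeyDescriptor[@use='signing']/ds:KeyInfo/ds:X509Data/ds:X509Certificate", NS
)
if cert is not None:
    print("Signing certificate (base64, first 40 chars):", cert.text.strip()[:40])

A service provider typically caches this information and uses the certificate to validate the signature on each authentication response it receives from the IdP.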

 

SAML SSO

Security Assertion Markup Language (SAML) is an XML-based open standard for exchanging authentication and authorization data between parties. As with ADFS, FileCloud acts as the Service Provider (SP) and the customer runs the Identity Provider (IdP).

FileCloud SAML

The following steps describe how a user logs in to a hosted FileCloud application through a customer-operated, SAML-based SSO service (a minimal sketch of the request encoding follows the list).

  1. The user attempts to reach the hosted FileCloud application through its URL.
  2. FileCloud generates a SAML authentication request, which is encoded and embedded into the URL for the customer's SSO service.
  3. FileCloud sends a redirect to the user's browser; the redirect URL includes the SAML authentication request and points to the customer's SSO service.
  4. The customer's SSO service authenticates the user against valid login credentials.
  5. The customer's SSO service generates a valid SAML response and returns it to the user's browser.
  6. The browser forwards the SAML response to FileCloud.
  7. FileCloud's authentication module verifies the SAML response.
  8. If the verification succeeds, the user is logged in to FileCloud.
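
Steps 2 and 3 above correspond to the standard SAML 2.0 HTTP-Redirect binding: the service provider builds an AuthnRequest, deflate-compresses and base64-encodes it, and places it in a SAMLRequest query parameter on the IdP's SSO URL. The minimal Python sketch below shows that encoding; the entity IDs and URLs are placeholders and this is not FileCloud's implementation (a production service provider would also sign requests and validate the signed response).

# Generic sketch of SP-initiated SAML 2.0 login, HTTP-Redirect binding (steps 2-3).
import base64
import datetime
import urllib.parse
import uuid
import zlib

SP_ENTITY_ID = "https://files.example.com/sp"    # placeholder
ACS_URL = "https://files.example.com/saml/acs"   # placeholder
IDP_SSO_URL = "https://idp.example.com/sso"      # placeholder

issue_instant = datetime.datetime.now(datetime.timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
request_id = "_" + uuid.uuid4().hex

# 1. Build the AuthnRequest XML.
authn_request = f"""<samlp:AuthnRequest
    xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol"
    xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
    ID="{request_id}" Version="2.0" IssueInstant="{issue_instant}"
    AssertionConsumerServiceURL="{ACS_URL}"
    ProtocolBinding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST">
  <saml:Issuer>{SP_ENTITY_ID}</saml:Issuer>
</samlp:AuthnRequest>"""

# 2. Deflate (raw, no zlib header or checksum) and base64-encode, per the binding.
deflated = zlib.compress(authn_request.encode("utf-8"))[2:-4]
saml_request = base64.b64encode(deflated).decode("ascii")

# 3. Redirect the browser to the IdP with the SAMLRequest query parameter.
redirect_url = IDP_SSO_URL + "?" + urllib.parse.urlencode({"SAMLRequest": saml_request})
print(redirect_url)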

Customers can run their own identity provider or use a cloud-based identity provider such as Okta, OneLogin, or Centrify. FileCloud integrates seamlessly with any IdP that supports the SAML 2.0 protocol.

The steps involved in integrating an identity provider with FileCloud are explained at https://www.getfilecloud.com/supportdocs/display/cloud/SAML+Single+Sign+On+Support

In conclusion, single sign-on (SSO) provides the convenience of one-click login to multiple applications and websites. FileCloud supports several SSO mechanisms and integrates seamlessly with a wide range of SSO identity providers and existing SSO infrastructure.