Archive for the ‘data governance’ Category

Demystifying the Complexities of Data Ownership

Enterprises are undergoing a digital transformation as they continue to explore new opportunities offered by connected technologies. However, they are also becoming increasingly reliant on data gathering and analytics. The risks to sensitive data are expanding, raising multiple questions about data rights and privacy that have to be unraveled. That data drives innovation is undeniable: innovators require very large quantities of data from a broad array of sources to push the envelope on emerging technologies like machine learning and AI.

In this digital age, data is collected ubiquitously. Personal data is collected each time you interact online or use a mobile device, via IoT devices in vehicles and homes, and through the various public and private sector services we use on a day-to-day basis. As a result, data ownership can no longer be considered a niche issue. Enterprises are realizing that data ownership is gaining strategic importance.

What is Data Ownership?

Data ownership boils down to how the data was created and who created it. However, a precise definition of data ownership is not straightforward; the term itself is often misleading. The difficulty is rooted in the basic concept of ‘ownership’, which can be construed as having legal title and full property rights to something. By that definition, data ownership must mean having legal title to one or more specific articles of data. In reality, while the actual ‘owner’ of the data is responsible for the entire domain, it is typically different people who ensure that all the details are accurate and up to date.

Who Actually Owns the Data?

Is it the individual associated with the personal data, or is it the organization that has invested money and time in the collection, storage, processing, and analysis of that data? In an enterprise setting, the term ‘ownership’ generally assigns a level of accountability and responsibility for specific datasets. In this context, ‘ownership’ bears no legal connotation but refers to other notions, such as assurance of data security and data quality. From a more legal standpoint, ownership-like rights are currently limited to trade secrets and intellectual property rights. However, none of them provide adequate protection of (ownership in) data.

Most legal professionals are of the opinion that data subjects should keep ownership of the raw data they provide, while the data processor retains ownership of ‘constructed data’ – data obtained by manipulating the original data that can’t be reverse-engineered to recover the raw data. The properties of data itself make ownership an arduous proposition. Knowing this, regulators have instead chosen to enact simple restrictions on the use of data rather than labeling data as an individual’s property.

How Do Technologies Like Machine Learning Affect Data Ownership?

From voice assistants like Alexa and Siri to self-driving cars, it’s no secret that artificial intelligence has come into its own in the last couple of years – largely due to big data and the advancements in computing required to process information and train machine learning systems. But even as we marvel at these technological advancements driven by data, we cannot fail to consider how data ownership impacts both privacy and machine learning initiatives.

Data ownership is slowly being solidified by the expansion of data democratization. Paradoxically, the democratization of data and the continuous iteration and development of machine learning applications muddle the concept of data ownership. Enterprises derive invaluable insights from machine-learning-driven models that utilize consumer data. From a data-ownership perspective, the trouble stems from the exact same point as the opportunity.

Creators of machine learning technologies should therefore resolve to integrate organizational and technical measures that implement data protection principles into the design of AI-based tools. Additionally, when it comes to data ownership and artificial intelligence, legal practitioners should remain alert to the reality that proprietary rights in certain aspects of data may exist. In the context of AI, proprietary rights may not protect the data itself, but its compilation, which may include database rights and copyrights covering the ‘products’ of AI.

Data Governance Within the Enterprise

Data governance refers to the general management of the integrity, usability, security, and availability of the data utilized in the enterprise. The unparalleled rise in the sources and volume of data has necessitated enhanced data management practices for enterprises. Quality, governed data is crucial to effective and timely decision making, and it is essential for guaranteeing legal, regulatory, and financial compliance. The first step towards establishing a successful data governance process is clearly defining data for enterprise-level integration. This lays the groundwork for a complete audit trail of who did what to which data, making it simpler for the organization to trace if and where something went wrong. Data stewards should be appointed as part of the governance process to oversee the entire framework.
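
To make the audit-trail idea concrete, here is a minimal Python sketch of the kind of record such a trail might keep. The field names and the `record` helper are hypothetical, invented for illustration rather than taken from any particular governance product.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical illustration: one way to record "who did what to which data"
# so a governance process can trace changes after the fact.
@dataclass
class AuditEvent:
    actor: str        # who made the change
    action: str       # e.g. "UPDATE", "DELETE", "EXPORT"
    dataset: str      # which data asset was touched
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

audit_log: list[AuditEvent] = []

def record(actor: str, action: str, dataset: str) -> None:
    """Append an audit record for later review."""
    audit_log.append(AuditEvent(actor, action, dataset))

record("j.doe", "UPDATE", "customers/emea/contacts")
```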

Data privacy regulations like the CCPA and GDPR have increased the need for enterprise-wide regulatory compliance. A well-developed data governance framework facilitates several aspects of regulatory compliance, empowering businesses to readily classify data and perform process mapping and risk analysis.

Author: Gabriel Lando

The Evolution of Data Protection

Data has penetrated every facet of our lives. It has evolved from an imperative procedural function into an intrinsic component of modern society. This transformation has introduced an expectation of responsibility on data processors, data subjects, and data controllers, who have to respect the inherent values of data protection law. As privacy rights continually evolve, regulators face the challenge of identifying how best to protect data in the future. While data protection and privacy are closely interconnected, there are distinct differences between the two. To sum it up: data protection is about securing data from unauthorized access, while data privacy is about authorized access – who defines it and who has it. Essentially, data protection is a technical issue, whereas data privacy is a legal one. For industries that are required to meet compliance standards, there are significant legal implications associated with privacy laws, and guaranteeing data protection alone may not satisfy every stipulated compliance standard.

Data protection law has undergone its own evolution. Instituted in the 1960s and 70s in response to the rising use of computing, and revitalized in the 90s to handle the trade of personal information, data protection is becoming more complex. In the present age, the influence and importance of information privacy can’t be overstated. New challenges constantly emerge in the form of new business models, technologies, services, and systems that increasingly rely on ‘Big Data’, analytics, AI, and profiling. The environments and spaces we occupy and pass through generate and collect data.

Technology teams have been adopting data management techniques such as ETL (Extract, Transform, and Load). ETL is a data warehousing process that uses batch processing and helps business users analyze data relevant to their business objectives. Many ETL tools manage large volumes of data from multiple sources, handle migration between databases, and load data to and from data marts and data warehouses. ETL tools can also be used to convert (transform) large databases from one format or type to another.
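
As a rough illustration of the extract-transform-load pattern described above, the following Python sketch moves rows from a stand-in source into a SQLite ‘warehouse’ table. The schema and sample values are invented for the example.

```python
import sqlite3

# Minimal ETL sketch: extract rows from a source, transform them,
# and load them into a warehouse table.
def extract():
    # Stand-in for reading from a source system or file.
    return [{"name": " Alice ", "amount": "120.50"},
            {"name": "Bob",     "amount": "80.00"}]

def transform(rows):
    # Normalize types and clean values before loading.
    return [(r["name"].strip(), float(r["amount"])) for r in rows]

def load(rows, conn):
    conn.execute("CREATE TABLE IF NOT EXISTS sales (name TEXT, amount REAL)")
    conn.executemany("INSERT INTO sales VALUES (?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract()), conn)
```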

The Limitations of Traditional DLP

Dated DLP solutions offer little value. Most traditional DLP implementations consist mainly of network appliances designed primarily to watch gateway egress and ingress points. The corporate network has evolved; the perimeter has all but dissolved, leaving network-only solutions full of gaps. Couple that with the dawn of the cloud and the reality that most threats originate at the endpoint, and you understand why traditional, network-appliance-only DLP is limited in its effectiveness.

DLP solutions are useful for identifying properly defined content but usually fall short when an administrator tries to identify other sensitive data, such as intellectual property that might contain schematics, formulas, or graphic components. As traditional DLP vendors stay focused on compliance and controlling the insider, progressive DLP solutions are evolving their technologies, both on the endpoint and within the network, to enable a complete understanding of the threats that target data.

Data protection criteria have to transform to include a focus on understanding threats irrespective of their source. Demand for data protection within the enterprise is rising, as is the variety of threats taxing today’s IT security admins. This transformation demands advanced analytics and enhanced visibility to conclusively identify what the threat is and deliver versatile controls to respond appropriately, based on business processes and risk tolerance.

Factors Driving the Evolution of Data Protection

Current data protection frameworks have their limitations, and new regulatory policies may have to be developed to address emerging data-intensive systems. Protecting privacy in this modern era is crucial to good and effective democratic governance. Some of the factors driving this shift in attitude include:

Regulatory Compliance: Organizations are subject to obligatory compliance standards imposed by governments. These standards typically specify how businesses should secure Personally Identifiable Information (PII) and other sensitive information.

Intellectual Property: Modern enterprises typically have intangible assets, trade secrets, or other proprietary information like business strategies, customer lists, and so on. Losing this type of data can be acutely damaging. DLP solutions should be capable of identifying and safeguarding critical information assets.

Data Visibility: In order to secure sensitive data, organizations must first be aware that it exists, where it resides, who is utilizing it, and for what purposes.

Data Protection in The Modern Enterprise

As technology continues to evolve and IoT devices become more and more prevalent, several new privacy regulations are being ratified to protect us. In the modern enterprise, you need to keep your data protected and remain compliant, while constantly worrying about a myriad of threats like malicious attacks, accidental data leakage, BYOD, and much more. Data protection has become essential to the success of the enterprise. Privacy by Design – incorporating data privacy and protection into every IT initiative and project – has become the norm.

The potential risks to sensitive corporate data can be as small as the malfunction of a few sectors on a disk drive or as broad as the failure of an entire data center. When designing data protection as part of an IT project, an organization has to deal with multiple considerations beyond selecting which backup and recovery solution to use. It’s not enough to ‘just’ protect your data – you also have to choose the best way to secure it. The best way to accomplish this in a modern enterprise is to find a solution that delivers intelligent, person-centric, fine-grained, data-centric protection in an economical and rapidly recoverable way.

Author: Gabriel Lando

Benefits of Centralized Master Data Management

Centralized data management is the preferred choice of most organizations today. With this model, all the important files and even apps are stored on a central computer. Workers can access the data and resources on the central computer through a network connection, virtual desktop infrastructure (VDI), or desktop as a service (DaaS). Many organizations, including banks, financial firms, hospitals, and even schools, prefer central data management.

If your organization is not using a centralized data management model, you may want to consider it, as it gives the company more control and makes things more orderly for workers. In this blog post, we’ll look at some of the key benefits of centralized data management.

Security

Security is a top priority for every organization, and it is one of the top benefits of central data storage. In the age of data breaches, we cannot overlook the importance of proper security. You do not want sensitive files to fall into the wrong hands.

When data is spread across your employees’ different devices, it is difficult to implement proper security measures across all of them. Even if you lay down security guidelines, there is no guarantee they’ll be followed. Central data management, however, ensures that you can take matters into your own hands and provide thorough security for your files. You can determine who gets access to them and the level of access each person has. Centralized data management is one of the best ways to provide strong security in an organization.
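
One simple way to express “who gets access and at what level” is a role-based permission map. The sketch below is illustrative only; the roles, users, and actions are assumptions, not a description of any specific product’s model.

```python
# Illustrative role-based access check for centrally managed data.
PERMISSIONS = {
    "viewer": {"read"},
    "editor": {"read", "write"},
    "admin":  {"read", "write", "share", "delete"},
}

USER_ROLES = {"alice": "admin", "bob": "viewer"}

def can(user: str, action: str) -> bool:
    """Return True if the user's role grants the requested action."""
    return action in PERMISSIONS.get(USER_ROLES.get(user, ""), set())

assert can("alice", "delete")
assert not can("bob", "write")   # a viewer cannot modify files
```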

Easier Data Recovery

Sometimes, despite our best efforts, we are faced with the loss of data. It could be because a device is hit with a virus, software becomes corrupt, or hardware malfunctions. Whatever the case, data recovery is an option you’ll most likely turn to. Of course, you need the right files to serve clients and ensure your organization is running smoothly.

Data recovery is much easier when you have central data storage. Instead of your IT staff having to go through several devices to recover files and attempt to assemble them, they can focus on one device. This not only makes the job of recovering data less complicated, but also more orderly. When you think about it, the likelihood of having to resort to data recovery is lower when you opt for central data management, because you can implement the best security protocols and maintain the hardware of your central computer, reducing the chances of malware attacks and hardware or software failures.

Data Integrity

Another benefit of central data management is data integrity. Data integrity refers to the consistency and accuracy of your data. When your files are spread across different devices used by your employees, there is a higher probability of conflicting versions of the same file. For example, if two people work on different aspects of the same document at the same time, they will ultimately produce two different versions, and someone will have to spend precious time merging them. You also run the risk of redundant files accumulating on different devices.

Central data management eliminates all of this. When all your files are stored on a master computer, it is easier to spot redundancies as well as conflicting versions of the same document. Central data management ensures that your files are accurate and updated. This makes it easier to access specific information and makes your employees more efficient.

Smooth Collaboration

Collaboration is essential in every organization. It invariably happens that several people have to pitch in to get a job done. With central data management, collaboration is smoother. Instead of going around the office or sending emails asking for particular files, your employees can simply log in to the central database and access them. There is no need to wait because the person who has a specific file is out of the office. This saves time and speeds up collaboration.

Central data management also eases the decision-making process and makes it faster. This is because the decision makers can quickly access all the data they need to come to a conclusion. Having a centralized data management system also allows the people at the top tier of the organization to keep track of the activities of everyone and ensure that things are progressing as they should.

Cuts Costs

Although it may not seem obvious, central data management allows organizations to cut costs in several ways. First, your IT staff will have less to deal with if they focus on maintaining the central computer in your data center, which means fewer working hours. Your organization also doesn’t have to cover the cost of purchasing devices for every employee; you can support the bring your own device (BYOD) trend in your workplace. Additionally, you will spend less on power supply and general maintenance.

How FileCloud Supports Central Data Management

If you are implementing central data storage in your workplace, FileCloud can help make your work smoother and more efficient. We provide a range of tools to support collaboration, security, data loss prevention, and more. Let’s look at some of the things your organization will enjoy by signing up for FileCloud.

A. You can restrict access to certain files and determine who has access to them. You can also view an activity stream that shows who accessed particular files and what they did. FileCloud allows conversations around files, so you can let your workers know what to do and they can communicate with one another about where to pick up the work. What’s more, you can opt to receive smart notifications when a file is changed.

B. FileCloud allows you to manage the files in your central storage. This includes adding metadata tags to files. You can choose to search for files using the metadata or any text in the document.

C. FileCloud also allows you to prevent data loss by restoring deleted files and backing up data. You can also remotely wipe data and block access to compromised devices. FileCloud also provides automatic file versioning. Therefore, if different workers in your organization save new versions of a file at the same time, the app creates different versions automatically to prevent data loss.

This is just the tip of the iceberg of what you’ll enjoy when you sign up for FileCloud. We provide everything your organization needs to make the most of your centralized data storage system, and our prices are very affordable! What more can you ask for?

The Changing Face of Data Governance

In our age of data-driven decision making, the new GDPR laws have once again brought the criticality of data governance to the forefront. Believed to be one of the most extensive revisions to European data protection and privacy legislation, GDPR and its associated changes have presented businesses with a unique opportunity to put their data houses in order.

So, executives should consult with experts familiar with GDPR on its impact on their operations. Businesses need to get used to the idea of handing control of shared data back to the people it concerns; only then can they achieve GDPR compliance and establish a better rapport with customers. But how does data governance figure into all this? Find out below:

Shortcomings in Traditional Data Governance

There’s nothing wrong with traditional data governance; in fact, it offers a rigorous and strategic framework for outlining roles, data standards, and responsibilities, along with procedures and policies for data management throughout the organization. What’s more, without traditional data governance, businesses wouldn’t have been able to increase their efficiency and productivity in the use of core business data resources in transactional and data warehousing environments.

The focus of these methods was on data quality, trust, and protection, and they were great for recognized data sources with known value. However, the modern industry is full of unstructured or unknown data sources like IoT and big data, and traditional data governance just can’t keep up. With the addition of machine learning and artificial intelligence, the shortcomings of the conventional approach are becoming obvious.

Owing to their rigid structure, conventional data governance procedures and policies hinder the possibilities created by advanced analytics and data technologies by forcing them to fit the age-old mould of legacy infrastructure and data platforms.

Impact of Emerging Technologies

IoT gives thousands of unrelated data sources a chance to connect on the same platform. IoT gadgets are more than just data sources; they are data generators and gatherers. Sensors, wearable devices, and other modern computing technologies can accumulate data by the millisecond and stream that data into a cloud of possible consumers.

Artificial intelligence and machine learning systems analyze the data in real time to identify relationships and patterns, gain knowledge, and plan a suitable course of action. While these are data-driven autonomous actions rather than the product of explicit instructions or programming, they have the power to find gaps or extra data requirements and send requests back to the IoT gadgets to collect or generate fresh data.

Traditional data governance makes the onboarding of IoT devices very difficult because of conventional authorization and validation requirements. To foster machine learning and artificial intelligence in these initial stages, the data lifecycle must tolerate non-conformity with predefined standards and rules. So, governance must allow new data to be incorporated quickly and efficiently, and offer mechanisms to mitigate dangers, maximize value, and encourage exploration.

AI and IoT under the New Data Governance Methods

Concepts like IoT and AI aren’t new, but they are still highly competitive markets for businesses. As the two expand, they tend to hypercharge the growing volume of data, especially unstructured data, to unexpected levels. As a result, the volume, velocity, and variety of data increase in unison: as the volume rises, so does the velocity at which data needs to be processed, and the types of unstructured data multiply as well. To manage all this, businesses have to implement the necessary data governance.

Storage and Retention

Big data has increased the variety and volume of data considerably, which makes more data storage a necessity. ‘Data storage’ and ‘data integration and provisioning’ are often used interchangeably, but they are distinct, and governance must address them separately and appropriately. Storage normally means the way data is physically retained by the organization; in conventional data management methods, the data storage technology shapes the storage requirements, such as size and structural limitations. Along with retention practices and budget limitations, often dependent on compliance, these needs restrict the amount of data a business stores at any given time.

Security and Privacy

Security and privacy are the major areas of focus for conventional data governance. But new technologies expand the scope of what needs to be secured and protected, emphasizing the need for additional protection. Even though “privacy” and “security” are thought to be one and the same, they are not.

Security strategies safeguard the integrity, confidentiality, and availability of data created, acquired, and maintained by the company. Security is exclusively about protecting data, while privacy is about protecting entities, like individuals and businesses. Privacy programs make certain that the interests and rights of individuals to control, use, and access their private details are protected and upheld. However, without a successful security strategy, a privacy program cannot exist. Privacy needs often inform policies in large-scale security operations, but the program itself influences the processes and technology needed to implement the necessary controls and protection.

As far as IoT is concerned, security is one of the most crucial aspects. The regular addition of systems and devices constantly introduces new vulnerabilities. Even though business comes first, protection is possible only if organizations secure the network along with every touchpoint where data travels. Thanks to IoT, data security isn’t just about permissions and access on a given system. Data protection now incorporates network segmentation, data encryption, data masking, device-to-device authentication, and cybersecurity monitoring. That’s a whole lot more than what traditional governance programs envision.

Escalated Digital Transformation

The changes brought by digital transformation will be far-reaching. In fact, the new data governance measures will accelerate the process, rewarding organizations that commit to more than just compliance. Moreover, a stronger foundation in data governance will provide organizations with various benefits, such as increased operational efficiency, better decision-making, improved data understanding, greater revenue, and better data quality.

Data-driven businesses have long enjoyed these advantages, using them to dominate and disrupt their respective industries. But these benefits aren’t just for large businesses. The moment is right for your company to de-silo data governance and treat it as a strategic operation.

Data governance is changing, and you need to work hard to keep up or risk being left behind in the industry. Follow the guidance above to keep your data governance in good health and ensure your company is prepared for GDPR.

Author: Rahul Sharma

Data Protection Officers’ (DPO) Role in Overseeing GDPR Compliance

The five-year process since the passage of the General Data Protection Regulation (GDPR) is soon coming to an end, and organizations across the globe are now clamoring to prepare for the several new requirements surrounding data collection and processing. One requirement in particular has been arguably the most debated and amended provision throughout the legislative process of the GDPR. This obligation calls for staffing, something yet to be seen in European law outside Germany: specific organizations will have to employ, appoint, or contract a designated data protection officer (DPO) by the time the regulation comes into force in May 2018.

The GDPR has made it mandatory for any organization that controls or processes large volumes of personal data, including public bodies (with the exception of courts), to appoint a DPO. This requirement is not limited to large organizations; the GDPR states that as long as the activities of the processor or controller involve the ‘regular and systematic monitoring of data subjects on a large scale’, or the entity carries out large-scale processing of special categories of personal data – data that details things like race, religious beliefs, or ethnicity – it must comply with this requirement. This basically means that even sole traders who handle certain types of data may have to hire a DPO.

A DPO may be appointed to act on behalf of a group of public authorities or companies, depending on their size and structure. A provision in the regulation allows EU member states to specify additional circumstances requiring the appointment of a DPO. In Germany, for example, every business with more than nine employees that permanently processes personal data has to appoint a DPO.

What is a DPO?

A data protection officer is a position within an organization that independently advocates for the responsible use and protection of personal data. The role of Data Protection Officer is a fundamental part of the GDPR’s accountability-based approach. The GDPR makes the DPO responsible for liaising with end-users and customers on any privacy-related requests, liaising with the various data protection authorities, and ensuring that employees remain informed of any updates to data protection requirements. The DPO has to have expert knowledge of data protection practices and laws, on top of a solid understanding of the company’s organizational and technical structure.

The controller and the processor shall ensure that the data protection officer is involved, properly and in a timely manner, in all issues which relate to the protection of personal data.

Article 38, GDPR

What is the Role of the DPO?

The DPO is mainly responsible for ensuring a company is compliant with the aims of the GDPR and other data protection policies and laws. This responsibility may include setting rational retention periods for personal data, developing workflows that facilitate access to data, clearly outlining how collected data is anonymized, and monitoring all these systems to make sure they work towards the protection of personal data. Additionally, the DPO should be available for inquiries from data subjects on issues related to data protection practices, the right to be forgotten, withdrawal of consent, and other rights the GDPR grants them.

The GDPR affords the data protection officer several rights on top of their responsibilities. The company is obligated to provide any resources the DPO requires to fulfill their role, as well as ongoing training. They should have full access to the company’s data processing operations and personnel, a considerable amount of independence in the execution of their tasks, and a direct reporting line to the highest level of management within the company. The GDPR expressly prevents the dismissal of the data protection officer for executing their tasks and puts no limitation on the length of their tenure.

How Does DPO Differ from Other Security Roles?

Currently, most organizations already have Chief Information Officer (CIO), Chief Data Officer, or CISO roles; however, these roles are different from the DPO role. The holders of these positions are typically responsible for keeping the company’s data safe and ensuring the data a company collects is used to enhance business processes across the organization. The DPO, on the other hand, works mainly to ensure the customer’s privacy. This means that instead of retaining ‘valuable’ data indefinitely, or exploiting insights collected in one business line to enrich another, the DPO is there to make sure only the minimum data required to complete a process is collected and subsequently retained. The GDPR creates a huge demand for DPOs, but the job itself is far from easy.

Who Qualifies to be DPO?

The GDPR doesn’t specify the exact credentials a DPO should have. The role itself is multi-faceted: advising on obligations under the GDPR is a legal role, while monitoring compliance falls under audit. Additionally, the data protection impact assessment is more of a privacy specialist’s role, and working closely with a supervisory authority demands an understanding of how the authority works.

I. Level of expertise – A DPO must possess a level of expertise commensurate with the complexity, sensitivity, and volume of the data the company processes. A high-level understanding of how to develop, implement, and manage data protection programs is crucial. These skills should be founded on wide-ranging experience in IT. The DPO should also be able to demonstrate an awareness of evolving threats and fully comprehend how modern technologies can be used to avert them.
II. Professional qualities – A DPO doesn’t have to be a lawyer, but they have to be an expert in European and national data protection law, including in-depth knowledge of the GDPR. They should be able to act independently. This points towards the need for a mature professional who can build client relationships while ensuring compliance without taking an adversarial position.
III. Ability to execute tasks – The DPO has to demonstrate high professional ethics, integrity, leadership, and project management experience, and be able to request, mobilize, and lead the resources needed to fulfill their role.

Best Practices for ITAR Compliance in the Cloud

The cloud has become part and parcel of today’s enterprise. However, remaining compliant with the International Traffic in Arms Regulations (ITAR) demands extensive data management capability. Most of the regulatory details covered by ITAR aim to guarantee that an organization’s materials and information regarding military and defense technologies on the US Munitions List (USML) are only shared within the US, with US-authorized entities. While this may seem like a simple precept, in practice attaining it can be extremely difficult for most companies. Defense contractors and other organizations that primarily handle ITAR-controlled technical data have been unable to collaborate on projects while utilizing cloud computing practices with a proven track record of fostering high performance and productivity. Nevertheless, the hurdles impeding the productivity opportunities of the cloud can be overcome, and practices that govern the processing and storage of export-controlled technical data are evolving.

Full ITAR compliance in the cloud is not an end result, but a continual odyssey in protecting information assets. In the long run, being ITAR compliant boils down to having a solid data security strategy and defensive technology execution in place.

Utilize End-to-End Encryption

In September 2016, the DDTC published a rule that established a ‘carve-out’ for the transmission of export-controlled software and technology within a cloud service infrastructure, necessitating the ‘end-to-end’ encryption of data. The proviso is that the data has to be encrypted before it crosses any border and has to remain encrypted at all times during transmission. Likewise, any technical data potentially accessed by a non-US person outside or within the United States has to be encrypted ‘end-to-end’, which the rule delineates as the provision of continual cryptographic protection of data between the originator and the intended recipient. In a nutshell, the means of decrypting the data can’t be given to a third party before it reaches the recipient.
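
To illustrate the principle (not the DDTC rule itself), the Python sketch below encrypts data client-side before upload using the third-party `cryptography` package, so a cloud provider would only ever store ciphertext and never hold the key. The sample data is invented.

```python
# Sketch of client-side ("end-to-end") encryption before upload:
# the key stays with the originator, never with the cloud provider.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # kept on-premises, never uploaded
cipher = Fernet(key)

technical_data = b"controlled technical drawing, rev 7"
ciphertext = cipher.encrypt(technical_data)   # only this leaves the premises

# Only the intended recipient, holding the key, can recover the plaintext.
assert cipher.decrypt(ciphertext) == technical_data
```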

The native encryption of data at rest offered by most cloud providers fails to meet this definition of end-to-end encryption, because the cloud provider likely has access to both the encryption key and the data, and therefore has the ability to access export-controlled information. Organizations have to ensure that the DDTC definition of ‘end-to-end’ encryption is met before storing their technical data in a public or private cloud environment; otherwise they will be in violation of ITAR.

Classify Data Accordingly

Most technologies are not limited to a single use. Whenever an organization that handles technical data related to defense articles shares information regarding a service or product, steps have to be taken to make sure that any ITAR-controlled data is carefully purged in its entirety. Classification entails reviewing existing business activities and contracts to establish whether they fall under ITAR. The process requires a good understanding of licensing terms, court interpretations, agency directives, and other guidance. In order to successfully navigate the nuances and complexities of ITAR, organizations have to collect enough metadata to catalog, separate, and classify information. For easy identification, the data should be classified into categories such as ‘Public Use’, ‘Confidential’, and ‘Internal Use Only’. Classifying data is a prerequisite for creating a foolproof Data Leakage Prevention (DLP) implementation.
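
A classification effort ultimately needs to land somewhere machine-readable. The following sketch shows one hypothetical way to attach category and ITAR flags as metadata; the category names come from the paragraph above, while the catalog structure and file paths are invented for illustration.

```python
# Hypothetical metadata catalog: label each file so that DLP rules can act
# on the classification rather than on raw content.
CATEGORIES = ("Public Use", "Internal Use Only", "Confidential")

catalog: dict[str, dict] = {}

def classify(path: str, category: str, itar_controlled: bool = False) -> None:
    if category not in CATEGORIES:
        raise ValueError(f"unknown classification: {category}")
    catalog[path] = {"category": category, "itar": itar_controlled}

classify("specs/guidance-unit.pdf", "Confidential", itar_controlled=True)
classify("brochures/overview.pdf", "Public Use")
```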

Develop a Data Leak Prevention (DLP) Strategy

Accidental leaks owing to user error and other oversights occur more often than most would care to admit. Mistakes that can happen, will happen. Establishing a set of stringent policies to prevent users from mishandling data, whether accidentally or intentionally, is crucial to ITAR compliance. Organizations should have a strategy in place to guarantee the continual flow of data across their supply chains, while protecting that data from the following employee scenarios:

  • Well-meaning insiders – employees who make an innocent mistake.
  • Malicious insiders – employees with ill intentions.
  • Malicious outsiders – individuals looking to commit corporate espionage, hackers, enemy states, and competitors, among others.

Control Access to Technical Data

Access control is a well-known technique used to regulate who can view or use the resources in a computing environment. It can be employed on a logical or physical level: physical access control restricts access to physical areas and IT assets, while logical access control allows IT administrators to establish who is accessing information, what information they are accessing, and where they are accessing it from. Roles, permissions, and security restrictions should be established beforehand to ensure that only authorized U.S. persons have access to export-controlled technical information. Multi-factor authentication strengthens access control by making it extremely difficult for unauthorized individuals to access ITAR-controlled information by compromising an employee’s access credentials.
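
As a toy illustration of a logical access check, the sketch below gates access on two attributes suggested by the paragraph above: authorized U.S.-person status and a satisfied multi-factor check. The user records and field names are assumptions for the example, not a prescribed schema.

```python
# Illustrative logical access check for ITAR-controlled data.
USERS = {
    "jsmith":  {"us_person": True,  "mfa_verified": True},
    "fdupont": {"us_person": False, "mfa_verified": True},
}

def may_access_itar(user: str) -> bool:
    """Grant access only to authorized U.S. persons who passed MFA."""
    u = USERS.get(user)
    return bool(u and u["us_person"] and u["mfa_verified"])

assert may_access_itar("jsmith")
assert not may_access_itar("fdupont")   # non-U.S. person is denied
```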

Establish Security Policies and Train the Staff Well

An ITAR-specific security strategy is the cornerstone of data security practices. The policies should address both network and physical security considerations. ITAR is riddled with complications that make it easy for organizations to make mistakes if they aren’t vigilant. An organization is only as secure as its weakest link, and in most cases that’s the staff. A solid security policy on paper simply does not cut it. Without proper staff training, a compliance strategy will be largely ineffective, since it won’t tie in with actual organizational procedures. Investing in end-user training is the only way to ensure security policies are implemented.

In Closing

Organizations have turned to government clouds to manage the complex regulatory issues associated with the cloud. Platforms like AWS GovCloud have developed substantial capabilities that enable organizations subject to ITAR to implement robust document management and access control solutions. When paired with FileCloud, organizations can build and operate document and information management systems that satisfy the strictest security and compliance requirements.

Author: Gabriel Lando

Backup Mistakes That Companies Continue to Commit

 
Imagine a situation where you wake up, reach your office, and witness chaos, because your business applications are no longer working – and that’s because your business data no longer exists! Information about thousands of customers, products, sales orders, inventory plans, pricing sheets, contracts, and a lot more – not accessible anymore. What do you do? Well, if your enterprise has been following data backup best practices, you’ll just smile and check the progress of the data restoration. Alas, problems await. That’s because your people might have committed one of the commonplace yet costly data backup mistakes. Read on to find out.

Fixation of the Act of Backup

Sounds weird, but that’s what most enterprises do, really. Data engineers, security experts, and project managers – everyone is so focused on the act of backup that they all lose track of the eventual goals of the activity. Recovery time objectives (RTO) and recovery point objectives (RPO) should govern every step in the process of data backup. Instead, companies only focus on ensuring that data from every important source is included in the backup.

Nobody, however, pays much heed to backup testing, even though it is one of the key aspects of making your data backup process foolproof. Companies end up facing a need for data restoration, only to realize that the backup file is corrupt, missing, or not compliant with the prerequisites of the restoration tool.

The solution: make rigorous backup testing a key element of your backup process. There are tools that execute backup tests in tandem with your data backup. If you don’t wish to invest in such tools yet, make sure you conduct backup testing at least twice a year.
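
A basic restore test can be as simple as comparing checksums of the source and the restored copy. The Python sketch below, with placeholder paths, shows the idea; a real test harness would also exercise the restore procedure end to end.

```python
import hashlib

# Minimal backup-test sketch: a restore passes only if the restored copy
# is byte-identical to the source.
def sha256(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def backup_is_restorable(source: str, restored: str) -> bool:
    """Compare checksums of the original file and its restored copy."""
    return sha256(source) == sha256(restored)
```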

Not Adopting Data Backup Technologies

What used to be a tedious and strenuous task for administrators and security experts a few years back can now easily be automated using data backup tools. These tools are much more reliable than manual backup operations. What’s more, you avoid the dreaded problems, such as those associated with data formats, when the time to restore arrives.

These tools offer scheduled backups, simultaneous testing, and execution of backup and restore in sync with your RTO and RPO goals. Of course, businesses must evaluate the data backup tools available in the market before choosing one.

Unclear Business Requirements (In Terms Of Data Backup And Restore)

Take it from us: one size won’t fit all organizations or processes when it comes to data backups, whether manual or controlled via a tool. Project managers must understand the business requirements around data to be able to plan their data backup projects well. The backbone of a successful data backup process and plan is a document called the recovery catalog, which captures all necessary details centered on aspects such as the following (a minimal sketch of a catalog entry appears after the list):

  • The different formats of data owned by the business
  • The time for which every backup needs to be available for possible restore operations (RPO)
  • The priority of different data blocks from a recovery perspective (RTO)

The recovery catalog will go a long way in helping you enlist the tools you need for successful management of data backup and recovery. It will also help you design better processes and improve existing processes across the entire lifecycle of data backup.
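Here is the minimal catalog-entry sketch referenced above: one possible shape for a recovery catalog record, with illustrative field names mapping the formats, RPO, and RTO details the list describes.

```python
from dataclasses import dataclass

# One possible shape for a recovery-catalog entry (field names are
# illustrative, not a standard schema).
@dataclass
class CatalogEntry:
    dataset: str        # the data block or source being protected
    data_format: str    # e.g. "postgres dump", "csv export"
    rpo_hours: int      # max tolerable data loss (drives backup frequency)
    rto_hours: int      # max tolerable downtime (drives restore priority)

catalog = [
    CatalogEntry("orders_db", "postgres dump", rpo_hours=1, rto_hours=4),
    CatalogEntry("marketing_assets", "object store", rpo_hours=24, rto_hours=48),
]

# Restore the highest-priority (lowest RTO) datasets first.
restore_order = sorted(catalog, key=lambda e: e.rto_hours)
```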

Right Requirement, Wrong Tool

Your CIO’s expectations from your team are governed by the business’ expectations from the entire IT department of the company. There’s nothing wrong with the expectations and requirements; it’s possible, however, that the tools you have are not well suited to fulfilling them.

For instance, in an IT ecosystem heavily reliant on virtualization, there are already built-in cloning capabilities within the virtualization tools. However, these backups can take up disk space almost equal to the entire environment. If you need to change your VMs often, your storage will soon be exhausted as you keep making new copies of updated environments.

If you have clarity on the most important business applications, it becomes easier to work with IT vendors and shortlist data backup tools that can easily integrate with these applications. This could be a massive boost to your enterprise’s data backup capabilities.

Failure to Estimate Future Storage Needs

No doubt, the costs of data storage are on their way down, and chances are they’ll continue to fall. However, almost every business buys storage based only on its estimate of what’s needed for primary data. It’s commonplace for companies to completely ignore the fact that their data backups will also need space. This is why it’s so important to estimate data storage requirements after accounting for your data backup objectives. While doing a manual backup, for instance, if the executors realize that there’s not much space to play around with, it’s natural for them to leave out important data. Also account for the possibility of increased backup frequencies in the near future.
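
A back-of-the-envelope estimate makes the point. All figures below are invented for the example; the calculation simply adds the space implied by retained backups to the primary data.

```python
# Illustrative storage estimate: primary data plus the backup copies it implies.
primary_tb = 50          # current primary data
daily_change_tb = 2      # incremental backup size per day
retention_days = 30      # how long backups must remain restorable
full_backups_kept = 4    # e.g. weekly full backups kept in rotation

backup_tb = full_backups_kept * primary_tb + retention_days * daily_change_tb
total_tb = primary_tb + backup_tb
print(f"Provision roughly {total_tb} TB, not just {primary_tb} TB")  # 310 TB
```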

Not Balancing Costs of Backup with Suitability of Media

It’s a tough decision, really, to choose between tape and disk for backup storage. While tapes are inexpensive, plentiful, and pretty durable from a maintenance perspective, you can’t really store essential systems data and business-critical applications’ data on tape, because the backups are slow. Estimate the cost of time lost to slow tape backups while deciding on your storage media options. Often, the best option is to store old and secondary data on tape and use disks for more important data. That way, you can execute a data restoration and complete it sooner than if you depended purely on tape media.

Concluding Remarks

There’s a lot that can go wrong with data backups. You could lose your backed-up data, run out of space for it, realize the data backup files are corrupted when you try to restore them, and in general, fail to meet the RTO and RPO goals. To do better, understand what leads to these mistakes, and invest time and money in careful planning to stay secure.

Author: Rahul Sharma

International Traffic in Arms Regulations (ITAR) Compliance in the Cloud

ITAR was enacted in 1976 to control the export of defense-related articles and services. It stipulates that non-US persons are not allowed to have logical or physical access to articles regulated by the International Traffic in Arms Regulations, which is administered by the Directorate of Defense Trade Controls (DDTC), a sub-division of the State Department. The articles covered by ITAR are listed on the United States Munitions List (USML) and generally encompass any technology that is specifically designed or intended for military end-use. ITAR was also designed to govern the import and export of any related technical data that describes, supports, or accompanies the actual exported service or goods, unless an exemption or special authorization is granted.

The goal of ITAR is to prevent the transfer or disclosure of sensitive information, typically related to national security and defense, to a foreign national. In most cases, non-compliance translates to the loss of assets and professional reputation; with ITAR, however, lives may be at stake. This is why the International Traffic in Arms Regulations is a strictly enforced United States government regulation and carries some of the most austere criminal and civil penalties – ones no business or individual would want to be on the receiving end of.

ITAR is not applicable to information that is already available in the public domain, or that is commonly taught in school under general scientific, engineering or mathematical principles.

Who is required to be ITAR compliant?

The law essentially applies to defense contractors who manufacture or export services, items, or other information on the United States Munitions List. However, any company in the supply chain for such items must make ITAR compliance a priority. ITAR has a fairly complicated set of requirements, and since the repercussions of non-compliance are severe, companies should not hesitate to seek legal clarification of their obligations if they even suspect the regulation applies to them – better safe than sorry. The vague categories of the USML make it difficult to understand exactly what falls under the purview of military equipment.

The list includes most technology used for spaceflight, along with a vast range of technical data such as product blueprints, software, and aircraft technology. Most of these items were initially developed for military purposes but were later adapted for mainstream use – in aviation, maritime, computer security, navigation, electronics, and other industries. It is crucial for firms that offer products and services to government consumers to fully grasp this distinction, to avoid expensive legal violations. ITAR may also impact large commercial enterprises, universities, research labs, and other institutions that are not directly involved in the defense industry.

The Repercussions of Non-compliance

Violating ITAR can lead to both criminal and civil penalties, and the imposed fines are virtually unlimited – organizations are typically prosecuted for hundreds of violations at once. Criminal penalties may include fines of up to a million dollars per violation and 10 years’ imprisonment, while civil fines can be as high as half a million dollars per violation. Failure to comply with ITAR may also damage an organization’s reputation and ability to conduct business. The State Department maintains publicly available records of all penalties and violations dating back to 1978, and organizations and individuals run the risk of being completely debarred from exporting defense-related services and items.

Challenges in the Cloud

ITAR compliance and the adoption of cloud platforms present unique challenges. Uploading technical data to the cloud carries a huge risk of violations and penalties, and there are many open questions about whether regulated technical data can be stored in a public cloud at all. The intrinsic quandary is that cloud vendors use distributed, shared resources that will likely cross national borders, and this distribution of resources is not entirely transparent to the end-user. Data backup and replication are common security measures when sharing files and collaborating via the cloud, but they can inadvertently lead to unlicensed exports if data is sent to servers located outside the United States. Once technical data goes beyond U.S. borders, the risk of non-US persons having access to it increases exponentially.

In 2016, for example, Microwave Engineering Corporation settled an ITAR violation with the State Department after technical data related to a defense article was exported to a foreign person without authorization. So if giving a foreign person access to technical data, or placing it on a server in a foreign nation, is deemed an export, what guidance does ITAR give to ensure the entire process is done in a legal manner? Or is cloud storage simply off the table?

The State Department maintains that technical data can be stored on servers outside the U.S., provided that the ITAR license exemption conditions are met and adequate measures are taken to prevent non-U.S. persons from accessing the technical data. In practice, this typically means that any data sent to a server beyond U.S. borders, or that is potentially accessible by a foreign person within or outside the U.S., has to be properly encrypted. It is important to note that, by law, cloud providers aren’t considered exporters of data – but your organization might be. The burden of ensuring ITAR compliance when handling technical data therefore falls squarely on the people within the organization. Organizations dealing with defense-related articles in any capacity have to exercise extreme caution when using any commercial file sharing and sync service.
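As a rough illustration of what “properly encrypted” can look like in practice, the sketch below encrypts a file locally before it ever reaches a sync folder, so the cloud provider only stores ciphertext. It assumes the third-party Python cryptography package and a hypothetical key-management setup; treat it as a conceptual outline, not compliance advice.

```python
# Minimal sketch: encrypt technical data locally BEFORE it reaches a
# cloud sync folder, so the provider only ever stores ciphertext.
# Assumes the third-party 'cryptography' package (pip install cryptography).
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_file(in_path: str, out_path: str, key: bytes) -> None:
    """AES-256-GCM encrypt a file; the key itself must stay under U.S. control."""
    nonce = os.urandom(12)                        # unique nonce per encryption
    with open(in_path, "rb") as f:
        plaintext = f.read()
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)
    with open(out_path, "wb") as f:
        f.write(nonce + ciphertext)               # nonce stored alongside ciphertext

# Hypothetical usage: only the .enc file is placed in the sync folder.
key = AESGCM.generate_key(bit_length=256)         # in practice, from a U.S.-based KMS/HSM
encrypt_file("blueprint.pdf", "blueprint.pdf.enc", key)
```

The design point is that encryption and key custody happen client-side; even if replication pushes the ciphertext to a foreign data center, the key never leaves the organization’s control.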

 

Author: Gabriel Lando

Adopting Privacy by Design to Meet GDPR Compliance

The proliferation of social networking and collaboration tools has ushered in a new era of the remote enterprise workforce; however, it has also made organizational boundaries non-static, making it increasingly difficult to safeguard the confidential and personal data of business partners, employees, and customers. In these politically uncertain times, defending privacy is paramount to the success of every enterprise. The threats and risks to data are no longer theoretical; they are apparent and menacing. Tech decision makers have to step in front of the problem and respond to the challenge. Adopting the privacy by design framework is a surefire way of protecting all users from attacks on their privacy and safety.

The bedrock of privacy by design (PbD) is the anticipation, management, and prevention of privacy issues across the entire life cycle of a process or system. According to the PbD philosophy, the best way to mitigate privacy risks is to avoid creating them in the first place. Its architect, Dr. Ann Cavoukian, conceived the framework to deal with the rampant issue of developers applying privacy fixes only after the completion of a project. The privacy by design framework has been around since the 1990s, but it is yet to become mainstream. That will soon change. The EU’s data protection overhaul, the GDPR, which comes into effect in May 2018, demands privacy by design as well as data protection by default across all applications and uses. This means that any organization that serves EU residents has to adhere to the newly set data protection standards, regardless of whether it is located within the European Union. The GDPR has made a risk-based approach to pinpointing digital vulnerabilities and eliminating privacy gaps a requirement.

Privacy by Default

Article 25 of the General Data Protection Regulation codifies both the concept of privacy by design and that of privacy by default. Under the ‘privacy by design’ requirement, organizations have to set up compliant procedures and policies as fundamental components of the design and maintenance of their information systems and mode of operation. In practice, privacy by design measures may include pseudonymization or other technologies capable of enhancing privacy.
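To make pseudonymization concrete, here is a minimal sketch using only Python’s standard library. The field names and the way the secret key is handled are assumptions for illustration; a real deployment would keep the key in a vault and protect any re-identification table.

```python
# Minimal pseudonymization sketch: direct identifiers are replaced with
# a keyed HMAC, so records stay linkable for analytics but are not
# attributable to a person without the secret key.
import hmac
import hashlib

SECRET_KEY = b"store-me-in-a-vault-not-in-code"  # hypothetical; use a KMS in practice

def pseudonymize(identifier: str) -> str:
    """Deterministically map an identifier to an opaque pseudonym."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

record = {"email": "jane@example.com", "purchases": 7}
safe_record = {
    "user_pseudonym": pseudonymize(record["email"]),
    "purchases": record["purchases"],
}
print(safe_record)  # the raw email never enters the analytics store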

Article 25 states that a data controller has to implement suitable organizational and technical measures, both at the time the means of processing are determined and at the time the data is actually processed, in order to guarantee that data protection principles such as data minimization are met.

Simply put, privacy by default means that the strictest privacy settings should apply by default the moment a service is released to the public, without requiring any manual input from the user. Additionally, any personal data provided by the user to facilitate the optimal use of a product must only be kept for the amount of time needed to offer said service or product. The example commonly given is the creation of a social media profile: the default settings should be the most privacy-friendly. Details such as name and email address would be considered essential information, but not age or location, and all profiles should be set to private by default.
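In code, privacy by default simply means the constructor does the right thing when the user does nothing. The hypothetical profile model below sketches the idea:

```python
# Sketch of privacy by default: the strictest options are the defaults,
# and the user must explicitly opt in to anything broader.
from dataclasses import dataclass, field

@dataclass
class ProfileSettings:
    visibility: str = "private"       # never "public" out of the box
    share_location: bool = False      # off unless explicitly enabled
    share_age: bool = False
    marketing_emails: bool = False    # opt-in, never opt-out

@dataclass
class Profile:
    name: str                         # essential information only
    email: str
    settings: ProfileSettings = field(default_factory=ProfileSettings)

profile = Profile(name="Jane Doe", email="jane@example.com")
assert profile.settings.visibility == "private"  # safe with zero user input
```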

Privacy Impact Assessment (PIA)

Privacy Impact Assessments are an intrinsic part of the privacy by design approach. A PIA highlights what Personally Identifiable Information (PII) is collected and explains how that data is maintained, how it will be shared, and how it will be protected. Organizations should conduct a PIA to assess legislative authority and to pinpoint and mitigate privacy risks before sharing any personal information. Not only will the PIA aid in the design of more efficient and effective processes for handling personal data, but it can also reduce the costs and reputational damage that could potentially accompany a breach of data protection laws and regulations.

The ideal time to complete a Privacy Impact Assessment is at the design stage of a new process or system; it should then be revisited as legal obligations and program requirements change. Under Article 35 of the GDPR, data protection impact assessments (DPIAs) are mandatory for companies with processes and technologies that are likely to result in a high risk to the privacy rights of end users.
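As a sketch of how that screening step might be operationalized, the snippet below flags a proposed processing activity for a full DPIA when simplified, Article-35-style risk indicators are present. The indicator names are illustrative assumptions, not a legal checklist.

```python
# Simplified DPIA screening sketch: flag a proposed processing activity
# for a full assessment when high-risk indicators are present.
def requires_dpia(activity: dict) -> bool:
    risk_indicators = [
        activity.get("systematic_profiling", False),
        activity.get("large_scale_special_categories", False),
        activity.get("public_area_monitoring", False),
        activity.get("new_technology", False),
    ]
    return any(risk_indicators)

proposal = {"name": "behavioral ad scoring", "systematic_profiling": True}
if requires_dpia(proposal):
    print(f"'{proposal['name']}' needs a full data protection impact assessment")
```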

The Seven Foundational Principles of Privacy by Design

The main objectives of privacy by design are to ensure privacy and control over personal data. Organizations can gain a competitive advantage by practicing the seven foundational principles. These principles can be applied to all types of personal data, with the rigor of the privacy measures corresponding to the sensitivity of the data.

I. Proactive not Reactive; Preventative not Remedial – Be prepared for, pinpoint, and avert privacy issues before they occur. Privacy risks should never materialize on your watch; get ahead of invasive events before the fact, not afterward.
II. Privacy as the default setting – The end user should never have to take any additional action to secure their privacy. Personal data is automatically protected in all business practices and IT systems right off the bat.
III. Privacy embedded into design – Privacy is not an afterthought; it should be part and parcel of the design, a core function of the process or system.
IV. Full functionality (positive-sum, not zero-sum) – PbD eliminates the need to make trade-offs, and instead seeks to meet the needs of all legitimate objectives and interests in a positive-sum manner, avoiding false dichotomies.
V. End-to-end lifecycle protection – An adequate data minimization, retention, and deletion process should be fully integrated into the process or system before any personal data is collected (see the retention sketch after this list).
VI. Transparency and visibility – Regardless of the technology or business practice involved, the privacy standards in place have to be visible, transparent, and open to providers and users alike; they should also be documented and independently verifiable.
VII. Keep it user-centric – Respect the privacy of your users/customers by offering granular privacy options, solid privacy defaults, timely and detailed information notices, and empowering user-friendly options.
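To ground principle V, here is a minimal sketch of an automated retention sweep; the retention period, record shape, and deletion semantics are all assumptions to be adapted to your own systems and legal obligations. A real job would erase or anonymize the flagged records and log the action for auditability.

```python
# Minimal retention-sweep sketch: flag personal data for deletion once its
# retention period has lapsed. Schedule this to run daily (e.g. via cron).
from datetime import datetime, timedelta

RETENTION = timedelta(days=365)  # hypothetical retention policy

def sweep(records: list, now: datetime) -> list:
    """Return records flagged for deletion; a real job would erase them."""
    return [r for r in records if now - r["collected_at"] > RETENTION]

records = [
    {"id": 1, "collected_at": datetime(2016, 1, 10)},
    {"id": 2, "collected_at": datetime(2018, 3, 1)},
]
for r in sweep(records, now=datetime(2018, 4, 1)):
    print(f"record {r['id']} exceeded retention and is flagged for deletion")
```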

In Closing

The General Data Protection Regulation makes privacy by design and privacy by default legal requirements in the European Union. So if you do business in the EU or process any personal data belonging to EU residents, you will have to implement internal processes and procedures to address the set privacy requirements. The vast majority of organizations already prioritize security as part of their processes; however, becoming fully compliant with the privacy by design and privacy by default requirements may demand additional steps. This means implementing a privacy impact assessment template that can be populated every time a new system is procured, implemented, or designed. Organizations should also revisit their data collection forms to make sure that only essential data is being collected. Lastly, it would be prudent to set up automated deletion processes for specific data, implementing technical measures to guarantee that personal data is flagged for deletion once it is no longer required. FileCloud checks all the boxes when it comes to the seven principles of privacy by design and offers granular features that will set you on the path to full GDPR compliance. Click here for more information.

Author: Gabriel Lando

image courtesy of freepik.com