Archive for the ‘Advanced Computer Administration and Architecture’ Category

Are You Committing Any of These Super Common DevOps Mistakes?

 

A new venture is never easy. When you try something for the first time, you're bound to make mistakes, and DevOps is no exception to the rule. Sure, you might have read up a lot on the subject, but nothing quite prepares you for the real thing. Does that mean you give up trying to understand DevOps? Not at all! That's the first mistake to overcome; if your grasp of basic DevOps theory is weak, mistakes will compound quickly, and before long your efforts will feel more disappointing than productive. So keep at it, and in the meantime, check out this list of common mistakes that you can easily avoid:

 A Single Team to Handle the Whole DevOps Workload

 

 

Most organizations make this mistake – they rely on a single, dedicated team to support all DevOps functions. Your already overburdened development and operations crews have to communicate and coordinate with the rest of the company; adding a separate team for this purpose only adds to the confusion.

The thing is, DevOps began with the idea of enhancing collaboration between the teams involved in software development. So it covers more than just development and operations; security, management, quality control, and so on must be part of the picture too. Thus, the simpler and more straightforward you keep things within your company, the better.

Instead of adding a dedicated team for all DevOps functions, work on your company culture. Focus more on automation, stability, and quality. For example, start a dialogue with your company regarding architecture or the common issues plaguing production environments. This will inform the teams about how their work affects one another.  Developers must realize what goes on once they push code, and how operations often have a hard time maintaining an optimum environment. The operations team, on the other hand, should try to avoid becoming a blocker through the automation of repeatable tasks.

 

 Greater Attention to Culture Than Value

Though it’s a bit contrary to the last point, DevOps isn’t all about organizational culture. Sure, it requires involvement from the company leadership as well as a buy-in from every employee, but they don’t understand the benefits until they have an individual “aha” experience and discover the value. And that happens only when they have a point of comparison. Numbers help with this.

 

 

Start paying more attention to measurable aspects. When reading the DevOps report, check the four key metrics – lead time for changes, deployment frequency, change failure rate, and mean time to recover. A higher deployment frequency means smaller changes per release, which minimizes the risk of each one. Shorten the time needed to deliver value to consumers once the code is pushed. When you do experience failure, reduce both the recovery time and the rate of failure. The truth is, culture isn't something that can be measured, and in the end your customers won't have much interest in the inner workings of your company. They will, however, show an interest in visible and tangible results.
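To make these numbers concrete, here is a minimal sketch of how the four metrics could be computed from deployment records exported from a CI/CD or incident-tracking tool. The field names and sample records are hypothetical; adapt them to whatever your tooling actually emits.

```python
from datetime import datetime

# Hypothetical deployment records; in practice you would pull these from your
# CI/CD and incident-tracking tools. Field names here are illustrative only.
deployments = [
    {"committed": datetime(2018, 3, 1, 9, 0), "deployed": datetime(2018, 3, 1, 15, 0),
     "failed": False},
    {"committed": datetime(2018, 3, 2, 10, 0), "deployed": datetime(2018, 3, 3, 11, 0),
     "failed": True, "restored": datetime(2018, 3, 3, 12, 30)},
]
window_days = 30  # reporting window the records were collected over

# Deployment frequency: deployments per day over the window.
deployment_frequency = len(deployments) / window_days

# Lead time for changes: average hours from commit to running in production.
lead_time_hours = sum((d["deployed"] - d["committed"]).total_seconds() / 3600
                      for d in deployments) / len(deployments)

# Change failure rate: share of deployments that caused a production failure.
failures = [d for d in deployments if d["failed"]]
change_failure_rate = len(failures) / len(deployments)

# Mean time to recover: average hours from a failed deployment to restoration.
mttr_hours = (sum((d["restored"] - d["deployed"]).total_seconds() / 3600
                  for d in failures) / len(failures)) if failures else 0.0

print(f"deploys/day: {deployment_frequency:.2f}, lead time: {lead_time_hours:.1f} h, "
      f"failure rate: {change_failure_rate:.0%}, MTTR: {mttr_hours:.1f} h")
```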

 

 Selecting an Architecture That Deters Change

 

 

Software that cannot evolve or change easily presents some interesting challenges. If parts of your system cannot be deployed independently, shipping changes becomes difficult. An architecture that isn't loosely coupled is hard to adapt. Teams face this problem when deploying large systems: because little time was spent making parts independently deployable, everything has to be deployed together, and deploying a single part on its own risks breaking the system.

However, know that DevOps is more than simple automation. It aims to decrease the time you spend deploying applications. Even when deployment is automated, if it still takes a long time, customers will never experience the value of that automation.

This mistake can be avoided by investing a bit of time in the architecture. Simply understand how the parts can be deployed independently. Do not undertake the effort of defining every little detail, either. Rather, postpone a few decisions until a later, more opportune moment, when you know more. Allow the architecture to evolve.

 

Lack of Experimentation in Production

 

In the field of software, companies used to try and get everything right ahead of releasing it to production. Nowadays though, thanks to automation and culture change, it’s easier to get things into production. Thanks to unprecedented speed and consistency, new changes are easily releasable numerous times a day. But people make the mistake of not harnessing the true power of DevOps tooling for experiments in production.

Reaching the production stage is always laudable, but that doesn't mean the company should stop experimenting and testing in production. Tools such as production monitoring, release automation, and feature flags let you do some genuinely useful things there. Split tests can be run to verify which layout works best for a feature, or you can conduct a gradual rollout to gauge people's reactions to something new.

The best part is, you can do all of this without obstructing the pipeline for changes that are still on their way. Harnessing the full power of DevOps means letting actual production data feed back into the development process in a closed loop.
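As an illustration, here is a minimal feature-flag sketch showing how a gradual rollout can bucket users deterministically while production monitoring feeds the results back. The flag names, percentages, and configuration source are hypothetical; real deployments would load flags from a feature-flag service rather than hard-coding them.

```python
import hashlib

# Hypothetical flag configuration mapping flag name -> rollout percentage.
FLAGS = {
    "new_layout": 10,        # 10% of users see the new variant (split test)
    "bulk_export": 100,      # fully rolled out
}

def is_enabled(flag: str, user_id: str) -> bool:
    """Deterministically bucket a user into a rollout percentage."""
    rollout = FLAGS.get(flag, 0)
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100   # stable bucket in [0, 100)
    return bucket < rollout

# The same user always lands in the same bucket, so the 10% rollout can be
# widened gradually as production metrics confirm the new layout works.
if is_enabled("new_layout", "user-42"):
    print("render new layout")
else:
    print("render old layout")
```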

 

Too Much Focus on Tooling

While some tools help with DevOps practice, using them doesn’t mean you’re doing DevOps. New tools are coming to the forefront all the time, which means you now have different tools for deployment, version control, continuous integration, orchestrators, and configuration management. A lot of vendors will say they have the perfect tool for DevOps implementation. However, no single tool can possibly cover all your requirements.

 

So, adopt an agnostic outlook towards tools. Know that there will always be a better way of doing things, and newer tools will be adopted as time passes. Use tools to free up more time for the things that provide customers with real value. Develop a mindset of delivering value to end users at every step, and consider your job done only when your customers' expectations are met after delivery.

 

Even the smallest DevOps issue can affect other functions of your company if you do not make the effort to correct it. Focus on the right aspects of DevOps and keep refining your techniques for smoother, faster deployments.

The Most Important Tech Trends to Track Throughout 2018

2017 was a roller coaster of a year; it’s breathtaking how the time to market for technologies to create observable impact is shrinking year after year. In the year that went by, several new technologies became mainstream, and several concepts emerged out of tech labs in the form of features within existing technologies.

In particular, the industrial IoT and AI-based personal assistance spaces expanded manifold in 2017, data as a service continued its rollicking growth, and connected living via smart devices appeared to be a strategic focus for most technology power players. Efforts to curb misinformation on the web also gained prominence.

Artificial intelligence, the blockchain, industrial IoT, mixed reality (AR and VR), cybersecurity– there’s no dearth of buzzwords, really. The bigger question here is – which of these technologies will continue to grow their scope and market base in 2018, and which new entrants will emerge?

Let’s try to find out the answers.

 

The Changing Dynamics of Tech Companies and Government Regulators

Indications are too prominent to ignore now: there's increasing pushback from governments, along with attempts to influence the scope of technological innovations. The power and control of technology in human life is well acknowledged, and naturally, governments feel the need to stay in the mind space of tech giants as they innovate further. With concerns that smart home devices are 'tapping' your conversations around the clock, the end user community has reason enough to be anxious.

GDPR will come into force in mid-2018, and the first six months after that will be pretty interesting to watch. The extent and intensity of penalties, the emergence of GDPR compliance services, the distinct possibility of similar regulations emerging in other geographies – all these will be important aspects for everyone to track. The net neutrality debate will also continue, and some of its impact will be visible on the ground. Whether that will be for better or for worse for the World Wide Web is something we might be in a good position to tell by the end of 2018.

The ‘People’ Focused Tech Business

The debate around the downsides of technology in terms of altering core human behaviour is getting louder. Call it the aftermath of Netflix's series Black Mirror, which explores the fabric of a future world where the best of technology and the worst of human behaviour fuse together. Expect the 'people' side of technology businesses to evolve more quickly throughout this year.

Community-based tech businesses, for instance, will get a lot of attention from tech investors. Take for example businesses such as co-working spaces with particular attention on specific communities, such as women entrepreneurs, innovators who’re dedicated to research in a specific technology, people with special requirements and who’re differently abled.

Also, AI algorithms that make humans more powerful instead of removing them from the equation will come to the fore. Take, for instance, Stitch Fix, an AI-powered personal shopping service that enables stylists to make more customized and suitable suggestions to customers.

Blockchain and IoT Meet

For almost 5 years now, IoT has featured on every list of potentially game-changing technologies, and for good reason. There are, however, two concerns.

How quickly will business organizations be able to translate innovation in IoT into tangible business use cases?

How confident can businesses be about the massive data that will be generated via their connected devices, every day?

Both these concerns can be addressed to a great extent by something that’s being termed BIoT (that’s blockchain Internet of Things).

BIoT is ready to usher in a new era of connected devices. Companies, for instance, will be able to track package shipments and take concrete steps towards building smart cities where connected traffic lights and energy grids make human lives more organized. When retailers, regulators, transporters, and analysts have access to shared data from millions of sensors, the collective and credible insights will help them do their jobs better. And the blockchain's distributed, tamper-evident ledger will make that shared data far more difficult to manipulate.

 

Bots

Yes, bots. We’ve almost become used to bots answering our customer service calls. Why is this technology, then, a potential game-changer for the times to come? Well, that’s because of the tremendous potential for growth that bots have.

Bots are the outcome of two key technologies coming together – natural language processing (NLP) and machine learning (ML). Individually, there's a lot of growth happening in both of these technologies, which means that bots are growing alongside them.

Because of the noteworthy traction of chatbots in 2017, businesses are very likely to put their money in chatbots over apps in 2018. From chatbots that give you tailor-made financial advice to those that tell you which wine would go well with your chosen pizza, the months to follow will bring a lot of exciting value adds from this space.

Quantum Computing: From Sci-Fi to Proof of Concept

Let's face it; quantum computing has always been the stuff of science fiction movies, never anything tangible. The research activity in this space, however, hasn't slackened a bit. Innovators are, in fact, at a stage where quantum computing is no longer just a concept. The promise of outperforming traditional supercomputers might not be an empty one after all. Tech giants are working hard to improve their qubit computing power while keeping error probability at a minimum. 2018 has every reason to be the year when quantum computing becomes a business-technology buzzword.

 

Concluding Remarks

The pace of disruption of a tech trend is moderated by government regulations, price wars among competing tech giants, and cybersecurity, among other factors. Ultimately, the only thing we can say for certain is that by the time this year draws to an end, we will all be living in ways different from today. It's very likely that at the core of these changes will be one or more of the technology trends discussed in this guide.

 

 

Author – Rahul Sharma

Top 5 Use Cases For Machine Learning in The Enterprise


Artificial intelligence can be loosely defined as the science of mimicking human behavior. Machine learning is the specific subset of AI that trains a machine how to learn. The concept emerged from pattern recognition and the theory that computers can learn without being explicitly programmed to complete certain tasks. Cheaper, more powerful computational processing, growing volumes of data, and affordable storage have taken deep learning from research papers and labs to real-life applications. However, all the media attention and hype surrounding AI has made it extremely difficult to separate exciting futuristic predictions from pragmatic, real-world enterprise applications. To avoid being caught up in the hype of technical implementation, CIOs and other tech decision makers have to build a conceptual lens and look at the various areas of their company that can be improved by applying machine learning. This article explores some of the practical use cases of machine learning in the enterprise.

1. Process Automation

Intelligent process automation (IPA) combines artificial intelligence and automation. It involves diverse uses of machine learning, from automating manual data entry to more complex use cases like automating insurance risk assessments. ML is suited to any scenario where human decisions are made within set constraints, boundaries, or patterns. Thanks to cognitive technologies like natural language processing, machine vision, and deep learning, machines can augment traditional rule-based automation and, over time, learn to do these tasks better as they adapt to change. Most IPA solutions already utilize ML-powered capabilities beyond simple rule-based automation. The business benefits extend well past cost savings and include better use of costly equipment and highly skilled employees, faster decisions and actions, service and product innovations, and overall better outcomes. By taking over rote tasks, machine learning in the enterprise frees up human workers to focus on product innovation and service improvement, allowing the company to transcend conventional performance trade-offs and achieve unparalleled levels of quality and efficiency.

2. Sales Optimization

Sales typically generates a lot of unstructured data that is ideal for training machine learning algorithms. This is good news for enterprises that have been saving consumer data for years, because sales is also the area with the most potential for immediate financial impact from implementing machine learning. Enterprises eager to gain a competitive edge are applying ML to both marketing and sales challenges in order to accomplish strategic goals. Popular marketing techniques that rely on machine learning models include intelligent content and ad placement and predictive lead scoring. By adopting machine learning in the enterprise, companies can rapidly evolve and personalize content to meet the ever-changing needs of prospective customers. ML models are also being used for customer sentiment analysis, sales forecasting, and customer churn prediction. With these solutions, sales managers are alerted in advance to specific deals or customers that are at risk.
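As an illustration of the churn-prediction use case, here is a minimal scikit-learn sketch; the CSV file and its column names are hypothetical placeholders for whatever behavioural features a real CRM export would contain.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Hypothetical CRM export with columns: tenure_months, monthly_spend,
# support_tickets, churned (0/1). Substitute your real feature set.
df = pd.read_csv("customers.csv")
X = df[["tenure_months", "monthly_spend", "support_tickets"]]
y = df["churned"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Simple baseline model; evaluate discrimination with ROC AUC on held-out data.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))

# Score current accounts so sales managers can be alerted to at-risk customers.
df["churn_risk"] = model.predict_proba(X)[:, 1]
print(df.sort_values("churn_risk", ascending=False).head(20))
```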

3. Customer Service

Chatbots and virtual digital assistants are taking over the world of customer service. Because of the high volume of customer interactions, the massive amounts of data captured and analyzed are ideal training material for fine-tuning ML algorithms. Artificial intelligence agents are now capable of recognizing a customer query and suggesting the appropriate article for a swift resolution. This frees up human agents to focus on more complex issues, subsequently improving the efficiency and speed of resolutions. Adopting machine learning in the enterprise can have a substantial impact on routine customer service tasks. Juniper Research maintains that chatbots will create $8 billion in annual cost savings by 2022. According to a 2017 PwC report, 31 percent of enterprise decision makers believe that virtual personal assistants will significantly impact their business, more than any other AI-powered solution. The same report found that 34 percent of executives say the time saved by using virtual assistants allowed them to channel their focus towards deep thinking and creativity.

4. Security

Machine learning can help enterprises improve their threat analysis and how they respond to attacks and security incidents. ABI Research analysts estimate that machine learning in data security will drive spending in analytics, big data, and artificial intelligence to $96 billion by 2021. Predictive analytics enables the early detection of infections and threats, while behavioral analytics ensures that anomalies within the system do not go unnoticed. ML also makes it easy to monitor millions of data logs from mobile and other IoT-capable devices and to generate profiles for the varying behavioral patterns within your IoT ecosystem. This way, previously stretched security teams can easily detect even the slightest irregularities. Organizations that embrace a risk-aware mindset are better positioned to capture a leading position in their industry, navigate regulatory requirements, and disrupt their industries through innovation.
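For a concrete (if simplified) picture of the behavioral-analytics idea, the sketch below trains an isolation forest on synthetic per-device log features; the feature set, values, and contamination rate are purely illustrative.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-device features derived from logs: requests/min, bytes
# transferred, distinct destinations, failed logins. Synthetic data only.
rng = np.random.default_rng(0)
normal = rng.normal(loc=[100, 5_000, 8, 1], scale=[10, 500, 2, 1], size=(1000, 4))
suspect = np.array([[400, 80_000, 60, 25]])   # an obviously unusual device
features = np.vstack([normal, suspect])

# Fit an unsupervised anomaly detector on the observed behaviour profiles.
detector = IsolationForest(contamination=0.01, random_state=0).fit(features)
labels = detector.predict(features)           # -1 = anomaly, 1 = normal

print("Flagged rows:", np.where(labels == -1)[0])
```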

5. Collaboration

The key to getting the most out of machine learning in the enterprise lies in tapping into the capabilities of both machine learning and human intelligence. ML-enhanced collaboration tools have the potential to boost efficiency, quicken the discovery of new ideas, and lead to improved outcomes for teams that collaborate from disparate locations. Nemertes' 2018 UC and collaboration study concluded that about 41 percent of enterprises plan to use AI in their unified communications and collaboration applications. Some use cases in the collaboration space include:
• Video, audio, and image intelligence can add context to the content being shared, making it simpler for users to find the files they need. Image intelligence coupled with object detection, text recognition, and handwriting recognition improves metadata indexing for enhanced search.
• Real-time language translation facilitates communication and collaboration between global workgroups in their native languages.
• Integrating chatbots into team applications enables native-language capabilities, like alerting team members or polling them for status updates.
That is just the tip of the iceberg; machine learning offers significant potential benefits for companies that adopt it as part of their communications strategy to enhance data access, collaboration, and control of communication endpoints.

 

Author: Gabriel Lando

image courtesy of freepik.com

How to Deploy A Software Defined Network

Software Defined Network (SDN) was a bit of a buzzword throughout the early to middle of this decade. The potential of optimal network utilization promised by software-defined networking captured the interest and imagination of information technology companies quickly. However, progress was slow, because the general understanding of software-defined networking wasn’t up to the mark, which caused enterprises to make wrong choices and unsustainable strategic decisions upfront.

 

Where Does SDN Come Into the Picture?

SDN is still a nascent concept for several companies. The virtualization potential that SDN offers for networks calls on IT leaders to improve their understanding of this software-heavy approach to network resource management. We hope this guide helps.

What is Software Defined Networking, After All?

You already know and appreciate how software-managed virtual servers and storage make computing resource management more agile and dynamic for enterprises. Imagine the benefits enterprises could enjoy if the same capabilities were extended to your company's network hardware. That's what software-defined networking offers.

SDN is about adding a software control layer on top of the hardware layer in your company's network infrastructure. This allows network administrators to route network traffic according to sophisticated business rules. These rules can then be pushed out to network switches and routers, so administrators don't have to depend solely on per-device hardware configuration to manage network traffic.
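To make the idea concrete, here is a minimal sketch of pushing a business rule to a controller's northbound REST API. The URL, credentials, and JSON schema are entirely hypothetical – every controller (OpenDaylight, ONOS, vendor platforms) exposes its own API – so treat this only as an illustration of the pattern.

```python
import requests

# Hypothetical northbound API endpoint of an SDN controller.
CONTROLLER = "https://sdn-controller.example.com/api"

# A business rule expressed as data rather than per-device configuration:
# prioritise VoIP signalling traffic on the branch-office routers.
rule = {
    "name": "prioritise-voip",
    "match": {"dst_port": 5060, "protocol": "udp"},
    "action": {"queue": "high-priority"},
    "scope": ["branch-office-routers"],
}

resp = requests.post(f"{CONTROLLER}/flow-rules", json=rule,
                     auth=("admin", "secret"), timeout=10)
resp.raise_for_status()
print("Rule installed:", resp.json())
```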

This sounds easy in principle. Ask any network administrator, though, and they will tell you that it's really difficult to implement, particularly in companies with mature, stabilized networking infrastructure and processes.

 

 

 

SDN Implementations Demand Upgrades in Network Management Practices

An almost immediate outcome of an SDN implementation will be your enterprise's ability to quickly serve network resource demands through software. To maintain transparency, the networking team needs to evaluate the corresponding changes required in, say, the end-of-day network allocation and utilization reports. This is just one of many examples where every SDN-linked process improvement needs to be matched by equivalent adjustments in related and linked processes.

 

 

Managing De-provisioning Along the Way

At the core of SDN implementations is the enterprise focus on optimizing network usage and managing on-demand network resource requests with agility. While SDN implementations help companies achieve these goals fairly quickly, they often also cause unintended network capacity issues. Among the most common reasons is that SDN engineers forget to implement rules for de-provisioning networks once the sudden surge in demand has passed. By building de-provisioning in as the last logical step of every on-demand resource allocation request, networking teams can make sure that SDN doesn't become an unintentional cause of network congestion.
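One simple way to guarantee that de-provisioning always happens is to wrap provisioning in a construct that releases the allocation even if the workload fails. The sketch below assumes a hypothetical provisioning client; substitute whatever API your SDN platform actually exposes.

```python
from contextlib import contextmanager

# Hypothetical provisioning client standing in for your SDN platform's API.
class NetworkClient:
    def provision(self, segment: str, mbps: int) -> str:
        print(f"provisioned {mbps} Mbps on {segment}")
        return "alloc-123"

    def deprovision(self, allocation_id: str) -> None:
        print(f"released {allocation_id}")

@contextmanager
def on_demand_capacity(client: NetworkClient, segment: str, mbps: int):
    """Guarantee that every on-demand allocation ends with de-provisioning."""
    allocation_id = client.provision(segment, mbps)
    try:
        yield allocation_id
    finally:
        # De-provisioning is the last logical step, even if the workload fails.
        client.deprovision(allocation_id)

with on_demand_capacity(NetworkClient(), "analytics-vlan", 500):
    pass  # run the burst workload here
```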

 

Pursue 360-Degree Network Performance Visibility

It’s unlikely that your company will go for a complete overhaul of its network management systems and processes. So, it’s very likely that the SDN implementation will be carried out in a phased manner. Some of the key aspects of managing this well are:

  • Always evaluate the ease with which your existing network performance monitoring tools will allow SDN to plug into them.
  • Look for tools whose APIs allow convenient integration with SDN platforms
  • Evaluate how your current network performance management tools will be able to manage and integrate data from non-SDN and SDN sources.

Note – because hybrid SDN (a balance of traditional and software-defined networking) is the practical approach for most enterprises, implementations must accommodate the enterprise's baseline performance monitoring goals. In fact, the introduction of SDN often requires networking teams to improve performance monitoring and reporting practices so that concrete, business-process-specific improvements can be measured and reported.

 

 

Is SDN an Enterprise Priority Already?

The basic reason why SDN is making its way into IT strategy discussions even for SMBs is that the nature of business traffic has changed tremendously. Systems have moved to the cloud-computing model, and there's a lot of focus on mobile accessibility of these systems.

In times when systems operated mostly in a client-server configuration, the basic tree structure of Ethernet switches worked well. Enterprise network requirements today, however, demand more. SDN is particularly beneficial in enabling access to public and private cloud-based services.

SDN also augurs well for another very strong enterprise movement – the one towards mobility. That's because, with SDN, network administrators can easily provision resources for new mobile endpoints while taking care of security considerations. Also, enterprise data volumes and information needs will only grow. Managing network optimization with many virtual machines and servers in play would traditionally require tremendous investment. SDN makes it more manageable, even from a financial perspective.

 

Understand and Acknowledge Security Aspects of SDN

Make no assumptions. SDN is a major change in the way your company’s network works. There are specific known risks of SDN implementations that consultants and vendors from this sphere will help you prepare for.

Protocol weaknesses are right at the top. A crucial question for the application security and network security teams to work together on is – do our application security routines accommodate the needs of protocols used in the SDN platform? Another key security-related aspect is to devise measures to prevent SDN switch impersonation.

 

Choosing External Vendors

The success of an SDN implementation is measured in terms of the positive impact it has on business use cases. If and when you initiate discussions with external consultancies and vendors for your enterprise SDN implementation, make sure you evaluate them not only on their SDN knowledge but also on their ability to understand your business application ecosystem. This helps them implement SDN platforms that accommodate complex, highly sophisticated business rules for network resource allocation, which in turn significantly improves the project's chances of meeting all its goals.

 

Concluding Remarks

If SDN is on the strategic roadmap being followed by your enterprise, there’s a lot you can help with. Start with the tips and suggestions shared in this guide.

 

 

Author: Rahul Sharma

Personal Data Breach Response Under GDPR


Data security is at the heart of the upcoming General Data Protection Regulation (GDPR). It sets strict obligations on data controllers and processors in matters pertaining to data security while also providing guidance on best data security practices. And for the first time, the GDPR will introduce specific breach notification guidelines. With only a few months to go until the new regulation comes into effect, businesses should begin focusing on data security – not just because of the costs and reputational damage a personal data breach can lead to, but also because the GDPR introduces a new data breach notification regime that mandates the reporting of certain breaches to affected individuals and data protection authorities.

What Constitutes a Personal Data Breach Under GDPR?

The GDPR describes a personal data breach as a security breach that leads to the unlawful or accidental loss, destruction, alteration, or unauthorized disclosure of personal data that is stored, processed, or transmitted. A personal data breach is by all means a security incident; however, not all security incidents are subject to the same strict reporting requirements as a personal data breach. Despite the broad definition, this distinction is not unusual in data security laws that require breach reporting – HIPAA, for example, makes the same distinction at the federal level for medical data. It aims to prevent data protection regulators from being overwhelmed with breach reports.

By limiting breach notifications to personal data (EU-speak for personally identifiable information, or PII), incidents that solely involve the loss of company data or intellectual property do not have to be reported. The threshold for whether an incident has to be reported to a data protection authority depends on the risk it poses to the individuals involved. High-risk situations are those that can potentially lead to significant detriment – for example, financial loss, discrimination, damage to reputation, or any other significant social or economic disadvantage.

…it should be quickly established whether a personal data breach has occurred and to promptly notify the supervisory authority and the data subject.

– Recital 87, GDPR

If an organization is uncertain about who has been affected, the data protection authority can advise and, in certain situations, instruct it to immediately contact the affected individuals if the security breach is deemed to be high risk.

What Does The GDPR Require You to Do?

Under the GDPR, the roles and responsibilities of processors and data controllers have been separated. Controllers are obliged to engage only processors capable of providing sufficient assurances that they will implement appropriate organizational and technical measures to protect the rights of data subjects. In the event of a data breach that affects the rights and freedoms of said data subjects, the organization should report it without undue delay and, where practicable, within 72 hours of becoming aware of it.

The data processor is mandated to notify the controller the moment a breach is discovered, but has no other reporting or notification obligation under the GDPR. However, the 72-hour deadline begins the moment the processor becomes aware of the data breach, not when the controller is notified of the breach. A breach notification to a data protection authority has to at least:

  1. Have a description of the nature of the breach, which includes the categories and number of data subjects affected.
  2. Contain the data protection officer’s (DPO) contact information.
  3. Have a description of the possible ramifications of the breach.
  4. Have a description of steps the controller will take to mitigate the effect of the breach.

The information can be provided in phases if it is not available all at once.
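As a rough illustration of how the minimum notification content and the 72-hour window might be tracked internally, here is a small Python sketch; all field names, contact details, and sample values are hypothetical and do not come from the regulation's text.

```python
from datetime import datetime, timedelta

REPORTING_WINDOW = timedelta(hours=72)   # window counted from becoming aware

def build_notification(breach):
    """Assemble the minimum content listed above for a supervisory authority."""
    deadline = breach["detected_at"] + REPORTING_WINDOW
    return {
        "nature_of_breach": breach["description"],
        "data_subject_categories": breach["categories"],
        "approx_subjects_affected": breach["subject_count"],
        "dpo_contact": "dpo@example.com",               # placeholder contact
        "likely_consequences": breach["consequences"],
        "mitigation_measures": breach["mitigations"],
        "report_by": deadline.isoformat(),
        "overdue": datetime.utcnow() > deadline,
    }

notification = build_notification({
    "detected_at": datetime(2018, 5, 30, 14, 0),
    "description": "Unauthorised access to a customer database",
    "categories": ["customers"],
    "subject_count": 1200,
    "consequences": "Possible exposure of contact details",
    "mitigations": "Credentials rotated, access revoked, forensic review started",
})
print(notification)
```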
If the controller determines that the personal data breach can potentially put the rights and freedoms of individuals at risk, it has to communicate information regarding the breach to the data subjects without undue delay. The communication should plainly and clearly describe the nature of the personal data breach and at least:

  1. Contain the DPO’s contact details or a relevant contact point.
  2. Have a description of the possible ramifications of the breach.
  3. Have a description of measures proposed or taken to mitigate or address the effects of the breach.

The main exception is when the personal data has been encrypted and the decryption key has not been compromised; in that case, there is no need for the controller to notify the data subjects.

The ideal way for companies to handle this GDPR obligation is not only to minimize breaches, but also to establish policies that facilitate risk assessment and demonstrate compliance.

The GDPR stipulates that records must be kept of every personal data breach, regardless of whether the breach needs to be reported. Said records have to contain the details of the breach, its consequences and effects, and the follow-up actions taken to remedy the situation.

Should Ransomware Attacks Be Reported?

Ransomware typically involves the 'hijacking' of corporate data via encryption, with payment demanded in order to decrypt the ransomed data. Under the GDPR, a ransomware attack may be categorized as a security incident, but it does not necessarily cross the threshold of a personal data breach. A ransomware attack would only be considered a personal data breach if there is no backup at all, or if there is a backup but the outage directly impacts users' rights and freedoms. Generally, a ransomware attack where the ransomed data can be quickly recovered does not have to be reported.

What Are the Consequences of Non-Compliance?

A failure to comply with the GDPR's breach reporting requirements will not only result in negative PR, constant scrutiny, and possibly loss of business; it will also attract an administrative fine of up to €10 million or up to two percent of total global annual turnover for the preceding financial year. Additionally, failure to notify the supervising authority may be indicative of systematic security failures, which would constitute an additional breach of the GDPR and attract further fines. The GDPR does list factors the supervising authority should consider when imposing fines, chief among them the degree of cooperation by the data controller with the protection authority.

In Closing

Data breach notification laws have already been firmly established in the U.S. These laws are designed to push organizations to improve their efforts in detecting and deterring data breaches. The regulators' intention is not to punish, but to establish a trustworthy business environment by equipping organizations to deal with security issues.

Author: Gabriel Lando

image courtesy of freepik

FileCloud High Availability Architecture

Enterprise Cloud Infrastructure is a Critical Service

The availability of enterprise-hosted cloud services has opened up huge potential for companies to manage files effectively. Files can be stored, shared, and exchanged within the enterprise and with partners efficiently, while keeping existing security and audit controls in place. The service provides the power and flexibility of a public cloud while maintaining control over the data.

The main challenge of enterprise-hosted cloud services is guaranteeing high uptime (in the order of seven nines) while maintaining a high quality of service. The dependency on such services means that any disruption can have a significant productivity impact. Enterprise cloud services typically consist of multiple different services working together, so any high availability architecture must ensure that all critical services have redundancy built into them to be effective. Moreover, detection and handling of failures must be reasonably quick and must not require any user interaction.

FileCloud Enterprise Cloud

FileCloud enables enterprises to seamlessly access their data using a variety of external agents. The agents can be browsers, mobile devices, or client applications, while the data FileCloud makes accessible can be stored locally, on internal NAS devices, or in public cloud locations such as AWS S3 or OpenStack Swift.

Depending on specific enterprise requirements, a FileCloud solution may deploy multiple software services such as the FileCloud Helper service, Solr service, virus scanner service, Open Office service, etc. Moreover, FileCloud may use enterprise identity services such as Active Directory, LDAP, or ADFS. A failure in any of these services can impact the end user experience.

High Availability Architecture

The FileCloud solution can be implemented using the classic three-tier high availability architecture. Tier 1 is a web tier made up of load balancers and access control services. Tier 2 consists of stateless application servers; in a FileCloud implementation, this layer is made up of Apache nodes and helper services. Tier 3 is the database layer. Other dependencies such as Active Directory or data servers are not addressed here. The advantage of this architecture is the separation of stateless components from stateful components, allowing great flexibility in deploying the solution.

Tier 1 – Web Tier

Tier 1 is the front end of the deployment and acts as the entry point for all external clients. The components in Tier 1 are stateless and primarily forward requests to the web servers in Tier 2. Scaling the web tier is a matter of adding or removing load balancer instances, since they are stateless, and each application server node is capable of handling any request. This layer can also be configured to perform SSL offloading, allowing lighter-weight communication between Tier 1 and Tier 2, and to provide simple affinity based on source and destination addresses. Traffic is forwarded only to healthy application server nodes; the layer monitors the available application servers and automatically distributes traffic depending on load.
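The sketch below illustrates, in much-simplified form, the two jobs this tier performs: health-checking the stateless application servers and spreading requests across the healthy ones. The hostnames and the /health endpoint are placeholders, not part of the actual FileCloud configuration.

```python
import itertools
import requests

# Placeholder application server pool and health endpoint.
APP_SERVERS = ["http://app1.internal", "http://app2.internal", "http://app3.internal"]
_rr = itertools.count()

def healthy_nodes():
    alive = []
    for node in APP_SERVERS:
        try:
            if requests.get(f"{node}/health", timeout=2).status_code == 200:
                alive.append(node)
        except requests.RequestException:
            pass  # unhealthy node is skipped until the next check
    return alive

def forward(path):
    nodes = healthy_nodes()
    if not nodes:
        raise RuntimeError("no healthy application servers available")
    node = nodes[next(_rr) % len(nodes)]  # simple round-robin across healthy nodes
    # Any node can serve any request because the application tier is stateless.
    return requests.get(f"{node}{path}", timeout=10)
```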

Tier 2 – Application Servers

Tier 2 in FileCloud deployment consists of the following services

  • Apache servers
  • FileCloud helper
  • Antivirus service
  • Memcache service
  • Open Office service

The Apache servers in FileCloud do not store any state information. They do cache data for faster performance (for example, converting and caching documents for display), but they primarily execute application code to service requests, and all state-specific data is stored in database tables, so the application layer itself remains stateless. If an application server node fails, the request can be handled by a different node (provided the client retries the failed request). Capacity can be increased or reduced, automatically or manually, by adding or removing Apache server nodes.

FileCloud helper service provides additional capabilities such as indexed search, NTFS permission retrieval etc.  FileCloud Helper is a stateless service and therefore can be added or removed as needed.

Similar to FileCloud helper service, the Antivirus service is also a stateless service providing antivirus capability to FileCloud. Any file that is uploaded to Filecloud is scanned using this service.

The Memcache service is an optional, stateless service that is required only if local storage encryption is used. It is started on the same node as the Apache service.

The Open Office service is an optional service required for creating document previews in the browser. This service is stateless and is started on the same node as the Apache server.

Tier 3 – Database Nodes

Tier 3 consists of stateful services, including the following:

  • MongoDB servers
  • Solr Servers

The high availability approach for each of these servers varies depending on the complexity of the deployment, and the failure of these services can have limited or system-wide impact. For example, a MongoDB server failure results in a FileCloud-wide outage and is critical, while a FileCloud Helper server failure only impacts a portion of functionality, such as network folder access.

MongoDB Server High Availability

MongoDB servers store all application data in FileCloud and provide high availability using replica sets. A MongoDB replica set provides redundancy and increases data availability by keeping multiple copies of the data on different database servers; replication also provides fault tolerance against the loss of a single database server, and MongoDB can additionally be configured to increase read capacity. The minimum configuration for MongoDB server HA is a three-node member set (it is also possible to use two nodes plus one arbiter). If the primary MongoDB node fails, one of the secondary nodes takes over and becomes primary.

The heartbeat interval can be tuned depending on system latency. It is also possible to set up the replica set to allow reads from secondaries to improve read capacity.
[Figure: HA architecture – primary and secondary MongoDB nodes]
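For illustration, here is a minimal sketch of how an application-tier component might connect to such a replica set with PyMongo; the hostnames, replica set name, and database name are placeholders rather than FileCloud's actual configuration.

```python
from pymongo import MongoClient, ReadPreference

# Connect to a three-member replica set; placeholder hosts and set name.
client = MongoClient(
    "mongodb://mongo1.internal:27017,mongo2.internal:27017,mongo3.internal:27017",
    replicaSet="rs0",
    serverSelectionTimeoutMS=5000,
)

# Writes always go to the primary; after a failover the driver re-routes
# automatically to the newly elected primary.
db = client.get_database("filecloud_db")
db.audit.insert_one({"event": "health-check"})

# Optionally read from secondaries to improve read capacity, as noted above.
reporting = client.get_database("filecloud_db",
                                read_preference=ReadPreference.SECONDARY_PREFERRED)
print(reporting.audit.estimated_document_count())
```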

Putting It All Together

The three-tier structure for the FileCloud components is shown below; the actual configuration information is available in FileCloud support documentation. This provides a robust FileCloud implementation with high availability and extensibility. As new services are added to extend functionality, the appropriate tier can be decided based on whether they are stateless or store state. The stateless (Tier 2) nodes can be added or removed without disrupting service, while Tier 3 nodes store state and require a specific implementation depending on the type of service.
[Figure: FileCloud three-tier HA architecture]

Alternative to WatchDox – Why FileCloud is better for Business File Sharing?


FileCloud competes with WatchDox for business in the Enterprise File Sync and Share (EFSS) space. Before we get into the details, I believe an ideal EFSS system should work across all the popular desktop OSes (Windows, Mac, and Linux) and offer native mobile applications for iOS, Android, Blackberry, and Windows Phone. In addition, the system should offer all the basics expected of EFSS: Unlimited File Versioning, Remote Wipe, Audit Logs, Desktop Sync Client, Desktop Map Drive and User Management.

The feature comparisons are as follows:

Features compared (FileCloud vs. WatchDox):

  • On Premise
  • File Sharing
  • Access and Monitoring Controls
  • Secure Access
  • Document Preview
  • Document Edit
  • Outlook Integration
  • Role Based Administration
  • Data Loss Prevention
  • WebDAV
  • Endpoint Backup
  • Amazon S3/OpenStack Support
  • Public File Sharing
  • Customization, Branding
  • SAML Integration
  • Anti-Virus
  • NTFS Support
  • Active Directory/LDAP Support
  • Multi-Tenancy
  • API Support
  • Application Integration via API
  • Large File Support
  • Network Share Support (WatchDox: requires an additional product)
  • Mobile Device Management
  • Desktop Sync (FileCloud: Windows, Mac, Linux; WatchDox: Windows, Mac)
  • Native Mobile Apps (FileCloud: iOS, Android, Windows Phone; WatchDox: iOS, Android)
  • Encryption at Rest
  • Two-Factor Authentication
  • File Locking
  • Pricing for 20 users/year (FileCloud: $999; WatchDox: $3,600)

From the outside looking in, the offerings all look similar. However, the approaches differ completely in how they satisfy the enterprise's primary need: easy access to files without compromising privacy, security, and control. The fundamental areas of difference are as follows:

Feature benefits of FileCloud over WatchDox

Unified Device Management Console – FileCloud's unified device management console provides simplified access for managing the mobile devices allowed to access enterprise data, irrespective of whether the device is enterprise-owned or employee-owned, and regardless of mobile platform or device type. Manage and control thousands of iOS and Android devices from FileCloud's secure, browser-based dashboard. FileCloud's administrator console is intuitive and requires no training or dedicated staff. FileCloud's MDM works on any vendor's network – even if the managed devices are on the road, at a café, or used at home.

Amazon S3/OpenStack Support – Enterprises wanting to use Amazon S3 or OpenStack storage can easily set it up with FileCloud. This feature not only gives the enterprise the flexibility to switch storage back ends, it also makes that switch very easy.

Embedded File Upload Website Form – FileCloud’s Embedded File Upload Website Form enables users to embed a small FileCloud interface onto any website, blog, social networking service, intranet, or any public URL that supports HTML embed code. Using the Embedded File Upload Website Form, you can easily allow file uploads to a specific folder within your account. This feature is similar to File Drop Box that allows your customers or associates to send any type of file without requiring them to log in or to create an account.

Multi-Tenancy Support – The multi-tenancy feature allows Managed Service Providers (MSPs) to serve multiple customers using a single instance of FileCloud. The key value proposition of FileCloud's multi-tenant architecture is that data separation among different tenants is maintained, and every tenant has the flexibility of customized branding.

NTFS Shares Support – Many organizations use NTFS permissions to manage and control access to internal file shares. It is very hard to duplicate these access permissions in other systems and keep them in sync. FileCloud enables access to internal file shares via web and mobile while honoring the existing NTFS file permissions. This functionality is a great time saver for system administrators and provides a single point of management.

Conclusion

Based on our experience, enterprises that look for an EFSS solution want two main things. One, easy integration to their existing storage system without any disruption to access permissions or network home folders. Two, ability to easily expand integration into highly available storage systems such as OpenStack or Amazon S3.

WatchDox neither provides OpenStack/Amazon S3 storage integration support nor NTFS share support. On the other hand, FileCloud provides easy integration support into Amazon S3/OpenStack and honors NTFS permissions on local storage.

With FileCloud, enterprises get one simple solution with all features bundled. For the same 20 user package, the cost is $999/year, almost 1/4th of WatchDox.

Here’s a comprehensive comparison that shows why FileCloud stands out as the best EFSS solution.

Try FileCloud For Free & Receive 5% Discount

Take a tour of FileCloud

A Primer on Windows Server Disaster Recovery

In this primer, we're going to explore some of the best ways to actively restore your Windows Server with minimal impact. Though basic, the following technical pointers will help you with faster Windows Server disaster recovery.

1. RAM and Hard Disk Check

Blue screens are Windows' way of telling you about a hardware failure, such as faulty RAM. Before taking any immediate action, such as a software repair, it is important to run a thorough RAM and hard disk check. To analyze blue screen issues, you can use the BlueScreenView tool, which can also be run from a USB drive. If you are experiencing blue screens, define the restart behavior under Control Panel -> System and Security -> System -> Advanced System Settings: go to Startup and Recovery -> Settings and disable the Automatically Restart option under System Failure. Choose Automatic memory dump or Small memory dump so that BlueScreenView can parse the memory.dmp file that gets generated. Further hard disk errors can be checked in Event Viewer under Windows Logs -> System.

2. Boot Manager Failure

Boot manager failure prevents the server from loading. A Windows Server DVD or a repair technician can help here. Another solution is to access the boot manager through the command prompt and take the necessary steps to reactivate it. To overwrite the master boot record (at the beginning of the disk), use the command bootrec /fixmbr. To find operating systems not currently listed, run bootrec /scanos. To reinstate systems in the boot manager, use bootrec /rebuildbcd, which re-adds previously installed systems to the boot manager. After this, run bootrec /fixboot to write a new boot sector. Beyond this, run bootsect /nt60 SYS followed by bootsect /nt60 ALL from the command line to repair the boot manager further.

3. Windows Startup Failure

Startup failures result from system files being displaced or corrupted after a crash, which leads to the server powering on but Windows failing to launch. One option is to do a system restore and select an earlier restore point. Another option is to open an elevated command prompt, run sfc /scannow, and allow Windows to scan and restore the files accordingly.

4. Restoring Server Backup

A server backed up to an external drive with Windows Server Backup (installable through Server Manager) can have its data restored completely. Windows Server Backup can be launched from the Tools menu in Server Manager or by searching for wbadmin.msc from the Start menu. Block-based backups are generated as a result, although it is also possible to select particular partitions from the Backup Schedule wizard. To start a full backup (restorable via the Repair Your Computer option on the installation DVD), use wbadmin start systemstatebackup from the command line, or run wbadmin start backup -allCritical -backupTarget:<insert_disk_of_choice> -quiet. This backup can then be used for restores in case of system failures: boot Windows Server from the DVD, select the Repair Your Computer option, and go to Troubleshoot -> System Image Recovery.
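If you want to drive the same wbadmin backup from a scheduled script, a minimal Python sketch might look like the following; the target volume letter is a placeholder, and the flags are exactly those mentioned above.

```python
import subprocess

TARGET = "E:"  # placeholder backup target volume

cmd = [
    "wbadmin", "start", "backup",
    f"-backupTarget:{TARGET}",
    "-allCritical",   # include everything needed for bare-metal recovery
    "-quiet",         # do not prompt for confirmation
]

# Run the backup and surface wbadmin's own output for logging.
result = subprocess.run(cmd, capture_output=True, text=True)
print(result.stdout)
if result.returncode != 0:
    raise RuntimeError(f"wbadmin failed: {result.stderr}")
```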

5. Hardware Restore

Windows Server 2008 and Windows Server 2012 have options to restore system backups to different hardware if you select the Bare Metal Recovery option. Here you need to use the Exclude Disks option, which lets you exclude a disk that is not required during the restore operation – for example, a disk containing data rather than OS files. Select Install Drivers if you wish to back up drivers within your recovery data file so they can be installed as well during a complete system restore from the initial backup point. Advanced options are also available, such as automatic system restore after disk defect verification and server restore.

6. Active Directory Backup & Restore

The native backup program within the Windows Server OS is sufficiently useful for backing up Active Directory services and restoring them. It can not only create a backup of the directory but also save all associated data necessary for it to function. To run a backup, enable the System State and System Reserved options and then back up all the data. In order to restore Active Directory, start the domain controller and press F8 until the boot menu appears (this may vary depending on the make and model of the computer). In the boot options, select Directory Services Restore Mode, log in with the Directory Services Restore Mode credentials, then complete the restore. To boot the domain controller into restore mode from within Windows, run bcdedit /set safeboot dsrepair. When finished in Directory Services Restore Mode, run bcdedit /deletevalue safeboot to boot normally again, then run shutdown -t 0 -r to reboot.

7. Active Directory Cleanup

In DNS Manager, open the server's Properties, go to the Name Servers tab, and remove the server – but be careful not to remove the host entry. Ensure the domain controller is not explicitly registered as such, then remove dependent AD services (e.g. VPN, etc.). If a global catalog exists on the server, configure a different one with the same details using the AD Sites and Services snap-in: go to Sites -> Servers -> right-click NTDS Settings -> Properties and uncheck Global Catalog on the General tab. To downgrade the domain controller, use the PowerShell Uninstall-ADDSDomainController cmdlet; add -Force if you wish to remove it completely. Metadata can be cleaned up from ntdsutil -> metadata cleanup -> connections. After cleanup, delete the domain controller from its assigned site: go to the snap-in -> Domain Controller -> select Delete. Check the NTDS settings in AD to make sure it is no longer registered with a replication partner (remove it if required).

8. Active Directory Database Rescue

Boot into Directory Services Restore Mode, run ntdsutil, then enter activate instance ntds and choose files. Enter integrity, then quit to leave file maintenance. The semantic database analysis command can produce a detailed report if you keep verbose on. Enter go fixup to start the diagnostic tool and repair the database, then quit ntdsutil and restart.

9. Backup for Win Exchange

Begin by choosing Select Application under Select Recovery Type, navigate to the Exchange option, and click View Details to see the backups. The backup is current if the Do Not Perform a Roll-Forward checkbox appears at this stage. For a roll-forward recovery, the transaction logs created during backup are required, as Exchange uses them to write to the database and complete the recovery. Enabling the Recover to Original Location option lets you restore all databases to their original locations. Beyond the system restore, the backup is integrated with the database and can also be moved back manually.

Author: Rahul Sharma

Image courtesy: Salvatore Vuono, freedigitalphotos.net

Alternative to Pydio – Why FileCloud is better for Business File Sharing?


FileCloud competes with Pydio for business in the Enterprise File Sync and Share (EFSS) space. Before we get into the details, I believe an ideal EFSS system should work across all the popular desktop OSes (Windows, Mac, and Linux) and offer native mobile applications for iOS, Android, Blackberry, and Windows Phone. In addition, the system should offer all the basics expected of EFSS: Unlimited File Versioning, Remote Wipe, Audit Logs, Desktop Sync Client, Desktop Map Drive and User Management.

The feature comparisons are as follows:

Features compared (FileCloud vs. Pydio):

  • On Premise
  • File Sharing
  • Access and Monitoring Controls
  • Secure Access
  • Document Preview
  • Document Edit
  • Outlook Integration
  • Role Based Administration
  • Data Loss Prevention
  • WebDAV
  • Endpoint Backup
  • Amazon S3/OpenStack Support
  • Public File Sharing
  • Customization, Branding
  • SAML Integration
  • Anti-Virus
  • NTFS Support
  • Active Directory/LDAP Support
  • Multi-Tenancy
  • API Support
  • Application Integration via API
  • Large File Support
  • Network Share Support
  • Mobile Device Management
  • Desktop Sync (FileCloud: Windows, Mac, Linux; Pydio: Windows, Mac, Linux)
  • Native Mobile Apps (FileCloud: iOS, Android, Windows Phone; Pydio: iOS, Android)
  • Encryption at Rest
  • Two-Factor Authentication
  • File Locking

From the outside looking in, the offerings all look similar. However, the approaches differ completely in how they satisfy the enterprise's primary need: easy access to files without compromising privacy, security, and control. The fundamental areas of difference are as follows:

Feature benefits of FileCloud over Pydio

Document Quick Edit – FileCloud’s Quick Edit feature supports extensive edits of files such as Microsoft® Word, Excel®, Publisher®, Project® and PowerPoint® — right from your Desktop. It’s as simple as selecting a document to edit from FileCloud Web UI, edit the document using Microsoft Office, save and let FileCloud take care of other uninteresting details in the background such as uploading the new version to FileCloud, sync, send notifications, share updates etc.

Embedded File Upload Website Form – FileCloud’s Embedded File Upload Website Form enables users to embed a small FileCloud interface onto any website, blog, social networking service, intranet, or any public URL that supports HTML embed code. Using the Embedded File Upload Website Form, you can easily allow file uploads to a specific folder within your account. This feature is similar to File Drop Box that allows your customers or associates to send any type of file without requiring them to log in or to create an account.

Unified Device Management Console – FileCloud's unified device management console provides simplified access for managing the mobile devices allowed to access enterprise data, irrespective of whether the device is enterprise-owned or employee-owned, and regardless of mobile platform or device type. Manage and control thousands of iOS and Android devices from FileCloud's secure, browser-based dashboard. FileCloud's administrator console is intuitive and requires no training or dedicated staff. FileCloud's MDM works on any vendor's network – even if the managed devices are on the road, at a café, or used at home.

Device Commands and Messaging – The ability to send on-demand messages to any device connecting to FileCloud gives administrators a powerful tool to interact with the enterprise workforce. Any information on security threats or access violations can be easily conveyed to mobile users – and, above all, these messages carry no SMS cost.

Amazon S3/OpenStack Support – Enterprises wanting to use Amazon S3 or OpenStack storage can easily set it up with FileCloud. This feature not only gives the enterprise the flexibility to switch storage back ends, it also makes that switch very easy.

Multi-Tenancy Support – The multi-tenancy feature allows Managed Service Providers (MSPs) to serve multiple customers using a single instance of FileCloud. The key value proposition of FileCloud's multi-tenant architecture is that data separation among different tenants is maintained, and every tenant has the flexibility of customized branding.

Endpoint Backup – FileCloud provides the ability to back up user data from any computer running Windows, Mac, or Linux to FileCloud. Users can schedule a backup, and FileCloud automatically backs up the selected folders at the scheduled time.

Conclusion

The choice for enterprises will come down to whether to rely on Pydio, whose focus is split between its open source project and its commercial enterprise offering, or FileCloud, whose sole focus is satisfying all of an enterprise's EFSS needs, with unlimited product upgrades and support at a very affordable price.

Here’s a comprehensive comparison that shows why FileCloud stands out as the best EFSS solution.

Try FileCloud For Free & Receive 5% Discount

Take a tour of FileCloud

Architectural Patterns for High Availability

As the number of mission-critical web-based services being deployed by enterprise customers continues to increase, the need for a deeper understanding of designing the optimal network availability solutions has never been more critical. High Availability (HA) has become a critical aspect in the development of such systems. High Availability simply refers to a component or system that continuously remains operational for a desirable amount of time. Availability is generally measured relative to ‘100 percent operations’; however, since it is nearly impossible to guarantee 100 percent availability, goals are usually expressed in the number of nines. The most coveted availability goal is the ‘five nines’, which translates to 99.999 percent availability – the equivalent of less than a second of downtime per day.
Five nines availability can be achieved using standard commercial-quality software and hardware. The design of high availability architectures is largely based on combining redundant hardware components with software that manages fault detection and correction without human intervention. The patterns below address the design and architectural considerations to make when designing a highly available system.

Server Redundancy

The key to coming up with a solid design for a highly available system lies in identifying and addressing single points of failure. A single point of failure is any part whose failure will result in a complete system shutdown. Production servers are complex systems whose availability depends on multiple factors, including hardware, software, and communication links; each of these is a potential point of failure. Introducing redundancy is the surest way to address single points of failure. It is accomplished by replicating a part of the system that is crucial to its function. Replication guarantees that there will always be a secondary component available to take over in the event a critical component fails. Redundancy relies on the assumption that the system will not experience multiple faults simultaneously.
The most widely known example of redundancy is RAID (Redundant Array of Inexpensive Disks), which combines multiple drives. Server redundancy can be achieved through a standby form, also referred to as active-passive redundancy, or through active-active redundancy, where all replicas are concurrently active.

  • Active-Passive Redundancy

An active-passive architectural pattern consists of at least two nodes. The passive server (failover) acts as a backup that remains on standby and takes over in the event the active server gets disconnected for whatever reason. The primary active server hosts production, test and development applications.
The secondary passive server essentially remains dormant during normal operation. A major disadvantage of this model is that there is no guarantee that the production application will function as expected on the passive server. The model is also considered a relatively wasteful approach because expensive hardware is left unused.
Fig 1.1 – Active-passive high availability cluster
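As a simplified illustration of the failover logic in this pattern, the sketch below has the passive node poll the active node's health endpoint and promote itself after repeated failures; the URL, threshold, and promote step are placeholders for whatever cluster manager is actually in use.

```python
import time
import requests

ACTIVE_HEALTH_URL = "http://active.internal/health"  # placeholder endpoint
FAILURE_THRESHOLD = 3                                 # consecutive failed checks

def promote_passive_node():
    # Placeholder for the real promotion step (reassign VIP, start services, etc.).
    print("active node unreachable - promoting passive node to active")

failures = 0
while True:
    try:
        ok = requests.get(ACTIVE_HEALTH_URL, timeout=2).status_code == 200
    except requests.RequestException:
        ok = False
    failures = 0 if ok else failures + 1
    if failures >= FAILURE_THRESHOLD:
        promote_passive_node()
        break
    time.sleep(5)  # poll interval between health checks
```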

  • Active-Active Redundancy

The active-active model also contains at least two nodes; however, in this architectural pattern, multiple nodes are actively running the same services simultaneously. In order to fully utilize all the active nodes, an active-active cluster uses load balancing to distribute workloads across the nodes in order to prevent any single node from being overloaded. The distributed workload subsequently leads to a marked improvement in response times and throughput.
The load balancer uses a set of algorithms to assign clients to the nodes; the assignments are typically based on performance metrics and health checks. In order to guarantee seamless operability, all the nodes in the cluster must be configured for redundancy. A potential drawback of active-active redundancy is that if one of the nodes fails, client sessions might be dropped, forcing users to log in to the system again. However, this can easily be mitigated by ensuring that the individual configuration settings of each node are virtually identical.
Fig 1.2 – Active-active high availability cluster with load balancer

  • N+1 redundancy

An N+1 redundancy pattern is a hybrid between active-active and active-passive; it is sometimes referred to as parallel redundancy. Although this model is mostly used for UPS configurations, it can also be applied to high availability. An N+1 architectural pattern introduces one slave (passive) component for N potential single points of failure in a system. The slave remains in standby mode and waits for a failure to occur in any of the N active parts. The system is therefore able to handle the failure of one out of N components without compromising performance.
Fig 2.1 – N+1 redundancy

Data Center Redundancy

While a datacenter may contain redundant components, an organization may also benefit from having multiple datacenters. Factors such as weather, power failure, or even simple equipment failure may cause an entire datacenter to shut down; in that scenario, replication within the datacenter is of very little use, and such an unplanned outage can be a significantly costly affair for an enterprise. Once failures at the datacenter level are considered, the need for a high availability pattern that spans multiple datacenters becomes apparent.
It is important to note that establishing multiple data centers in geographically distinct locations, and buying physical hardware to provide redundancy within the datacenters, is extremely costly. Additionally, setting up is a time-consuming affair, and may seem too difficult to achieve in the long-run. However, high purchase, set-up and maintenance costs can be mitigated by employing the use of IaaS (Infrastructure as a Service) providers.
Fig 3.1 – Data center redundancy

Floating IP Address

A floating IP address can be worked into a high availability cluster that uses redundancy. The term 'floating' is used because the IP address can be moved from one droplet to another within the same cluster in an instant. This means the infrastructure can achieve high availability by immediately pointing an IP address at a redundant server. Floating IPs significantly reduce downtime by allowing customers to associate an IP address with a different droplet. A design pattern that has provisions for floating IPs makes it possible to establish a standby droplet that can receive production traffic at a moment's notice.

Author: Gabriel Lando

 

fig 1.1 and fig 1.2 courtesy of hubspot.net

fig 2.1 courtesy of webworks.in

fig 3.1 courtesy of technet.com