Archive for the ‘Advanced Computer Administration and Architecture’ Category

Top 5 Use Cases For Machine Learning in The Enterprise


Artificial intelligence can be loosely defined as the science of mimicking human behavior. Machine learning is the specific subset of AI that trains a machine how to learn. The concept emerged from pattern recognition and the theory that computers can learn without being programmed to complete specific tasks. Cheaper, more powerful computational processing, growing volumes of data, and affordable storage have taken deep learning from research papers and labs to real-life applications. However, all the media hype surrounding AI has made it extremely difficult to separate exciting futuristic predictions from pragmatic real-world enterprise applications. To avoid being caught up in the hype of technical implementation, CIOs and other tech decision makers have to build a conceptual lens and look at the various areas of their company that can be improved by applying machine learning. This article explores some of the practical use cases of machine learning in the enterprise.

1. Process Automation

Intelligent process automation (IPA) combines artificial intelligence and automation, and it involves diverse uses of machine learning, from automating manual data entry to more complex use cases like automating insurance risk assessments. ML is suited to any scenario where human judgment is applied within set constraints, boundaries or patterns. Thanks to cognitive technologies like natural language processing, machine vision and deep learning, machines can augment traditional rule-based automation and, over time, learn to perform tasks better as they adapt to change. Most IPA solutions already utilize ML-powered capabilities beyond simple rule-based automation. The business benefits extend well beyond cost savings and include better use of costly equipment and highly skilled employees, faster decisions and actions, service and product innovations, and overall better outcomes. By taking over rote tasks, machine learning in the enterprise frees up human workers to focus on product innovation and service improvement, allowing the company to transcend conventional performance trade-offs and achieve unparalleled levels of quality and efficiency.
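
To make this concrete, here is a minimal sketch of the kind of ML-assisted routing that underpins IPA: a classifier that learns from historical tickets and routes new ones to the right queue. It uses scikit-learn; the categories and training examples are hypothetical placeholders for real enterprise data.

    # A minimal IPA routing sketch: learn from labeled historical tickets,
    # then route new requests automatically. Data and labels are hypothetical.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    tickets = ["invoice attached for March services",
               "cannot log in to the VPN",
               "new vendor risk assessment request"]
    labels = ["accounts_payable", "it_helpdesk", "risk_review"]

    router = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    router.fit(tickets, labels)

    # New requests are routed within set constraints; low-confidence cases
    # can still be escalated to a human for review.
    print(router.predict(["please process the attached invoice"])[0])

In practice the classifier's confidence can gate the automation: high-confidence predictions flow straight through, while ambiguous requests fall back to a human queue.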

2. Sales Optimization

Sales typically generates a lot of unstructured data that can be used to train machine learning algorithms. This is good news for enterprises that have been saving consumer data for years, because sales is also the area with the most potential for immediate financial impact from implementing machine learning. Enterprises eager to gain a competitive edge are applying ML to both marketing and sales challenges in order to accomplish strategic goals. Popular marketing techniques that rely on machine learning models include intelligent content and ad placement and predictive lead scoring. By adopting machine learning in the enterprise, companies can rapidly evolve and personalize content to meet the ever-changing needs of prospective customers. ML models are also being used for customer sentiment analysis, sales forecasting, and customer churn prediction. With these solutions, sales managers are alerted in advance to specific deals or customers that are at risk.
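
As an illustration of churn prediction, here is a minimal sketch using scikit-learn; the feature columns and training rows are hypothetical stand-ins for a real CRM export.

    # A minimal churn-prediction sketch. Columns: months as customer,
    # support tickets in the last 90 days, monthly spend. Data is hypothetical.
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier

    X = np.array([[24, 0, 500], [3, 5, 120], [36, 1, 800],
                  [2, 7, 90], [18, 2, 400], [1, 6, 60]])
    y = np.array([0, 1, 0, 1, 0, 1])  # 1 = the customer churned

    model = GradientBoostingClassifier().fit(X, y)

    # Score an open account so the sales manager is alerted in advance.
    risk = model.predict_proba(np.array([[4, 4, 100]]))[0, 1]
    print(f"churn risk: {risk:.0%}")

A model like this is only as good as the historical data behind it, which is why enterprises that have been saving consumer data for years are well placed to benefit.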

3. Customer Service

Chatbots and virtual digital assistants are taking over the world of customer service. Because customer interactions are so frequent, the massive amounts of data captured and analyzed make ideal teaching material for fine-tuning ML algorithms. Artificial intelligence agents are now capable of recognizing a customer query and suggesting the appropriate article for a swift resolution, freeing up human agents to focus on more complex issues and subsequently improving the efficiency and speed of decisions. Adopting machine learning in the enterprise can have a substantial impact on customer service-related routine tasks. Juniper Research maintains that chatbots will create $8 billion in annual cost savings by 2022. According to a 2017 PwC report, 31 percent of enterprise decision makers believe that virtual personal assistants will significantly impact their business, more than any other AI-powered solution. The same report found that 34 percent of executives say the time saved by using virtual assistants allowed them to channel their focus towards deep thinking and creativity.

4. Security

Machine learning can help enterprises improve their threat analysis and how they respond to attacks and security incidents. ABI Research analysts estimate that machine learning in data security will drive spending in analytics, big data and artificial intelligence to $96 billion by 2021. Predictive analytics enables the early detection of infections and threats, while behavioral analytics ensures that anomalies within the system do not go unnoticed. ML also makes it easy to monitor millions of data logs from mobile and other IoT-capable devices and to generate profiles for the varying behavioral patterns within your IoT ecosystem. This way, previously overstretched security teams can easily detect the slightest irregularities. Organizations that embrace a risk-aware mindset are better positioned to capture a leading position in their industry, navigate regulatory requirements, and disrupt their markets through innovation.
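
For example, the behavioral analytics described above can be sketched with an unsupervised anomaly detector; the telemetry features and values below are hypothetical.

    # A minimal behavioral-analytics sketch using IsolationForest.
    # Columns: login hour, MB transferred, distinct devices used that day.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    normal_activity = np.array([[9, 120, 1], [10, 80, 1], [14, 200, 2],
                                [11, 150, 1], [15, 90, 2], [13, 110, 1]])

    detector = IsolationForest(contamination=0.1, random_state=0)
    detector.fit(normal_activity)

    # A 3 a.m. login moving 5 GB from 4 devices; predict() returns -1
    # for anomalies, so this event gets flagged for the security team.
    print(detector.predict(np.array([[3, 5000, 4]])))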

5. Collaboration

The key to getting the most out of machine learning in the enterprise lies in tapping into the capabilities of both machine learning and human intelligence. ML-enhanced collaboration tools have the potential to boost efficiency, quicken the discovery of new ideas and lead to improved outcomes for teams that collaborate from disparate locations. Nemertes' 2018 unified communications and collaboration study concluded that about 41 percent of enterprises plan to use AI in their unified communications and collaboration applications. Some use cases in the collaboration space include:
• Video, audio and image intelligence can add context to content being shared, making it simpler for customers to find the files they require. Image intelligence coupled with object detection, text and handwriting recognition helps improve metadata indexing for enhanced search.
• Real-time language translation facilitates communication and collaboration between global workgroups in their native languages.
• Integrating chatbots into team applications enables native-language capabilities, like alerting team members or polling them for status updates.
That is just the tip of the iceberg; machine learning offers significant potential benefits for companies adopting it as part of their communications strategy to enhance data access, collaboration and control of communication endpoints.

 

Author: Gabriel Lando

image courtesy of freepik.com

How to Deploy A Software Defined Network

Software-defined networking (SDN) was a bit of a buzzword through the early and middle years of this decade. The potential for optimal network utilization promised by software-defined networking quickly captured the interest and imagination of information technology companies. However, progress was slow because the general understanding of software-defined networking wasn't up to the mark, which caused enterprises to make wrong choices and unsustainable strategic decisions upfront.

 

Where Does SDN Come Into the Picture?

SDN is still a nascent concept for several companies. The virtualization potential that SDN offers for networks calls on IT leaders to improve their understanding of this software-heavy approach to network resource management. We hope this guide helps.

What is Software Defined Networking, After All?

You already know and appreciate how software-managed virtual servers and storage make computing resource management more agile and dynamic for enterprises. Imagine the benefits enterprises could enjoy if the same capabilities were extended to your company's network hardware. That's what software-defined networking offers.

SDN is about adding a software layer on top of the hardware layer in your company's network infrastructure. This allows network administrators to route network traffic according to sophisticated business rules. These rules can then be pushed down to network routers, so administrators don't have to depend solely on hardware configuration to manage network traffic.

This sounds easy in principle. Ask any network administrator, though, and they will tell you that it's really difficult to implement, particularly in companies with mature, stabilized networking infrastructure and processes.


SDN Implementations Demand Upgrades in Network Management Practices

An almost immediate outcome of SDN implementation will be your enterprise's ability to quickly serve network resource demands using software. To maintain transparency, the networking team needs to immediately evaluate the corresponding changes required in, say, day-end network allocation and utilization reports. This is just one of many examples of situations where every SDN-linked process improvement will need to be matched by equivalent adjustments in related and linked processes.


Managing De-provisioning Along the Way

At the core of SDN implementations is the enterprise focus on optimizing network usage and managing on-demand network resource requests with agility. While SDN implementations help companies achieve these goals fairly quickly, they often also cause unintended network capacity issues. Among the most common reasons is that SDN engineers forget to implement rules for de-provisioning networks once the sudden surge in demand has been met. By building de-provisioning in as the last logical step of every on-demand resource allocation request, networking teams can make sure that SDN doesn't become an unintentional cause of network congestion.
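
One way to make de-provisioning the last logical step is to treat every allocation as a lease with an expiry. Below is a minimal sketch of that idea in Python; the controller object is a hypothetical placeholder for whatever SDN controller API (allocate/release) is actually in use.

    # Lease-based allocation: de-provisioning is scheduled at allocation time.
    # The 'controller' methods are hypothetical placeholders.
    import heapq, time

    class NetworkLeaseManager:
        def __init__(self, controller):
            self.controller = controller   # wraps the real SDN controller API
            self.expiring = []             # min-heap of (expiry_time, path_id)

        def provision(self, path_id, bandwidth_mbps, ttl_seconds):
            self.controller.allocate(path_id, bandwidth_mbps)
            heapq.heappush(self.expiring, (time.time() + ttl_seconds, path_id))

        def reap_expired(self):
            # Called periodically; releases every lease whose window has passed.
            now = time.time()
            while self.expiring and self.expiring[0][0] <= now:
                _, path_id = heapq.heappop(self.expiring)
                self.controller.release(path_id)

With this structure, a surge in demand can never leave capacity allocated indefinitely; the worst case is one reaping interval of extra allocation.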

 

Pursue 360-Degree Network Performance Visibility

It’s unlikely that your company will go for a complete overhaul of its network management systems and processes. So, it’s very likely that the SDN implementation will be carried out in a phased manner. Some of the key aspects of managing this well are:

  • Always evaluate the ease with which your existing network performance monitoring tools will allow SDN to plug into them.
  • Look for tools whose APIs allow convenient integration with SDN platforms.
  • Evaluate how your current network performance management tools will be able to manage and integrate data from non-SDN and SDN sources.

Note – because hybrid SDN (a balance of traditional and software-defined networking) is a practical approach for enterprises, implementations must accommodate the baseline performance monitoring goals of the enterprise. In fact, the introduction of SDN often requires networking teams to improve performance monitoring and reporting practices so that concrete, business process-specific improvements can be measured and reported.


Is SDN an Enterprise Priority Already?

The basic reason why SDN is making its way into IT strategy discussions even for SMBs is that the nature of business traffic has changed tremendously. Systems have moved to the cloud-computing model, and there's a lot of focus on mobile accessibility of these systems.

In times when systems operated mostly in the client-server configuration, the basic tree structure of Ethernet switches worked well. Enterprise network requirements today, however, demand more. SDN is particularly beneficial in enabling access to public and private cloud-based services.

SDN also augurs well for another very strong enterprise movement – the one towards mobility. That's because, with SDN, network administrators can easily provision resources for new mobile endpoints while taking care of security considerations. Also, enterprise data volumes and information needs will only grow. Managing network optimization traditionally, with many virtual machines and servers in play, would require tremendous investments. SDN makes it more manageable, even from a financial perspective.

 

Understand and Acknowledge Security Aspects of SDN

Make no assumptions. SDN is a major change in the way your company’s network works. There are specific known risks of SDN implementations that consultants and vendors from this sphere will help you prepare for.

Protocol weaknesses are right at the top. A crucial question for the application security and network security teams to work together on is – do our application security routines accommodate the needs of protocols used in the SDN platform? Another key security-related aspect is to devise measures to prevent SDN switch impersonation.

 

Choosing External Vendors

The success of an SDN implementation is measured in terms of the positive impact it has in the context of business use cases. If and when you initiate discussions with external consultancies and vendors for your enterprise SDN implementation, make sure you evaluate them not only on the basis of their SDN knowledge but also on their ability to understand your business application ecosystem. This helps them implement SDN platforms that accommodate complex and highly sophisticated business rules for network resource allocation. This, in turn, significantly improves the project's probability of meeting all its goals.

 

Concluding Remarks

If SDN is on your enterprise's strategic roadmap, there's a lot you can help with. Start with the tips and suggestions shared in this guide.


Author: Rahul Sharma

Personal Data Breach Response Under GDPR


Data security is at the heart of the upcoming General Data Protection Regulation (GDPR). It sets strict obligations on data controllers and processors in matters pertaining to data security while concurrently providing guidance on best data security practices. And for the first time, the GDPR will introduce specific breach notification guidelines. With only a few months to go until the new regulations come into effect, businesses should begin focusing on data security – not just because of the costs and reputational damage a personal data breach can lead to, but also because under the GDPR, a new data breach notification regime will mandate the reporting of certain data breaches to affected individuals and data protection authorities.

What Constitutes a Personal Data Breach Under GDPR?

The GDPR describes a personal data breach as a security breach that leads to the unlawful or accidental loss, destruction, alteration, or unauthorized disclosure of personal data being stored, processed or transmitted. A personal data breach is by all means a security incident; however, not all security incidents fall under the same strict reporting regulations as a personal data breach. Despite the broad definition, such a distinction is not unusual in data security laws that require breach reporting; HIPAA, for example, makes the same distinction at the federal level for medical data. The distinction aims to prevent data protection regulators from being overwhelmed with breach reports.

By limiting breach notifications to personal data (EU speak for personally identifiable information – PII), incidents that solely involve the loss of company data or intellectual property will not have to be reported. The threshold for establishing whether an incident has to be reported to a data protection authority depends on the risk it poses to the individuals involved. High-risk situations are those that can potentially lead to significant detriment – for example, financial loss, discrimination, damage to reputation or any other significant social or economic disadvantage.

…it should be quickly established whether a personal data breach has occurred and to promptly notify the supervisory authority and the data subject.

– Recital 87, GDPR

If an organization is uncertain about who has been affected, the data protection authority can advise and, in certain situations, instruct it to immediately contact the individuals affected if the security breach is deemed to be high risk.

What Does The GDPR Require You to Do?

Under the GDPR, the roles and responsibilities of processors and data controllers have been separated. Controllers are obliged to engage only processors capable of providing sufficient assurances that they will implement appropriate organizational and technical measures to protect the rights of data subjects. In the event of a data breach that puts the rights and freedoms of those data subjects at risk, the organization should report it without undue delay and, where feasible, within 72 hours of becoming aware of it.

The data processor is mandated to notify the controller the moment a breach is discovered, but has no other reporting or notification obligation under the GDPR. However, the 72-hour deadline begins the moment the processor becomes aware of the data breach, not when the controller is notified of the breach. A breach notification to a data protection authority has to at least:

  1. Have a description of the nature of the breach, which includes the categories and number of data subjects affected.
  2. Contain the data protection officer’s (DPO) contact information.
  3. Have a description of the possible ramifications of the breach.
  4. Have a description of steps the controller will take to mitigate the effect of the breach.

The information can be provided in phases if it is not all available at once.
If the controller determines that the personal data breach can potentially put the rights and freedoms of individuals at risk, it has to communicate information about the breach to the data subjects without undue delay. The communication should plainly and clearly describe the nature of the personal data breach and at least:

  1. Contain the DPO’s contact details or a relevant contact point.
  2. Have a description of the possible ramifications of the breach.
  3. Have a description of measures proposed or taken to mitigate or address the effects of the breach.

The only exception in this case is if the personal data has been encrypted and the decryption key has not been compromised; then there is no need for the controller to notify the data subjects.

The ideal way for companies to handle this GDPR obligation is not only to minimize breaches, but also to establish policies that facilitate risk assessment and demonstrate compliance.

The GDPR stipulates that records must be kept of all personal data breaches, regardless of whether a breach needs to be reported or not. Said records have to contain the details of the breach, any consequences and effects, and the follow-up actions taken to remedy the situation.
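
A minimal sketch of such a record, with the 72-hour clock computed from the moment of awareness, might look like this in Python; the field names are illustrative, not mandated by the regulation.

    # A breach-register entry covering the notification contents listed above.
    from dataclasses import dataclass
    from datetime import datetime, timedelta

    @dataclass
    class BreachRecord:
        became_aware_at: datetime
        nature: str                # categories and number of data subjects
        dpo_contact: str
        likely_consequences: str
        mitigation_measures: str
        reported_to_authority: bool = False

        def notification_deadline(self) -> datetime:
            # The 72-hour clock starts on becoming aware of the breach.
            return self.became_aware_at + timedelta(hours=72)

    record = BreachRecord(datetime(2018, 4, 3, 9, 30),
                          "email list, ~1,200 data subjects",
                          "dpo@example.com", "possible phishing exposure",
                          "credentials rotated, mail server patched")
    print(record.notification_deadline())  # 2018-04-06 09:30:00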

Should Ransomware Attacks Be Reported?

Ransomware typically involves the 'hijacking' of corporate data via encryption, with payment demanded in order to decrypt the ransomed data. Under the GDPR, a ransomware attack may be categorized as a security incident, but it does not necessarily cross the threshold of a personal data breach. A ransomware attack would only be considered a personal data breach if there is a backup but the outage directly impacts users' rights and freedoms, or if there is no backup at all. In principle, a ransomware attack where the ransomed data can be quickly recovered does not have to be reported.

What Are the Consequences of Non-Compliance?

A failure to comply with the GDPR's breach reporting requirements will not only result in negative PR, constant scrutiny, and possibly loss of business, but will also attract an administrative fine of up to €10 million or up to two percent of total global annual turnover for the preceding financial year. Additionally, failure to notify the supervising authority may be indicative of systematic security failures, which would show an additional breach of the GDPR and attract more fines. The GDPR does list factors the supervising authority should consider when imposing fines, chief among them being the degree of cooperation by the data controller with the protection authority.

In Closing

Data breach notification laws have already been firmly established in the U.S. These laws are designed to push organizations to improve their efforts in the detection and deterrence of data breaches. The regulators' intention is not to punish but to establish a trustful business environment by equipping organizations to deal with security issues.

Author: Gabriel Lando

image courtesy of freepik

FileCloud High Availability Architecture

Enterprise Cloud Infrastructure is a Critical Service

The availability of enterprise-hosted cloud services has opened huge potential for companies to manage files effectively. Files can be stored, shared and exchanged within the enterprise and with partners efficiently, while keeping existing security and audit controls in place. The service provides the power and flexibility of the public cloud while maintaining control of the data.

The main challenge for enterprise-hosted cloud services is guaranteeing high uptime (on the order of five nines) while maintaining a high quality of service. The dependency on such services means that any disruption can have significant productivity impacts. Enterprise cloud services typically consist of multiple different services working together, and any high availability architecture must take into account that all critical services need redundancy built into them to be effective. Moreover, detection and handling of failures must be reasonably quick and must not require any user interaction.

FileCloud Enterprise Cloud

FileCloud enables enterprises to seamlessly access their data using a variety of external agents. The agents can be browsers, mobile devices or client applications, while the data enabled for access by FileCloud can be stored locally, on internal NAS devices, or in public cloud locations such as AWS S3 or OpenStack Swift.

Depending on the specific enterprise requirements, the FileCloud solution may include multiple software services, such as the FileCloud Helper service, the Solr service, a virus scanner service and the OpenOffice service. Moreover, FileCloud may use enterprise identity services such as Active Directory, LDAP or ADFS. A failure in any of these services can impact the end user experience.

High Availability Architecture

The FileCloud solution can be implemented using the classic three-tier high availability architecture. Tier 1 is a web tier made up of load balancers and access control services. Tier 2 consists of stateless application servers; in a FileCloud implementation, this layer comprises the Apache nodes and helper services. Tier 3 is the database layer. Other dependencies, such as Active Directory or data servers, are not addressed here. The advantage of this architecture is the separation of stateless components from stateful components, allowing great flexibility in deploying the solution.

Tier 1 – Web Tier

Tier 1 is the front end of the deployment and acts as the entry point for all external clients. The components in Tier 1 are stateless and primarily forward requests to the web servers in Tier 2. Scaling of the web tier can be done by adding and removing load balancer instances, since they are stateless, and each webserver node is capable of handling any request. This layer can also be configured to do SSL offloading, allowing lighter-weight communication between Tier 1 and Tier 2, and to provide simple affinity based on source and destination addresses. Traffic is forwarded only to healthy application server nodes: this layer monitors the available application servers and automatically distributes traffic depending on the load.

Tier 2 – Application Servers

Tier 2 in a FileCloud deployment consists of the following services:

  • Apache servers
  • FileCloud helper
  • Antivirus service
  • Memcache service
  • Open Office service

The Apache servers in FileCloud do not store any state information and are therefore stateless; they do, however, cache data for faster performance (for example, converting and caching documents for display). They primarily execute application code to service requests. All state-specific data is stored in database tables, which keeps the application servers stateless. If an application server node fails, the request can be handled by a different application server node (provided the client retries the failing request). Capacity can be increased or reduced, automatically or manually, by adding or removing Apache server nodes.

The FileCloud Helper service provides additional capabilities such as indexed search and NTFS permission retrieval. FileCloud Helper is a stateless service, and nodes can therefore be added or removed as needed.

Similar to the FileCloud Helper service, the antivirus service is a stateless service providing antivirus capability to FileCloud. Any file that is uploaded to FileCloud is scanned using this service.

The Memcache service is an optional, stateless service that is required only if local storage encryption is used. This service is started on the same node as the Apache service.

The OpenOffice service is an optional service that is required for creating document file previews in the browser. This service is stateless and is started on the same node as the Apache server.

Tier 3 – Database Nodes

Tier 3 consists of stateful services, namely the following:

  • MongoDB servers
  • Solr Servers

The high availability approach for each of these services varies depending on the complexity of the deployment. The failure of these services can have limited or system-wide impact. For example, a MongoDB server failure will result in a FileCloud solution-wide failure and is critical, while a FileCloud Helper server failure will only impact a portion of functionality, such as network folder access.

MongoDB Server High Availability

MongoDB servers store all application data in FileCloud and provide high availability using replica sets. The MongoDB replica set configuration provides redundancy and increases data availability by keeping multiple copies of the data on different database servers. Replication also provides fault tolerance against the loss of a single database server, and MongoDB can also be configured to increase read capacity. The minimum needed for MongoDB server HA is a three-node member set (it is also possible to use two nodes plus one arbiter). In case of a primary MongoDB node failure, one of the secondary nodes will take over and become primary.

The heartbeat time frame can be tuned depending on system latency. It is also possible to set up the MongoDB replica set to allow reads from secondaries to improve read capacity.
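
For illustration, here is a minimal sketch of initiating a three-member replica set with PyMongo; the hostnames and replica set name are hypothetical.

    # Initiate a three-member MongoDB replica set (hostnames are placeholders).
    from pymongo import MongoClient

    # Connect directly to the node that should become primary.
    client = MongoClient("mongodb://db1.example.com:27017",
                         directConnection=True)
    client.admin.command("replSetInitiate", {
        "_id": "rs0",
        "members": [
            {"_id": 0, "host": "db1.example.com:27017"},
            {"_id": 1, "host": "db2.example.com:27017"},
            {"_id": 2, "host": "db3.example.com:27017"},
        ],
    })

    # Applications connect with the replica set name; failover is automatic.
    app = MongoClient("mongodb://db1.example.com,db2.example.com,"
                      "db3.example.com/?replicaSet=rs0")
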
[Figure: replica set HA architecture with primary and secondary nodes]

Putting It All Together

The three-tier structure for the FileCloud components is shown below. The actual configuration information is available in FileCloud support. This provides a robust FileCloud implementation with high availability and extensibility. As new services are added to extend functionality, they can be assigned to a tier based on whether they are stateless or store state. The stateless (Tier 2) nodes can be added or removed without disrupting service. Tier 3 nodes store state and require specific implementation depending on the type of service.
[Figure: FileCloud three-tier high availability architecture]

Alternative to WatchDox – Why FileCloud is Better for Business File Sharing


FileCloud competes with WatchDox for business in the Enterprise File Sync and Share (EFSS) space. Before we get into the details, I believe an ideal EFSS system should work across all the popular desktop OSes (Windows, Mac and Linux) and offer native mobile applications for iOS, Android, Blackberry and Windows Phone. In addition, the system should offer all the basics expected of EFSS: Unlimited File Versioning, Remote Wipe, Audit Logs, Desktop Sync Client, Desktop Map Drive and User Management.

The feature comparisons are as follows:

Features (FileCloud vs. WatchDox)
On Premise
File Sharing
Access and Monitoring Controls
Secure Access
Document Preview
Document Edit
Outlook Integration
Role Based Administration
Data Loss Prevention
Web DAV
Endpoint Backup
Amazon S3/OpenStack Support
Public File Sharing
Customization, Branding
SAML Integration
Anti-Virus
NTFS Support
Active Directory/LDAP Support
Multi-Tenancy
API Support
Application Integration via API
Large File Support
Network Share Support – WatchDox: Buy Additional Product
Mobile Device Management
Desktop Sync – FileCloud: Windows, Mac, Linux; WatchDox: Windows, Mac
Native Mobile Apps – FileCloud: iOS, Android, Windows Phone; WatchDox: iOS, Android
Encryption at Rest
Two-Factor Authentication
File Locking
Pricing for 20 users/year – FileCloud: $999; WatchDox: $3,600

From the outside looking in, the offerings all look similar. However, the approaches differ completely in how they satisfy the enterprise's primary need: easy access to files without compromising privacy, security and control. The fundamental areas of difference are as follows:

Feature benefits of FileCloud over WatchDox

Unified Device Management Console – FileCloud's unified device management console provides simplified access for managing the mobile devices enabled to access enterprise data, irrespective of whether the device is enterprise-owned or employee-owned and regardless of mobile platform or device type. Manage and control thousands of iOS and Android devices in FileCloud's secure, browser-based dashboard. FileCloud's administrator console is intuitive and requires no training or dedicated staff. FileCloud's MDM works on any vendor's network – even if the managed devices are on the road, at a café, or used at home.

Amazon S3/OpenStack Support – Enterprises wanting to use Amazon S3 or OpenStack storage can easily set it up with FileCloud. This feature not only gives enterprises the flexibility to switch storage but also makes the switch very easy.

Embedded File Upload Website Form – FileCloud's Embedded File Upload Website Form enables users to embed a small FileCloud interface onto any website, blog, social networking service, intranet, or any public URL that supports HTML embed code. Using the Embedded File Upload Website Form, you can easily allow file uploads to a specific folder within your account. This feature works like a file drop box, allowing your customers or associates to send any type of file without requiring them to log in or create an account.

Multi-Tenancy Support – The multi-tenancy feature allows Managed Service Providers (MSPs) to serve multiple customers using a single instance of FileCloud. The key value proposition of FileCloud's multi-tenant architecture is that data separation among different tenants is maintained while providing multi-tenancy. Moreover, every tenant has the flexibility of customized branding.

NTFS Shares Support – Many organizations use NTFS permissions to manage and control access to internal file shares. It is very hard to duplicate these access permissions in other systems and keep them in sync. FileCloud enables access to internal file shares via web and mobile while honoring the existing NTFS file permissions. This functionality is a great time saver for system administrators and provides a single point of management.

Conclusion

Based on our experience, enterprises that look for an EFSS solution want two main things. One, easy integration with their existing storage system without any disruption to access permissions or network home folders. Two, the ability to easily expand integration into highly available storage systems such as OpenStack or Amazon S3.

WatchDox neither provides OpenStack/Amazon S3 storage integration support nor NTFS share support. On the other hand, FileCloud provides easy integration support into Amazon S3/OpenStack and honors NTFS permissions on local storage.

With FileCloud, enterprises get one simple solution with all features bundled. For the same 20-user package, the cost is $999/year, almost one quarter of WatchDox's price.

Here’s a comprehensive comparison that shows why FileCloud stands out as the best EFSS solution.

Try FileCloud For Free & Receive 5% Discount

Take a tour of FileCloud

A Primer on Windows Server Disaster Recovery

In this primer, we're going to explore some of the best ways to actively restore your Windows Server with minimal impact. Though basic, the following technical pointers will help you achieve faster Windows Server disaster recovery.

1. RAM and Hard Disk Check

Blue screens are Windows' way of telling you about a hardware failure, such as faulty RAM. Before taking any immediate action, such as a software repair, it is important to run a thorough RAM and hard disk check. To analyze issues behind blue screens, you can use the BlueScreenView tool, which can be loaded from a USB drive. If you are experiencing blue screens, define the restart behavior under Control Panel -> System and Security -> System -> Advanced System Settings. Go to Startup and Recovery -> Settings and disable the Automatically Restart option under System Failure. Choose Automatic memory dump or Small memory dump to let BlueScreenView parse the memory.dmp file that is generated. Further hard disk errors can be checked in Event Viewer under Windows Logs -> System.

2. Boot Manager Failure

Boot manager failure leads to server loading failures. A Windows Server DVD or a repair technician can help here. Another solution is to access the boot manager through the command prompt and take the necessary steps to reactivate it. To overwrite the master boot record (at the beginning of the disk), you can use the command bootrec /fixmbr. To view OS installations not currently listed, input the command bootrec /scanos. To reinstate systems in the boot manager, use bootrec /rebuildbcd, which re-registers previously installed systems with the boot manager. After this, input bootrec /fixboot to write a new boot sector. Beyond this, input the commands bootsect /nt60 SYS followed by bootsect /nt60 ALL at the command line to repair the boot manager further.

3. Windows Startup Failure

Startup failures result from system files being displaced after a crash, which leads to the server booting up but Windows not launching. One option is to do a system restore and select an earlier restore point. Another option is to open an elevated command prompt, input sfc /scannow and allow Windows to scan and restore system files accordingly.

4. Restoring Server Backup

A server that is backed up to an external drive can have its data restored completely if the backup was configured through Server Manager. Server Manager offers the Windows Server Backup feature, which can be launched from the Tools menu or by searching for wbadmin.msc in the Start menu. Block-based backups are generated as a result, although it is possible to select particular partitions from the Backup Schedule wizard as well. To take a system state backup from the command line, use wbadmin start systemstatebackup; a full backup is taken with wbadmin start backup -allCritical -backupTarget:<insert_disk_of_choice> -quiet (and can later be restored with wbadmin start sysrecovery or via the computer repair option on the installation DVD). This backup can then be used to restore from in case of system failures: boot Windows Server from the DVD, then select the Repair Your Computer option, followed by Troubleshoot -> System Image Recovery.

5. Hardware Restore

Windows Server 2008 and Windows Server 2012 have options to restore system backups onto different hardware if you select the Bare Metal Recovery option. Here you need to utilize the Exclude Disks option, which lets you deselect a disk that is not required during restore operations; for example, a disk holding data rather than OS files is suitable for exclusion. Select Install Drivers if you wish to include drivers in your recovery data file so that they are installed as well during a complete system restore from an initial backup point. Advanced options are also available, such as automatically restarting the server after the restore and verifying the disks for defects.

6. Active Directory Backup & Restore

The native backup program within the Windows Server OS is sufficiently useful for backing up Active Directory services and restoring them. It can not only create a backup of the directory but also save all associated data necessary for it to function. To run a backup, enable the System State and System Reserved options and then back up the data. To restore your Active Directory, start the domain controller and press F8 until the boot menu appears (this may vary depending on the make and model of the computer in use). In the boot options, select Directory Services Restore Mode, log in with the Directory Services Restore Mode credentials, then complete the restore. To boot the domain controller into restore mode from a running system, input: bcdedit /set safeboot dsrepair. When in Directory Services Restore Mode, use bcdedit /deletevalue safeboot to boot normally again. Input shutdown -t 0 -r to reboot.

7. Active Directory Cleanup

In DNS Manager, look in Properties under the Name Servers tab, then remove the server's entry, being careful not to remove the host entry. Ensure the domain controller is not explicitly registered as such, then remove dependent AD services (e.g. VPN, etc.). If the global catalog exists on the server, configure a different server with the same details from the AD Sites and Services snap-in tool, then go to Sites -> Servers -> right-click NTDS Settings -> Properties -> uncheck Global Catalog on the General tab. To downgrade the domain controller, use the PowerShell Uninstall-ADDSDomainController cmdlet; use -Force if you wish to remove it completely. Metadata can be modified from ntdsutil -> metadata cleanup -> connections. After cleanup, delete the domain controller from the site it was assigned to: go to the snap-in -> Domain Controller -> select Delete. Check the NTDS settings in AD to make sure it is not registered as a replication partner (remove it if required).

8. Active Directory Database Rescue

Boot into Directory Services Restore Mode, run ntdsutil, input activate instance ntds, then choose files. Input integrity, then quit to leave file maintenance. The semantic database analysis command can produce a detailed report if you keep verbose on; enter go fixup to start the diagnostic routine that repairs the database. Quit ntdsutil with the quit command and restart.

9. Backup for Microsoft Exchange

Begin with Select Application under Select Recovery Type, navigate to the Exchange option, and choose View Details to see the backups. The backup is current if the Do Not Perform a Roll-Forward checkbox appears at this stage. For a roll-forward recovery, the transaction logs created during backup are required, as Exchange uses these to write to the database and accomplish the recovery. Enabling the Recover to Original Location option lets you restore all databases to their original locations. Beyond the system restore, the backup is integrated with the database and can also be moved back manually.

Author: Rahul Sharma

Image courtesy: Salvatore Vuono, freedigitalphotos.net

Alternative to Pydio – Why FileCloud is Better for Business File Sharing


FileCloud competes with Pydio for business in the Enterprise File Sync and Share (EFSS) space. Before we get into the details, I believe an ideal EFSS system should work across all the popular desktop OSes (Windows, Mac and Linux) and offer native mobile applications for iOS, Android, Blackberry and Windows Phone. In addition, the system should offer all the basics expected of EFSS: Unlimited File Versioning, Remote Wipe, Audit Logs, Desktop Sync Client, Desktop Map Drive and User Management.

The feature comparisons are as follows:

Features (FileCloud vs. Pydio)
On Premise
File Sharing
Access and Monitoring Controls
Secure Access
Document Preview
Document Edit
Outlook Integration
Role Based Administration
Data Loss Prevention
Web DAV
Endpoint Backup
Amazon S3/OpenStack Support
Public File Sharing
Customization, Branding
SAML Integration
Anti-Virus
NTFS Support
Active Directory/LDAP Support
Multi-Tenancy
API Support
Application Integration via API
Large File Support
Network Share Support
Mobile Device Management
Desktop Sync – FileCloud: Windows, Mac, Linux; Pydio: Windows, Mac, Linux
Native Mobile Apps – FileCloud: iOS, Android, Windows Phone; Pydio: iOS, Android
Encryption at Rest
Two-Factor Authentication
File Locking

From the outside looking in, the offerings all look similar. However, the approaches differ completely in how they satisfy the enterprise's primary need: easy access to files without compromising privacy, security and control. The fundamental areas of difference are as follows:

Feature benefits of FileCloud over Pydio

Document Quick Edit – FileCloud's Quick Edit feature supports extensive editing of files such as Microsoft® Word, Excel®, Publisher®, Project® and PowerPoint® right from your desktop. It's as simple as selecting a document to edit from the FileCloud web UI and editing it in Microsoft Office; FileCloud takes care of the uninteresting details in the background, such as uploading the new version to FileCloud, syncing, sending notifications and sharing updates.

Embedded File Upload Website Form – FileCloud's Embedded File Upload Website Form enables users to embed a small FileCloud interface onto any website, blog, social networking service, intranet, or any public URL that supports HTML embed code. Using the Embedded File Upload Website Form, you can easily allow file uploads to a specific folder within your account. This feature works like a file drop box, allowing your customers or associates to send any type of file without requiring them to log in or create an account.

Unified Device Management Console – FileCloud's unified device management console provides simplified access for managing the mobile devices enabled to access enterprise data, irrespective of whether the device is enterprise-owned or employee-owned and regardless of mobile platform or device type. Manage and control thousands of iOS and Android devices in FileCloud's secure, browser-based dashboard. FileCloud's administrator console is intuitive and requires no training or dedicated staff. FileCloud's MDM works on any vendor's network – even if the managed devices are on the road, at a café, or used at home.

Device Commands and Messaging – The ability to send on-demand messages to any device connecting to FileCloud gives administrators a powerful tool for interacting with the enterprise workforce. Any information on security threats or access violations can be easily conveyed to mobile users. And, above all, the messages incur no SMS cost.

Amazon S3/OpenStack Support – Enterprises wanting to use Amazon S3 or OpenStack storage can easily set it up with FileCloud. This feature not only gives enterprises the flexibility to switch storage but also makes the switch very easy.

Multi-Tenancy Support – The multi-tenancy feature allows Managed Service Providers (MSPs) to serve multiple customers using a single instance of FileCloud. The key value proposition of FileCloud's multi-tenant architecture is that data separation among different tenants is maintained while providing multi-tenancy. Moreover, every tenant has the flexibility of customized branding.

Endpoint Backup – FileCloud provides the ability to back up user data from any computer running Windows, Mac or Linux to FileCloud. Users can schedule a backup, and FileCloud automatically backs up the selected folders at the scheduled time.

Conclusion

The preference for enterprises will depend on whether to rely on Pydio, whose focus is split between its open source project and its commercial enterprise offering, or on FileCloud, whose sole focus is satisfying all of the enterprise's EFSS needs, with unlimited product upgrades and support at a very affordable price.

Here’s a comprehensive comparison that shows why FileCloud stands out as the best EFSS solution.

Try FileCloud For Free & Receive 5% Discount

Take a tour of FileCloud

Architectural Patterns for High Availability

As the number of mission-critical web-based services deployed by enterprise customers continues to increase, the need for a deeper understanding of how to design optimal network availability solutions has never been greater. High Availability (HA) has become a critical aspect of the development of such systems. High availability simply refers to a component or system that remains continuously operational for a desirable amount of time. Availability is generally measured relative to '100 percent operational'; however, since it is nearly impossible to guarantee 100 percent availability, goals are usually expressed in a number of nines. The most coveted availability goal is 'five nines', which translates to 99.999 percent availability – the equivalent of less than a second of downtime per day.
Five nines availability can be achieved using standard commercial-quality software and hardware. The design of high availability architectures is largely based on combining redundant hardware components with software that manages fault detection and correction without human intervention. The patterns below address the design and architectural considerations to make when designing a highly available system.

Server Redundancy

The key to coming up with a solid design for a highly available system lies in identifying and addressing single points of failure. A single point of failure is any part whose failure will result in a complete system shutdown. Production servers are complex systems whose availability depends on multiple factors, including hardware, software and communication links; each of these factors is a potential point of failure. Introducing redundancy is the surest way to address single points of failure. It is accomplished by replicating a single part of a system that is crucial to its function. Replication guarantees that there will always be a secondary component available to take over in the event a critical component fails. Redundancy relies on the assumption that the system will not experience multiple faults simultaneously.
The most widely known example of redundancy is RAID (Redundant Array of Inexpensive Disks), which combines multiple drives. Server redundancy can be achieved in a standby form, referred to as active-passive redundancy, or through active-active redundancy, where all replicas are concurrently active.

  • Active-Passive Redundancy

An active-passive architectural pattern consists of at least two nodes. The passive (failover) server acts as a backup that remains on standby and takes over in the event the active server gets disconnected for whatever reason. The primary active server hosts the production, test and development applications.
The secondary passive server essentially remains dormant during normal operation. A major disadvantage of this model is that there is no guarantee that the production application will function as expected on the passive server. The model is also considered a relatively wasteful approach because expensive hardware is left unused.
fig 1.1: active-passive high availability cluster

  • Active-Active Redundancy

The active-active model also contains at least two nodes; however, in this architectural pattern, multiple nodes actively run the same services simultaneously. In order to fully utilize all the active nodes, an active-active cluster uses load balancing to distribute workloads across the nodes and prevent any single node from being overloaded. The distributed workload subsequently leads to a marked improvement in response times and throughput.
The load balancer uses a set of algorithms to assign clients to the nodes; the assignments are typically based on performance metrics and health checks. In order to guarantee seamless operability, all the nodes in the cluster must be configured for redundancy. A potential drawback of active-active redundancy is that if one of the nodes fails, client sessions might be dropped, forcing users to log in to the system again. However, this can be mitigated by ensuring that the individual configuration settings of each node are virtually identical. A minimal sketch of the dispatch logic appears after fig 1.2 below.
fig 1.2: active-active high availability cluster with load balancer

  • N+1 Redundancy

An N+1 redundancy pattern is a hybrid of active-active and active-passive and is sometimes referred to as parallel redundancy. Although this model is mostly used in UPS configurations, it can also be applied to high availability. An N+1 architectural pattern basically introduces one standby (passive) component for N potential single points of failure in a system. The standby remains idle and waits for a failure to occur in any of the N active parts. The system is therefore granted the capability of handling the failure of one out of N components without compromising performance.
fig 2.1: N+1 redundancy

Data Center Redundancy

While a datacenter may contain redundant components, an organization may also benefit from having multiple datacenters. Factors such as weather, power failure or even simple equipment failure may cause an entire datacenter to shut down. In this scenario, replication within the datacenter will be of very little use, and such an unplanned outage can be a significantly costly affair for an enterprise. When failures at the data center level are considered, the need for a high availability pattern that spans multiple data centers becomes apparent.
It is important to note that establishing multiple data centers in geographically distinct locations, and buying physical hardware to provide redundancy within them, is extremely costly. Additionally, the setup is time-consuming and may seem too difficult to sustain in the long run. However, the high purchase, setup and maintenance costs can be mitigated by using IaaS (Infrastructure as a Service) providers.
fig 3.1: data center redundancy

Floating IP Address

A floating IP address can be worked into a high availability cluster that uses redundancy. The term 'floating' is used because the IP address can be moved from one droplet to another droplet within the same cluster in an instant. This means the infrastructure can achieve high availability by immediately pointing an IP address at a redundant server. Floating IPs significantly reduce downtime by allowing customers to associate an IP address with a different droplet. A design pattern that has provisions for floating IPs makes it possible to establish a standby droplet that can receive production traffic at a moment's notice.
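
Here is a minimal sketch of the failover action itself, assuming a DigitalOcean-style floating IP API; the endpoint shape, token, IP and droplet ID are placeholders rather than production values.

    # Repoint a floating IP at a standby droplet (DigitalOcean-style API).
    import requests

    API = "https://api.digitalocean.com/v2"
    HEADERS = {"Authorization": "Bearer <your-api-token>"}

    def fail_over(floating_ip: str, standby_droplet_id: int) -> dict:
        # Reassigning the IP redirects production traffic to the standby server.
        resp = requests.post(f"{API}/floating_ips/{floating_ip}/actions",
                             headers=HEADERS,
                             json={"type": "assign",
                                   "droplet_id": standby_droplet_id})
        resp.raise_for_status()
        return resp.json()

    # Typically invoked by a health-check watchdog when the primary fails:
    # fail_over("203.0.113.10", 123456)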

Author: Gabriel Lando

 

fig 1.1 and fig 1.2 courtesy of hubspot.net

fig 2.1 courtesy of webworks.in

fig 3.1 courtesy of technet.com

The Hottest Mac Tools For The New-Age IT Manager


Handling IT needs, integrating your workspace and managing your Macs can be a breeze with the help of these essential tools.

Mactracker

Need to track all the Mac computers on your network? No problem! Just install Mactracker and let it do the job. It aggregates information on a variety of accessory units such as your mouse, printer, Wi-Fi cards and even scanners, while keeping track of the technical specifications of your Mac systems. Without it, it's safe to say that most IT managers would be at a loss as to how best to upgrade or utilize their machines.

Apple’s Disk Utility Tool and Software Restore

When it comes to monolithic cloning, the Apple Disk Utility tool and ASR (Apple Software Restore, accessible only from the command line) are the way to go. Between the GUI and the command-line diskutil tool, they help clone systems for easy configuration of multiple computers on the same network, while allowing system administration as well. Both of these tools can be complemented with accessory tools such as Carbon Copy Cloner or Blast Image Config (for image capture, deployment, and ASR session setups).

Property List Editor

Property list editors such as PlistEdit Pro for Mac are an essential tool for network admins on any IT panel, particularly in situations that demand editing system or application preferences. This GUI tool, which allows editing of XML .plist preference files, is available for both Mac and Windows servers. However, if you would prefer making these modifications from within an app and then transferring the resulting .plist files, you could choose an application like Pref Setter for Mac, geared towards viewing or editing preference files on OS X platforms.

NetInstall and NetRestore

These features of the Mac OS X Server were built on the 'NetBoot' system, which allows servers to host boot volumes and machines to boot directly from the network. NetInstall, configured as a utility and admin tool for booting OS X installers, can perform pre- as well as post-installation tasks (binding, installation, partitioning, etc.). NetRestore, on the other hand, works a little like ASR to deploy specific images or offer image selections from an available database. By the way, AutoCasperNBI is a great tool which can help you automate the process of creating NetBoot images.

WiFi Explorer

This inexpensive tool by Adrian Granados functions as a wireless network scanner, helping diagnose and troubleshoot connectivity and performance issues. Not only can it detect channel conflicts, but it can also take care of configuration or signal overlap problems. WiFi Explorer, with its clean and simple UI, can be a great tool to have in the arsenal for sorting out any network-related issues on your systems. For additional help in the network department, you could also install Angry IP Scanner, which can resolve IP address hostnames, determine MAC addresses, scan ports, and report back on existing subnet IP connections.

TextWrangler

Config files are a headache for most IT managers, so is it any wonder that there are some amazing tools to take care of them for you? TextWrangler, for example, is a great free application that highlights the lines relevant to you and integrates search with find-and-replace features. Whether you need to make changes to UNIX configuration files on the Mac, or have corrupted files that your systems can no longer read, TextWrangler is a one-stop solution for simplifying tasks that traditionally take a very long time and a lot of patience.

Apple Remote Desktop

The remote desktop option from Apple can be a little pricey (at a single-license $299 package), but it's a worthy investment for IT admins. It has the capability to report on, identify and virtually track minute details including application usage, hardware inventories, and user access. Unsurprisingly, it is one of the most essential tools on the list of must-haves in this category. With the ability to monitor remote Mac computers including their overall status, access troubleshooting shares, take remote (and completely hidden) control of systems, and even send global alert messages, it has a great feature set for you to consider. Of course, you could also use other applications such as CoRD for remote access, but ARD is the most stable OS X desktop management system out there – not just for remote assistance, but for software distribution and management of systems.

AutoDMG

This is a useful tool for administrators looking to produce a fresh, clean boot image for OS X. The system works by taking an OS X installer, building a clean system image, and then deploying it with the help of additional software like DeployStudio. All you need is an installer to create an image for all the computers under your control, which makes it a useful tool for system managers and admins.

Active Directory Suites

OS X comes with a built-in Active Directory client for joining Active Directory domains, allowing single sign-on options. Of course, you can also use a dual-directory setup from Apple, joining the Open Directory and Active Directory for secure access to resources. With Apple's Profile Manager feature, you have the choice of both iOS device management and Mac client management without the hassle of opting for directory services. Apple's Active Directory tools are good, but not perfect – the lack of client management support beyond basic passwords, missing DFS browsing, and similar gaps can be severely crippling.

To expand your access to and management of your devices, you might consider other options such as PowerBroker Identity Services Open Edition and Centrify Express, which offer broader authentication and access capabilities. If you want to integrate client management without going through complex dual-directory setups or schema extensions, their DirectControl and Enterprise Editions, respectively, are a good option.

Author: Rahul Sharma

Image Courtesy: KROMKRATHOG, freedigitalphotos.net

Top Ten Monitoring Tools for System Admins

Local area network administrators, network admins, and system administrators are the individuals within a company who oversee the performance of the organization's networks. These professionals are expected to gather and assess information from network users so that they can identify and fix problems. Fortunately, admins do not have to struggle alone; there are some extremely valuable tools available today that can make the lives of system admins far easier. Here are the top ten tools.

  1. Microsoft Network Monitor

This is a packet analyzer that provides admins with the means to capture, view and assess network traffic. It's especially handy for troubleshooting problems with applications and network issues. Some of the main features include support for over 100 public and Microsoft proprietary protocols, capture sessions, and more. Moreover, Microsoft Network Monitor is surprisingly easy to use: simply choose which adapter to bind to from the main window, then click New Capture to initiate a new capture tab.

  2. Pandora FMS

Pandora is a network monitoring, performance monitoring and availability management service that can be used to watch your communications, applications and servers. It has a detailed correlation system for events that allows users to design alerts that are based on events taken from different sources. It also ensures that administrators are alerted before any issues begin to escalate too far.

  3. Splunk

Splunk is a data analysis and collection platform that allows system admins to gather, monitor and analyze data that has been taken from a number of different sources within your network, such as your devices, services, and event logs. You can create alerts that will notify you when something goes wrong, or use the extensive search function to make the most of any data that you do collect. Additionally, Splunk also supports the installation of apps to extend functionality within the system.

  4. Nagios

Nagios is a great tool for network monitoring which helps ensure that all applications, critical systems and services are consistently up and running. It comes with features such as event handling, reporting and alerting. With Nagios Core, you can implement plugins that allow you to monitor additional metrics, applications and services, as well as add-ons for graphs, data visualization, load distribution and database support. The free version of Nagios is generally a good option for smaller organizations and can monitor as many as seven nodes at once.

  5. BandwidthD

BandwidthD monitors the TCP/IP network usage in your business and displays the data it has gathered in various forms, such as tables and graphs, over disparate time periods. Each protocol, such as UDP or HTTP, is color-coded to ensure easy reading. What's more, this service can run discreetly in the background without disrupting your normal activities. It is easy to download and install; once the program is up and running, give it a few moments to monitor your network traffic.

  6. EasyNetMonitor

This tool is incredibly lightweight and simple for those who want to monitor remote and local hosts to determine whether they are active or not. It is especially useful when it comes to monitoring critical servers from a desktop, and it provides immediate notifications via popups and log files if a specific host does not respond to a ping. Once you've added the machines that you wish to monitor, remember to configure your notification settings and the ping delay time.

  7. Fiddler

Fiddler is a tool for web debugging that can capture HTTP traffic as it moves between specific computers and through the internet. This tool allows you to carefully evaluate any outgoing and incoming data, as well as giving you the means to modify responses and requests before they hit your browser. The service also gives you detailed information regarding your HTTP traffic, meaning that it can be used to test your website performance and your web application security.

  8. Angry IP Scanner

IP Scanners are important tools, and the Angry IP Scanner is a free standalone application that allows you to scan ports and IP addresses. It is used to find out which hosts are active, as well as obtaining information about them, including their host name, ping time, MAC address and so on. By going into the Tools tab, you can decide which information you want to collect from any scan.

  9. NetXMS

NetXMS is a multiplatform network monitoring and management system that provides performance monitoring, event management, alerting, reporting and graphing for your complete IT infrastructure. The main features of this service include support for multiple database engines and operating systems, distributed network monitoring, business analysis tools and auto-discovery. The program also allows you to run a management console or web-based interface if necessary. Once you've downloaded and logged into NetXMS, go into the Server Configuration window and change the settings according to the requirements of your network. Then you'll be able to run the network discovery option, which causes NetXMS to automatically find devices that exist on your network.

  10. Xirrus Wi-Fi Inspector

The Xirrus Wi-Fi Inspector can be used to search for networks in your area, as well as to control, troubleshoot and manage various connections. Xirrus verifies Wi-Fi coverage, locates Wi-Fi enabled devices and identifies any rogue access points around your business. What's more, the Xirrus program comes equipped with quality tests, speed tests and tests for connection efficiency. Once you have launched the Inspector and chosen an adapter, a list of Wi-Fi connections will be displayed in your Network pane.

Author: Rahul Sharma