Archive for the ‘Productivity’ Category

What Every Admin Must Know About Windows 10

Windows administration isn’t easy, not by any stretch of the imagination. Thankfully, there are native tools to assist administrators in getting stuff done, along with third-party plugins and tools for the jobs you believe Windows’ built-in functions don’t handle best. Windows administration, however, extends far beyond basic chores such as managing multiple user accounts with their specific privileges, running disk defragmentation, clearing out caches, and keeping the system safe from viruses. To be a true expert, you need to be aware of all the system admin tools Windows 10 offers. Here’s a guide to help you out.

Task Scheduler

Windows uses Task Scheduler internally to manage the execution of tasks that need to run only occasionally, or at very specific times. Of course, admins can use Task Scheduler to take control of time-specific tasks themselves. Another useful application of Task Scheduler is finding potential malware running in the background. Cleaning auto-start locations is a basic activity, but malware has become adept at hiding its startup entries, and scheduled tasks are a favorite hiding spot. Checking Task Scheduler helps admins identify potential malware and weed it out of the system.
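For instance, here is a minimal Python sketch (assuming a Windows host with the stock schtasks utility; the directory heuristics are purely illustrative) that dumps every scheduled task and flags any whose action launches from a user-writable location:

    # A minimal sketch (Windows only) that dumps all scheduled tasks via the
    # built-in schtasks utility and flags any whose action runs from a
    # user-writable location, a common hiding spot for malware persistence.
    import csv
    import io
    import subprocess

    SUSPECT_DIRS = ("\\appdata\\", "\\temp\\", "\\downloads\\")  # illustrative heuristics

    def suspicious_tasks():
        # /fo CSV /v emits one verbose CSV row per task.
        out = subprocess.run(
            ["schtasks", "/query", "/fo", "CSV", "/v"],
            capture_output=True, text=True, check=True,
        ).stdout
        for row in csv.DictReader(io.StringIO(out)):
            action = (row.get("Task To Run") or "").lower()
            if any(d in action for d in SUSPECT_DIRS):
                yield row["TaskName"], row["Task To Run"]

    if __name__ == "__main__":
        for name, action in suspicious_tasks():
            print(f"{name} -> {action}")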

Event Viewer

Windows 10 Event Viewer is all a system admin needs to get complete visibility of what’s going on inside the computer. Event Viewer provides all the insight you need to troubleshoot an issue. You can type ‘event’ into the search box, and then open Event Viewer to load it. The window has three panes – the leftmost houses log types and views, the middle pane houses logs, and the right pane shows a list of action items. The five types of events listed in the left pane are:

  • Application events – Events related to programs.
  • Security events – Events related to security audits, such as logon attempts.
  • Setup events – Events related to application and Windows setup.
  • Forwarded events – Events forwarded from other computers on the network.
  • System events – Events related to Windows system files and services.

Mostly, you will depend on Event Viewer to get basic information about a problematic process, and then conduct deeper research on how to solve the issue.
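As a starting point, here is a minimal Python sketch (assuming a Windows host; wevtutil is the built-in command-line counterpart to Event Viewer) that pulls the most recent error-level events from the System log:

    # A minimal sketch (Windows only) that pulls the ten most recent
    # error-level events from the System log via the built-in wevtutil CLI.
    import subprocess

    def recent_system_errors(count=10):
        cmd = [
            "wevtutil", "qe", "System",
            "/q:*[System[(Level=2)]]",  # XPath filter: error-level events only
            f"/c:{count}",              # number of events to return
            "/rd:true",                 # newest first
            "/f:text",                  # human-readable output
        ]
        return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

    if __name__ == "__main__":
        print(recent_system_errors())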

Disk Management

Windows 10’s Disk Management is the latest iteration of the well-known disk management utility included in all previous Windows versions. This tool is invaluable for system admins who need to manage hard disk partitioning without rebooting the system. It helps you create, delete, and format disk partitions. You can change drive paths, set partitions as active, extend or shrink partitions, and initialize a new disk before using it.
With the Disk Management utility, you can convert empty dynamic disks to basic disks, and convert empty MBR disks to GPT disks. If you wish to, for instance, change the drive letter for your USB drive, you can make it show up as U: here, instead of the default letter, as sketched below. Also, for issues such as a drive not working, Disk Management is the first point of check for a system admin.
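A drive-letter change like that can also be scripted with the built-in diskpart tool. A minimal Python sketch, assuming an elevated Windows prompt and placeholder letters E and U:

    # A minimal sketch of scripting the drive-letter change described above
    # with diskpart (Windows only, requires an elevated prompt). "E" and "U"
    # are placeholders for the volume's current and desired letters.
    import subprocess
    import tempfile

    def reassign_letter(current="E", new="U"):
        script = f"select volume {current}\nassign letter={new}\n"
        with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
            f.write(script)
            path = f.name
        # diskpart /s runs the commands from the script file non-interactively.
        subprocess.run(["diskpart", "/s", path], check=True)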

Resource Monitor

For a deep dive into the processes running on a computer and an understanding of where resources are being consumed, turn to Resource Monitor. It’s easier to use than PerfMon and offers more insight than Task Manager, which makes it a valuable resource for a system admin. Use it to understand resource consumption when you run applications or test different configuration settings. For troubleshooting Windows performance issues, too, Resource Monitor is a key source of insight.
On the right side of the Resource Monitor Memory tab, you will see graphs for Used Physical Memory, Hard Faults, and Commit Charge. Check the Processes table on the Memory tab for a list of currently running processes, with their memory usage broken down for you. As long as you know what to look for, Resource Monitor puts together the information you need to debug most Windows performance issues.
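To illustrate what the Memory tab’s process table shows, here is a minimal Python sketch using the third-party psutil package (our assumption for the example, not something Resource Monitor itself requires):

    # A minimal sketch that mirrors the Memory tab's process table: list
    # running processes sorted by resident memory, roughly the Working Set.
    import psutil

    def top_memory_consumers(n=10):
        procs = []
        for p in psutil.process_iter(["pid", "name", "memory_info"]):
            mem = p.info["memory_info"]
            if mem is None:
                continue  # access was denied for this process; skip it
            procs.append((mem.rss, p.info["pid"], p.info["name"] or "?"))
        for rss, pid, name in sorted(procs, reverse=True)[:n]:
            print(f"{name:30} pid={pid:<8} {rss / 2**20:8.1f} MiB")

    if __name__ == "__main__":
        top_memory_consumers()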

Shared PC Modes

Windows 10 offers a pretty useful shared PC mode. It makes it easy for administrators to handle unique requirements such as using a computer for customer access, as a reception or help desk machine, or as a kiosk. In scenarios where multiple users need to work on the same computer to perform vastly different tasks, shared PC mode emerges as a good option. In shared PC mode, a Windows 10 computer is designed to be close to maintenance- and management-free, making sure system admins have enough time and mind space to perform activities that add more value.

User Experience Virtualization (UE-V)

Complementing the shared PC concept is the User Experience Virtualization (UE-V) feature. It allows system admins to set up a computer for customized usage by individual users who don’t wish to use a roaming user profile. With UE-V, it’s possible to maintain different settings for Microsoft Store appearance, background picture, accent colors, font sizes, and languages for different users. The custom settings are stored on a centrally managed network share, and when users log in, their corresponding settings are applied.

AAD Joined Machines

Bring your own device (BYOD) is an enterprise reality. It’s also common for enterprises to engage contractors and to have employees working from home on their personal computers. When so many computers that don’t exist on the enterprise domain are used for routine work, the system admin’s job becomes rather cumbersome. However, Microsoft’s Azure Active Directory (AAD) can help admins manage and secure systems that can’t be joined to the domain. This also makes remote support easier for employees.

Concluding Remarks: The world of system administration for Windows computers is expansive, and the tools we have covered in this guide are certainly not exhaustive. However, being comfortable with these tools can help admins perform most of the routine responsibilities they’re likely to face in an enterprise setup.

Author: Rahul Sharma

Sources
https://techtalk.gfi.com/10-new-windows-10-features-for-sysadmins/
https://docs.microsoft.com/en-gb/windows/configuration/ue-v/uev-for-windows
https://techtalk.gfi.com/how-to-get-the-most-out-of-resource-monitor-in-windows-10/
https://www.disk-partition.com/windows-10/partition-hard-drive-windows-10-0528.html

FileCloud Brings Browser-Based Editing and Coauthoring to On-Premises Microsoft Office Files


We are excited to announce that FileCloud has officially joined the Microsoft Cloud Storage Partner Program. As a program partner, FileCloud will enable Office 365 business users to open, edit, and save changes to remote Office files stored in FileCloud using a web browser. Importantly, users can co-author and collaborate in real time on Office documents stored in FileCloud, from anywhere.

Even today, when we claim ubiquitous cloud adoption, billions of files remain on-premises, behind a firewall. In today’s mobile-first world, employees and customers want to access, edit, and collaborate on files from any device. Currently, users can’t remotely co-author or edit Microsoft Office files that sit behind a firewall unless the organization moves the files to a cloud storage service such as Dropbox or OneDrive. FileCloud has joined the Microsoft Cloud Storage Partner Program to integrate with Office 365 and offer a simpler solution. With this integration, users can remotely co-author and collaborate on Office documents even when they are stored behind a firewall.

Since its inception, FileCloud has continued to break barriers and offer customers unique choices. FileCloud gives Office customers a seamless experience when working with Office documents, regardless of how they are accessed or where they are stored.

“We are excited to be part of the Microsoft Cloud Storage Partner Program and to use the Microsoft Office 365 capabilities for our customers across the globe,” said Madhan Kanagavel, CEO of CodeLathe. “Browser-based editing of Microsoft Word, Microsoft PowerPoint and Microsoft Excel files, as well as working simultaneously on a document via co-authoring features of Office 365, will allow our customers to be instantly more productive and will be a great addition to many existing integrations we have developed for Office, including Microsoft Outlook and mobile apps.”

Even as files and workloads rapidly move to the cloud, the majority of organizations continue to store significant portions of their data on-premises. FileCloud offers flexible options: customers can pick a storage model (on-premises, cloud, or hybrid) that suits their business needs. With this new Office integration, FileCloud brings real-time collaboration and remote co-authoring to any file, regardless of where it is stored.

Rob Howard, Director, Microsoft Office Ecosystem at Microsoft Corp. said, “We are excited to have FileCloud’s participation in the Cloud Storage Partner Program to further extend the availability of Microsoft Office 365 to FileCloud’s users. It’s a unique integration that connects FileCloud’s technology directly to Office Online, so that customers have a great experience for reading and editing Office documents stored within their environment.”


Three Ways to Increase Your Office 365 Sales Revenue

Managed services providers (MSPs) have always played a prominent role in the information technology (IT) industry, managing and taking responsibility for a defined set of client services, either proactively or as needed. However, innovations in big data, social media, and mobile applications have led private cloud use to rise from 63 percent to 72 percent, while hybrid cloud use increased from 58 percent to 67 percent. This has driven MSPs to transform into managed application and cloud services providers.

As various components of their clients’ IT infrastructure migrate to the cloud, MSPs have had no choice but to provide their own cloud services, resell cloud capabilities, and manage hybrid cloud environments. The sale of cloud-based IT solutions continues to be a major challenge for many MSPs, and a cause for concern, as the very idea of replacing sizeable projects with a few hundred dollars a month in MS Office 365 subscription revenue makes no sense at first glance. But it is possible for an MSP to increase its Office 365 sales revenue; all that’s required is a change in perspective and the three tips mentioned below.

Help in Migration – On-premise to Cloud

Most companies are used to running all their collaboration and workflows in an on-premise arrangement. However, with the IT industry changing, companies realize they could scale quicker and more efficiently by moving to the cloud with MS Office 365. Many find they have neglected major IT upgrades while sticking to their old infrastructure and are now trying to catch up. So, when shifting to the cloud, they need someone capable of undertaking a mid-scale, challenging migration with no disruption to business activities. They need an IT services company that can handle every aspect, from planning to training, implementation to support.

Hybrid migration is often the best option in this case, providing users with a unified, seamless experience whether they are operating on-premise or in the cloud. IT needs to adapt on the go with close to zero disruption to the business, and users must be moved carefully to Office 365 in the cloud. One of the major benefits of moving to Office 365 is financial savings: not only does moving to the cloud eliminate on-premise storage, along with the associated maintenance and administrative costs, it also means the company can avoid expensive upgrades to its storage architecture to keep pace with growth.

Microsoft Office 365 ensures companies no longer have to worry about future migrations or spend money to stay on the most advanced software; updates are implemented automatically and seamlessly. An Office 365 license enables BYOD (bring your own device) users to install the programs on multiple computers, smartphones, and tablets – a major benefit to users.

Migrating to the cloud gives every user in the company access to Office 365 apps and documents. Because the Office 365 interface stays relatively consistent irrespective of the device being used, employees require no separate training for different devices, which saves money on training. Apart from cloud-powered email, spreadsheet applications, and word processing, the MS Office 365 suite includes online storage, video conferencing, instant messaging, and collaboration tools. What’s great about Office 365 is that it is always improving; usually, companies must pay for upgrades, but with Microsoft, updates occur automatically.

Become part of Microsoft Cloud Solutions Provider Program

Before a managed services provider can sell MS Office 365, it must determine which Microsoft program to use. Numerous value-added distributors and master cloud service providers advise channel partners to use the Microsoft Cloud Solution Provider (CSP) program. The main benefits of this program over other partner licensing models are retained ownership of the customer relationship and higher commissions.
Unlike other Office 365 programs, which require partners to turn the sale, along with support and billing, over to Microsoft, the MS CSP program allows channel partners to retain ownership of their customer relationships. This makes it a lot simpler to bundle subscriptions with professional services, like helpdesk support.
Under the present Microsoft Advisor program, channel partners can earn only 3 percent margins on Office 365 subscriptions. The CSP program offers partners margins of 11 percent or higher. Master cloud service providers and distributors sometimes add economic incentives to lure channel partners to sign up with them, which raises the first-year CSP subscription commission to over 16 percent.

Offer value added service – Backup

When you move to the cloud, data associated with MS Word, Excel, Outlook, SharePoint, PowerPoint, and Skype is synced in Microsoft’s cloud. However, this does not mean you can do away with a proper backup and disaster recovery (BDR) strategy for your Office 365 data and apps. Putting value-added services in place around Office 365 automatically increases revenue while adding an extra layer of protection and security, keeping customers’ data and apps safe during the move to the cloud. Microsoft’s own backup and recovery services might not always align with a company’s customer service level agreements (SLAs), data protection and security requirements, or business needs.
The best way to prevent cloud offerings from cutting into the company’s BDR margins is to stop treating BDR as a task-based IT function and view it instead as a strategic business process. When customers are walked through higher-level business ideas, like business continuity, RTO (recovery time objective), and RPO (recovery point objective), many of the objections that arise around backing up cloud services take care of themselves. Broadening the discussion from backup to superior solutions, business strategies, and services makes it easier to influence higher-level decision-makers and command better margins.

Final Thoughts

Microsoft Office 365 cannot be considered a passing fad; not only is it rising in importance, it is slowly becoming the new standard for businesses. Most businesses now have part of their IT on-premise and a portion in the cloud, and they need the two environments to work as one platform. A lot of skill is needed to manage that complexity and achieve this outcome, which indicates the channel isn’t going anywhere for a long time.

Author: Rahul Sharma

Sources:

http://mspmentor.net/blog/2-ways-boost-your-office-365-sales-revenue

http://affiliatedtechno.com/keep-up-with-growth-by-moving-to-the-cloud-with-microsoft-office-365/

https://www.getfilecloud.com/blog/2017/04/mcsp-the-future-of-managed-service-providers-msp/#.WP7iGWmGO1t

 

How to Pitch Managed Service Providers’ (MSPs) Role in Office 365

The market appetite for cloud-based solutions is high. Still, it is important to pitch yourself properly if you want to make your mark as an MSP for a cloud-based solution like Office 365. Marketing is a critical part of your success as an MSP for Office 365.

Most MSPs state they get clients mostly through word of mouth. However, given the dominant presence of eCommerce, most consumers in the B2B space are moving toward online queries, web resources, and internet searches to find services. So how do you pitch yourself properly as an MSP? Here are some tips:

Why is an online presence important?

According to a study by Google, about 89% of all B2B buyers use the internet to do research and gather background information. The bottom line is that your prospects are highly likely to be looking online, whether for a provider or for new services. Generating new leads and establishing an online presence should be a top priority. Even if you are not intending to build new relationships and win new clients, you should still spend enough time upselling to and nurturing your existing client base.

Team up with the right provider

Getting and converting leads is no joke. You need tools like cost estimators, product comparisons, and online banner ads to position your offer in the right space in the market and convince prospects of your credentials. That’s why it is important to team up with a partner that can provide such tools and help you speak the right language with customers.

Come up with your unique value proposition

When building your brand identity, it is important to focus on your unique value proposition first. It is the foundation of all your marketing efforts. The unique value proposition is a clear statement of the benefits a client gains by working with you, and it should illustrate how you or your product solves business problems. It is a statement made as a claim and an introduction, not a slogan. Build this statement by thinking about how a client sees you as a business partner.
A statement like “Because we are good” or “We get it done for you” would not be a good enough value proposition. Consider instead Skype’s UVP (unique value proposition): “Wherever you are, wherever they are – Skype keeps you together.”

Choose a target vertical

A philosophy of targeting everyone may not work. Instead, pick a target vertical. By picking a target vertical, you ensure your value proposition is focused and received clearly by prospects. For instance, suppose you want to target the information and media sector; this focus will be reflected in your case studies, marketing, communications, and UVP. It will also be easier for you to be perceived as an expert if you focus on one sector.

Identify the problems faced by your clients

The ability to ask good questions and listen to the answers differentiates a masterful salesperson from a mediocre one. The SPIN (Situation, Problem, Implication, Need-payoff) approach can help you ask the right questions. The way you build your questions is a very important factor in getting quality answers. The SPIN approach consists of four types of questions asked in a particular order.

  • Situation questions

These are questions asked to learn your prospect’s current situation. Here, you ask for facts and background to identify potential problems that can be explored. These are some questions you can try:

  • Please tell me about your current solution for email and collaboration.
  • Indicate your satisfaction level with the current solution.
  • How do you estimate the value of employee productivity in monetary terms?
  • Do the employees at your organization work on mobile devices and remotely?
  • Please provide more details on these answers.

  • Problem questions

Here, you ask about the difficulties, problems, issues, and needs the client faces. Questions of this type help you identify and understand the solutions and alternatives you can offer.

  • How much trouble do you have handling the workload associated with the solution?
  • How worried are you about exceeding or reaching your current capacity?
  • How much time have you or your employees lost due to availability issues? Please characterize the loss as High, Medium, or Low.
  • Do you believe information security-related issues are a bottleneck to your company’s adoption of cloud-based solutions for collaboration and email?
  • Are you wasting considerable time trying to find the right version and the right document?

  • Implication questions

These are the questions where you ask about effects, consequences, and implications.

  • As far as you can estimate, how much employee time is being lost to inefficient business processes?
  • Do you understand how valuable your employees’ files are to the business?
  • Would it be helpful to automate and centralize the business processes into one location from where they would be accessible anywhere?
  • Would you like to reduce the time and costs associated with your email system and make the system more reliable?
  • Would your business be more productive if you could provide secure access to all your employees, so they could work wherever they want and whenever they want?
  • Would it be a useful capability for employees to work at the same time on the same file from two locations?

Author: Rahul Sharma

Image Courtesy: nokhoog_buchachon, freedigitalphotos.net

Sources

http://info.sherweb.com/Office-365-partner-profit-ebook.html

http://mspmentor.net/blog/10-office-365-sites-every-msp-should-bookmark

http://mspmentor.net/managed-services/032614/dont-sell-microsoft-office-365-sell-your-brand

http://blog.kaseya.com/blog/2016/07/12/how-msps-can-make-office-365-management-easy/

Can Artificial Intelligence (AI) Enrich Content Collaboration? Or Is It Just Lipstick?


Is Artificial Intelligence (AI) the new lipstick? Sure, it is being put on many pigs. Can artificial intelligence improve Enterprise File Sharing and Sync (EFSS), Enterprise Content Management (ECM), and collaboration? We want to explore whether there are obvious collaboration use cases that can be improved using machine learning. In this article, we will not venture into AI techniques, or the impact and evolution of AI. We are interested in exploring how EFSS can benefit from “machine learning” – a technique that allows systems to learn by digesting data. The key is ‘learning’: a system that learns and evolves versus one that is explicitly programmed. Machine learning is not a new technology; many applications, such as search engines (Google, Bing), already use it.

In the past year, many large players, such as Google, Amazon, and Microsoft, have started offering AI tools and infrastructure as a service. With many of the basic building blocks available, developers can focus on building the right models and applications. Here are a few scenarios in Enterprise File Sharing and Sync (EFSS) and enterprise content collaboration where we can apply machine learning soon.

Search

Search is a significant part of our everyday life. Google and Amazon have made the search box the center of navigation; a decade ago, the top half of the Amazon homepage was filled with links, which have since been replaced by a search box at the top. However, search hasn’t yet taken a significant role in enterprise collaboration. Every day, we look for files that don’t fit a simple search query. Think of a search like ‘the design proposal from vendor X I received six months back.’ Today, we manually sort through files to find the one that satisfies such criteria. We could instead use simple query processing, a crawler, and a sophisticated ranker to surface file search results based on estimated relevance, and such a search feature could continue to learn and provide better results each time. Many machine learning algorithms and techniques are already available to index files, identify relevance, and rank search results; applying them to enterprise scenarios simply requires a focused effort from solution providers.
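As a toy illustration of the query-processing and ranking idea, here is a minimal Python sketch using scikit-learn (our choice for the example, not a prescription): index file texts with TF-IDF and rank them against a free-form query by cosine similarity.

    # A toy sketch: index file texts with TF-IDF and rank them against a
    # free-form query by cosine similarity. The corpus is hypothetical.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    documents = {  # file name -> extracted text
        "vendor_x_proposal.docx": "design proposal from vendor x for the new portal",
        "q3_budget.xlsx": "quarterly budget spreadsheet with cost projections",
        "team_offsite.pptx": "slides for the team offsite agenda",
    }

    vectorizer = TfidfVectorizer(stop_words="english")
    matrix = vectorizer.fit_transform(documents.values())

    def rank(query):
        scores = cosine_similarity(vectorizer.transform([query]), matrix).ravel()
        return sorted(zip(documents, scores), key=lambda pair: -pair[1])

    print(rank("design proposal from vendor x"))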

Predict and organize relevant content

A technique in machine learning called unsupervised learning involves building a model by supplying it with many examples, but without telling it what to look for. The model learns to recognize logical groups (clusters) based on unspecified factors, revealing patterns within a data set. Imagine your files automatically organized by the projects you are working on, with any file’s related files just one click away. Wouldn’t such a feature deliver a significant productivity boost?
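Here is a minimal Python sketch of that clustering idea, again using scikit-learn and a made-up toy corpus; in practice the inputs would be embeddings of real files:

    # A minimal sketch of the clustering idea: group file vectors with
    # k-means so related files land in the same (unlabeled) cluster.
    from sklearn.cluster import KMeans
    from sklearn.feature_extraction.text import TfidfVectorizer

    files = [
        "api gateway design notes", "gateway load test results",  # project A-ish
        "payroll export march", "payroll export april",           # project B-ish
    ]
    X = TfidfVectorizer().fit_transform(files)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
    for name, label in zip(files, labels):
        print(f"cluster {label}: {name}")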

Collaboration

Collaboration across different languages will be simplified by the many advanced translation tools available today. The Google Cloud Translation API provides a straightforward way to translate a string to and from many languages. Translation of user comments and metadata, such as tags and image information, can be very useful for any large organization working with partners and vendors across the globe. Combined with machine learning, translation within an enterprise can improve by learning domain knowledge (medical, law, technology, etc.) and internal jargon. Systems can extract the right metadata, apply domain knowledge, and translate it for employees, partners, and customers, so they can easily communicate and collaborate.
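For example, translating a user comment with the Google Cloud Translation API mentioned above takes just a few lines in Python (this sketch assumes the google-cloud-translate package and configured application-default credentials):

    # A minimal sketch of translating a user comment with the Google Cloud
    # Translation API (assumes configured application-default credentials).
    from google.cloud import translate_v2 as translate

    client = translate.Client()
    comment = "Bitte prüfen Sie die Spezifikation im Anhang."
    result = client.translate(comment, target_language="en")
    print(result["translatedText"])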

User Interface

Interaction with EFSS applications need not be just clicks and text. Users can have more engaging experiences that include conversational interactions; for example, a user could just say, “open the sales report that I shared with my manager last week.” Personal assistants such as Siri, Cortana, and Alexa already provide such conversational interfaces for many personal and home scenarios. Though it sounds complex, some of the underlying technology, such as automatic speech recognition for converting speech to text and natural language understanding, is available through Amazon’s APIs. Converting the conversation into an actual query might not be as complex as it sounds.
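As a toy sketch of that last step, once speech has been converted to text, the utterance can be turned into a structured file query; the patterns and query fields below are entirely illustrative:

    # A toy sketch of that last step: turn a transcribed utterance into a
    # structured file query. Patterns and query fields are illustrative only.
    import re

    def parse_utterance(text):
        query = {}
        if m := re.search(r"open the (.+?) that", text):
            query["keywords"] = m.group(1)
        if re.search(r"shared with my manager", text):
            query["shared_with"] = "manager"
        if m := re.search(r"last (week|month|year)", text):
            query["modified_within"] = m.group(1)
        return query

    print(parse_utterance("open the sales report that I shared with my manager last week"))
    # {'keywords': 'sales report', 'shared_with': 'manager', 'modified_within': 'week'}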

Security and Risk Assessment

Machine learning has an excellent application in monitoring network traffic patterns to spot abnormal activities that might indicate a cyber-attack or malicious behavior. Solutions like FileCloud use some of these techniques to identify ransomware and highlight potential threats. Similar techniques can identify compliance risks by analyzing whether documents being shared contain personally identifiable information (PII), such as credit card numbers, or protected health information (PHI). Systems can predict and warn of security risks before a breach happens.
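A minimal Python sketch of the anomaly-detection idea, using scikit-learn’s IsolationForest on made-up per-user activity features (this is our illustration, not FileCloud’s actual implementation):

    # A minimal sketch of anomaly detection on per-user activity features
    # with scikit-learn's IsolationForest. All numbers are made up.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Rows: [files_downloaded_per_hour, distinct_folders_accessed]
    normal_activity = np.array([[3, 2], [5, 3], [4, 2], [6, 4], [2, 1], [5, 2]])
    model = IsolationForest(contamination=0.1, random_state=0).fit(normal_activity)

    todays_activity = np.array([[4, 3], [250, 40]])  # second row looks like exfiltration
    print(model.predict(todays_activity))  # 1 = normal, -1 = anomaly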

These ideas are just a linear extrapolation into the near future, yet even these simple extrapolations look promising and interesting. Many predict that, within a few years, almost every device and service will have intelligence embedded in it. In the future, the concept of files and folders might be replaced by some other form of data abstraction. As AI and collaboration continue to evolve, the resulting applications may turn out exponentially better than our linear extrapolations, and our current thoughts could appear naïve. We just hope it doesn’t evolve the way Musk puts it: “with artificial intelligence, we’re summoning the demon.”


Why You Need To Stop Using FTP Right Now


How FTP Started

FTP dates back to the inception of the internet, a time when developers were working on experiments such as the Transmission Control Protocol and Internet Protocol (TCP/IP). It is through this work that they divided methods of network use into two categories: indirect and direct.

Users could use direct network applications to access a remote host as though it were local, creating the illusion that the network was nonexistent. As work progressed, advances in direct network applications gave rise to innovations such as Telnet. Indirect network applications, on the other hand, let users fetch resources from a remote host, use them on a local system of their choice, and transfer them back to the remote host. One of these indirect network applications was FTP.

If you look at the history of FTP’s development, you will notice that it was a platform built to help users share files across networks and computers. Remember that internet development was then in its early stages, so everyone knew everyone, eliminating the need to focus much on security. At this time, the early ’70s and ’80s, FTP standards such as RFC 114, 172, 265, and 354 basically focused on defining basic commands, developing the means by which users could access FTP, and creating formal client-server functions. The issue of security, for example, was only lightly touched on when the developers defined firewall-friendly transfers and allowed users to authenticate file transfers with passwords.

Why You Need To Stop Using It Now

At a time when internet users face 35 million brute force attacks per day, and 88% of passwords can be cracked within 14 days, you cannot ignore FTP’s major security shortcoming. In fact, lack of security is only one of FTP’s many shortcomings.

Here are a few reasons why you need to stop using FTP now:

  • No Encryption

One of the design flaws that makes FTP insecure for corporate transfers is its lack of encryption. The most standard forms of FTP leave no room for encrypting important details such as your username, password, and file contents, leaving your business unprotected.

One could argue that FTP has been upgraded: FTPS is a secure extension that allows you to encrypt data sent over FTP. In practice, though, FTPS is easy to get wrong; with only a basic level of IT knowledge, you could still end up sending your vulnerable details over an unsafe network, in plain text.

Lack of encryption not only exposes your business to hackers, but also invites network sniffing. Through this process, hackers use your captured details to access the network and attack other unsuspecting users within it, causing a data breach. In case you didn’t know, data breaches are among the most expensive IT attacks a business can suffer: IBM and the Ponemon Institute, in their report “2015 Cost of Data Breach Study: Global Analysis,” estimated the average cost of a data breach in 2015 at $3.79 million USD, a 23% increase over the previous two years.
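If FTP must stay around in the short term, Python’s standard library at least makes the FTPS variant discussed above easy to use correctly. A minimal sketch, with placeholder host and credentials, that encrypts both the control and data channels:

    # A minimal sketch of an FTPS session with Python's standard ftplib:
    # the control channel is secured at login, and prot_p() encrypts the
    # data channel before any transfer. Host and credentials are placeholders.
    from ftplib import FTP_TLS

    ftps = FTP_TLS("ftp.example.com")
    ftps.login("user", "password")  # control channel secured via AUTH TLS
    ftps.prot_p()                   # switch the data channel to TLS as well
    with open("report.pdf", "rb") as f:
        ftps.storbinary("STOR report.pdf", f)
    ftps.quit()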

  • Transferring Big Files Is Problematic

FTP has a host of problems when it comes to big file transfers. The process is very slow, and sometimes, after a long wait, FTP fails to do the job altogether. For a busy and fast-moving corporate world, this inefficiency is unwelcome.

Worse still, if a large file transfer fails, FTP does not send a message notifying you of the incident. Your operators will not receive an alert, and the failure will go unnoticed until a physical check is done. The worst-case scenario of such an incident would be the loss of a high-end client, something you would not want to risk in your business.
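One way to compensate for these silent failures is to wrap the transfer with retries and an explicit alert. A minimal Python sketch (the alert here just logs; a real setup might email or page an operator):

    # A sketch of compensating for FTP's silent failures: retry the upload
    # a few times and raise an explicit alert instead of failing quietly.
    import logging
    import time
    from ftplib import FTP_TLS, all_errors

    def upload_with_alert(host, user, password, path, retries=3):
        for attempt in range(1, retries + 1):
            try:
                ftps = FTP_TLS(host)
                ftps.login(user, password)
                ftps.prot_p()
                with open(path, "rb") as f:
                    ftps.storbinary(f"STOR {path}", f)
                ftps.quit()
                return True
            except all_errors as exc:
                logging.warning("attempt %d/%d failed: %s", attempt, retries, exc)
                time.sleep(2 ** attempt)  # simple backoff before retrying
        logging.error("upload of %s failed after %d attempts; alert the operator", path, retries)
        return False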

  • Automation Can Expose You To Security Risks

FTP automation, which is script based, was designed to save businesses time and make their work easier by allowing the system administrator to put a username and password right into a script to prevent any hold-ups. The script is also designed to specify which files to send and where to send them.

While a script-based automation process is fine in principle, it is dangerous in the case of FTP: the script contains very sensitive business information, such as your username and password, so whenever it is shared, you significantly expose yourself to hackers.
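The safer pattern is to keep credentials out of the script entirely and inject them at run time. A minimal Python sketch reading them from environment variables (the variable names are placeholders):

    # A minimal sketch of the safer pattern: keep credentials out of the
    # script and read them from environment variables (or a secrets manager)
    # at run time, so the script can be shared without exposing anything.
    import os
    from ftplib import FTP_TLS

    host = os.environ["FTP_HOST"]          # set outside the script, e.g. in
    user = os.environ["FTP_USER"]          # the service's environment or a vault
    password = os.environ["FTP_PASSWORD"]

    ftps = FTP_TLS(host)
    ftps.login(user, password)
    ftps.prot_p()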

  • Proof Of Data Security Compliance Is Difficult

The biggest problem with FTP is its inability to offer traceability, due to its limited logging. This means your business would find it very hard to prove compliance with Sarbanes-Oxley, PCI DSS, and HIPAA, for example. Worse still, not all vendors are compliant with industry regulations.

What Is The Alternative?

FileCloud is a good alternative to FTP for two main reasons:

  • You Control Compliance

FileCloud is an on-premises Enterprise File Sharing and Sync (EFSS) solution that is self-hosted and privately run by your company administrators, giving you the ability to set your own protection limits. Better yet, the service is regulated by your corporate IT security policy, and it is one of the few such services compliant with EU data residency rules.

  • Easier File Sharing And Access

You can use the FileCloud apps for Android, iOS, and Windows to access your files from your mobile devices. Additionally, you can sync your documents across different devices, both online and offline; Mac, Windows, Linux, and even Netgear ReadyNAS devices support offline sync, for example.

Conclusion

You might dismiss security as an issue, thinking that large businesses are the ones most susceptible to cybercrime. In fact, CNBC has reported that hackers are increasingly targeting small businesses, perhaps because they don’t have the IT and security resources of a larger corporation.

As we have seen, FTP does not offer file security and is cumbersome with large file transfers. FileCloud, on the other hand, offers not only security but also easier file sharing and access, as well as compliance with industry regulations. If you are looking for a secure FTP replacement, click here to learn more about FileCloud – a modern alternative to FTP. Try it for free!

Author: Davis Porter

Avoid innovation blind spots – Unshackle from your customers


Tightly aligning your innovation pipeline to customer feedback can lead to innovation blind spots. In the famous words attributed to Henry Ford:

“If I had asked people what they wanted, they would have said faster horses.”

Recently, I re-read The Innovator’s Dilemma by Clayton M. Christensen. When I first read it 10 years ago, I was in business school and thought I had a good grasp of the concepts discussed in the book. To my surprise, when I re-read it recently, after having worked for roughly 10 years on various products and business problems, I took a lot more from it. I want to explore one of the topics from the book: unshackling companies from their customers.

Customers literally make or break businesses; without them, companies have no revenue and we would all be closing shop to pursue something else. With such unyielding power, customers influence every aspect of a company, from day-to-day decisions to long-term direction. Sometimes it is easy not to notice how customers drive a company’s operation and strategy. Their influence becomes very clear when we observe how we make decisions: what features to implement, when to release, why we praise our employees, how we decide bonuses, and the list goes on. For every decision, we have our customers in mind, and it is hard to escape our urge to make them happy by delivering value. What is the problem with just doing so?

This mode of operation works great for companies riding an established innovation curve. An established market provides the guidelines for improving products and satisfying customers. However, when a disruption emerges, the new product will at first be inferior in almost all known dimensions, such as price and performance, except for one new dimension. Established companies talk to existing customers, conduct a customer study, and conclude that nobody wants the new product. Then they continue to polish their existing products by listening to customers, and continue to over-deliver. For example, Comcast offers 500+ channels, while Netflix doesn’t match that selection but keeps improving its catalog, on demand and at a lower price point. The key point is that customers may not want hundreds of channels; Comcast is over-delivering, and will continue to add new channels. Eventually, Netflix’s offering improves enough that it might be good enough for the mainstream to switch from cable.

How can we identify and overcome this problem?

One symptom that you are over-delivering, and might be on the edge of the crossover, is your ability to price. If it is getting harder to make customers pay for new features, your red flags should go up. This is an important sign that the effort-versus-value-add-versus-willingness-to-pay curve is reaching saturation. It might be time to think about what new disruptions and dimensions are emerging in your space.

If you can’t simply listen to customers, what else can you do? One option is to watch what customers do and understand what they are trying to accomplish, rather than asking them what they want. By observing their tasks, you don’t limit yourself to the current product, and you can think beyond product features to truly understand customer needs. This still won’t completely solve the problem, but you will be better off. Often, you may have to venture into areas knowing they will cannibalize your current product. Developing such a culture of innovation is hard; how you set up an organization to foster innovation and risk-taking requires a whole other set of discussions that we will cover later.

Image Courtesy: usamedeniz, freedigitalphotos.net

Free might not be right for you – Put a price tag!


Among startups in the tech world, the conventional wisdom for success involves these steps: 1) introduce a product for free; 2) get tons of users; and 3) figure out a business model later, once you have enough users. Since this approach worked for Google, Facebook, and companies they acquired (e.g., WhatsApp), startups often assume the principle is universal and applies to them as well. This general model may work for many companies, but it won’t work for all of them.

An alternative approach is to try steps 1 and 3 together. Yes, put a price tag on your product either when you launch it or very soon after. Here are the top reasons why you should introduce a product with a price tag.

  1. Paying customers provide stronger feedback

When customers get something for free, they perceive the value of the product or service to be low, and they have low or no expectations of it. If the product fails, they just assume the product is bad and stop using it. However, if a customer pays for a product and it fails, they will send a flaming email or pick up the phone and call. Startups shouldn’t underestimate the importance of such feedback early in the product cycle.

  2. The ‘free’ world is polluted

With tons of products offered for free, pricing a product might actually help it stand out. Charging for a product or service shows confidence in your product and signals to the market that it is mature enough to command a premium.

  3. Start testing business models early

Startups often have to pivot multiple times to find the right product-market-business model fit; rarely does a company continue with the same product and business model it started with. Testing business models early helps companies experiment and innovate fast. Testing a business model only after the product matures limits your options and makes it hard to change course, so it is better to start early, even at launch.

  4. Gauge the value that you create

Pricing is an important tool for gauging the value a product or service delivers. Experimenting with price teaches you a lot about the value a product creates, the price elasticity among customers, and how your value compares with the competition.

  5. Culture of accountability

It’s not that free products don’t drive a culture of accountability within a company, but a paid product carries a stronger contract to drive accountability. Employees feel the pain when they see a paying customer ask for a refund or drop a subscription over a defect.

Charging a price doesn’t mean you should be rigid and haggle over price with your customers. While clearly stating your business model and pricing structure, you should offer free trials and limited offers as appropriate to attract customers. Good luck!

Image Courtesy: Sira Anamwong, freedigitalphotos.net

A Comprehensive Guide to Cloud Containers

One of the hottest topics in the world of cloud computing is definitely cloud containers. This evolving technology is changing the way IT operations are conducted, just as virtualization technology did a few years ago. However, containers are not an entirely new concept. Like VM technology, containerization originated on big iron systems. The ability to create running instances that abstract an application from the underlying platform by providing an isolated environment has been around since the distributed object and container movement of the ’90s, with J2EE and Java. The first commercial implementation of containers was pioneered as a feature of the Sun (now Oracle) Solaris 10 UNIX operating system.

What are containers?

But the question still remains: what are containers, and what role do they play in the cloud? Simply put, a container is an isolated, portable runtime environment where you can run an application along with all its dependencies, libraries, and other binaries; it contains all the configuration files needed to run the application. By containerizing the application platform and its dependencies, differences in underlying infrastructure and OS distributions are abstracted away, making the application easily portable from platform to platform.

Despite their subtle similarities, containers differ from VMs in multiple ways. They both offer a discrete, isolated, and separate space for applications that creates the illusion of an individual system. However, unlike a VM, a container does not include a full image or instance of an operating system, with drivers, kernels, and shared libraries. Instead, containers on the same host share the host’s OS kernel, keeping runtimes and other services separated from each other using kernel features called cgroups and namespaces. Containers use fewer resources and are more lightweight than virtual machines, so one server can host more containers than virtual machines. And while a virtual machine takes several minutes to boot its operating system and start running the hosted applications, containerized apps can start almost instantly.

Containerization in the Cloud

Containers mainly add value to the enterprise by bundling and running applications in a more portable way. They can be used to break applications down into isolated microservices, which facilitates tighter security configurations, simplified management, and more granular scaling. In essence, containers are positioned to solve a wealth of problems previously addressed with configuration management (CM) tools. However, they are not a direct replacement for CM or virtualization. Virtualization has played a crucial role in enabling workload consolidation in the cloud, ensuring that money spent on hardware is not wasted; containerization simply takes that a step further.

The portable nature of containers means they can effectively run on any infrastructure or platform that runs the relevant OS. For developers, containers mean saying goodbye to burdensome processes, limited lifecycle automation, the same old problems with patches, and absent tooling integration. A developer can simply run a container on a workstation, create an application within the container, save it in a container image, and then deploy the application on any physical or virtual server running a similar operating system. The basic idea is to build it once and run it anywhere.
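A minimal Python sketch of that build-once, run-anywhere workflow, using the Docker SDK for Python (pip install docker) and assuming a local Docker daemon plus a Dockerfile in ./app; the image tag and paths are placeholders:

    # A minimal sketch of build-once, run-anywhere with the Docker SDK for
    # Python. Assumes a local Docker daemon and a Dockerfile in ./app.
    import docker

    client = docker.from_env()

    # Build the application image once from its Dockerfile...
    image, _ = client.images.build(path="./app", tag="myorg/myapp:1.0")

    # ...then the same image runs unchanged on any host with a compatible kernel.
    output = client.containers.run("myorg/myapp:1.0", remove=True)
    print(output.decode())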

Containerization provides mechanisms to hold portions of an application inside a container and distribute them across public or private clouds, from the same vendor or from different vendors. Containers offer deterministic software packaging: the network topology might differ, the security policies and storage might differ, but the application will still run.

Along Came Docker

Docker is responsible for popularizing the idea of the container image. The momentum behind it has made Docker synonymous with container technology, and it continues to drive interest in the cloud. Cloud vendors have also shown interest in using Docker to provide infrastructures that support the container standard. Docker offers a way for developers to package an application and its dependencies in a container image based on Linux system images. All instances basically run on the host system’s kernel, but remain isolated within individual runtime environments, away from the host environment. Once created, a Docker container remains active only while active processes are running within it.

The Docker Engine runs on all the major Linux distributions, including Arch, SuSE, Gentoo, Fedora, Ubuntu, Debian, and Red Hat, and soon on Windows – Microsoft has announced that it will bring Docker container technology to Windows and introduce Windows Server Containers, which will run on Windows Server. Docker has been tested and hardened for enterprise production deployments, and its containers are simple to deploy in a cloud. It can be incorporated into most DevOps tools, including Ansible, Vagrant, Chef, and Puppet, or it can be used on its own to manage development environments.

Docker also offers added tools for container deployments, such as Docker Swarm, Docker Compose, and Docker Machine. At the highest level, Compose facilitates the quick and easy deployment of complex distributed applications, Swarm provides native clustering for Docker, and Machine makes it easy to spin up Docker hosts. Docker has undoubtedly established a container standard with a solid design that works well out of the gate. However, Docker isn’t necessarily the right pick for every application, so it’s important to consider which workloads suit its containers and platform.

The other players

Choosing a technology solely based on adoption rate can lead to long-term issues. Exploring all the available options is the best way to guarantee maximum performance and reliability during the lifecycle of your projects.

I. CoreOS Rocket

CoreOS offers an alternative to the Docker runtime called Rocket, built for server environments with rigorous security, speed, composability, and production requirements. While Docker has expanded the scope of the features it offers, CoreOS aims to provide a minimalist implementation of a container builder and manager. The software is composed of two elements: actool, which administers the building of containers and handles container discovery and validation, and rkt, which takes care of fetching and running container images.

A major difference between Docker and Rocket is that the latter does not require an external daemon; whenever the rkt component is invoked to run a container, it does so immediately, within its own process tree and cgroup. The Docker runtime, on the other hand, uses a daemon that needs root privileges, which opens up its APIs to exploitation for malicious activities, such as running unauthorized containers. From an enterprise perspective, Rocket may seem like the better alternative due to its increased portability and customization options, while Docker is more suited to smaller teams because it offers more functionality out of the gate.

II. Kubernetes

Kubernetes was created by Google as a helper tool for managing containerized applications across private, public, and hybrid cloud environments. It handles the deployment, scheduling, maintenance, scaling, and operation of nodes within a compute cluster. The load balancing, orchestration, and service discovery tools contained within Kubernetes can be used with both Rocket and Docker containers. Simply put, while the container runtime provides lifecycle management, Kubernetes takes it to the next level by orchestrating and managing clusters of containers.

Kubernetes can launch containers in existing virtual machines or even provision new VMs. It does everything from booting containers to managing and monitoring them. System administrators can use Kubernetes to create pods – logical collections of containers that belong to an application. The pods can then be provisioned on bare metal servers or VMs. Kubernetes can also serve as an alternative to Docker Swarm, which provides native clustering capabilities.
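To make the division of labor concrete, here is a minimal Python sketch using the official Kubernetes client (pip install kubernetes; assumes a configured kubeconfig) that lists the pods the cluster is currently running:

    # A minimal sketch with the official Kubernetes Python client: list the
    # pods the cluster is running, grouped by namespace, with their phase.
    from kubernetes import client, config

    config.load_kube_config()  # reads ~/.kube/config
    v1 = client.CoreV1Api()
    for pod in v1.list_pod_for_all_namespaces().items:
        print(f"{pod.metadata.namespace}/{pod.metadata.name}: {pod.status.phase}")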

Author: Gabriel Lando

Simple Rules for Customer Driven Software Development


Understanding customer problems is the most difficult aspect of creating new products and services. Traditional ways of gaining this understanding include talking to a customer focus group or doing market research. But the downside to these one-time research methods is that they fail to account for evolving customer needs within changing business contexts. Being aware of these changing contexts is one of the most crucial factors in product design, and this principle is even more relevant in software development than in the development of physical products.

At CodeLathe, we follow a unique software development methodology that exposes developers to customer problems throughout the development process. This helps the developers empathize with customers and create the right solutions. It also helps us select the right set of features and keeps the product relevant in an evolving market.

We follow these rules religiously, and we don’t hire anybody who doesn’t believe in the process. We have them printed and posted in heavily trafficked places in our office. This method has worked very well for us, so we thought these simple rules would benefit other software companies as well.

Here are the rules we follow to create phenomenal products in Enterprise Information Management.

5 SIMPLE RULES FOR CUSTOMER DRIVEN SOFTWARE DEVELOPMENT

  1. Every developer needs to do customer support at least 2 days a month. This applies to the leadership team as well.

  2. Every customer request needs to be recorded, discussed, assigned a priority, and tracked ASAP. We do this weekly.

  3. Product roadmap meetings require a customer success representative to be present. They have the final say.

  4. Features/functionality that create the most impact for the most customers get higher priority.

  5. Every new feature has to pass the following litmus test – “Will this feature help create customer success?”

You can also download these 5 rules of customer driven software development as a PDF document.
Want to be part of something bigger than yourself? We are hiring.