Archive for the ‘Artificial intelligence’ Category

Pushing for AI? How to Manage AI Investments for a Good ROI

It’s almost the end of 2018 and everywhere you look there’s news of artificial intelligence (AI) revolutionizing different industries, whether it’s by curing deadly diseases or replacing manual labor. But it’s not all hype.

The growing number of AI startup acquisitions is nothing short of remarkable, and corporate giants like Google, Amazon, Facebook, and Apple are spending millions of dollars on AI research and development. It makes sense that these heavy hitters are at the forefront of the change, but they are not the only ones who stand to profit and increase their ROI from AI.

Other companies can harness the power of AI innovation too, but it is not simple. A gap exists between AI research and the delivery of tangible business outcomes. However, the right AI strategy can help organizations set up the cheapest, most effective way of harnessing machine learning and AI, and reap the ROI benefits.

 

Derive Optimization Paths Across Customer Journeys

Your AI budget may range from less than a million to over $100 million annually; in both cases, the goal is to make performance and outcomes more predictable. The output of your organization enjoys a lot of visibility, and employees count on decision makers to take charge and deliver. But to achieve those goals, smart decisions must be made frequently throughout the whole customer journey.

In a pre-AI environment, the sheer scale of data kept us from seeing opportunities and acting on them; yet if results are to be predictable, the pipeline of insights must be predictable too. With AI, it can be. AI systems sift through billions of rows of data to surface insights, identify what affects performance, explain why it is happening, and suggest what can be done about it. These capabilities give companies the option to flip the switch to "always on" optimization.

 

Training is Key

To draw value from AI, your organization must train employees in new skills and create a comprehensive plan showing how investing in such technologies can produce ROI or fulfill specific business goals. While integrating AI into an existing business model raises strategic problems with no one-size-fits-all solution, organizations can assign dedicated teams to bring these initiatives to market quickly, extract key learnings, and optimize them to deliver the best value and impact. AI can't simply be bolted onto your business's existing product roadmap; the teams must work together from the start, otherwise you end up with costly tech investments that cut into profits rather than building ROI.

Organizations can win at AI by establishing a thorough, strategic set of objectives and handing the execution over to data and technology professionals. When you assign responsibility for AI deployment to an experienced data executive and arm that person with organizational support and strategic business goals, your business gains the talent, mindset, and tools needed to define and execute these initiatives successfully.

 

Organize Data Efficiently

Thanks to AI, businesses now have access to self-organizing data that mirrors the way they think about their business. A company that wishes to measure and monitor its goals or compare performance will need data from numerous sources to line up by product, region, or team. You should be able to drill down from the topmost level of every ROI, loyalty KPI, or growth metric to the levels underneath, and you should be able to do this for every campaign, channel, audience, customer segment, and program.

 

Nurture the Right People in Your Organization

Since every business's goal for AI deployments differs, it is difficult to outsource these initiatives; you must instead adapt your current teams to the task. They are familiar with the ins and outs of your business and possess both the vision and the experience to know where you're coming from and where you have to go. At the same time, the complexity of these systems means you will likely have to hire dedicated AI experts to fill gaps in knowledge and move your company in the right direction. So it's time for businesses to think about acquiring and retaining the talent needed to successfully gain ROI from AI.

The number of AI professionals is limited and demand for their skills is fierce, so hiring top talent must be a priority, along with establishing a culture that gives them a reason to stay. At the same time, remember that initiatives like these are a team sport requiring both new blood and experienced hands. Invest in retaining your existing talent pool so you can set, implement, and refine your AI plans without hassle.

 

Centralize Company Data Across the Different Touchpoints

Industry professionals have access to thousands of solutions nowadays. AI can help businesses by consolidating customer touchpoints, reports, systems, and dashboards into a single platform for a more holistic business view. Smart AI can move your business into the world of unified intelligence, where the organization adds the data of its choice to the picture. Having all data in a single place is the first step toward merging the various teams into one cohesive unit rapidly, efficiently, and with the team's skill set in mind.

 

Locate the Right Partner

AI is transforming the industry landscape through a range of applications, but even advanced computer networks still find it difficult to match average human beings in areas like analytical capability and context. For these systems to work, they require advanced predictive analytics capable of rapidly making sense of large data volumes. You also have to find a scalable and flexible end-to-end data and analytics partner, someone who can dig into the past and look toward the future. Take your pick carefully, since you will have to rely on that company's expertise in the future.

 

Until a few years ago, only the business leaders harnessed the power of AI, while others considered it a futuristic idea and adopted a "wait and see" approach. That is no longer feasible. As AI develops in leaps and bounds, companies need to invest in their own AI offerings and manage them properly to amass the desired ROI.

How Natural Language Processing (NLP) Can Augment Collaboration

 


Natural language processing (NLP) refers to a set of techniques that enable computers and people to interact. Most human activities are performed via language, whether communicated directly or described in natural language. Human language, developed over millennia, has become a nuanced form of communication that carries a wealth of information beyond the words alone. As technology makes the platforms and methods through which we communicate progressively more accessible, the need to understand the languages we use to communicate grows. By combining the power of artificial intelligence (AI), computer science, and computational linguistics, NLP helps machines 'read' text by mimicking the human ability to comprehend language.

The aspects that make a language natural are exactly what make NLP difficult: the rules that dictate the representation of information in natural languages evolved without predetermination. These rules can be abstract and high level, like the use of sarcasm to convey meaning, or low level, like the use of the character 's' to convey plurality of nouns. NLP involves identifying these rules and making use of them in code to translate unstructured language data into information with a schema. There are still very challenging problems to solve in natural language, but deep learning methods are achieving state-of-the-art results on some specific language tasks.

Early computational approaches to language research focused on automating the analysis of the linguistic structure of language and creating basic technologies like machine translation, speech synthesis, and speech recognition. Today's researchers hone and apply such tools in real-world applications, building speech-to-speech translation engines and spoken dialogue systems, identifying emotion and sentiment toward services and products, and mining social media for information about finance or health.

While NLP may not be as mainstream as machine learning or big data, we use natural language applications, or benefit from them, on a daily basis. A 2017 Tractica report on NLP estimated that the total NLP hardware, software, and services market opportunity could reach $22.3 billion by 2025. The report also predicted that NLP-based software solutions leveraging AI will grow from a $136 million market in 2016 to $5.4 billion by 2025. It's quite clear that NLP is here to stay, and it's likely to have a larger impact on how humans interact with machines. Here are some examples of how NLP can change the way we collaborate in the enterprise.

Classification

Data classification simply refers to the process of organizing data by relevant categories so that it can be used and secured more efficiently. The classification process not only simplifies the retrieval of data, but also plays a crucial role in compliance, risk management, and data security. Data classification entails tagging data in order to make it trackable and searchable. It also curbs multiple duplications of data, which decreases backup and storage costs. Deep learning, which is used in natural language processing, is well suited for automated classification because it can learn the complex underlying structure of sentences and the semantic proximity of different words.

NLP classification algorithms don't work 'out of the box'; they have to be trained to make specific predictions for texts. The algorithms are given a set of categorized/tagged text, from which they generate machine learning models; the models are then able to automatically classify untagged text. Using NLP to automate content classification makes the entire collaborative process fast and efficient.
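
To make the train-then-classify workflow concrete, here is a minimal sketch using scikit-learn. It uses a simple TF-IDF plus linear-model pipeline as a stand-in for the deep learning approach described above, and the categories and training snippets are illustrative placeholders, not from any real system.

```python
# Minimal text-classification sketch: train on tagged text, predict on untagged text.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Pre-tagged training text (the "categorized/tagged" set described above)
train_texts = [
    "quarterly revenue grew in the EMEA region",
    "the server returned a 500 error after the patch",
    "new vacation policy takes effect in January",
    "database latency spiked during the nightly backup",
]
train_labels = ["finance", "it-support", "hr", "it-support"]

# Vectorize the text and fit a simple linear classifier
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

# The trained model can now classify untagged text automatically
print(model.predict(["server error after the nightly patch"]))
```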

Semantic Search and Discovery

The average enterprise generates massive amounts of data on a daily basis. In this digital age, information overload is a real phenomenon; our access to information and knowledge has exceeded our capacity to make sense of it. When NLP is applied during data ingestion, searchable indexes are automatically added to the document's composition. In keyword-based search, text and documents are retrieved based on the words found in the query, and the returned results are typically ranked by the number of matches between the query words and the documents.

In semantic search, the syntactic structure of the natural language, the frequency of the words, and other linguistic elements are considered. An NLP algorithm can understand the specific requirements of a search query by identifying events, brands, people, places, or phrases; gauge how negative or positive the text is; and automatically curate a collection of results by topic. For easier discovery, personalized content recommendations related to the same topic can be generated.
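
The contrast between keyword matching and ranking by vector similarity can be sketched in a few lines. This toy example uses TF-IDF vectors for the ranking; a production semantic search engine would swap in sentence embeddings and the linguistic analysis described above. The documents and query are invented for illustration.

```python
# Toy contrast: keyword overlap counting vs. similarity-based ranking.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "How to reset your account password",
    "Troubleshooting login failures and lockouts",
    "Quarterly sales report for the retail division",
]
query = "cannot sign in to my account"

# Keyword search: count exact word overlaps with the query
overlaps = [len(set(query.lower().split()) & set(d.lower().split())) for d in docs]

# Vector search: rank documents by cosine similarity in TF-IDF space
vec = TfidfVectorizer().fit(docs + [query])
scores = cosine_similarity(vec.transform([query]), vec.transform(docs))[0]

for doc, kw, s in sorted(zip(docs, overlaps, scores), key=lambda t: -t[2]):
    print(f"{s:.2f}  (keyword overlaps: {kw})  {doc}")
```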

In Closing

Today's workforce interacts with vast amounts of text all the time, constantly scrolling through files and sharing documents. If workers can extract intelligence from text, they become more efficient and productive. Although natural language processing is not a new science, the technology is advancing rapidly thanks to growing interest in human-to-machine communications, powerful computing, enhanced algorithms, and the availability of big data. Using NLP to create a smooth, interactive interface between machines and humans will continue to be a top priority for increasingly cognitive applications.

Author: Gabriel Lando

image courtesy of freepik

Practical Machine Learning Tips and Tricks to Achieve Success Quicker

Raise your hand if you're tired of reading and hearing how AI will solve the world's problems. Give us a shout if you've been led to believe that AI is going to displace most people from their jobs soon enough. We're exhausted too, particularly considering that AI is essentially a 60+-year-old tech concept!

 

 

What’s the cause of the hype overdrive?

Experts in the 'real, no-nonsense AI' space say that the exponential progress in the effectiveness of machine learning algorithms is what caused the AI buzz. This, however, is the phase where the buzz will fade (because it has already peaked) and focus will move to real-world applications of AI at scale. If your organization's AI strategy hasn't taken shape yet, or if you're looking for a course correction, here are some practical tips to make machine learning the superstar of your present and future.

 

‘Good’ Shouldn’t Be Beaten Down By the Pursuit of ‘Great’

We strongly recommend you make a mental note to go through machine learning success case studies. Many of them clearly show that machine learning success often depends largely on data quality, data scientists' ability to understand the business use case, and a bit of luck.

Often enough, 60-70% of the desired functionality can be served by a ‘good enough’ machine learning algorithm. The journey from ‘good’ to ‘great’ is long, expensive, ambiguous, and tough. Hence, the additional effort must be backed by a business justification (a 5% improvement in performance is invaluable for an algorithm that detects tumours from medical scans, but isn’t half as useful for an algorithm that predicts songs that a listener might like).

 

Business First, Algorithm Second. Or the other way around?

Consider a team of machine learning experts in charge of an email marketing personalization algorithm. Consider another team in charge of an algorithm that makes an unmanned aerial vehicle navigate safely in rainy conditions. Now, assuming both teams are able to build a minimum viable algorithm that delivers the core expected utility, where should the incremental effort go?

In the first case, algorithmic fine-tuning might not return half the value that the engineers would achieve by optimizing the data and logistics related to the process. In the second case, the algorithm comes first, because the cost of failure is massive. ML experts therefore need to understand their development in the context of the business use case and the logistics, so they can channel their resources in the right direction.

 


It’s The Data, Not the Algorithm

Got resounding success for an ML algorithm? Scored a memorable failure with another algorithm? Chances are, it’s the data that’s driving results, and not the algorithm.

Several promising ML developments never saw the light of day in production because the developers' egos pushed the algorithm through dozens of iterations, each making it more complicated and convoluted. If you anticipate such a situation within your organization's machine learning teams, take control and begin by questioning the data, not the algorithm.

 

An ordinary ML algorithm can deliver good results if its training data is robust. An extraordinary algorithm, however, will deliver garbage if its training data is not good.

 

 

Add the Human Element

Let's face it: you'll be enviably lucky to have machine learning developers and data scientists with both domain and business experience. To make sure that your organization's machine learning projects don't end up as mere laboratory successes, you need people who are masters of business processes, understand data pipelines, and appreciate the basics of machine learning. This is how you build a team that knows when it's necessary to create new data-capture processes, when to eliminate expensive and laborious features that business folks won't use, and how to evaluate development progress in the context of the use case.

 

Feature engineering experts are a key element of any ML team. Though the feature selection process can be automated to a great extent, it’s the availability of insightful human oversight that helps ML teams create groundbreaking algorithms. This is equally true for all decisions related to data selection and processing. In fact, ML experts believe that the ability of a team to prove or disprove the effectiveness of an idea is a key differentiator for machine learning success. You need an insightful human presence for that.

 

Be Adroit With Tools

The average machine learning project will invariably require developers to use anywhere between five and eight tools. That's natural, considering the wide range of niche tools available to the machine learning community.

For starters, subscription-based ML tools such as Amazon Machine Learning and Microsoft Azure Machine Learning Studio will help you scale up quickly. BigML is another platform slowly gaining traction. Then, there are the Apache Software Foundation (ASF) projects like Singa and Mahout. Open-source frameworks for machine learning, such as H2O, TensorFlow, and Shogun, of course, are already prominent in the market.

The point is: the pace of innovation in the machine learning market will require ML teams to stay alert and be comfortable with a wide range of ML tools. Niche ML tools are maturing quickly, which means developers need to show agility in moving from tool to tool to realize business use cases.

 

Measure Small Successes

Here's the hard truth: soon enough, the pomp and show around ML-powered applications will die down, and stakeholders will clamour for proof of business value. A great way to prepare for such times is to always measure the success of your ML projects, aim for quick delivery of several small improvements, and make those small improvements work in tandem to deliver significant business advantages.

 

Concluding Remarks

Machine learning is already solving business problems, enabling people to do their jobs better, and dramatically improving business processes. A practical approach to translating data and algorithms into valuable insights is what will ensure the success of your organization's machine learning projects.

 

 

Author: Rahul Sharma

Machine Vs Machine: A Look at AI-Powered Ransomware

Cyber-crime is a fast-growing industry because it's a simple way for nefarious people with computer skills to make money. Ransomware in particular has been an ongoing security nightmare for the last couple of years, with attacks like WannaCry, which infected about 400,000 computers in 150 countries, making headlines for their ability to fuel fears about the vulnerability of data. Ransomware has gone from the 22nd most common form of malware to the 5th most prevalent type.

According to a recent Sophos survey, 54 percent of surveyed companies reported having been hit by a ransomware attack in 2017, and another 31 percent expect it to happen in the near future. The data indicated that the average cost of a single ransomware attack (including downtime, manpower, and network costs) was $133,000. Five percent of respondents reported total costs of up to $6 million, exclusive of the ransom paid.

Ransomware is not necessarily more dangerous or trickier than other forms of malware that find their way onto your computer, but it can definitely be more aggravating, and oftentimes devastating. As concerns around the weaponized use of artificial intelligence (AI) rise, one can't help but imagine what an AI-powered ransomware attack would look like.

An AI-driven Arms Race

While some analysts tout AI as the key to overcoming security gaps within the enterprise, it's actually a double-edged sword. As the maturity and abilities of AI, machine learning, and natural language processing improve, an arms race between security professionals and hackers is on the horizon. Researchers and security firms have been using machine learning models and other AI technologies for some time to better forecast attacks and identify ones already underway.

It's highly probable that criminal collectives and hackers will use the technology to strike back. Security experts surmise that once AI development reaches consumer-level adoption, cases of its use in malicious attacks will skyrocket. Ultimately, malware authors may begin creating machine learning models that learn from detection models and defensive responses, and exploit new vulnerabilities quicker than defenders can patch them.

According to a 2018 McAfee Labs threats predictions report, the only way to win the ensuing arms race is to "effectively augment machine judgment with human strategic intellect". Only then will companies be able to understand and anticipate the patterns of how attacks will play out.

AI-driven Ransomware Attacks

AI-driven ransomware could turbo-charge the risks associated with an attack by self-organizing to cause maximum damage and then moving on to new, more lucrative targets. Attackers can use artificial intelligence to automate multiple processes, mainly in the areas of targeting and evasion.

  • Intelligent Targeting – Phishing remains the most popular method of distributing ransomware. Machine learning models are capable of matching humans at the art of drafting convincing fake emails, and they can create thousands of malware-loaded fake messages at a much faster pace without tiring. Much like a human, a machine learning model with the right 'training data' about a target could constantly change the words in a phishing message until it finds the most effective combination, ultimately tricking the victim into clicking a link or sending personal data. By going through your correspondence and learning how you communicate, an AI could craft messages that easily bypass spam filters, and then mimic you in order to infect other unsuspecting targets.
  • Intelligent Evasion – AI has the ability to make destructive hacks far less visible. An ML model can be used to hide a ransomware attack by manipulating the system and disabling any active security measures. In this age of IoT, a self-targeting, self-hunting malware attack could easily hijack IoT endpoints, manipulate data, and simultaneously infect millions of systems without ever being detected.

AI-driven Cyber Security

As more advancements are made in the field of artificial intelligence, it will become more accessible and will inevitably be used for ill. The upside, however, is that the use of intelligent agents in cybersecurity services and applications offers adequate and effective protection against incoming ransomware and other related threats.

  • Early Detection – Mainstream anti-malware and anti-virus products identify malicious software by matching it against a database of digital signatures of known malware. Machine learning enables the creation of a continually vigilant system that can make decisions on the fly, based on complex algorithms and computational formulas, and that learns from experience as more data is collected. Such a system can prevent attacks by stopping the payload at download; and in the event the payload was successfully downloaded onto an endpoint, the subsequent steps of running exploits, scripts, and in-memory attacks can be stopped.
  • Effective Monitoring – Since AI can automate and self-learn, it significantly raises the effectiveness of guarding systems against attacks. The proactive nature of a machine learning model allows it to anticipate attacks by monitoring glitches and patterns related to malicious content. A heuristic analysis can be performed to determine whether observed behavior is more likely malicious or legitimate, reducing the number of false positives and misdiagnoses. ML capabilities help ensure that any results that slip through are used to improve the system during subsequent monitoring. A toy sketch of this kind of anomaly-based detection follows.
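
As a hedged illustration of the anomaly-based monitoring described above, the sketch below trains scikit-learn's IsolationForest on invented "normal" behavior and flags a ransomware-like burst of high-entropy file writes. The features and values are hypothetical; real products monitor far richer signals.

```python
# Minimal anomaly-detection sketch with an isolation forest.
from sklearn.ensemble import IsolationForest

# Baseline behaviour collected while the system is healthy:
# each sample is [files touched per minute, entropy of written data]
normal_activity = [
    [3, 0.20], [5, 0.30], [4, 0.25], [6, 0.35], [2, 0.15],
]
model = IsolationForest(contamination=0.1, random_state=0).fit(normal_activity)

# New observations: ransomware-style mass encryption looks like
# many files touched per minute with high-entropy writes
new_activity = [[4, 0.30], [250, 0.98]]
for sample, verdict in zip(new_activity, model.predict(new_activity)):
    print(sample, "anomalous" if verdict == -1 else "normal")
```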

Not all AI solutions are created equal. Despite the looming threat of a weaponized AI-driven attack, the key takeaway should be that prevention is possible. And as much as AI can be used to prevent ransomware, the fight against malware threats is not all about software and security mechanisms. The first point of contact between perpetrators and victims is usually a baited email, and a lack of security awareness on the victim's part is a huge part of the equation. In the fight of machine vs machine, the human element plays a crucial role. Measures to increase enterprise knowledge of the best practices to adopt and the tricks to avoid have to be included in the overall defensive strategy.

Author: Gabriel Lando
image courtesy of freepik.com

Top End User Computing Trends That IT Leaders Need to Sync Up With in 2018

The world of end-user computing continues to evolve at breakneck speed. Enterprise employees want the flexibility to work anytime, anywhere, using any device and any web browser, and to experience the kind of UX they like. Enterprises have every reason to work toward delivering this experience, because it boosts productivity, enables workplace mobility, and enhances employees' work experience. In a world where innovations from personal life are seeping into business workplaces, it's imperative for IT leaders to stay on top of the trends in end-user computing. We've covered some of the most important ones in this guide.

Contextualized Security

End-user computing security hardly requires any underscoring as one of the most important trends IT leaders need to track and stay on top of. In 2018, however, enterprises are likely to start implementing contextual end-user computing security mechanisms.

Contextual security takes several user-specific variables into account to determine the right security-related action. These variables include:

  • The roles and privileges assigned to the user
  • The roles and privileges assigned to the parent group the user is a part of
  • The most commonly used transactions, applications, and processes for a user
  • The average session length
  • The general set of locations from which secured application access is made by the user
  • The browser, IP, and device the user uses to access enterprise data

These technologies are hence able to create unique user profiles, match user behavior to the profile, and, based on any deviations, initiate tasks such as the following (a toy deviation-scoring sketch appears after the list):

  • Blocking access
  • Reporting potentially malicious activity to the IT security team
  • Altering the user profile to allow the deviation, following a proper mechanism of authorizations and approvals
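
To illustrate, here is a toy sketch of deviation scoring against a stored user profile built from the variables listed above. The profile fields, weights, and thresholds are invented for illustration and do not reflect any particular product.

```python
# Toy contextual-security check: score deviations from a learned user profile.
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    usual_countries: set = field(default_factory=lambda: {"US"})
    usual_devices: set = field(default_factory=lambda: {"laptop-1234"})
    avg_session_minutes: float = 45.0

def risk_score(profile: UserProfile, country: str, device: str, session_minutes: float) -> int:
    """Count deviations from the learned profile; higher means riskier."""
    score = 0
    if country not in profile.usual_countries:
        score += 2          # unusual location weighs heavily
    if device not in profile.usual_devices:
        score += 1
    if session_minutes > 3 * profile.avg_session_minutes:
        score += 1
    return score

profile = UserProfile()
score = risk_score(profile, country="RO", device="phone-9999", session_minutes=200)
if score >= 3:
    print("block access and alert the security team")
elif score >= 1:
    print("require re-authentication")
else:
    print("allow")
```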

The Role of AI, ML, and DA in Enhancing End-User Security

Leaders in the end-user computing market are focusing on enhancing existing security technologies by leveraging the prowess of data analytics (DA), artificial intelligence (AI), and machine learning (ML). Organizations are already spending heavily on AI technologies, and hence many of them already have a strong base on which to build their future end-user computing security. IT now has more sophisticated, data-backed, pattern-dependent methods of detecting intrusion. In 2018, security technologies will start offering built-in analytics and machine learning capabilities that transform the end-user computing world for the better.

Managing Complexity of Device Diversity

Gone are the days when the average enterprise had merely desktops, laptops, and VoIP phones on employee desks. Today, the range of devices used by employees in a tech-powered enterprise is expansive, to say the least: satellite phones, handheld data-capture devices, barcode scanners, tablets, smartphones, smart speakers, and more. And we're on the cusp of the transition to Industrial Revolution 4.0, powered by IoT. To make the ecosystem more complex, there are too many operating systems, web browsers, and communication protocols in play.

Managing this complexity has been a challenge for enterprises for some time now; 2018 is simply the year the giants of the end-user computing market will release products that help them out. Today, enterprises want their employees to have access to their personalized desktops on whichever computer they use for work, anywhere. These are virtual desktops, and they are already widely used by enterprises across markets. In 2018, the leading vendors will look to make their VDI services available across device types and address operating system variations.

Diminishing Lines Between Commercial and Business Apps

Dimension Data's 2016 End User Computing Insights Report highlighted how several enterprises rated their business app maturity lowest among six areas. This stat truly captured the need for businesses to start focusing on delivering a superior app experience to end users. Because these end users are accustomed to terrific apps for managing their routine lives (productivity, transportation, note taking, bookings, communication, data management, etc.), their expectations of equivalent business apps are just as high. This makes it important for IT leaders to keep a stern eye on the progress of business apps in their workplaces. An important trend in this space is the use of personalized app stores for user groups, as made possible by the Microsoft Windows 10 app store.

Increased Adoption of Desktop as a Service

DaaS should be an important component of any enterprise's virtualization strategy. Traditionally, services such as Amazon WorkSpaces have been seen as viable only for disaster recovery and business continuity planning. However, it's going to be exciting to watch the developments in this space throughout 2018.

  • Vendors are likely to release updates that address challenges around desktop image management and application delivery.
  • DaaS applications that can act as extensions to your data centre will surface, driving adoption.
  • Integrated services such as Active Directory, application managers, and Microsoft System Center will also help further the adoption of DaaS.
  • New services, functionalities, and configurations will keep being added to DaaS solutions.

A useful option for enterprises will be to seek the services of consultants with experience in VDI strategy.

Video Content Management

Video has become a crucial content format for enterprises. The importance of video content is clear from the kind of estimates tech giants are making; for instance, Cisco estimates that 80% of web traffic will consist of video content by the end of 2020.

Enterprise end users need video content for DIY guidance, or to share it with their end customers. Field employees also need tech support via video content to quickly resolve their technical issues. Video content management systems will hence become important to the enterprise from an end-user experience point of view.

Concluding Remarks

Throughout 2018, expect to witness pressing changes in the end-user computing market. Primarily, vendors will push to increase adoption of advanced, feature-rich, powerful end-user computing solutions.

 

 

Author: Rahul Sharma

Best Practices for ITAR Compliance in the Cloud

The cloud has become part and parcel of today's enterprise. However, remaining compliant with the International Traffic in Arms Regulations (ITAR) demands extensive data management aptness. Most of the regulatory details covered by ITAR aim to guarantee that an organization's materials and information regarding military and defense technologies on the United States Munitions List (USML) are only shared within the US, with US-authorized entities. While this may seem like a simple precept, attaining it in practice can be extremely difficult for most companies. Defense contractors and other organizations that primarily handle ITAR-controlled technical data have been unable to collaborate on projects while utilizing cloud computing practices with a proven track record of fostering high performance and productivity. Nevertheless, the hurdles impeding the productivity opportunities of the cloud can be overcome, and the practices that govern the processing and storage of export-controlled technical data are evolving.

Full ITAR compliance in the cloud is not an end result, but a continual odyssey in protecting information assets. In the long run, being ITAR compliant boils down to having a solid data security strategy and defensive technology execution in place.

Utilize End-to-End Encryption

In September 2016, the DDTC published a rule that established a 'carve out' for the transmission of export-controlled software and technology within a cloud service infrastructure, necessitating the 'end-to-end' encryption of data. The proviso is that the data has to be encrypted before it crosses any border and has to remain encrypted at all times during transmission. Likewise, any technical data potentially accessed by a non-US person inside or outside the United States has to be encrypted 'end-to-end', which the rule delineates as the provision of continual cryptographic protection of data between the originator and the intended recipient. In a nutshell, the means of decrypting the data can't be given to a third party before it reaches the recipient.

The native encryption of data at rest offered by most cloud providers fails to meet the definition of end-to-end encryption, because the cloud provider likely has access to both the encryption key and the data, and therefore effectively has the ability to access export-controlled information. Organizations have to ensure that the DDTC's definition of 'end-to-end' encryption is met before storing their technical data in a public or private cloud environment; otherwise they will be in violation of ITAR.
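
As a minimal sketch of what client-side ("end-to-end") encryption looks like in practice, the snippet below uses the Python cryptography package's Fernet recipe: the key never leaves the data owner, so the cloud provider only ever stores ciphertext. This illustrates the concept only; it is not a complete ITAR-compliant key-management design.

```python
# Client-side encryption sketch (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # kept on-premises, never uploaded
cipher = Fernet(key)

technical_data = b"export-controlled drawing, rev 7"
ciphertext = cipher.encrypt(technical_data)

# Only ciphertext is transmitted to and stored in the cloud...
uploaded_blob = ciphertext

# ...and only the authorized recipient, holding the key, can decrypt
assert Fernet(key).decrypt(uploaded_blob) == technical_data
```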

Classify Data Accordingly

Most technologies are not limited to a single use. Whenever an organization that handles technical data related to defense articles shares information regarding a service or product, steps have to be taken to make sure that any ITAR-controlled data is carefully purged in its entirety. Classification entails reviewing existing business activities and contracts to establish whether they fall under ITAR. The process requires a good understanding of licensing terms, court interpretations, agency directives, and other guidance. In order to successfully navigate the nuances and complexities of ITAR, organizations have to collect enough metadata to catalog, separate, and classify information. For easy identification, the data should be classified into categories such as 'Public Use', 'Confidential', and 'Internal Use Only'. Classifying data is a prerequisite to creating a foolproof Data Leak Prevention (DLP) implementation.

Develop a Data Leak Prevention (DLP) Strategy

Accidental leaks owing to user error and other oversights occur more often than most would care to admit. Mistakes that can happen, will happen. Establishing a set of stringent policies to prevent users from mishandling data, whether accidentally or intentionally, is crucial to ITAR compliance. Organizations should have a strategy in place to guarantee the continual flow of data across their supply chains, while protecting said data from the following employee scenarios:
  • Well-meaning insiders – employees who make an innocent mistake
  • Malicious insiders – employees with ill intentions
  • Malicious outsiders – individuals looking to commit corporate espionage, hackers, enemy states, and competitors, among others

Control Access to Technical Data

Access control is a well-known technique used to regulate who can view or use the resources in a computing environment. It can be employed on a logical or physical level: physical access control restricts access to physical areas and IT assets, while logical access control allows IT administrators to establish who is accessing information, what information they are accessing, and where they are accessing it from. Roles, permissions, and security restrictions should be established beforehand to ensure that only authorized U.S. persons have access to export-controlled technical information. Multifactor authentication strengthens access control by making it extremely difficult for unauthorized individuals to reach ITAR-controlled information by compromising an employee's access details.
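
A toy logical access-control check might combine role permissions, an authorized-U.S.-person flag, and an MFA result before releasing export-controlled data. The role names and fields below are hypothetical, purely to show the gating logic.

```python
# Toy RBAC gate for ITAR-controlled data; names and fields are illustrative.
ROLE_PERMISSIONS = {
    "engineer": {"read_technical_data"},
    "admin": {"read_technical_data", "manage_users"},
    "contractor": set(),
}

def can_access_itar_data(user: dict) -> bool:
    allowed = ROLE_PERMISSIONS.get(user["role"], set())
    return (
        "read_technical_data" in allowed
        and user["is_authorized_us_person"]   # ITAR-specific gate
        and user["mfa_verified"]              # multifactor authentication passed
    )

user = {"role": "engineer", "is_authorized_us_person": True, "mfa_verified": True}
print(can_access_itar_data(user))  # True only when all three checks pass
```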

Establish Security Policies and Train the Staff Well

An ITAR-specific security strategy is the cornerstone of data security practices. The policies should address both network and physical security considerations. ITAR is riddled with complications that make it easy for organizations to make mistakes if they don't remain vigilant. An organization is only as secure as its weakest link, and in most cases that link is the staff. A solid security policy on paper simply does not cut it; without proper staff training, a compliance strategy will be largely ineffective because it won't tie in with actual organizational procedures. Investing in end-user training is the only way to ensure security policies are implemented.

In Closing

Organizations have turned to government clouds to manage the complex regulatory issues associated with the cloud. Platforms like AWS GovCloud have developed substantial capabilities that enable organizations subject to ITAR to implement robust document management and access control solutions. When paired with FileCloud, organizations can build and operate document and information management systems that satisfy the strictest security and compliance requirements.

 

Author: Gabriel Lando

Will Machine Learning Replace Data Scientists?


People have begun to get jumpy at the possibility of artificial intelligence being used to automate anything and everything. Now that AI has proven it has the propensity to push out blue-collar jobs (via robotics) and white-collar professions (via natural language processing), cultural apprehension surrounding this technology is on the rise. After decades of exploring symbolic AI methods, the field has shifted toward statistical approaches that have recently begun working in a vast array of ways, largely due to the wave of data and computing power. This has inadvertently led to the rise of machine learning.

In today's digital world, machine learning and big data analytics have become staples of business and are increasingly being incorporated into organizations' strategies. The 'data-driven enterprise' makes all its decisions based on the insights it gets from collected data. However, as AI and machine learning take on a larger role in the enterprise, there is a lot of talk about the role of the data scientist becoming antiquated. The advances made in machine learning by industry titans like Microsoft and Google suggest that much of the work currently handled by data scientists will be automated in the near future. Gartner also recently predicted that 40 percent of data science tasks will be automated by 2020.

The difference between Machine Learning and Data Science

Data science is primarily a concept used to tackle big data and is inclusive of data preparation, cleansing, and analysis. The rise of big data sparked the rise of data science to support businesses' need to gain insights from their massive unstructured data sets. While the typical data scientist is envisioned as a programmer experienced in Hadoop, SQL, Python, R, and statistics, this is just the tip of the data science iceberg. Essentially, data scientists are tasked with solving real company problems by analyzing them and developing data-driven answers; how they do it is secondary. The Journal of Data Science describes the field as "almost everything that has something to do with data … yet the most important part is its applications – all sorts of applications", machine learning being one of them.

The rise of big data has also made it possible to train machines with a data-driven approach as opposed to a knowledge-driven approach. Theoretical research relating to recurrent neural networks has become feasible, transitioning deep learning from an academic concept to a tangible, useful class of machine learning that affects our everyday lives. Machine learning and AI have now come to dominate the media, overshadowing every other aspect of data science, so the prevalent view of a data scientist is a researcher focused on machine learning and AI. In a real sense, data science transcends machine learning.

Machine learning is basically a set of algorithms that train on a set of data to fine-tune their parameters. Obtaining training data relies on multiple data science techniques, like supervised clustering and regression. On the other hand, the 'data' in data science may or may not evolve from a mechanical process or a machine. The main difference between the two is that data science covers the entire spectrum of data processing, not just the statistical or algorithmic aspects.
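
As a bare-bones illustration of "algorithms that train on a set of data to fine-tune their parameters", the snippet below fits a linear model with scikit-learn: training tunes the slope and intercept from the example data, which is invented purely for illustration.

```python
# Fitting a linear model: parameters (slope, intercept) are tuned from data.
import numpy as np
from sklearn.linear_model import LinearRegression

X = np.array([[1], [2], [3], [4]])    # training inputs
y = np.array([2.1, 3.9, 6.2, 7.8])    # observed outputs (roughly 2x)

model = LinearRegression().fit(X, y)  # training fine-tunes the parameters
print(model.coef_, model.intercept_)  # learned slope and intercept
print(model.predict([[5]]))           # prediction for an unseen input
```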

Human Intuition Cannot Be Automated

Data science is distinguishable from machine learning in that its goal is especially human-focused: to gain insight and understanding. There always has to be a human in the loop. Data scientists use a combination of engineering, statistics, and human expertise to understand data from a business point of view and provide accurate insights and predictions. While ML algorithms can help identify organizational trends, their role in a data-driven process is limited to making predictions about future outcomes. They are not yet fully capable of understanding what specific data means for an enterprise and its relationships, or the relationships between varying, unconnected operations.

The judgment and critical thinking of a data scientist are indispensable in monitoring the parameters and making sure the customized needs of a business are met. Once all the questions have been asked and the data has been gathered and run through the necessary algorithms, a discerning data scientist still has to figure out what the larger business implications are and present the takeaways to management. Ultimately, the interactive interpersonal conversations driving these initiatives are fueled by abstract, creative thinking that cannot be replaced by any modern-day machine.

Advances in AI Are Driving Talent Demand

As the transformational AI wave cuts across end-markets, from enterprise to consumer platforms, from robotics to cyber security, the demand for data scientists is only likely to grow. The role of a data scientist will probably assume a new level of importance and evolve in typical computer science fashion. As machines' ability to accurately analyze data increases, aided by expert statistical modeling and solid algorithms created by data scientists, data scientists will move up the 'abstraction scale' and begin tackling higher-level, more complex tasks. The current demand clearly outpaces supply: McKinsey Global Institute estimates that the United States could have about 250,000 open data science positions by 2024. This data science skill gap is likely to leave companies scrambling to hire candidates who can meet their analytical needs.

In Closing

Will machine learning replace data scientists? The short answer is no, or at least not yet. Certain aspects of low-level data science can and should be automated. However, machine learning is creating a real need for data scientists. As AI advances to analyze and establish cause as well as correlation, software will be used to collect and analyze data; but ML tools don't yet possess the human curiosity or desire to create and validate experiments. That aspect of data science will probably not be automated any time soon. Human intelligence is crucial to the data science field; machine learning can help, but it can't completely take over.

 

Author: Gabriel Lando

Imagining a Blockchain + AI Hybrid


This past year has seen the implementation of artificial intelligence (AI) and blockchain across a varied range of solutions and applications within several industries. What is yet to be explored is a fusion of the two. Such a merger could allow enterprises to create a composable business with highly reliable service delivery. Blockchain is basically an immutable ledger that is open and decentralized, with strong controls for privacy and data encryption. Smart contracts, trusted computing, and proof of work are some of the features that depart from traditional centralized transactions, making blockchain truly transformative.

AI, on the other hand, is a general term for varying subsets of technologies; it revolves around the premise of building machines that can perform tasks requiring some level of human intelligence. Some of the technologies striving to make this a reality include deep learning, artificial neural networks, and machine learning. By utilizing AI-based technologies, organizations have been able to accomplish everything from the automation of mundane tasks to the study of black holes. These technologies are disruptive in their own right; a framework that harnesses the best of both worlds could be radically transformational.

A Trustworthy AI

As the use of AI becomes more mainstream in high-profile and public-facing services like defense, medical treatment, and self-driving cars, multiple concerns have been raised about what is going on under the hood. The AI black box suffers from an explainability problem. If people are willing to place their lives in the hands of AI-powered devices and applications, they naturally want to understand how the technology makes its decisions. Having a clear audit trail not only improves the trustworthiness of the data and the models, but also gives a clear route to trace back the machine learning process.

Additionally, machine learning algorithms rely on the information fed into them to shape their decision-making process. A shift from fragmented databases maintained by individual entities to comprehensive databases maintained by consumers can increase the amount of data available to predictive marketing systems and other recommendation engines, resulting in a measurable improvement in accuracy. Google's DeepMind is already deploying blockchains to offer improved protection for user health data used in its AI engines and to make its infrastructure more transparent. Currently, intelligent personal assistants appeal to consumers, and users will generally sacrifice privacy for the sake of convenience, overlooking what data is being collected from their device, how it is secured, or how it compromises their privacy. An amalgamation of AI and blockchain can reinvent the way information is exchanged: machine learning can sift through and digest vast amounts of data shared via blockchain's decentralized structure.

Decentralized Intelligence

The synthesis of blockchain and AI opens the door to the decentralization of authentication, compute power, and data. When data is centrally stored, a breach is always an imminent threat. Blockchain decentralizes user data, reducing the opportunities for fraudsters or hackers to gain access and take advantage of the systems, while machine learning algorithms monitor the system for behavioral anomalies, becoming more accurate as their intelligence improves. This dismantles the inherent vulnerability of centralized databases, forcing cyber attackers to challenge not one but multiple points of access, which is exponentially more difficult. Blockchain and AI combine to offer a strong shield against cyber attacks. Aside from enhanced attack-defense mechanisms, decentralization translates to higher amounts of data being processed and more efficient AI networks being built. Imagine a peer-to-peer connection streamlined with natural language processing, image recognition, and multidimensional data transformations in real time.

Access to More Computing Power

An AI-powered blockchain is scalable based on the number of users. AI adds an aspect of computational intelligence that can optimize transaction data in blocks and make the entire process faster. A 2016 report from Deloitte estimated that the cost of validating transactions on a blockchain stood at a whopping $600m annually. A large portion of the cost was generated by specialized computing components that consume a lot of energy while performing mining operations. An AI-based blockchain model can help enterprises set up a low-energy-consumption model by allowing specific nodes to perform larger tasks first and alerting miners to halt less crucial transactions. Enterprises will be able to achieve the latency required to perform transactions faster without making structural changes to their architecture. A machine learning and blockchain combo might also be the key to figuring out how to leverage the world's idle computing power.

The Ultimate Framework

AI and blockchain are technologies that can each improve the capabilities of the other, while also providing opportunities for better accountability and oversight. AI reinforces blockchain's framework, and together they solve several of the challenges that come with securely sharing information over the IoT. Blockchain provides a decentralized ledger of all transactions, while AI offers intelligent analytics and real-time decision-making. This allows users to take back control and ownership of their personal data and opens the door to more effective security measures. Datasets that are currently available only to tech giants will be put in the hands of the community, subsequently accelerating AI adoption. Sectors such as telecom, financial services, supply chain intelligence, and retail in general are primed for the adoption of both technologies, with health care following suit.

 

Author: Gabriel Lando

Photo by Hitesh Choudhary on Unsplash

Predictions for Enterprise AI: 2018 and Beyond

The last couple of years have seen artificial intelligence (AI) become a household topic, with many curious about how far the technology can go. Consumer products like Amazon Alexa and Google Home have long used machine learning as a selling point; however, AI applications in the enterprise remain limited to narrow machine learning tasks. With progressive improvements in the convergence of hardware and algorithms happening on a daily basis, AI is likely to have a larger impact on business and industry in the coming years.

A multi-sector research study conducted by Cowen and Company revealed that 81 percent of IT decision makers are already investing in, or planning to invest in AI. Furthermore, CIOs are currently integrating AI into their tech stacks with 43 percent reporting that they are in the evaluation phase, while an additional 38 percent have already implemented AI and plan to invest more. Research firm McKinsey estimates that large tech companies spent close to $30 billion on AI in 2016 alone. IDC predicts that AI will grow to become a $47 billion behemoth by 2020, with a compound annual growth rate of 55 percent. With market forecasts predicting explosive growth for the artificial intelligence market, it is quite clear that the future of the enterprise will be defined by artificial intelligence. Below are the top 10 predictions for AI in the enterprise.

Search Will Become More Intelligent

According to a Forrester report, 54 percent of global information workers are interrupted a few times a month to spend time looking for insights, information, and answers. There are now more file formats and types than ever before, and the bulk of the data is unstructured, making it hard for traditional CRM platforms to recognize. AI-powered cognitive search returns more relevant results by analyzing users' search behavior, the content they read, the pages they visited, and the files they downloaded to establish the searcher's intent or the context of the search query. The machine learning algorithm's ability to self-learn improves search relevance over time, and subsequently the user experience.

Hackers Will Get More Crafty

ML is especially suited for cyber security, since hard-coding rules to detect whenever a hacker is trying to get into your system is quite challenging. However, AI is a double-edged sword when it comes to data security: the more AI advances, the more its potential for attacks grows. Neural networks and deep learning techniques enable computers to identify and interpret patterns, and they can also find and exploit vulnerabilities. 2018 will likely see the rise of intelligent ransomware or malware that learns as it spreads. As a result, data security concerns will speed up the acceptance of AI, forcing companies to adopt it as a cyber security measure. A recent PwC survey indicates that 27 percent of executives say their organization plans to invest in cyber security safeguards that use machine learning.

AI Will Redefine How Data is Approached

The general feeling over the last few years has been that data is the lifeblood of any organization. A recently concluded study by Oxford Economics and SAP revealed that 94 percent of business tech decision makers are investing in big data and analytics, driving more access to real-time data. Throughout 2018 and beyond, data will remain a priority as companies aim to digitally transform their processes and turn insights into actions for real-time results. Additionally, machine learning will play a huge role as companies aim to meet regulatory requirements such as the GDPR. Individuals will be empowered to demand that their personal data be legally recognized as their IP; if or when this happens, both parties will turn to AI for answers as to how the data should be used.

AI Will Change the Way We Work

Whenever artificial intelligence and jobs are mentioned in the same breath, one side views it as the destroyer of jobs while the other sees it as the liberator from menial tasks in the workplace. A 2013 working paper from the University of Oxford suggests that half of all jobs in the US economy could be rendered obsolete by 'computerization'. Others argue that intelligent machines will give rise to new jobs. According to a Gartner report, by 2019 more than 10% of hires in customer service will mostly be writing scripts for chatbot interactions. The same report also predicts that by 2020, 20 percent of all organizations will dedicate employees to guide and monitor neural networks. What's certain is that AI will elevate the enterprise by completely transforming the way we work, collaborate, and secure data.

The AI Talent Race Will Intensify

A major challenge in the AI space has been finding talent: you need people on your team with the ability to train AI-based systems. Organizations with access to substantial R&D dollars are still trying to fill their ranks with qualified candidates who can take on 'industry-disrupting' projects. In 2018, as some businesses look to re-skill their existing workforce to achieve broader machine learning literacy, larger organizations may look to add data science and AI-related officers in or close to the C-suite. These senior-level decision makers will be responsible for guiding how machine learning and AI can be integrated into the company's existing strategy and products. Others will consider hiring practitioners in algorithms, math, and AI techniques to offer input.

The Citizen Data Scientist Will Settle In

As AI-based tools become more user-friendly, users will no longer have to know how to write code in order to work with them. Gartner defines a citizen data scientist as an individual who generates or creates models that utilize predictive or advanced diagnostic capabilities, but whose primary role falls outside the fields of analytics and statistics. As stated in IDC's 2018 IT industry predictions, over 75 percent of commercial enterprise applications will utilize AI in some form by 2019. As business use cases for AI become more mainstream, functional expertise across the organization will matter, to the point that the skill sets AI specialists typically lack will be required. As AI is integrated into every facet of the enterprise, citizen data scientists may end up being more important than computer scientists.

An AI-Blockchain Powerhouse Will Emerge

AI and blockchain are ground-breaking technological trends in their own right; when combined, they have the potential to become even more revolutionary. Each serves to improve the capabilities of the other, while also providing opportunities for enhanced accountability and oversight. In the coming year, we can expect to see blockchain combined with AI to create a new level of deep learning that learns faster than previously imagined. The immutable nature of data stored on a blockchain could enhance the accuracy of AI predictions. Sectors such as telecom, financial services, and retail are among the key industries best suited for the adoption of these technologies.

Consumer AI Will Drive Enterprise Adoption

AI already plays a key role in shaping consumer experience. Chatbots have become one of the most recognizable forms of AI, with 80 percent of marketing leaders citing the use of chatbots to enhance customer experience. Although the market for enterprise AI has recorded substantial growth, enterprises still require more complex solutions. Some consumer products have already made their way into the enterprise, a good example being voice-activated digital assistants. With Amazon’s recent announcement of Alexa for Business, we can expect employees to start relying on smart assistants to manage their calendars, make calls, schedule reminders, and run to-do lists without lifting a finger.

MLaaS Will Rise

Now that machine learning has proven its value and the technology matures, more businesses will turn to the cloud for Machine Learning as a Service (MLaaS). Adoption will begin in private clouds within large organizations and in multi-tenant public cloud environments for medium-sized enterprises. This will enable a wider range of enterprises to take advantage of machine learning without heavy investment in additional hardware or in training their own models.
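
As a rough illustration of the consumption model, using an MLaaS offering typically reduces to an authenticated HTTP call from application code. The endpoint URL, API key, and payload schema in the sketch below are entirely hypothetical, invented for illustration only; real providers each define their own.

# Hypothetical MLaaS call; the URL, key, and payload schema are invented.
import requests

API_URL = "https://ml.example.com/v1/models/churn/predict"  # hypothetical endpoint
API_KEY = "YOUR_API_KEY"  # hypothetical credential

# Send one instance to score; the feature names are stand-ins.
payload = {"instances": [{"tenure_months": 4, "monthly_spend": 45.0}]}
response = requests.post(
    API_URL,
    json=payload,
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=10,
)
response.raise_for_status()
print(response.json())  # e.g. {"predictions": [0.82]} -- schema varies by provider

The point of the sketch is the division of labor: the provider hosts, trains, and scales the model, while the enterprise only integrates a web request.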

Machine Learning Will Have a Mainstream Appeal

Every forward-thinking organization currently has an initiative or project around digital transformation, with AI usually at its center. AI represents a significant change in the way enterprises do business. Predictive algorithms, translators, and chatbots have already become mainstream, and businesses across the globe are using them to boost profitability by reducing costs and understanding their customers better. Expect an even higher level of personalization to become ubiquitous and to enhance customer experience everywhere.

 

Author: Gabriel Lando

Photo by Franck V. on Unsplash

A.I. Meets B.I.: The New Age of Business Analytics

The dawn of the digital age was marked by a monumental shift in the way information is processed and analyzed. The widespread use of the internet further increased the production of data in the form of shared text, videos, photos, and internet log records, which is where big data (large data sets) emanated from. Data is now deeply embedded in the fabric of society. Over the last couple of years, the world has been introduced to artificial intelligence in the form of mobile assistants (Siri, Alexa, Google Assistant), smart devices, self-driving cars, robotic manufacturing, and even selfie drones. The ubiquitous availability of open source machine learning (ML) and AI frameworks, together with the automation simplicity they bring, is redefining how digital information is processed. AI has already begun impacting how we live, work, and play in profound ways.

With the realization that big data on its own is not enough to provide valuable insights, businesses are now turning to machine learning to uncover the hidden potential of big data, supercharging performance and implementing innovative solutions to complex business problems. Judging by the massive rise in venture investment in recent years, it’s no secret that AI and ML are arguably the most instrumental technologies to have gained momentum in recent times. Here are some ways in which coupling AI with big data has helped improve business intelligence.

Automated Classification is the First Step Towards Big Data Analytics

Content classification is fundamentally used to predict the grouping or category that a new or incoming data object belongs to. Data streams are continually becoming more complex and varied, and simply structuring, organizing, and preparing the data for analysis can consume a lot of time and resources. Data classification challenges at this scale are relatively new. The more data a business has, the more strenuous it is to analyze; on the other side of the spectrum, however, the more data the business has, the more precise its predictions will be, whether that data takes the form of technical documents, emails, user reviews, customer support tickets, or news articles. Finding that balance is crucial. Classifying the data manually is impractical because it will not scale and, in some cases, may lead to privacy violations.

Machine learning and big data analytics are a match made in heaven, given the need to analyze and operate on anonymized datasets at scale. With an artificially intelligent tool, data classification can be used to predict the category of new data elements on the basis of groupings found via a data clustering process. Multi-label classification captures virtually everything and is handy for image and audio categorization, customer segmentation, and text analysis. In an instant, the content is classified, analyzed, and profiled, and the appropriate policies required to keep the data safe are applied.
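
As a concrete sketch of the idea, the snippet below trains a multi-label text classifier with scikit-learn (our choice for illustration; the post names no specific tooling) on a handful of hypothetical support documents, then assigns categories to an incoming document:

# Minimal multi-label text classification sketch; documents and labels are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer

# Hypothetical training data: each document may carry several labels at once.
docs = [
    "Payment failed twice, please refund my order",
    "The new firmware update broke Bluetooth pairing",
    "Great product, fast shipping, would buy again",
    "Cannot log in after password reset, account locked",
]
labels = [
    {"billing", "support"},
    {"technical", "support"},
    {"review"},
    {"account", "support"},
]

# Encode the label sets as a binary indicator matrix.
mlb = MultiLabelBinarizer()
y = mlb.fit_transform(labels)

# TF-IDF features plus one binary classifier per label.
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(docs)
clf = OneVsRestClassifier(LogisticRegression(max_iter=1000))
clf.fit(X, y)

# Classify an incoming document into zero or more categories.
new_doc = vectorizer.transform(["Refund still not processed on my account"])
print(mlb.inverse_transform(clf.predict(new_doc)))  # e.g. [('billing', 'support')]

Once a pipeline like this is in place, every new document can be tagged the instant it arrives, which is exactly what makes policy enforcement at scale feasible.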

A.I. Marks an End to Intuition-Based Decision Making

The analytics maturity model is used to represent the stages of data analysis within a company. Analytics maturity traditionally starts with an intent to transform raw data into operational reporting insight, to lessen intuition-based decision making. With mounds of data at your disposal, the assumption is that more decisions will be rooted in data analysis than in instinct. But that is often not the case. Countless Excel models, PhDs, and MBAs have taken number crunching to an entirely new level, and yet data analysis keeps growing more complex. Data-driven decision tools often require manual development processes to aggregate sums, averages, and counts. In many instances, the findings lack a holistic view and rarely take statistical significance into consideration.

An AI- and ML-driven model facilitates automatic learning without prior explicit programming. In practice, this means it can efficiently analyze enormous volumes of data containing too many variables for traditional statistical techniques or manual business intelligence. All the answers are in the data; you just have to apply AI to get them out. A machine learning algorithm automatically discovers the signal in the noise, easily identifying hidden patterns and trends that a human mind would miss. Additionally, the AI acquires skill as it finds regularities and structure in the data, becoming a predictor or classifier. The same way an algorithm can teach itself to play Go, it can teach itself what product to push next. Best of all, the model adapts each time new data is introduced.
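
To make the "regularities and structure" point concrete, here is a minimal sketch, again assuming scikit-learn, in which a clustering algorithm discovers customer segments on its own; the customer features are hypothetical:

# Unsupervised structure discovery sketch; the customer records are invented.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical customer records: [monthly spend, visits per month, support tickets]
customers = np.array([
    [520.0, 12, 0],
    [480.0, 10, 1],
    [60.0,  1,  4],
    [75.0,  2,  5],
    [300.0, 6,  1],
    [290.0, 7,  2],
])

# Standardize so no single feature dominates the distance metric.
X = StandardScaler().fit_transform(customers)

# KMeans finds the groupings itself; no one told it what a "segment" is.
model = KMeans(n_clusters=3, n_init=10, random_state=0)
segments = model.fit_predict(X)
print(segments)  # e.g. [0 0 1 1 2 2] -- three segments emerge from the data alone

No analyst defined the segments in advance; rerunning the same code as new customers arrive updates the groupings, which is the adaptivity the paragraph describes.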

Accurate Predictive Analysis

Naturally, businesses are more interested in outcomes and action than in data visualization, report interpretation, and dashboards. The good news is that forecasting doesn’t require crystal balls or tea leaves. After gaining insight into historical data, machine learning answers the question ‘what next?’. ML can be used to develop generalizations and go beyond knowing what has happened to offering the best evaluation of what will occur in the future. Classification algorithms typically form the foundation for such predictions; they are trained by running specific sets of historical data through a classifier. The machine learning model learns behavior patterns from the data and determines how likely an individual or a group of people is to perform specific actions. This makes it possible to anticipate events and make forward-looking decisions.
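
A hedged sketch of that workflow follows, assuming scikit-learn; the churn-style dataset and its features are hypothetical stand-ins for a business’s historical data:

# Predictive classification sketch; the training data below is invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical history: [tenure_months, monthly_spend, support_tickets]
# and whether each customer eventually churned (1) or stayed (0).
X = np.array([
    [2, 40.0, 5], [3, 35.0, 4], [24, 80.0, 0], [36, 95.0, 1],
    [1, 20.0, 6], [30, 70.0, 0], [4, 45.0, 3], [28, 85.0, 1],
])
y = np.array([1, 1, 0, 0, 1, 0, 1, 0])

# Hold out part of the history to check the model on data it has never seen.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

# The classifier learns behavior patterns from historical examples...
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# ...and answers "what next?" as a probability for each unseen customer.
churn_probability = clf.predict_proba(X_test)[:, 1]
print(churn_probability)

The output is not a report about the past but a score about the future, which is what lets a team act before the churn, purchase, or default actually happens.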

The Foundation for Risk Analysis

By powering high-performance behavioral analytics, machine learning has taken anomaly detection to greater heights, making it possible to examine multiple actions on a network in real time. The self-learning abilities of AI algorithms allow them to offer investigative context for risky behaviors, advanced persistent threats, and zero-day vulnerabilities. A good use case is fraud detection: AI algorithms can adapt to varying claim patterns, learn from new, unseen cases, and evaluate the legitimacy of a claim. Additionally, ML and AI algorithms can help enterprises conform to strict regulatory oversight by ensuring all regulations, policies, and security measures are being followed. By pinpointing outliers in real time, AI gives businesses an opportunity to take immediate action and mitigate risk.
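
As a minimal sketch of the anomaly-detection idea, assuming scikit-learn, an isolation forest can flag a transaction that deviates from learned "normal" behavior; the transaction features below are hypothetical:

# Anomaly detection sketch for risk analysis; the transactions are invented.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical transactions: [amount, hour of day, transactions in last 24h]
transactions = np.array([
    [25.0, 14, 3], [40.0, 12, 2], [18.0, 19, 4], [33.0, 10, 3],
    [22.0, 16, 2], [9500.0, 3, 18],  # the last row looks suspicious
])

# The isolation forest learns what "normal" looks like and isolates outliers.
detector = IsolationForest(contamination=0.2, random_state=0)
flags = detector.fit_predict(transactions)  # -1 = anomaly, 1 = normal
print(flags)  # the large 3 a.m. burst should come back as -1

Because no fraud labels are required up front, a detector like this can run on live streams and surface the outliers for a human investigator in real time.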

In Closing

In a data-driven world, machine learning will be a key differentiator. As business processes become reliant on digital information, organizations have to adopt next-gen automation technologies not only to survive but to thrive. The beauty of combining Business Intelligence (BI) and Artificial Intelligence (AI) lies in the fact that business insights can be discovered at incredible speed. From detecting fraud attempts and cyber breaches to monitoring user behavior to establish patterns and predict customer actions, the scope for boosting performance and streamlining processes is prodigious. Nonetheless, machine learning tools are only as good as the data used to train them.

Image courtesy of Freepik

 

Author: Gabriel Lando