Start small, scale up: AI’s ‘not-so-secret’ to success

The pace and scope of AI breakthroughs are increasing, but enterprises that are just starting out are by no means lagging.

The biggest opportunities still lie ahead, but a recent Harvard Business Review study revealed that ambitious, transformational moon shots “are less likely to be successful than ‘low-hanging fruit’ projects that enhance business processes”.

Intel sees this in its work with customers too. The ones that have the most success with AI start their journey by spinning up smaller-scale proofs of concept (PoCs) on existing infrastructure. Their current Intel® Xeon® processor-based data center provides an ideal opportunity to prove the value of AI on a flexible, general-purpose foundation with a competitive total cost of ownership. Then, when it comes time to scale up, they can turn their attention to fine-tuning the combination of compute, software optimizations and processor memory bandwidth that is so critical to AI performance.

In its work with customers, Intel sees that those with the most success start their AI journey by spinning up smaller-scale proofs of concept on existing infrastructure.

Starting small and scaling for success

A small-scale natural language processing (NLP) pilot by a French cancer research organization delivered a transformative set of initial results and learnings, giving the organization the basis from which to expand the use case’s scope.

Before the organization developed its solution, it took 30 people six months to review patient records and identify patients suitable for clinical trials. The pilot project learned from data drawn from 24 million records and 1.25 million patients, then classified new, previously unseen data based on these learnings. The organization believes this system could cut the whole process down to a day.
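To make the approach concrete, the sketch below shows a deliberately simple text-classification baseline of the kind a small NLP pilot might start with, using scikit-learn’s TfidfVectorizer and LogisticRegression. The record snippets, labels and model choice are hypothetical; the organization’s actual models and data are not described here.

```python
# Illustrative sketch only: a simple text-classification baseline for flagging
# patient records as potential clinical-trial candidates. The record snippets,
# labels and model choice are hypothetical, not the organization's pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

records = [
    "metastatic adenocarcinoma, prior chemotherapy, good performance status",
    "benign nodule on follow-up imaging, no treatment indicated",
]
eligible = [1, 0]   # hypothetical trial-eligibility labels

# TF-IDF features feeding a linear classifier: a common, inexpensive first baseline
pipeline = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2), min_df=1)),
    ("clf", LogisticRegression(max_iter=1000)),
])
pipeline.fit(records, eligible)

# Score a previously unseen record
new_record = ["recurrent carcinoma, previously treated, stable disease"]
print(pipeline.predict_proba(new_record)[0][1])   # estimated probability of eligibility
```

A linear model over TF-IDF features is cheap to train on a single Xeon processor-based server, which is what makes a small pilot like this a practical starting point before scaling out.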

The team is now looking to scale its solution, which runs on high-performance Intel® Xeon® processor-based clusters, by deepening the search capabilities, refining the user interface, further optimizing performance, and expanding the number of users and locations.  

A French cancer research organization piloted a project using AI to identify patients suitable for clinical trials – it believes the system could do in a single day a job that normally takes 30 people six months.

Automation at the speed of life

Image recognition offers another example of how AI projects, once scaled up from successful initial proofs of concept, can have a transformative impact.

Staying with healthcare, radiologists make clinical judgments every day about whether patients’ scans are symptomatic of cancer. However, it would be difficult for them to describe how to identify every cancer in any image of any patient, irrespective of image quality or rotation. Yet this is exactly what a deep learning image recognition system can learn and automate, given sufficient data – processing images and flagging potential cancers with life-saving speed and accuracy.1
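For illustration only, the following is a minimal tf.keras convolutional classifier of the general kind described above; the architecture, input size and labels are placeholders, not the clinical system referenced in this article.

```python
# Minimal sketch of a deep learning image classifier; the architecture, input
# size and data are illustrative placeholders, not the clinical system above.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(224, 224, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # probability that a scan is flagged for review
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=...)   # requires a labeled set of scans
```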

Watch: Artificial intelligence is transforming the way enterprises automate image recognition to build better business solutions and lower operating costs. 

Deep learning-based image recognition systems can learn and automate the process of cancer identification – enabling them to flag potential cases with life-saving speed and accuracy.

The three technology factors underpinning AI performance2

AI performance is driven by a combination of compute, software optimizations and processor memory bandwidth, and the Intel® architecture in your data center can give you the AI you need on the hardware you already own.

Intel’s data science team recently wrote of its work on image classification for health and life sciences:

“Due to their support for greater memory footprints, CPU-based deep learning systems are uniquely equipped to handle the memory demand associated with training a neural network on large images and accommodating the size of the image batches… we demonstrated that a CPU-based system could handle a memory footprint in excess of 40GB for a real-world microscopy classification task.”
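A rough, hypothetical calculation illustrates why large images drive memory requirements up so quickly; the dimensions below are placeholders, and the 40GB figure quoted above also includes layer activations, gradients and model weights rather than just the input batch.

```python
# Back-of-envelope memory estimate for a single batch of large images
# (hypothetical dimensions; the 40GB figure above also covers activations,
# gradients and weights, not just the raw input batch).
height, width, channels = 2048, 2048, 3   # a large microscopy-style image
batch_size = 32
bytes_per_value = 4                       # float32

input_batch_gb = batch_size * height * width * channels * bytes_per_value / 1e9
print(f"Raw input batch alone: {input_batch_gb:.1f} GB")   # roughly 1.6 GB before any activations
```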

On the software side, by optimizing a number of deep learning libraries for many of the most popular AI frameworks, Intel has made it possible for data scientists and developers to work with their preferred tools on Intel hardware. These frameworks include TensorFlow*, Theano*, and more.
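As a simple example of what working with these frameworks on Intel hardware can look like in practice, the sketch below uses standard TensorFlow 2 settings to request oneDNN-optimized kernels and tune CPU threading; the thread counts are illustrative and should be matched to the machine’s cores and sockets.

```python
# Illustrative CPU tuning for TensorFlow on Intel® Xeon® processors; the
# environment variable and threading APIs are standard TensorFlow 2 calls,
# but the thread counts below are placeholders to adapt to your hardware.
import os
os.environ["TF_ENABLE_ONEDNN_OPTS"] = "1"   # request oneDNN-optimized kernels (default in recent TF builds)

import tensorflow as tf

tf.config.threading.set_intra_op_parallelism_threads(16)  # threads used inside a single op
tf.config.threading.set_inter_op_parallelism_threads(2)   # ops that may run concurrently
```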

Additionally, BigDL is a distributed deep learning library for Spark* that can run directly on top of existing Spark or Apache Hadoop* clusters. It allows for loading of pre-trained Torch* models into the Spark framework and can efficiently scale out to perform data analytics at big data scale.
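A minimal sketch of that workflow might look like the following, assuming BigDL’s classic Python API; method names such as Model.load_torch and Sample.from_ndarray have changed across BigDL releases, so treat these calls as assumptions to verify against your installed version.

```python
# Sketch only: loading a pre-trained Torch model into BigDL and scoring data
# distributed across a Spark cluster. API names follow the classic BigDL 0.x
# Python interface and may differ in newer releases.
import numpy as np
from pyspark import SparkContext
from bigdl.util.common import init_engine, create_spark_conf, Sample
from bigdl.nn.layer import Model

sc = SparkContext(conf=create_spark_conf())   # Spark conf carrying BigDL settings
init_engine()                                 # initialize BigDL across the cluster

model = Model.load_torch("model.t7")          # hypothetical path to a pre-trained Torch model

# Hypothetical feature arrays; in practice this RDD would come from existing Spark/Hadoop data
features_rdd = sc.parallelize([np.random.rand(3, 224, 224) for _ in range(8)])
samples = features_rdd.map(lambda f: Sample.from_ndarray(f, np.array([0.0])))  # dummy labels

predictions = model.predict(samples)          # distributed inference on the existing cluster
print(predictions.take(2))
```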

How ready is your organization for AI?

Intel works with many organizations looking to deploy artificial intelligence, and its continued optimization of both hardware and software means AI is within reach of almost any business.

Wherever you are in your AI journey, Intel’s broad portfolio of hardware and software offers a rich toolkit for building the most cost-effective deployment architecture for AI workloads, and you can start today.

Select the Best Infrastructure Strategy to Support Your AI Solution


Discover how best to deploy your AI solution on existing and accessible infrastructure.

Artificial Intelligence

Intel® technology-based solutions help businesses accelerate solutions, automate operations, and improve insights.

Learn more

The Anatomy of an AI Proof of Concept

Discover what makes an effective AI proof of concept.

View the infographic

Product and Performance Information

2 Intel® technologies’ features and benefits depend on system configuration and may require compatible hardware and software, or service activation. Results vary depending on configuration. No computer can be absolutely secure. Check with your computer manufacturer or retailer, or learn more at https://www.intel.ca.