AI Development Software
Develop, train, and deploy your AI solutions quickly with performance- and productivity-optimized tools from Intel.
Intel provides a comprehensive portfolio of AI development software for all your needs, including data preparation, training, inference, deployment, and scaling. All AI development software is built on the foundation of a standards-based, unified oneAPI programming model with interoperability, openness, and extensibility as core tenets.
End-to-End Python* Data Science and AI Acceleration
Products are grouped to match common AI workloads such as machine learning, deep learning, and inference optimization. You can also install only the tools you need from conda*, pip, and Docker* repositories. A full offline installer is also available.
- Optimized frameworks, a model repository, and model optimization for deep learning
- Extensions for scikit-learn* and XGBoost for machine learning
- Accelerated data analytics through the Intel® Distribution of Modin*
- Optimized core Python* libraries
- Samples for end-to-end workloads
Accelerate Data Analytics with Intel® Distribution of Modin*
Change one line of code to perform distributed pandas DataFrame processing. The library includes:
- Use of all available CPU cores on your machine for DataFrame processing
- A choice of back-end distributed processing engines: built-in heterogeneous data kernels (HDK), Dask, Ray, or HEAVY.AI*
- API compatibility with pandas: just change import pandas as pd to import modin.pandas as pd
- A single notebook that runs on your local machine and in the cloud
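The one-line swap can be sketched as follows. Stock pandas is shown here so the snippet runs anywhere; with the Intel Distribution of Modin installed, only the commented import line changes, and the DataFrame code stays the same (the sample data is illustrative):

```python
# Stock pandas. With the Intel Distribution of Modin installed, the only
# change needed is the import line:
#   import modin.pandas as pd
import pandas as pd

df = pd.DataFrame({"city": ["Portland", "Austin", "Portland"],
                   "sales": [120, 90, 60]})

# Every pandas operation below works unchanged under Modin, which
# distributes the work across all available cores.
totals = df.groupby("city")["sales"].sum()
print(totals["Portland"])  # 180
```

Because the API is pandas-compatible, existing scripts and notebooks pick up distributed execution without any other modification.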
Automate Model Optimization with Intel® Neural Compressor
Reduce model size and speed up inference for deployment on CPUs or GPUs. The open source library includes:
- Automation to help you get started using quantization techniques
- A variety of pruning approaches
- Knowledge distillation from a larger model to improve the accuracy of a smaller model
- Support for models created with PyTorch*, TensorFlow*, Open Neural Network Exchange (ONNX*) Runtime, and Apache MXNet*
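The core idea behind the quantization techniques the library automates can be illustrated in plain Python. The affine (scale plus zero-point) int8 scheme below is a generic sketch of post-training quantization, not Intel Neural Compressor's own implementation; the library selects and calibrates scales automatically, often per channel:

```python
def quantize_int8(values):
    """Affine (asymmetric) quantization of floats into the int8 range.

    Generic illustration only: real tools such as Intel Neural Compressor
    calibrate scales and zero points automatically.
    """
    lo, hi = min(values), max(values)
    scale = (hi - lo) / 255 or 1.0          # map the float range onto 256 levels
    zero_point = round(-lo / scale) - 128   # int8 value representing 0.0
    q = [max(-128, min(127, round(v / scale) + zero_point)) for v in values]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate floats from the int8 representation."""
    return [(x - zero_point) * scale for x in q]

weights = [-0.51, 0.0, 0.27, 1.02]          # toy "model weights"
q, scale, zp = quantize_int8(weights)
approx = dequantize(q, scale, zp)
# Each recovered weight is within one quantization step of the original.
assert all(abs(a - w) <= scale for a, w in zip(approx, weights))
```

Storing int8 values instead of 32-bit floats cuts model size roughly 4x and enables faster integer arithmetic at inference time, at the cost of the small rounding error bounded above.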
Write Once, Deploy Anywhere with the Intel® Distribution of OpenVINO™ Toolkit
Deploy high-performance inference applications from device to cloud, powered by oneAPI. Optimize, tune, and run comprehensive AI inference using the included optimizer, runtime, and development tools. The toolkit includes:
- Repository of open source, pretrained, and preoptimized models ready for inference
- Model optimizer for your trained model
- Inference engine that runs inference and outputs results on multiple processors, accelerators, and environments with write-once, deploy-anywhere efficiency
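A hedged sketch of the write-once, deploy-anywhere pattern with the OpenVINO runtime API (the model path and input below are placeholders; a real application would point at an IR file produced by the model optimizer from its own trained model):

```python
import numpy as np
import openvino as ov  # OpenVINO 2023+ Python API

core = ov.Core()
# "model.xml" is a placeholder for an IR file from the model optimizer.
model = core.read_model("model.xml")
# Only the device string changes per target: "CPU", "GPU", or "AUTO"
# to let the runtime pick the best available device.
compiled = core.compile_model(model, device_name="AUTO")

# Placeholder input; a real model defines its own input shape and dtype.
input_data = np.zeros(list(compiled.input(0).shape), dtype=np.float32)
result = compiled([input_data])[compiled.output(0)]
```

The device string is the only per-target change, which is what makes the same inference code portable across processors and accelerators.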
AI Development Software Resources
Stay Up to Date on AI Workload Optimizations
Sign up to receive hand-curated technical articles, tutorials, developer tools, training opportunities, and more to help you accelerate and optimize your end-to-end AI and data science workflows. Subscribe today; you can unsubscribe at any time.