Product and Performance Information

1. Intel® compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel® microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel® microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product user and reference guides for more information regarding the specific instruction sets covered by this notice.

2. “The potential for artificial intelligence in healthcare,” Future Healthcare Journal, June 2019.

3. “Advantages and limitations of total laboratory automation: a personal overview,” Clinical Chemistry and Laboratory Medicine (CCLM), February 2019.

4. Configurations: The original model was trained using TensorFlow 1.6 for Python 2.7 without Intel® optimizations and converted by GE Healthcare to OpenVINO™ 2018 R4. Hardware and configurations used for testing: GE Gen6-P image compute node 3.10.0-862.el7.x86_64; processor: Intel® Xeon® processor E5-2680 v3; speed: 2.5 GHz; cores: 12 cores per socket, Docker container has access to 22 CPU cores; sockets: two; RAM: 96 GB (DDR4); hyperthreading: enabled; security updates: Spectre and Meltdown updates applied. Software used for testing: TensorFlow version: 1.6 without Intel® MKL-DNN optimizations; GCC version: 2.8.5; Python version: 2.7; OpenVINO™ version: 2018 R4 (model server v0.2); OS: HeliOS 7.4 (Nitrogen).
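As a rough illustration of the conversion step referenced in this configuration, the sketch below shows how a frozen TensorFlow graph was typically converted to OpenVINO™ intermediate representation with the Model Optimizer in 2018-era releases. The file names, input shape, and output directory are placeholders, not GE Healthcare's actual pipeline.

```shell
# Hypothetical sketch: convert a frozen TensorFlow graph to OpenVINO IR
# using the Model Optimizer script (mo.py) shipped with OpenVINO 2018 R4.
# model.pb, the input shape, and the output directory are placeholders.
python mo.py \
    --input_model model.pb \
    --input_shape "[1,224,224,3]" \
    --data_type FP32 \
    --output_dir ./ir_model
```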
5. System test configuration disclosure: Intel® Core™ i5-4590S CPU @ 3.00 GHz, x86_64, VT-x enabled, 16 GB memory, OS: Linux magic x86_64 GNU/Linux, Ubuntu 16.04 inferencing service Docker container. Testing done by GE Healthcare, September 2018. The test compares the total inferencing time of the TensorFlow model (3.092 seconds) against the same model optimized by the Intel® Distribution of OpenVINO™ toolkit (0.913 seconds).
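The timings quoted above imply roughly a 3.4x reduction in total inferencing time; a quick check of that arithmetic:

```python
# Speedup implied by the figures in disclosure 5:
# baseline TensorFlow inferencing time vs. OpenVINO-optimized time.
baseline_s = 3.092   # TensorFlow model, total inferencing time (s)
optimized_s = 0.913  # OpenVINO-optimized model, total inferencing time (s)

speedup = baseline_s / optimized_s
print(f"Speedup: {speedup:.2f}x")  # roughly 3.39x
```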