AI and Deep Learning Libraries


Utilizing TensorFlow, Google's Open Source Library

The healthcare industry is one of many industries innovating with AI and deep learning technologies, and it is among the leading application areas for AI.

The number of AI and deep learning healthcare startups has skyrocketed in the last five years. AI is shaping the future of healthcare, from real-time pathology assessment to point-of-care interventions to predictive analytics for clinical decision-making.

Making these advancements possible are embedded deep learning libraries and cloud-based AI engines. Considered one of the best deep learning libraries available, TensorFlow is Google's open source framework for building and training deep neural networks, and it is suitable for use in embedded systems. The TensorFlow libraries make it far easier to incorporate self-learning elements and AI features such as speech recognition, computer vision, or natural language processing into an application.
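As a minimal illustration of what the TensorFlow libraries provide, the sketch below builds and trains a tiny Keras classifier on synthetic data. The signal shape, layer sizes, and data are purely hypothetical placeholders, not a validated clinical model.

```python
import numpy as np
import tensorflow as tf

# Hypothetical example: classify a 1-D physiological signal window
# (e.g. a short ECG segment) as "normal" vs. "abnormal".
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(256, 1)),           # 256-sample window, single channel
    tf.keras.layers.Conv1D(16, kernel_size=5, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # probability of "abnormal"
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Synthetic stand-in data, purely to show the training call.
x = np.random.randn(128, 256, 1).astype("float32")
y = np.random.randint(0, 2, size=(128, 1)).astype("float32")
model.fit(x, y, epochs=2, batch_size=32)
```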

Additionally, TensorFlow allows algorithms to be highly parallelized across multiple processors, making it well suited to implementing neural networks and related algorithms on modern multicore embedded hardware. TensorFlow's core is written in C++, and the framework is available through bindings for many languages, including Python, C++, Java, and Go, plus community-maintained bindings for languages such as C# and Scala.
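TensorFlow exposes the degree of parallelism to the application. Below is a sketch of tuning its thread pools for a hypothetical quad-core embedded target; the thread counts are placeholders to be sized for the actual hardware.

```python
import tensorflow as tf

# These settings must be applied before TensorFlow executes any ops.
# intra-op: threads used inside a single op (e.g. a large matrix multiply);
# inter-op: independent ops that may run concurrently with each other.
tf.config.threading.set_intra_op_parallelism_threads(4)  # e.g. a quad-core SoC
tf.config.threading.set_inter_op_parallelism_threads(2)

print(tf.config.threading.get_intra_op_parallelism_threads())
print(tf.config.threading.get_inter_op_parallelism_threads())
```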

There are other deep learning libraries that might prove a better choice for specific applications, including Caffe, developed at the University of California, Berkeley (and its successor Caffe2, from Facebook), or Torch, which was initially released in 2002. Cloud-based engines, such as IBM's Watson, are also available if the device has connectivity, allowing extremely complex off-device processing that might not be possible within an embedded system.

There are challenges to using AI in medical devices or healthcare settings, including:

• validating the clinical performance of the algorithms

• mitigating the risk of out-of-bounds answers (see the guard sketch after this list)

• ensuring enough coverage in the training data set that sensitivity and specificity hold over the full range of expected clinical inputs

• ensuring that the memory used for data calculations is bounded and deterministic, especially on lower-resource embedded platforms
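One common mitigation for out-of-bounds answers is a simple guard around inference that rejects inputs outside the validated range and flags implausible outputs. The sketch below assumes a trained Keras model; the bounds are hypothetical placeholders, not clinically derived limits.

```python
import numpy as np

INPUT_MIN, INPUT_MAX = -5.0, 5.0   # assumed bounds of the validated input space
OUTPUT_MIN, OUTPUT_MAX = 0.0, 1.0  # model emits a probability

def guarded_predict(model, window: np.ndarray):
    # Reject inputs outside the range covered by training/validation data.
    if window.min() < INPUT_MIN or window.max() > INPUT_MAX:
        return None, "input out of validated range"
    prob = float(model.predict(window[np.newaxis, ...], verbose=0)[0, 0])
    # Flag outputs that are not plausible rather than silently trusting them.
    if np.isnan(prob) or not (OUTPUT_MIN <= prob <= OUTPUT_MAX):
        return None, "output out of bounds"
    return prob, "ok"
```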

The availability of large data sets has improved the performance of AI systems, allowing broad coverage of the problem space. In addition, gathering data in real time during use, and providing a feedback mechanism, allows continuous learning and optimization of the system, as long as the system remains verifiable, safe, and effective.
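A minimal sketch of such a feedback loop, assuming the trained Keras model from the earlier example, newly labelled field data, and a fixed verification set; the acceptance threshold is a placeholder, and a real device would gate any update through its full verification process.

```python
def update_with_feedback(model, new_x, new_y, verif_x, verif_y):
    # Keep a copy of the current weights so an unacceptable update can be rolled back.
    old_weights = model.get_weights()
    model.fit(new_x, new_y, epochs=1, batch_size=32, verbose=0)
    loss, accuracy = model.evaluate(verif_x, verif_y, verbose=0)
    if accuracy < 0.95:              # assumed acceptance threshold
        model.set_weights(old_weights)  # roll back: stay verifiably safe and effective
        return False
    return True
```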

A regulatory agency will require that the algorithm has been trained and validated with a data set that, at a minimum, includes:

• data that is in range and expected

• input data that includes confounding components, such as potential human-factors variations and variations in the input signal, e.g. line-frequency noise or interference from non-relevant physiological sources (a data-augmentation sketch follows this list)

• clinical variation due to medications
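As a sketch of how line-frequency interference might be injected into otherwise clean training windows to cover that confounder; the sampling rate, mains frequency, and amplitude are assumptions.

```python
import numpy as np

FS = 250.0        # assumed sampling rate in Hz
MAINS_HZ = 60.0   # 50.0 in many regions

def add_line_noise(window: np.ndarray, amplitude: float = 0.1) -> np.ndarray:
    # window: shape (n_samples,) -- a single-channel signal segment
    t = np.arange(window.shape[0]) / FS
    return window + amplitude * np.sin(2.0 * np.pi * MAINS_HZ * t)
```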

Agencies will also require evidence that the sensitivity and specificity of the system, measured against the full set of expected and unexpected inputs, meet the performance and safety requirements of the system.
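Sensitivity and specificity reduce to simple ratios over the confusion matrix; below is a short sketch of computing them from binary labels and thresholded model outputs, the kind of evidence a submission would summarize over the full test set.

```python
import numpy as np

def sensitivity_specificity(y_true: np.ndarray, y_pred: np.ndarray):
    # Counts of true/false positives and negatives for a binary classifier.
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")  # true positive rate
    specificity = tn / (tn + fp) if (tn + fp) else float("nan")  # true negative rate
    return sensitivity, specificity
```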

This functionality is also available for commercial and industrial systems, opening the door for a deep learning presence in deeply embedded systems such as appliances, industrial controls, IoT endpoints, and energy monitoring.
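For deeply embedded targets, a trained TensorFlow model is typically converted to TensorFlow Lite. The sketch below assumes the trained Keras model from the earlier example and shows conversion plus interpreter-based inference; the optimization setting is illustrative.

```python
import numpy as np
import tensorflow as tf

# Convert the trained Keras model to a compact TensorFlow Lite flatbuffer.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # e.g. weight quantization
tflite_model = converter.convert()

# On-device style inference with the TFLite interpreter.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
interpreter.set_tensor(inp["index"], np.zeros(inp["shape"], dtype=np.float32))
interpreter.invoke()
print(interpreter.get_tensor(out["index"]))
```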

AI is also driving optimization of the radio spectrum. Given the projected increase in the demand for data, the spectrum will be managed by intelligent systems, continuously updating spectrum usage and modulation mechanisms for increased communication throughput. DARPA recently held a competition in which teams competed to demonstrate optimization and maximization of cooperative spectrum use.

Deep learning technologies are enabling truly remarkable advancements across industries.