Technology has transformed healthcare delivery and pharmacology, helping to improve patient outcomes and provider efficiency. But this adoption has also opened wide gaps in clinical safety and cybersecurity. Much of the transformation has been fueled by the development of artificial intelligence (AI) and the Internet of Medical Things (IoMT), both of which are becoming ever more prevalent across California providers. In fact, IoT devices now account for more than 75% of connected hospital endpoints, while many new clinical applications are AI-enabled. Do we really understand the risks this new technology is introducing?
Clinical decision support, medical imaging, and precision medicine all rely heavily on AI, machine learning, and deep learning. Yet AI requires vast lakes of protected health information (PHI) for training, much of which cannot be de-identified without compromising its value for learning. That training data can be tainted and AI models poisoned, leading to model corruption and to safety and security concerns. AI has been applied in many positive ways, but criminals have more recently put it to nefarious use, in ways that may raise significant cybersecurity, patient safety, and regulatory compliance concerns in the future.
This presentation will look at healthcare technology transformation and the rise of offensive AI, while making the case for greater development of defensive AI cybersecurity tools. It will also examine the degree of implicit trust clinicians place in their medical applications and devices, and question whether new technology should require a higher level of security and safety training.