Algorithm Development for Medical Devices
Simbex’s Disciplined Approach to Algorithm Development
Algorithm development has never been more accessible. Open-source tools, algorithms, and online learning opportunities have broken down many implementation barriers by lowering the cost of tools and talent, creating new opportunities and the ability to advance products to market quickly.
Along with the exponential growth of algorithm development, we’ve seen an increase in misplaced trust in algorithm output, an inability to replicate results, and a lack of controls, leading to increased risks to patients and users. A core belief at Simbex has always been that data are sacred. This means we take a holistic perspective and a disciplined approach to successful algorithm development and deployment. In this post we discuss two methodological approaches to developing an algorithm and present a series of principles that guide our development process.
Algorithm Development Pathways
We categorize algorithm development methods into two primary categories: algorithms derived from analytical models, and algorithms built with machine learning.
A myriad of analytical models have been created to describe phenomena ranging from astrodynamics and quantum mechanics to electrical dynamics and physiology (heart rate, breath rate, etc.). Whether derived experimentally or theoretically, these models can be used to predict behavior and explain the observations from sensor measurements. Commonly, we build an algorithm on a mathematical model that describes the underlying system dynamics (e.g. a pendulum model to simulate human walking), use wearable sensor measurements to drive the model (e.g. inertial measurement units mounted at the body’s center of mass), and then use statistical methods to validate the algorithm against reference standards (e.g. comparing gait estimates against a motion capture system). Some advantages of the analytical model approach include:
- Gaining deeper insights into the underlying mechanisms that drive system behavior.
- Constraining the variable space which reduces the chance of falsely identifying trends when data outside of expected ranges are presented to the model.
- Understanding the problem more deeply, which often yields additional algorithms to support the product. For example, signal processing that filters out noise or unwanted signal sources requires some understanding of the system dynamics.
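As a concrete illustration of this model-driven pipeline, here is a minimal sketch of an inverted-pendulum gait model of the kind mentioned above: step length is estimated from the vertical center-of-mass excursion (as might be obtained from a trunk-mounted IMU), and the estimate is compared against a reference standard with a simple error metric. The function names, the leg-length parameter, and the RMSE validation metric are illustrative assumptions, not Simbex's actual algorithm.

```python
import numpy as np

def step_length_inverted_pendulum(com_excursion_m, leg_length_m):
    """Estimate step length from vertical center-of-mass (COM) excursion
    using inverted-pendulum geometry (a Zijlstra-style approach).

    com_excursion_m: vertical COM displacement during one step (m), e.g.
        obtained by double-integrating a trunk-mounted IMU's vertical
        acceleration.
    leg_length_m: pendulum length, approximated by the subject's leg length (m).
    """
    h, l = com_excursion_m, leg_length_m
    # The COM travels on an arc of radius l about the stance foot:
    # l^2 = (l - h)^2 + d^2  =>  half-step d = sqrt(2*l*h - h^2)
    return 2.0 * np.sqrt(2.0 * l * h - h ** 2)

def rmse(estimates, reference):
    """Root-mean-square error of algorithm output vs. a reference
    standard (e.g. step lengths from a motion capture system)."""
    e = np.asarray(estimates, dtype=float)
    r = np.asarray(reference, dtype=float)
    return float(np.sqrt(np.mean((e - r) ** 2)))
```

In practice the per-step COM excursion itself must first be derived from the raw IMU signal (drift-corrected integration, step segmentation), which is where the understanding of system dynamics noted above pays off.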
As an example of this approach, we developed a 3D rigid body model of human head dynamics to measure head impacts from multiple sensors located in a helmet. As you can imagine, measuring impacts in the field is challenging: the data are noisy, unexpected actions occur (e.g. helmets dropped on the ground), helmets can be ill-fitting, and so on.
To overcome these issues, we had to develop a system with an understanding of the dynamics of helmet compression on a head during an impact event.
These models helped us identify the materials required to obtain the desired stiffness and damping properties for our system, develop a noise cancellation algorithm based on the physics of what the accelerometer sensors measured during an impact event, and create a 3D rigid body model using the accelerometer sensors in the helmet to quantify impact magnitude and direction.
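The actual noise cancellation algorithm was derived from the physics of helmet compression; as a generic stand-in for that kind of signal conditioning, the sketch below suppresses high-frequency content in an accelerometer trace with a simple moving-average low-pass filter. The filter choice and window size are hypothetical illustrations, not the production method.

```python
import numpy as np

def moving_average_lowpass(signal, window=5):
    """Attenuate high-frequency content in an accelerometer trace with a
    moving-average (boxcar) low-pass filter. Edges are padded by repeating
    the end samples so the output length matches the input."""
    x = np.asarray(signal, dtype=float)
    pad = window // 2
    padded = np.pad(x, pad, mode="edge")
    kernel = np.ones(window) / window
    smoothed = np.convolve(padded, kernel, mode="valid")
    return smoothed[: x.size]
```

A physics-informed approach would instead model the expected sensor response during helmet compression and subtract the predicted artifact, rather than filtering by frequency alone.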
Machine learning is a method of data analysis that automates analytical model building, using rules to identify patterns and make decisions with minimal human intervention. Common machine learning applications include image classification, natural language processing, and time-series prediction. Machine learning models are useful when:
- It is unclear how to build a mathematical or statistical equation that maps data inputs to outputs. When it’s not clear which features matter, you can include as many feature dimensions as you want.
- Exploratory data analysis didn’t yield meaningful insights, or the patterns that differentiate classifications are complex and difficult to see.
- The output of the model is all that is needed, and understanding the underlying factors that drive the algorithm isn’t valuable.
- Speed matters. Training a machine learning algorithm can be very fast if you have sufficient training and test data.
Machine learning and analytical models are not mutually exclusive. Going back to our example above, we also developed a multi-layer perceptron neural network to help with data quality verification before uploading data to the cloud. In our early work, we didn’t have a clear understanding of what a real-world field impact looked like and how it was distinguishable from a dropped helmet, but we did have video data that was synced with accelerometer measurements from the helmet. We extracted many features from the accelerometer data (e.g. peak magnitude, time to peak, rate of peak, impact velocity, etc.) and used these features as input to our neural network to classify each impact as either valid or invalid.
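A sketch of the feature-extraction step described above, assuming a resultant-acceleration trace as input. The specific feature set (peak magnitude, time to peak, mean rise rate) is an illustrative subset, not the production pipeline; in practice, feature vectors like these would be paired with video-derived valid/invalid labels and fed to an MLP classifier (e.g. scikit-learn's `MLPClassifier`).

```python
import numpy as np

def extract_impact_features(accel_g, fs_hz):
    """Summarize a resultant-acceleration trace (in g) into a small
    feature vector: [peak magnitude (g), time to peak (s), mean rise
    rate (g/s)]. These serve as classifier inputs for distinguishing
    valid impacts from artifacts such as dropped helmets."""
    a = np.asarray(accel_g, dtype=float)
    peak = float(a.max())
    peak_idx = int(a.argmax())
    time_to_peak = peak_idx / float(fs_hz)
    # Mean slope from trace start to the peak; 0 if the peak is the first sample.
    rise_rate = (peak - a[0]) / time_to_peak if peak_idx > 0 else 0.0
    return np.array([peak, time_to_peak, rise_rate])
```

A key design choice is that each feature is cheap to compute on-device, so candidate impacts can be screened before any cloud upload.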
A Holistic Approach to Development
With either approach, we adhere to a series of principles and processes for the algorithm development life cycle:
- KISS (Keep It Simple, Simbex). This principle carries considerable weight for us in development, deployment, and maintenance. In our experience, overly complex models tend not to be robust for real-world use: they are not sufficiently generalizable and tend to overfit the data, producing acceptable results in very specific circumstances but falling apart when unexpected data are encountered.
- Discipline in data and algorithm management: All results are reproducible, traceable and verifiable through our DataOps processes. These processes include:
- Source-controlling analytics code through repositories, ensuring traceability between data and algorithm versions. Formatting and best-practice guidelines are also defined to ensure consistency across developers.
- Master data management, including source-controlling and protecting the development data. Standardized data formatting and required associated metadata provide context for the developer.
- Peer review of analysis code and algorithm output to verify the approach.
- Communication with the client. Establish clear marketing and algorithm requirements that define user need, benefit, and value.
- Algorithms cannot be developed in isolation; good design extends beyond the outputs.
- When designing algorithms for wearables, power budget and user experience are key drivers for user engagement.
- Tradeoffs need to be made between performance and implementation, which requires a full understanding of the product, electronics, computational capacity, sensor limitations, etc.
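The traceability principle above can be made concrete with a small sketch: hashing the input data and recording the algorithm version alongside every result ties each output back to exactly what produced it. The record fields and function name below are hypothetical; production DataOps tooling (source-control repositories, data versioning systems) carries far more.

```python
import hashlib
import json
from datetime import datetime, timezone

def provenance_record(data_bytes, algorithm_version, extra_metadata=None):
    """Build a minimal provenance record tying a result to the exact
    data (by SHA-256 content hash) and algorithm version that produced
    it, so results remain reproducible, traceable, and verifiable."""
    record = {
        "data_sha256": hashlib.sha256(data_bytes).hexdigest(),
        "algorithm_version": algorithm_version,
        "generated_utc": datetime.now(timezone.utc).isoformat(),
    }
    if extra_metadata:
        record.update(extra_metadata)  # e.g. sensor ID, subject cohort
    return json.dumps(record, sort_keys=True)
```

Storing such a record with every algorithm output makes peer review and result replication a matter of re-running the named algorithm version against the hashed data.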
Successful algorithm and analytics development requires a holistic view and a disciplined approach from development through deployment and oversight. These principles have been established through years of algorithm development across a wide range of biomedical problems, and they result in higher-quality algorithms and higher standards. The principles are universal and apply to both analytical and machine learning approaches.