Fall Detection: Design Challenges, Physics Modeling, and Targeted Testing
Wearable motion sensors are everywhere, from smart watches to fitness trackers. As a result, it is now possible to design wrist-worn products that detect when a person falls and send an alert. Fall detection devices that work reliably would be invaluable in hospital and nursing home settings and for elderly people who live alone.
However, the complexity of human movement makes it very challenging to design a device that can recognize which movements indicate an actual fall. Falls can happen in a variety of ways, and detecting them with a device worn on the wrist, far from the body’s center of mass, adds yet another layer of complexity. Using human test subjects in the device design process has its own limitations: it is difficult to generalize about falls from a small number of testers, yet conducting hundreds or thousands of test falls is not feasible due to safety concerns. To make this complex task tractable, we rely on two techniques: physics modeling and targeted testing.
Physics Modeling
How to Constrain the Fall Detection Model – No Falls Required!
When we use physics to model what falls look like, we apply constraints such as gravity (human beings never fall up) and the knowledge that body segments can only move in certain directions. For a fall detection device worn on the wrist, the engineer must rule out the types of movement that are impossible for a human arm to make. This leaves a much smaller subset of possibilities.
By modeling human motion around the known behavior of the body’s center of mass, we can identify specific rules such as:
1. Arm accelerations cannot occur in directions that the joints of the arm and hand do not allow.
2. During a fall, the center of mass typically has little horizontal movement unless there is also a change in ground elevation (e.g., falling on a hill).
3. The impact of a fall will create acceleration values in all three axes (x, y, and z).
We can then use these rules to create constraints that help us rule out impossible falls and make defensible assumptions about movement. It is important to have an interdisciplinary development team verify the list of assumptions you have made about movement.
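A minimal sketch of how physics-based rules like these might be encoded as plausibility checks on wrist accelerometer samples. All thresholds, names, and values here are illustrative assumptions for explanation only, not figures from a validated product:

```python
import math

# Hypothetical plausibility checks for wrist-worn accelerometer data.
# All thresholds below are illustrative placeholders, not validated values.

GRAVITY = 9.81                     # m/s^2
MAX_ARM_ACCEL = 80.0               # assumed ceiling for humanly possible arm acceleration (m/s^2)
IMPACT_THRESHOLD = 3.0 * GRAVITY   # assumed minimum magnitude for a fall impact

def magnitude(ax, ay, az):
    """Overall acceleration magnitude across the x, y, and z axes."""
    return math.sqrt(ax**2 + ay**2 + az**2)

def is_physically_plausible(ax, ay, az):
    """Discard samples no human arm could produce (likely sensor glitches)."""
    return magnitude(ax, ay, az) <= MAX_ARM_ACCEL

def looks_like_impact(ax, ay, az):
    """Rule 3 above: a fall's impact shows up in all three axes at once."""
    return (magnitude(ax, ay, az) >= IMPACT_THRESHOLD
            and all(abs(a) > 0.5 * GRAVITY for a in (ax, ay, az)))

# A glitchy reading far beyond human limits is discarded, while a hard
# three-axis spike is flagged as a candidate fall impact.
print(is_physically_plausible(500.0, 0.0, 0.0))   # False: impossible arm motion
print(looks_like_impact(20.0, 18.0, 22.0))        # True: candidate fall impact
```

Checks like these cheaply shrink the space of events the detection algorithm has to reason about before any classification happens.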
How to Ensure a Representative and Balanced Dataset During Testing
Once the group of possible and probable body segment movements has been identified, it is time to create a list of targeted tests that will be used to collect data about each possible type of movement. Testing is difficult but key to recognizing points of failure. Creating these targeted tests involves categorizing the remaining human motions into different classifications. Some example classifications of falls could be: falling forward, arms going forward, falling backward (e.g., slipping vs. tripping), arms going backward, and post-fall movements like rocking in pain. Knowing the types of falls that need to be observed helps determine the correct testing protocol so that each type of fall occurs during data collection. We keep our fall testing subjects safe with safety harnesses and cushioned mats to make the landings more comfortable, but it is still a time-consuming and demanding process to recreate each fall category. Here are further areas to consider when testing a fall detection device:
- Targeted tests must also take false positive incidents into consideration. A false positive is an event (i.e., an alert for a fall) that turns out to be false (i.e., a fall did not occur). Failed assumptions in fall detection include assuming that any time a person accelerates downward they are falling (they might instead be riding in a car or an elevator), or that people will flail their arms as they fall (many fainters don’t!). It would be easy to assume that the biggest product development obstacle is making the device sensitive enough to detect falls, but it turns out to be the opposite: humans make a variety of movements that can generate false positives that look very similar to falls. This can be annoying if a fall detection device constantly signals that a fall has occurred when, in reality, the wearer was simply making some movement that the engineering team had failed to consider. False positives can also be dangerous because they may lead to alarm fatigue and “cry wolf” scenarios in which a real fall is ignored by the caregivers who receive the alarm.
- Testing protocols should include an equal number of trigger events and non-trigger events (more generally, positive and negative events). In the falling example, training must occur on an equal number of: 1) non-fall events, such as jumping, sitting on the floor without falling, or any other action that might be confused for falling, and 2) fall events, such as fainting, tripping, or falling off a chair. Otherwise, your device will be biased toward detecting the events that were seen most often during algorithm training. As an extreme example, imagine a training dataset with 2 faints and 98 burpee exercise events (where a person jumps from standing to a pushup position). The algorithm will be trained to lean toward classifying events as non-falls whenever any doubt exists, since 98% of the time that leads to an accurate classification. Algorithms, after all, are only concerned with being accurate overall, rather than making the correct decision at any particular moment, and that is a dangerous trade-off in a fall scenario. A more balanced dataset allows the algorithm to make decisions based on the movements themselves, since it can no longer maximize accuracy by simply defaulting to one class.
- Test subjects should be recruited to be as varied as possible so you can observe how your device behaves on different people (young/old, tall/short, athletic/unathletic, male/female). A literature review can help identify how falls might vary across different populations.
- Finally, it is important to understand the limitations of the testing protocol to guide future iterations. During development, the most common limitations are budget and time. In the drive to get an MVP to market, data collection may be kept to a minimum. Fall detection studies frequently have the limitation that test subjects are drawn from the engineering team and tend to be primarily young, healthy males. However, people who are more likely to fall may have mobility differences or use an assistive device such as a cane that may cause them to fall differently. They will probably have slower response times, and their starting positions and recovery would look very different from those of a 20-year-old man falling. For ethical reasons, it is often impossible to ask older subjects to fall, so that is a limitation to consider and potentially plan to augment with data collected during real-world usage of the devices.
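The dataset-balance point above can be shown with a small sketch using the illustrative numbers from the text (2 faints vs. 98 burpees): a degenerate classifier that ignores the motion data and always predicts “non-fall” scores 98% accuracy while missing every real fall, which is exactly why overall accuracy is misleading on imbalanced data:

```python
# Illustrative sketch of the accuracy trap on an imbalanced dataset:
# 2 fall events (faints) vs. 98 non-fall events (burpees), as in the text.

labels = ["fall"] * 2 + ["non-fall"] * 98

# A degenerate "classifier" that ignores the motion data entirely and
# always predicts the majority class.
predictions = ["non-fall"] * len(labels)

accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)
falls_detected = sum(p == "fall" and y == "fall"
                     for p, y in zip(predictions, labels))

print(f"accuracy: {accuracy:.0%}")                    # 98% -- looks great on paper
print(f"real falls detected: {falls_detected} of 2")  # 0 -- dangerous in practice
```

On a balanced 50/50 dataset, the same always-predict-one-class strategy would score only 50%, so the training process is forced to learn from the movements themselves.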
At Simbex, our product development processes focus on sound science, design, testing, and iteration to ensure high-quality products that meet the needs of the user and remain effective over time. To accomplish this mission, we have to become comfortable solving complicated problems by breaking them down systematically. In this example, we used physics modeling and targeted testing to accurately detect falls while minimizing false positives. We hope that our work in this space will benefit the health and well-being of individuals with health challenges and the elderly who want to remain living at home. We are excited to harness the skills used in this project for future projects involving movement detection with wearable motion sensors.
About the Author:
Aroob Adbelhamid has over a decade of experimental design experience. In that time, she has dealt with a variety of different data sources and synthesized findings to elucidate previously unobserved or misunderstood phenomena. Aroob has a PhD in Chemistry from the University of Colorado, Boulder, and a BA in Natural Sciences: Chemistry from Fresno State.