Mocap in action: In conversation with Adam Cyr, Biomechanist at Mary Bridge Children’s Hospital
A long-standing client, Mary Bridge Children’s Research and Movement Laboratory (RML) is a multidisciplinary facility that houses a team of engineers and clinicians who conduct research and use the latest technologies to identify, diagnose, and treat individuals with movement challenges.
We caught up with Adam Cyr, a biomechanist at the facility, who has a keen interest in applying engineering principles and techniques to understand how the human body performs. His goal is to improve injury prevention and treatment.
Here, we share what he had to say about his work and how he uses mocap as part of his day-to-day biomechanics research.
Could you give us a quick overview of your background as it relates to the world of biomechanics and biomechanics research?
After completing my studies, I briefly worked at a company doing forensic biomechanics before I found myself at the Research and Movement Lab at Mary Bridge Children’s Hospital. At the RML, we see patients with a wide variety of concerns, including neurological, muscular, and orthopedic disorders. We also see people who are looking to enhance their performance or who suffer from sports-related injuries.
How do you use motion capture technology in the work you do every day?
The more data we can collect, the better. We want to look at kids doing functional tasks. If we see a patient today and collect data on how they move in their preferred way, and then they have some sort of intervention, we have baseline data we can use to assess whether their movement has actually improved. Our goal is to inform the clinical providers, whether they’re surgeons or physical therapists, and give them objective data so they can make better decisions.
On a typical day, we’ll spend a few hours with a patient either in the morning or the afternoon. We’ll prep the room to make sure that the motion capture system is ready and that the markers are ready to go. We’ll take a subjective history and do a physical exam. Then we’ll put the markers on and have the patient do basic movements. If there’s any particular activity that is causing a problem, we will have them do that activity specifically. After they leave, I compile the data, process it, and turn it into graphs and meaningful insights for our therapists to review. It’s great to work this closely with clinicians and see the data and graphs transform into information that means something.
Can you walk us through your experience using Motion Analysis and share some of the features you find most useful?
The motion capture system I inherited in my current position was an older one. We were very fortunate to be able to upgrade to some newer Motion Analysis cameras recently. The new tech is very impressive. Everything is getting smaller, the optics are better, the speed is better, and these cameras can track much smaller markers.
The cameras are also more advanced, which makes it easier to do things right the first time and not waste time cleaning up the data. This speeds up patient processing times. We want to get a report back to our patients within a couple of weeks, and if I’m spending a day cleaning up data, that isn’t possible.
When I do have to clean up data, there are some great features on the backend that make it easier to do so. For example, if a marker dropped off and you didn’t notice, you can use virtual markers to fill in the data gap. I’ve also started exploring what they call the Sky Interface, which allows me to build my own scripts and run them as a batch process. I’ve been working closely with the Motion Analysis team on this and they’ve been hugely helpful. When we collect EMG data, there’s a time delay, so we need to shift the data over for it to line up correctly with the motion data. With the Sky Interface, I can code something so that I just have to hit one button and it goes through all of my captures and automatically shifts the data over.
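The batch workflow Adam describes boils down to applying the same fixed time shift to the EMG channels in every capture. The conversation doesn’t show the actual Sky Interface or Cortex API calls, so the sketch below is a generic, hypothetical Python version that operates on exported CSV files; the folder name, column layout, and the delay value are all assumptions for illustration, not the real interface.

```python
# Hypothetical sketch: batch-shift EMG channels by a known acquisition delay so
# they line up with the motion capture data. The CSV layout, column names, and
# delay value are illustrative assumptions, not the Sky Interface API.
from pathlib import Path

import pandas as pd

EMG_DELAY_S = 0.048              # assumed fixed EMG latency, in seconds
CAPTURE_DIR = Path("captures")   # assumed folder of exported capture files


def shift_emg(df: pd.DataFrame, delay_s: float) -> pd.DataFrame:
    """Shift every EMG column earlier by delay_s so it aligns with the mocap frames."""
    frame_rate = 1.0 / (df["time_s"].iloc[1] - df["time_s"].iloc[0])
    shift_frames = round(delay_s * frame_rate)
    emg_cols = [c for c in df.columns if c.startswith("EMG_")]
    out = df.copy()
    # A negative shift moves the samples earlier in time; the tail is padded with NaN.
    out[emg_cols] = out[emg_cols].shift(-shift_frames)
    return out


def batch_process(capture_dir: Path) -> None:
    """One-button pass over every exported capture in the folder."""
    for csv_path in sorted(capture_dir.glob("*.csv")):
        data = pd.read_csv(csv_path)
        aligned = shift_emg(data, EMG_DELAY_S)
        aligned.to_csv(csv_path.with_name(csv_path.stem + "_aligned.csv"), index=False)
        print(f"Shifted EMG in {csv_path.name}")


if __name__ == "__main__":
    batch_process(CAPTURE_DIR)
```

The appeal of scripting it this way is exactly what Adam describes: the delay correction is applied identically to every capture in one pass, rather than by hand per file.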
We’re also starting to get into real-time feedback using Cortex software. In a clinical setting, we’d use this to better understand upper body motion. For example, we’d put markers on the elbow, the arm, and the torso and ask children to reach around so we can see how far they can reach. With real-time feedback, it’s possible to have them reach for virtual markers on a screen, a bit like they are playing a video game. It would all be done in real time using the Motion Analysis workflows I’ve learned. In the work I do, it’s been enormously helpful to be able to pick up the phone and connect with the Motion Analysis customer support team or their engineering and technical teams, because they are so willing to help when I have a problem I need to figure out right away.
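The reach-for-a-virtual-marker game Adam mentions reduces to a simple real-time check: compare the streamed position of a tracked marker against a virtual target and give feedback when it is reached. The sketch below does not use the Cortex real-time API; it is a hypothetical Python loop with a placeholder get_marker_position() stand-in, and the marker name, target location, and threshold are all assumptions, shown only to illustrate the shape of the logic.

```python
# Hypothetical sketch of real-time reach feedback. The streaming source,
# marker name, target location, and threshold are illustrative assumptions.
import math
import time

TARGET_POSITION = (0.40, 0.25, 1.10)   # virtual target in metres (assumed lab coordinates)
REACH_THRESHOLD_M = 0.05               # how close the marker must get to count as a "reach"


def get_marker_position(marker_name: str) -> tuple[float, float, float]:
    """Placeholder for a real-time marker stream from the mocap system (not a real API call)."""
    raise NotImplementedError("Connect this to your real-time marker stream.")


def reach_game(marker_name: str = "R_WRIST", duration_s: float = 30.0) -> None:
    """Poll the marker position and report when the child touches the virtual target."""
    start = time.monotonic()
    while time.monotonic() - start < duration_s:
        position = get_marker_position(marker_name)
        if math.dist(position, TARGET_POSITION) < REACH_THRESHOLD_M:
            print("Target reached! Move it somewhere new.")
        time.sleep(1 / 60)   # roughly match a 60 Hz display update
```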
If you, like Adam, want to leverage motion capture innovation to better understand movement-related conditions or to track patterns in biomechanical movement, we can help. Learn more about how our team can support your mocap needs by scheduling a demo today.