10 Surprisingly cool career paths in motion analysis

You might think motion capture is all about Hollywood stars prancing around in spandex suits, but the applications of this cutting-edge technology go far beyond the silver screen. In fact, motion analysis experts are in high demand across a diverse range of sectors, each offering its own unique brand of fun and fulfillment. Let’s take a look:

1. Biomechanist barnstormers

As a motion analysis pro in the world of biomechanics, you’ll get to study the mechanics of the human body in mind-bending detail. Whether you’re helping athletes optimize their performance or assisting doctors in rehabilitation, your work will have a tangible impact on people’s lives. Plus, you get to geek out over fancy terms like “joint kinematics” and “ground reaction forces” – what’s not to love?

2. Virtual virtuoso

Love the idea of creating immersive virtual worlds? Motion analysis is the key to unlocking the next generation of gaming, VR, and animation. Become a motion-capturing maverick, and you could be the mastermind behind the captivating movements of your favorite video game characters or the lifelike animations that wow audiences.

3. Robotic rockstar

Ever dreamed of programming robots to move with the grace and dexterity of a human? Motion analysis is your ticket to the cutting edge of robotics and automation. Analyze movement patterns, optimize trajectories, and bring a touch of humanity to the machines of the future.

4. Sports sensation

For the athletically inclined, motion analysis offers a front-row seat to the inner workings of elite sports. Whether you’re helping coaches fine-tune training regimes or identifying injury risk factors, your work will give you an insider’s view of the high-stakes world of professional athletics.

5. Dance dynamo

Who says motion analysis is all about crunching numbers? If you’ve got a passion for the performing arts, you can put your movement expertise to work choreographing captivating dance routines or analyzing the technique of prima ballerinas. Get ready to pirouette your way into an exciting new career.

6. Accident investigator

When things go wrong, motion analysis can be a game-changer. From reconstructing car crashes to analyzing workplace incidents, your ability to break down complex movements can help uncover the truth and prevent future accidents.

7. Fashion forward

Haute couture may seem like an unlikely destination for a motion analysis pro, but the industry is actually teeming with opportunities. Leverage your movement expertise to design ergonomic clothing, optimize garment fit, and even enhance the runway experience with cutting-edge motion capture.

8. Medical maverick

In the world of healthcare, motion analysis is revolutionizing the way we diagnose, treat, and rehabilitate patients. From analyzing gait patterns to monitoring neurological conditions, your skills can make a real difference in people’s lives.

9. Industrial innovator

Motion analysis isn’t just for the glitz and glamor – it’s also transforming the way we approach industrial processes. Optimize manufacturing workflows, improve product design, and even enhance workplace safety through the power of movement data.

10. Wildlife wizard

For the nature enthusiasts out there, motion analysis can open the door to a career studying the remarkable movements of the animal kingdom. From tracking the migratory patterns of majestic creatures to analyzing the biomechanics of our furry, feathered, and finned friends, the possibilities are endless.

So, there you have it – ten surprisingly awesome career paths in the world of motion analysis. Whether you’re a data-crunching dynamo or a movement-loving maverick, the opportunities abound. Why not strap on your motion capture suit and get ready to shake up the world?

11. Mocap manufacturer

If you’re technically inclined, why not consider a role in the motion capture manufacturing industry? We employ all of the above, as well as high-end hardware engineers, superb software engineers, marketing maestros, sales specialists, admirable administrators, terrific technicians and many more.

7 Ways movement tracking enhances sports performance

Movement tracking technologies, such as motion capture systems, have long been recognized for their valuable applications in sports performance analysis. However, beyond the obvious uses, these advanced tools can unlock a wealth of unexpected insights that can truly transform an athlete’s training and competitive edge.

1. Injury prevention and rehabilitation

By capturing detailed movement data, sports scientists can identify subtle biomechanical imbalances or movement patterns that predispose athletes to certain injuries. This allows for targeted interventions and adjustments to training regimes to mitigate injury risk. Similarly, motion tracking is invaluable in monitoring an athlete’s progress during rehabilitation, ensuring a safe and effective return to play.

2. Technique refinement

The granular data provided by movement tracking enables coaches and athletes to scrutinize technique with unprecedented precision. This allows for the identification of minute flaws or inefficiencies that may be hampering performance, leading to tailored technique adjustments that can unlock new levels of skill and efficiency.

3. Talent identification

Analyzing the movement signatures of elite athletes can provide a blueprint for the key physical attributes and motor control patterns that underpin success in a given sport. By applying this knowledge to the movement data of aspiring athletes, coaches can identify promising talent with greater accuracy, ensuring they nurture the right individuals for long-term development.

4. Psychomotor skills assessment

Movement tracking can reveal insights into an athlete’s cognitive and decision-making abilities, not just their physical skills. By studying how athletes respond to dynamic, game-like scenarios, researchers can assess psychomotor skills such as reaction time, spatial awareness, and anticipation – critical factors in many sports.

5. Fatigue monitoring

Continuous monitoring of an athlete’s movement patterns can provide early warning signs of neuromuscular fatigue, allowing coaches to optimize training loads and recovery periods. This helps prevent overtraining and ensures athletes reach competition day in peak condition.

6. Quantifying the effects of equipment and apparel

Motion capture allows sports scientists to precisely measure the impact of equipment, apparel, and even environmental factors on an athlete’s biomechanics and movement efficiency. This data can drive evidence-based decisions on the most performance-enhancing gear and playing surfaces.

7. Enhancing coaching effectiveness

Beyond the athlete, motion tracking technologies can enhance the effectiveness of coaches themselves. By providing objective, data-driven insights, coaches can make more informed decisions, refine their training methodologies, and better communicate with athletes to drive continuous improvement.

These are just a few of the unexpected ways that movement tracking is transforming the world of sports performance. As these technologies continue to evolve, the opportunities to gain a competitive edge will only expand, making them an increasingly indispensable tool for any serious athlete or coach.

Motion capture systems for animal studies

What is motion capture for animal studies?

A motion capture system is a mix of hardware and software that records the movement and positioning of objects or animals in three-dimensional space. It is used in fields such as animal behavior, biomechanics, and zoology to accurately analyze and study the movement and dynamics of various species.

How can a motion capture system enhance the work of an animal researcher?

Motion tracking systems provide animal researchers with valuable data and insights that can enhance their understanding of animal behavior, locomotion, and biomechanics. By capturing precise, three-dimensional movement data, researchers can gain a deeper understanding of factors such as gait patterns, joint kinematics, and the biomechanics of specific animal species.

What does a motion capture system consist of?

A typical motion capture system for animal studies includes three key components: specialized motion capture cameras, markers to track the animal’s movement, and software to reconstruct and analyze the resulting three-dimensional data.

Important considerations when purchasing a motion capture system for animal studies

When evaluating and purchasing a motion capture system for animal research, consider factors such as the size and environment of your capture volume, the number and type of cameras required, compatibility with your existing software and hardware, and your budget.

Conclusion

Selecting the right motion capture system is crucial for animal researchers to effectively conduct studies, assess animal behavior and biomechanics, and gain valuable insights. By considering the key factors outlined in this checklist, you can make an informed decision that aligns with your specific animal research needs and enhances the quality and impact of your work.

The biomechanist’s motion capture purchasing checklist

What is a motion capture system?

A motion capture system is a technology that records the movement and positioning of objects or individuals in three-dimensional space. It is widely used in fields such as biomechanics, movement science, and animation to accurately analyze and study human or object motion.

How can a motion capture system enhance the work of a biomechanist?

Motion capture systems provide biomechanists and movement scientists with valuable data and insights that can enhance research, clinical assessments, and the development of interventions. By capturing precise, three-dimensional movement data, researchers can gain a deeper understanding of factors such as joint kinematics, muscle activation patterns, and overall movement efficiency.

What does a motion capture system consist of?

A typical motion capture system includes three key components: specialized motion capture cameras, markers placed on the subject, and software to reconstruct and analyze the resulting three-dimensional movement data.

Important considerations when purchasing a motion capture system

When evaluating and purchasing a motion capture system, consider factors such as the size of your capture volume, the number and type of cameras required, compatibility with your existing software and hardware, and your budget.

Conclusion

Selecting the right motion capture system is crucial for biomechanists and movement scientists to effectively conduct research, assess clinical interventions, and gain valuable insights. By considering the key factors outlined in this checklist, you can make an informed decision that aligns with your specific needs and enhances the quality and impact of your work.

Motion capture suit, camera & action! What goes into a mocap performance?

There’s more to mocap than rolling around in a lycra suit!

We’ve already looked at the acting skills needed for a successful mocap performance; now let’s dive into the technical side of things to better understand each piece of tech that makes a performance work.

1. The motion capture suit

The motion capture suit is really just a lycra outfit that holds the markers onto the actor’s skin so they can move naturally without feeling inhibited. But the markers attached to these suits are the real stars of the show.

These retro-reflective 3D tracking dots are small spheres positioned strategically on the performer to record their real-life movements. Imagine the markers as computerized puppet strings – pulling the skeleton of the character through frames that create animated motion. 

2. The cameras 

The retro-reflective markers are tracked by specialized motion capture cameras. The more cameras you use, the more complete and accurate the outcome will be.

Cameras such as the Kestrel produce marker coordinate data rather than an image. They detect only infrared or near-infrared light and are able to pass information at a much higher frame rate than a typical television camera could. 

The Kestrel 4200 is one of the best pieces of hardware out there when it comes to mocap tech, and is an excellent investment for large and complex mocap systems. But if you’re working on a limited budget, the Kestrel 300 will still deliver high-quality motion capture.

Related: Choose the motion capture hardware that’s best suited for you

3. The software

An animation studio, game maker or filmmaker will use professional 3D animation software – Autodesk’s Maya is one of the more popular options – which provides all the modeling, rendering, simulation, texturing, and animation tools needed once the motion is captured.

4. The rig

Before tracking movement for animation, animators need to have a basic skeleton mapped out for the character they are creating. This skeleton will help them to determine how many markers they need to use, and what levels of movement they need to track. For example, an acrobatic dancer who is going to be doing backflips will require more markers than a rigid-limbed robot that stomps around. 

The cameras and markers capture the motion, and the data driving the character’s skeleton rig is sent back to the animation program, where the character is fleshed out with fur, clothing, or skin.

Our Cortex system is capable of solving skeletons of any structure, with any number of segments – bipeds, quadrupeds, props, facial animation and more.

Because most humanoid characters have similar skeletons and move in similar ways, it’s possible to develop marker sets that can be used on a number of skeletons. 

Our BaSix Go software has a built-in, constrained and tracked human skeleton at its core, which works for almost all humanoid characters. The six active markers strapped to the performer’s waist, feet, hands and head are enough to track a human’s motion very accurately and precisely. Then, within our software (or in the receiving package), this rig can be mapped to the creator’s humanoid skeleton.

Having this built-in solver skeleton ready to be tracked means our BaSix system’s setup time is minimal compared to traditional mocap systems. Once the cameras are set up, you simply walk into the studio, strap on your six markers, stand in a “T” pose, press “reset skeleton” in the software, and voila – you’re tracking movement, with data streamed live into your animation package in real time, ready to be recorded.

Interested in finding out more about our motion capture suits and technology? Find out more about our systems and book a demo today.

In the field: a chat with Thomas Kernozek, Professor, University of Wisconsin, LaCrosse

After a long-running fascination with athletics and injury mechanisms, Prof. Thomas Kernozek has implemented many motion capture systems to fuel his work in physical therapy and the study of movement-related conditions. Using two systems at the University of Wisconsin, LaCrosse, where he is a Professor in the Health Professions—Physical Therapy faculty, Thomas gives students valuable experience with advanced motion capture technology while gathering evidence-based data for his own clinical research.

We caught up with Thomas to discover more about his specializations, his experience using real-time feedback, and the future mocap features that can help nurture the next generation of talent in biomechanics for sports medicine.

How did you get into biomechanics in human movement, and what inspires your work? 

Like many people who grew up being active and enjoying many forms of sport and exercise—or getting injured!—I was driven to understand why some injuries occur and how they are examined in a clinical setting. That led to a career in biomechanics, where my research specializes in some common lower extremity injury types: anterior cruciate ligament (ACL) injury, patellofemoral joint and Achilles tendon injuries.

Physical therapy was once a Bachelor’s degree here in the US, but the professional knowledge base has changed drastically since. It became a Master’s degree when I was hired at LaCrosse in 1996, and I now teach and work alongside entry-level clinical students in the doctoral program in physical therapy. Our university laboratory spaces allow our students to engage fully with robust technology, which really helps them develop their own perspectives on how they understand and treat movement-related injuries. I always aim to inspire students to become scholarly clinicians by using our mocap systems in my teaching and scholarship.

How did you discover Motion Analysis, and why did you choose it for your own clinical research?

I discovered Motion Analysis while visiting other universities and medical institutions during a sabbatical. When I was “growing up as a biomechanist”, video technology was just in its beginning stages and high-speed film was being phased out. I’d used an earlier video-based motion capture system before joining LaCrosse that did not have the same capabilities as the Motion Analysis system, so I jumped at the chance to implement this equipment once we had opened the Strzelczyk Clinical Biomechanics Laboratory in our new Health Science Center.

Its compatibility is a huge plus, as the software and hardware can be upgraded and integrated with existing systems easily. Older Motion Analysis camera models we purchased are still operational and compatible with our software, and the overall evolution of these systems has been great to see. We now use mostly Kestrel cameras and Cortex for the two systems we have set up in two laboratories—one surrounding an instrumented treadmill—to examine physical activities with human subjects and use the data gathered to inform computer models that estimate joint and soft tissue loading.

Your work at the university covers many roles, including Director of the LaCrosse Institute for Movement Science, so how do Motion Analysis systems help you practically achieve your goals? 

We work with collegiate athletes in jumping sports here at the university, including volleyball and basketball. We’ve also targeted female athletes because we see ACL injuries and related maladies being more prevalent in those performers. We also study a lot of runners. Ultimately, we want to prevent these athletes from getting hurt.  

Our students get practical first-hand access to advanced mocap in classes, so it is used in teaching and research, which is somewhat unique to our physical therapy curriculum. The mocap cameras help identify, measure and track movement, which supplies evidence to inform answers to clinical research questions related to physiotherapy.

One thing we’ve done with Motion Analysis systems is use musculoskeletal models to measure Achilles tendon stress or patellofemoral stress related to running performance. These data are particularly useful for clinical research, as we attempt to drill down to the anatomical structures and tissues to examine how varied athletic movements (such as stride patterns) affect loading. Excessive loading may be associated with the performer’s pain symptoms. We have also used biomechanics within a motor control paradigm to provide augmented feedback to participants to alter their movement performance.

What are your favorite projects involving Motion Analysis technology?

A notable project involved test subjects with patellofemoral pain (pain around the knee cap) performing squats. After a physical therapist made sure that these test participants met certain criteria following a clinical assessment for patellofemoral pain, we streamed their motion capture data into a musculoskeletal model while they were performing squats. The load data between the patella and the femur during the exercise was displayed as augmented feedback. Participants were able to respond to this augmented feedback and alter their squat performance to reduce loading.

And finally… What excites you most about the future of biomechanics in sports medicine? 

Our capabilities are still evolving, and mocap technology not only shapes our understanding of therapeutic exercise and injury but also contributes to the medical literature in the physical therapy profession. Computer modeling approaches informed by Motion Analysis data help to build a clearer picture of injury mechanisms during movement, and we’re excited to see modeling and motor control capabilities grow quite rapidly.

Wearables and other portable systems are another exciting market, informing clinical practice and providing testing opportunities outside the lab. From a teaching point of view, we’re proud to introduce our clinical students to the power of these new technologies and the opportunities they may open up for them. We’ve had students go on to PhD study or to work in residencies or clinical practice where they are adept at using motion capture.
 
If Thomas’ use of innovative mocap technology has inspired your own biomechanics testing, talk to our team to find out how Motion Analysis can help you achieve your own goals.

Tech tips: How to do camera calibration for Cortex and BaSix

Before any motion capture project begins, a thorough calibration process must take place. No matter which cameras you use, making sure that they can see the markers and are synchronized properly has a direct effect on the accuracy of your captured data. Plus, when these initial basics are completed successfully, the next stages of using the mocap software go more smoothly.

Calibrating cameras for our Cortex and BaSix software is a quick step-by-step process. Here’s how it works, with some handy insider insights about our advanced features. 

Simple setup, rapid results

Each camera has its own specifications, and mocap system operators must align them all properly in order to track movement effectively. Lenses may need readjusting, and cameras situated in places where they are likely to be knocked may need repositioning, so it’s best to perform a fresh calibration to ensure high-quality data capture.

Luckily, calibration typically takes only a couple of minutes, although this depends on the number of cameras you are using, the size of the capture volume, and whether the cameras are fixed. The precision of the captured movement data is also at its best when the system is used shortly after completing the calibration.

Camera calibration explained in two simple steps

Calibrating cameras for Cortex and BaSix is a two-stage process requiring just a couple of pieces of equipment.

  1. Map the space using the L-frame

This is a simple L-shaped apparatus complete with four markers used to establish the capture space’s coordinate system. 

During initial setup, the corner marker – which defines the volume origin – is typically placed at the center of the intended capture space. If minimal adjustments need to be made, it is simple to perform “spot checks” of each camera within the software to make sure the cameras can see only the L-frame’s four markers before moving on to the next step.

It’s a common misconception that all cameras have to see the L-frame – it is better if most can, but that may not be possible in an extra-large space.

  2. Standardize measurements with the wand

The second stage involves dynamic calibration using a handheld wand with a standard 500 mm length between the markers at each end. This provides a known reference for the cameras to map out the entire capture space.

As the wand is waved through the cameras’ field of view, the system measures the wand’s end markers throughout the volume and corrects each camera’s parameters so that the reconstructed distance between them matches the known length.

When all the parameters – including focal length, camera orientation, L-frame measurements and wand length – are input correctly and the calibration converges, the mocap system is ready to use.
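To make “convergence” concrete, here’s a minimal sketch of the kind of sanity check the wand makes possible – plain Python with NumPy, not Cortex’s internal code; the marker positions and noise level below are synthetic:

    import numpy as np

    WAND_LENGTH_MM = 500.0  # nominal distance between the wand's end markers

    def wand_length_residuals(end_a, end_b):
        """Per-frame deviation (mm) of the reconstructed wand length from
        the nominal 500 mm. end_a, end_b: (n_frames, 3) arrays of 3D positions."""
        lengths = np.linalg.norm(end_a - end_b, axis=1)
        return lengths - WAND_LENGTH_MM

    # Synthetic demo: a well-converged calibration shows residuals near zero.
    rng = np.random.default_rng(0)
    a = rng.uniform(0.0, 2000.0, size=(100, 3))    # one wand end per frame
    d = rng.normal(size=(100, 3))
    d /= np.linalg.norm(d, axis=1, keepdims=True)  # random unit directions
    b = a + d * (WAND_LENGTH_MM + rng.normal(0.0, 0.3, size=(100, 1)))

    residuals = wand_length_residuals(a, b)
    print(f"mean |residual|: {np.abs(residuals).mean():.3f} mm")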

Watch our quick how-to video

Advanced Cortex features

The initial setup above is available to customers using BaSix software; the following extra calibration features are available within Cortex:

  1. Gain feedback on camera status

For both the L-frame and wand steps above, Cortex’s 2D view identifies how many centroids a camera sees, making it easy to confirm that the cameras can see only the L-frame’s four markers or the wand’s three markers. When a camera has sufficient wand data for lens calibration, its 2D view in Cortex changes color from white to green as visual feedback. Similarly, for both steps, camera tabs change color to indicate whether a camera is uncalibrated, ‘seeded’ (if it sees the correct markers), or fully calibrated.

  2. Remove the need to restart with Update Calibration

Restarting an entire calibration doesn’t take too long, but the Update Calibration tool requires fewer steps, amending camera calibration information according to pre-calculated values. It is especially helpful when the volume is an odd shape that makes placing the L-frame difficult, or in a bigger capture space where not every camera can see the L-frame’s markers.

  3. Reduce residuals fast using Quick Refine

As the wand moves through the space, Cortex reconstructs its markers in 3D. The resulting ‘residuals’ – the small errors in those reconstructions – are good indicators of calibration success: you are looking for a low 3D residual average across each camera during calibration.

These 3D residuals can increase over time after the initial calibration – any knocks can cause camera vibrations that disturb the equipment, for example. If you’re rushed for time, rather than completing the full calibration process again, Cortex offers Quick Refine. Using any markers in the volume – including those attached to a subject, for instance – you can record the mocap actor covering the whole space while performing a quick refine, and the system will update the originally saved calibration values accordingly.

  4. Personalize the process using Custom Calibration

Within Cortex’s Custom Calibration wizard in Live Mode, you can toggle both general settings (e.g. frame rate and shutter speed) and individual camera settings (e.g. threshold, brightness, min/max lines).

Custom calibration settings are controlled by a check box which, when enabled, applies the user-defined settings after the calibration process starts and saves them at the end so they are applied automatically when the next calibration begins. This is useful when different camera settings are needed for calibration compared to collection – for example, when you want to save time and collect less data by using lower frame rates.

Cortex’s settings also allow you to ‘mask’ areas in the 2D view of any given camera, which filters out any ‘noise’ such as bright lights or reflections that may distract from the markers.

  5. Reuse collected Raw Files

Raw Files get saved during the two-step L-frame and wand calibration process as ‘calframe’ and ‘calwand’ for each step.

If a problem causes a diverging calibration (whereby the cameras cannot resolve spatial positions), these files can be used to recreate the calibration with different settings and achieve a more successful result, even when calibrating offline.

  6. Track moving cameras with Continuous Calibration

If you use a roving camera (or if the room or volume space is moving), Continuous Calibration uses stationary markers in the space so the camera can correct its own position while continuing to track subject marker movements, as shown in this demonstration.

We’re here to support you

Some small details can get overlooked during the calibration process, but there’s usually a quick fix – it could be as simple as a typo when inputting a lens specification. We’re here to assist you with any troubleshooting that might be needed.

With a range of options for calibrating cameras for Cortex and BaSix, it is simple to prepare your mocap system quickly and efficiently. If you need help with the calibration process, chat to our Customer Support team.

If you’re exploring mocap solutions and would like to find out more about our systems, please book a demo.

Mocap in action: In conversation with Adam Cyr, Biomechanist at Mary Bridge Children’s Hospital

A long-standing client, Mary Bridge Children’s Research and Movement Laboratory (RML) is a multidisciplinary facility that houses a team of engineers and clinicians who conduct research and use the latest technologies to identify, diagnose, and treat individuals with movement challenges.

We caught up with Adam Cyr, a biomechanist at the facility, who has a keen interest in applying engineering principles and techniques to understand how the human body performs. His goal is to improve injury prevention and treatment.

Here, we share what he had to say about his work and how he is using mocap as part of the biomechanics research he does on a daily basis.

Could you give us a quick overview of your background as it relates to the world of biomechanics and biomechanics research?

After completing my studies, I briefly worked at a company doing forensic biomechanics before I found myself at the Research and Movement Lab at Mary Bridge Children’s Hospital. At the RML, we see patients with a wide variety of concerns, including neurological, muscular, and orthopedic disorders. We also see people who are looking to enhance their performance or who suffer from sports-related injuries.

How do you use motion capture technology in the work you do every day?

The more data we can collect, the better. We want to look at kids doing functional tasks. If we see a patient today and collect data on how they move in their preferred way, and then they have some sort of intervention, we have data we can use to assess whether there’s been an improvement – whether they’re moving better than before. Our goal is to inform the clinical providers, whether they’re surgeons or physical therapists, and provide them with objective data so they can make better decisions.

On a typical day, we’ll spend a few hours with a patient either in the morning or the afternoon. We’ll prep the room to make sure that the motion capture system is ready and that the markers are ready to go. We’ll do a subjective history and a physical exam. And then we’ll put the markers on and get the patient to do basic movements. If there’s any particular activity that is causing a problem, we will have them do that activity specifically. After they leave, I compile the data, process it and turn it into graphs and meaningful insights for our therapists to review. It’s great to work this closely with clinicians to see the data and graphs transform into information that means something.  

Can you walk us through your experience using Motion Analysis and share some of the features you find most useful?

The motion capture system I inherited in my current position was an older one. We were very fortunate to be able to upgrade to some newer Motion Analysis cameras recently. The new tech is very impressive. From a size perspective, everything is getting smaller, the optics are better, the speed is better and these cameras can track much smaller markers. 

The cameras are also more advanced, which makes it easier to do things right the first time and avoid wasting time cleaning up the data. This speeds up patient processing times: we want to get a report back to our patients within a couple of weeks, and if I’m spending a day cleaning up data, that isn’t possible.

When I do have to clean up data, there are some great features on the backend that make it easier to do so. For example, if a marker dropped off and you didn’t notice, you can use virtual markers to fill in the data gap. I’ve also started to go down the road of playing with what they call the Sky Interface. This allows me to build my own scripts using a batch process. I’ve been working closely with the Motion Analysis team on this and they’ve been hugely helpful. When we collect EMG data, there’s a time delay, so we need to shift the data over for it to line up correctly. With the Sky Interface, I can code something so that I just have to hit one button and it goes through all of my captures and automatically shifts the data over.
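As a rough illustration of that one-button shift – not the actual Sky Interface code; the delay value, sampling rate and names below are hypothetical – the core operation is simply advancing the EMG samples by the known delay:

    import numpy as np

    EMG_DELAY_S = 0.048   # hypothetical fixed acquisition delay
    EMG_RATE_HZ = 1200    # hypothetical EMG sampling rate

    def shift_emg(emg, delay_s=EMG_DELAY_S, rate_hz=EMG_RATE_HZ):
        """Advance EMG samples by the known delay so they line up with the
        mocap frames. Samples shifted off the front are discarded; the tail
        is padded with NaN to preserve array length.

        emg: (n_samples, n_channels) array; assumes the shift is shorter
        than the capture.
        """
        shift = int(round(delay_s * rate_hz))
        out = np.full(emg.shape, np.nan)
        out[: len(emg) - shift] = emg[shift:]
        return out

    # In the spirit of a one-button batch script, the same shift would then
    # be applied to every capture in the session:
    # for capture in session_captures:
    #     save(shift_emg(load(capture)))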

We’re also starting to get into real-time feedback using Cortex software. In a clinical setting, we’d use this to better understand upper body motion. For example, we’d put markers on the elbow, the arm and the torso and ask children to reach around so we can see how far they can reach. With real-time feedback, it’s possible to have them reach for virtual markers on a screen, a bit like they are playing a video game. It would all be done in real time using the Motion Analysis workflows I’ve learned. In the work I do, it’s been enormously helpful for me to be able to pick up a phone and connect with the Motion Analysis customer support team or their engineering and technical teams because they are so willing to help out when I have a problem that I need to figure out right away.

If you, like Adam, want to leverage motion capture innovation to better understand movement-related conditions or improve how you monitor the tendencies and patterns of biomechanical movements, we can help. Learn more about how our team can support your mocap needs by scheduling a demo today.

Join us at these upcoming biomechanics conferences

As we speed through the year, biomechanics conferences are well underway and buzzing with innovative ideas. Offering a welcome opportunity to network with researchers, practitioners, and clinicians face to face, industry events inspire the sharing of insightful perspectives and findings with anyone involved in biomechanics—or those looking to move into the sector.

Alongside advancements in motion capture technology, forward steps in sport product testing, robotics, medicine, gait analysis, rehabilitation, and data collection are all current trends fueling discussion in 2023 and highlighting the outstanding work of leading and up-and-coming biomechanics academics.

After compiling a list of the year’s best industry events for biomechanics, we are looking forward to meeting valued colleagues old and new at these two US-based conferences in the near future.

ACSM 2023

When: May 30th – June 2nd
Where: Denver, CO, USA

This Tuesday sees the start of the ACSM Annual Meeting and World Congresses. The yearly conference, hosted by the American College of Sports Medicine, is a flagship event for sport fitness, healthcare, and treatment professionals. 

We will be exhibiting at Booth #100, and we are again proud to sponsor the ACSM Biomechanics Interest Group’s Career Achievement Award, which recognizes the great achievements of upcoming scientists in the field of biomechanics. 

The event is an invaluable opportunity for budding students and experts alike to network, share career advice and watch mocap applications for biomechanics in action. Attendees can join in-person workshops, learn from over 1,500 case presentations, or watch 13 hours of unique content and recorded live sessions online.

We are excited to meet you and to explore the future of the industry in Denver very soon. 

Human Movement Variability and Great Plains Biomechanics Conferences

When: June 5th – 6th
Where: Omaha, NE, USA

Shortly after, our team will also be exhibiting at a dual event at the University of Nebraska Omaha.

The university is home to the eighth edition of the Human Movement Variability Conference, an annual event focusing on student-centered discussion of human movement research, run by the university’s Center for Research in Human Movement Variability and the Department of Biomechanics.

The event space will also host the Great Plains Biomechanics Conference. Keynote speakers and podium sessions bring together around 100 academics, investigators, and scientists from around the world to explore progressive biomechanics topics, including vascular mechanics, bioprinting and much more. Guest speakers include Dr. Beatrix Vereijken from the Norwegian University of Science and Technology and Dr. Bill Baltzopoulos from Liverpool John Moores University.

These two Omaha-based biomechanics conferences are being held entirely in person for the first time since the pandemic, and we are keen to discuss motion capture for biomechanics with the wider community. Come say hi!

See you soon

With plenty more mocap industry events still to come in 2023, there is a range of excellent opportunities to learn about the new trends, applications and technologies that continue to move the biomechanics world forward.

If you are attending these or any other biomechanics conferences in 2023, let us know via LinkedIn or Twitter.

Here’s how real-time feedback in Cortex works

When conducting real-time motion capture (for training, research, animation or other purposes), it helps to know whether a required movement has been performed correctly for its use case – and to get that insight while the action is being carried out. To achieve this, a feedback indicator can alert the system operator that the exact motion has been exhibited, as soon as it happens.

We’ve received a lot of interest from users of our Cortex software who want to better understand how to use the real-time feedback functionality built into the platform. While it is possible to send captured data to third-party applications, BioFeedTrak in Cortex allows you to set up a range of feedback loops to suit your needs.

Real-time streaming in practice

Many of our customers achieve their desired motion capture outcomes during post-process analysis, which is particularly useful for data clean-up and modeling. But a vast number of mocap users in other settings and industries benefit from identifying the moment a correct motion is achieved, in real time. Examples include (but are not limited to) research into joint function or the rehabilitation of movement disorders in laboratory settings, where those carrying out this work may need to repeat actions to understand when progress is being made.

Many users know BioFeedTrak as a tool for indicating the frame where a particular event occurs – a heel strike in gait analysis, or ball release in a pitching test, for example – but its functionality is far more extensive. BioFeedTrak also gives you the ability to track and measure motions performed by a subject kitted out with markers, and to automate feedback cues that alert the subject to perform an action differently, or to continue in the same way until a certain height, distance, or time, for example, is reached.

These cues could be a pop-up window, a flashing icon, or a sound – the BioFeedTrak interface is open-ended enough to provide whatever form of feedback best fits the user or environment. Both 2D and 3D motion capture can be controlled with immediate effect using real-time feedback, with the added benefit that the BioFeedTrak functionality is built into Cortex.

How BioFeedTrak works

BioFeedTrak can be found by following Tools > BioFeedTrak Event Editor.

The BioFeedTrak Event Editor creates events, whereas the BioFeedTrak Event Timeline allows users to assign an event to a frame, or discover on which frame a particular event has occurred.

While movement data gets tracked in Cortex, BioFeedTrak can control how and when your feedback will be shown to the motion capture subject. 

The BioFeedTrak Event Editor allows you to provide a description of your feedback according to the action it accompanies. These “Events” can be enabled in “Live” mode, and imported scripts can then instruct the program to trigger the desired feedback. As a visual feedback example, a script could indicate that a running subject needs to change direction once they have reached a certain marker in the room, tracked by the mocap cameras.

The “Presentation Graphs” functionality in Cortex also provides a real-time graphical representation of a subject’s movement data within the interface. When linked to a script, the program follows this data so that it knows to provide feedback when numerical thresholds are reached on the graph. BioFeedTrak can also train a motion actor to reach a specific point using a “Threshold Event”, which applies the desired feedback once a certain condition is met.

Visual and audio feedback make up the biggest use cases for cues shown to motion subjects: directional arrows or beeping sounds, for example. But as scripts are written by the user, forms of feedback are open-ended, catering to any environment and measuring a range of variables. The sketch below illustrates the underlying pattern.
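Here’s a minimal sketch of that pattern in plain Python – not the actual Cortex scripting API; the marker names, threshold and stream format are assumptions for illustration. A feedback script boils down to watching streamed data and firing a cue when a condition is met:

    import math

    THRESHOLD_MM = 1200.0  # hypothetical target, e.g. a reach height in mm

    def feedback_loop(stream, target_marker, reference_marker):
        """Illustrative event loop: watch streamed marker positions and fire
        a cue once a threshold is crossed (the pattern a Threshold Event
        automates).

        stream: yields dicts mapping marker names to (x, y, z) positions in mm.
        """
        for frame in stream:
            d = math.dist(frame[target_marker], frame[reference_marker])
            if d >= THRESHOLD_MM:
                print("CUE: target reached - hold position")  # stand-in for a pop-up
            else:
                print(f"keep going: {THRESHOLD_MM - d:.0f} mm to go")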

Video example: Tracking distance measurement with audio feedback

Here, BioFeedTrak and Presentation Graphs have been set up and synchronized to adjust a sound’s pitch based on the distance between a thumb and forefinger. The script tracks the measurement between the digits’ mocap markers and adjusts the pitch based on that distance.
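The mapping at the heart of such a script can be as simple as a clamped linear interpolation from distance to frequency – a sketch, with ranges that are purely illustrative and would be tuned per subject:

    def distance_to_pitch(d_mm, d_min=10.0, d_max=150.0, f_min=220.0, f_max=880.0):
        """Map thumb-forefinger distance (mm) linearly onto a pitch (Hz).
        All ranges here are hypothetical."""
        d = max(d_min, min(d_mm, d_max))     # clamp to the usable range
        t = (d - d_min) / (d_max - d_min)    # normalize to 0..1
        return f_min + t * (f_max - f_min)

    print(distance_to_pitch(80.0))  # mid-range pinch -> 550.0 Hz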

Video example: Measuring maximum value with visual feedback

In this video, a BioFeedTrak Event is measuring the maximum ground reaction force – the peak value seen during the capture. Whenever a greater value is reached, the pop-up message is updated; once the capture is completed, the value remaining in the pop-up indicates the topmost ground reaction force.
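The underlying logic is a simple running maximum. A minimal sketch, where the stream and display callback are hypothetical stand-ins for the Cortex-provided data and pop-up:

    def track_peak_grf(grf_stream, update_popup):
        """Running maximum of ground reaction force: refresh the on-screen
        value only when a new peak is reached.

        grf_stream: iterable of force samples in newtons.
        update_popup: display callback (stand-in for the pop-up window).
        """
        peak = float("-inf")
        for force in grf_stream:
            if force > peak:
                peak = force
                update_popup(f"Peak GRF: {peak:.1f} N")
        return peak  # the value left showing when the capture completes

    track_peak_grf([640.2, 812.5, 790.1, 905.8], print)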

Video example: Assessing body weight percentage with visual feedback

In another visual example, the script is written to identify when a certain percentage of body weight is applied to a force plate. When the applied force reaches the target weight (the “Threshold Event”), the subject is informed by a pop-up text window changing from red to green. This method is especially useful in rehabilitation practice – in this case, to check the weight percentage exerted on a prosthetic leg.
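That check reduces to a single comparison – a sketch assuming a hypothetical 50% target, which a clinician would set per patient:

    def weight_feedback(force_n, body_weight_n, target_pct=50.0):
        """Return the applied body-weight percentage and a pop-up color:
        green once the (hypothetical) target is reached, red otherwise."""
        pct = 100.0 * force_n / body_weight_n
        return pct, ("green" if pct >= target_pct else "red")

    print(weight_feedback(force_n=360.0, body_weight_n=800.0))  # (45.0, 'red')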

Giving motion actors the ability to realize a full range of motion with instant feedback is a helpful way to improve a range of processes and save time. Using the BioFeedTrak function within Cortex alongside motion capture cameras means rich data is collected while feedback is provided instantly in a live setting, without the need to send data to a third-party application. Whether you need real-time feedback for biomechanics research, rehabilitation efforts or workplace training, BioFeedTrak is a handy built-in tool for streamlining your motion capture projects.

If you’re a Cortex user and would like to find out more about using BioFeedTrak, chat to our Customer Support team. If you’re exploring mocap solutions and would like to find out more about Cortex, please book a demo.