Meet the Thunderbird

We’re pleased to announce the release of the Thunderbird motion capture camera range. These state-of-the-art cameras are designed to provide unparalleled precision and versatility for motion capture professionals in a range of applications.

Precision and versatility

The Thunderbird range consists of five advanced cameras compatible with both active and passive markers.

Higher resolutions

Featuring resolutions of up to 12MP, Thunderbird ensures clarity and precision in every frame. Ideal for various environments, these cameras guarantee exceptional detail capture, whether in a lab, studio, or other capture space.

A range of lenses

Understanding the need for customization, Thunderbird offers a diverse range of lenses, allowing users to choose the perfect lens to meet their creative vision and specific requirements.

Cutting-edge core technology

Underpinning Thunderbird’s performance is the latest core technology, including communication via the GigE camera standard, advanced field-programmable gate arrays (FPGAs), and built-in PTP-based output synchronization for immediate success and long-term innovation.

Precise timing with PTP

Eliminating the need for timing “windows,” Precision Time Protocol (PTP) technology ensures seamless integration with your other devices, setting a new standard for precision timing in motion capture.
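
To make the timing concrete, here is the arithmetic at the heart of a PTP delay-request exchange. This is a generic illustration of the protocol's math in Python, not Thunderbird's implementation:

    # Illustrative only: the core arithmetic of a PTP delay-request exchange.
    # t1: master sends Sync; t2: slave receives it;
    # t3: slave sends Delay_Req; t4: master receives it.
    def ptp_offset_and_delay(t1, t2, t3, t4):
        """Return (clock offset, one-way delay) in the units of the inputs."""
        offset = ((t2 - t1) - (t4 - t3)) / 2.0
        delay = ((t2 - t1) + (t4 - t3)) / 2.0
        return offset, delay

    # Example: a slave clock running 5 us ahead across a 10 us network path.
    offset, delay = ptp_offset_and_delay(t1=0.0, t2=15e-6, t3=40e-6, t4=45e-6)
    print(offset, delay)   # approximately 5e-06 and 1e-05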

Built for durability and reliability

Thunderbird’s robust housing, passive cooling, and sealed sensor/FPGA unit are designed to withstand challenging environments, ensuring reliability in any condition. The cameras also offer enhanced environmental protection, new firmware, and a state-of-the-art ring-light design.

Explore the full range here

How to develop a marker set that meets your needs

A marker set is far more than the floating points recorded in a capture space. Curating a full marker set in Cortex is an integral stage in defining the markers’ properties and their relationships to each other, in order to develop a model that can be used for a range of motion capture studies. Markers need to be identified to drive underlying skeletons that can be reused or modified in live mode or during post-processing.

We run through the various components of a Cortex “MarkerSet” and how to construct them to best suit your motion capture research and project needs for biomechanics, clinical trials, gait analysis, and character animation.  

What makes up a MarkerSet?

MarkerSet components can be found and edited in the right-hand panel of the Cortex platform, titled Properties, before being saved and exported in a comprehensive marker set file. The listed MarkerSet properties are as follows:

Markers are small points attached to a test subject and tracked by cameras to capture movement. When displayed as raw data in Cortex, these markers are unnamed, but you can name them based on their positions on the body to identify them easily. 

Virtual markers define central locations where ‘real’ markers cannot be placed—the middle of a joint, for example. Virtual markers can calculate a location relative to up to three ‘real’ markers or other virtual markers, which is very useful when needing to define the endpoints of a segment.  
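
As a rough sketch of how a virtual marker can be computed, the snippet below places one at a fixed offset inside a local frame built from three real markers. The function and values are invented for illustration and are not the Cortex API:

    import numpy as np

    def virtual_marker(m1, m2, m3, local_offset):
        # Hypothetical helper (not the Cortex API): place a virtual marker at a
        # fixed offset inside the frame defined by three real markers.
        origin = m1
        x = (m2 - m1) / np.linalg.norm(m2 - m1)        # first axis of the local frame
        z = np.cross(x, m3 - m1)
        z /= np.linalg.norm(z)                         # normal to the marker plane
        y = np.cross(z, x)                             # completes a right-handed frame
        return origin + local_offset[0] * x + local_offset[1] * y + local_offset[2] * z

    # Example: a "joint center" 50 mm below the plane of three markers (mm units):
    m1, m2, m3 = np.array([0., 0., 0.]), np.array([100., 0., 0.]), np.array([0., 100., 0.])
    print(virtual_marker(m1, m2, m3, np.array([50., 50., -50.])))   # [ 50.  50. -50.]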

Segments represent different portions of the body. Each segment’s movement is driven by the positions of identified markers, which can calculate its rotation across three axes. Segments can be automatically adjusted, or you can manipulate segments saved in the MarkerSet to fit various motion capture subjects.

Links “connect the dots” between markers to map their relative distance. Each link has an allowable distance (how close or far apart the markers can be); markers outside this range cannot be identified. Links are critical to the real-time identification process, where you can elongate or shorten links to fit different test subjects, and they allow you to identify markers during the post-processing stage using templating tools.
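
Conceptually, a link check reduces to a simple distance-band test, as in this minimal sketch (names and tolerances are illustrative, not Cortex's):

    import numpy as np

    def link_ok(marker_a, marker_b, min_len_mm, max_len_mm):
        # Sketch of a link test: a marker pair is only a valid match if its
        # distance falls inside the link's allowable band.
        d = np.linalg.norm(np.asarray(marker_a, float) - np.asarray(marker_b, float))
        return min_len_mm <= d <= max_len_mm

    # A knee-to-ankle link defined as 380 mm with a +/- 40 mm tolerance:
    print(link_ok([0, 0, 500], [0, 0, 110], min_len_mm=340, max_len_mm=420))   # True (390 mm)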

Rigid Subsets can be created using markers that do not move relative to each other, such as those on a rigid plate attached to a test subject. When a subject first enters the capture space, Cortex tries to identify the rigid subsets in the MarkerSet before the other markers. This adds another layer of accuracy for identification, whether in live mode or during post-processing.

The Template allows you to automatically assign the identified markers from one MarkerSet to the raw data’s unnamed points in one go. This part of a MarkerSet also allows you to select a repeatable Model Pose—a “standard position” that can be chosen from a single captured frame that visualizes identified markers.

Considerations for biomechanics

For biomechanics motion capture research or clinical analysis, more detailed marker placements are needed to drive the underlying skeleton and gain precise data when constructing the MarkerSet.

Markers should be placed on accurate anatomical locations throughout the body based on which physical activity is being evaluated. Studying baseball pitching would require detailed markers on the upper extremities, whereas running or jumping activities may require more markers on the lower extremities. Either way, the marker positions drive the movement of the segments. Without identifying these markers, it’s impossible to work out joint kinematics and the subsequent kinetics for the skeleton. 

Biomechanical work utilizes the Skeleton Builder engine (SkB) to accurately define the movement of every segment. You need at least three markers on a segment (real, virtual, or combinations of both) in order to calculate rotation using a three-point axis. This 3D coordinate system helps to assess limb movements including joint flexion/extension, abduction/adduction, and internal/external rotations.
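
The snippet below shows the standard cross-product construction for turning three markers into a segment rotation matrix, then compares two segments to estimate a flexion angle. It illustrates the geometry only; it is not the SkB engine's code:

    import numpy as np

    def segment_rotation(p1, p2, p3):
        # Long axis runs from p1 to p2; the third marker fixes the segment plane.
        x = (p2 - p1) / np.linalg.norm(p2 - p1)
        z = np.cross(x, p3 - p1)
        z /= np.linalg.norm(z)
        y = np.cross(z, x)
        return np.column_stack([x, y, z])   # 3x3 rotation matrix of the segment

    # A vertical thigh and a shank tilted about 20 degrees in the sagittal plane:
    R_thigh = segment_rotation(np.array([0., 0., 0.5]), np.array([0., 0., 1.0]),
                               np.array([0.1, 0., 0.75]))
    R_shank = segment_rotation(np.array([0., 0., 0.]), np.array([0., 0.342, 0.940]),
                               np.array([0.1, 0.171, 0.470]))
    flexion = np.degrees(np.arccos(np.clip(R_thigh[:, 0] @ R_shank[:, 0], -1, 1)))
    print(f"knee flexion ~ {flexion:.1f} degrees")   # ~20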

Considerations for animators 

Animation generally relies on the same anatomical marker locations as above, but accuracy is not as crucial. For character animators, it is important that the resulting skeleton mimics the actor’s movements as closely as possible, and every segment identified in Cortex’s MarkerSet has to match the animated character it is driving.

Animators use the Calcium Solver in Cortex, which defines segments differently and more flexibly. This software uses a globally optimized solution to drive an underlying skeleton rather than using three fixed marker points, and utilizes joint types and limitations to constrain the skeleton movement. Each marker is tied to a segment by an attachment. These attachments act like springs, telling the software which markers are driving the motion so that related segments can move in a similar way. This solution allows you to control the full skeleton according to the segment preferences determined in the MarkerSets.
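
A toy version of the spring idea, reduced to a single 2D segment with three attachments, might look like the sketch below. Calcium itself optimizes all segments and joint constraints together, so treat this purely as an illustration of the objective being minimized:

    import numpy as np
    from scipy.optimize import minimize

    # One 2D segment with three marker attachments. Springs connect where the
    # model says each marker should be to where the cameras saw it; the solver
    # picks the segment angle that minimizes the total spring energy.
    attach_local = np.array([[0.10, 0.02], [0.25, -0.03], [0.40, 0.02]])   # on the segment
    observed     = np.array([[0.08, 0.07], [0.20, 0.15], [0.32, 0.26]])    # measured markers

    def spring_energy(theta):
        c, s = np.cos(theta[0]), np.sin(theta[0])
        rotation = np.array([[c, -s], [s, c]])
        predicted = attach_local @ rotation.T
        return np.sum((predicted - observed) ** 2)    # sum of squared spring lengths

    best = minimize(spring_energy, x0=[0.0])
    print(np.degrees(best.x))   # segment angle that best explains the markers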

The hybrid skeleton builder is also useful for creating a MarkerSet, as it combines the functionality of the two engines listed above. The initial stage offers the scaling options of the SkB engine, while the remainder of the process uses Calcium’s globally optimized solution to define a subject’s dynamic movements.

All set for future capture

Cortex displays all the MarkerSet information upfront, allowing you to define its properties as you see fit, with file names, marker names, and even link colors being fully customizable.

Once all of a MarkerSet’s components are saved, the resulting template can be viewed during post-processing or be loaded into a live capture and tweaked accordingly to fit different motion capture subjects. Using a defined marker set as the first port of call, motion capture research and analysis can be conducted faster, with marker sets fully adaptable for your specific industry use case.

If you’re inspired to collate your own marker set for a particular motion capture project or if you’d like more info, feel free to reach out to our team today.

Motion Analysis Corporation Unveils Cortex 9.5 Software Upgrade

November 8, 2023, California – Motion Analysis Corporation is excited to announce the highly anticipated release of Cortex 9.5, the latest edition of its cutting-edge motion capture software. This update is now available for download and is accessible to all customers with active warranties or current software maintenance contracts.

Cortex 9.5 introduces a range of exceptional features and improvements that elevate the motion capture experience to new heights, providing users with greater flexibility, efficiency, and accuracy. Here are the key highlights of this remarkable update:

Quick Files Capture Status: Cortex 9.5 introduces Quick Files Capture Status indicators, simplifying the assessment of dataset status. Users can easily classify captures as “Unedited,” “In Progress,” or “Complete.” Customization options are also available, allowing users to create their own status names and icons, providing a user-friendly experience.

Kestrel Plus Cameras: With Cortex 9.5, Motion Analysis Corporation introduces the Kestrel Plus camera line, featuring the Kestrel Plus 3, Kestrel Plus 22, and Kestrel Plus 42. These new cameras seamlessly integrate with Cortex 9, expanding your capture capabilities and delivering high-quality results.

Trim Capture Modifications: Cortex 9.5 enhances the Trim Capture feature, enabling users to modify names, generate captures on a per-markerset basis, and add timecode support. This streamlined process facilitates the extraction of relevant data from capture files and offers improved post-processing options.

Workflow Improvements: Cortex 9.5 enhances the Workflow feature, making task execution even more efficient. Users can now utilize a search tool and a workflow repository, enabling easy access and management of workflows, optimizing productivity.

Live Detailed Hand Identification: Advanced hand tracking techniques have been integrated into Cortex 9.5, reducing marker swapping during live collection and post-processing of intricate finger movements. Users can contact the support team for a sample markerset to enable this feature.

Automatic Wand Identification for Reference Video Overlay Calibration: In a significant time-saving move, Cortex 9.5 automates the marker selection process for reference video overlay calibration, eliminating manual marker selection and potential user errors. This feature can be applied in both Live Mode and Post Process.

Bertec Digital Integration: Cortex 9.5 now offers support for Bertec AM6800 digital amplifiers, simplifying setup and reducing the number of required components, thus enhancing the overall user experience.

National Instruments New Device Compatibility: Cortex 9.5 continues its support for National Instruments A/D board data collection and expands compatibility to their next generation of DAQs, maintaining flexibility and ensuring compatibility with previously supported devices.

Additional Updates and Features: Several additional updates and features, such as the renaming of the Post Process X panel to Tracks, improved contrast in Dark Mode, and an increased marker slot limit, are included in this feature-rich update.

Cortex 9.5 marks a significant milestone in the field of motion capture, empowering users with advanced tools, enhanced workflows, and improved performance.

To learn more about Cortex 9.5 and take advantage of these exciting new features, download the full release notes here, or contact our sales and support teams for further information and assistance.

Motion Analysis Corporation continues to lead the way in motion capture technology, and Cortex 9.5 is a testament to our commitment to delivering innovative solutions that meet the evolving needs of our customers.

About Motion Analysis Corporation

Motion Analysis Corporation is a leading provider of motion capture technology solutions for various industries, including entertainment, sports, healthcare, and research. With a focus on innovation and customer satisfaction, Motion Analysis Corporation strives to make motion capture more accessible and versatile.

Client spotlight: How Mizuno accelerates sport testing with Motion Analysis

Rigorous product testing and research and development (R&D) in sport require two major factors: human subjects to perform actions, and advanced technology to record and analyze data. This is why motion capture for sports is so vital – it provides accurately tracked athletic movements for clinicians, apparel and footwear designers, sport coaches, and biomechanics experts to evaluate.

One of the world’s leading sportswear, shoe and equipment manufacturers, Mizuno, has streamlined mocap processes at its new facility, with the Osaka-based company working with us at Motion Analysis to gather quantitative performance data to launch better sporting goods, faster. 

Here’s how Mizuno upgraded its mocap system to advance its capabilities in R&D in sport, a leading trend in the biomechanics space:

A partnership in motion, powering R&D at Mizuno’s new facility

Mizuno, fittingly bearing the brand slogan ‘Reach Beyond’, provides sportspeople with the highest quality equipment and clothing to improve athletic performance. Working within soccer, track and field, golf, volleyball and many other sporting disciplines, Mizuno’s researchers require the ability to track unique movements in both indoor and outdoor environments using prototypes and real athletes.

Having used Motion Analysis’ 3D motion capture for product testing since 2005, Mizuno opened its state-of-the-art innovation center, MIZUNO ENGINE, in 2022—a space for designers and R&D units to create, test, and fine-tune its product range. The company also upgraded its Motion Analysis camera setup, in line with its drive to continuously innovate.

Behind the scenes at the Mizuno facility

Originally, Mizuno used 3D motion capture for computer graphics purposes, applying its data to unique digital models during apparel design, as well as to observe changes in performance before and after a human actor tried out new sportswear or equipment. 

With work undertaken by product testers, R&D specialists and sport coaches diagnosing athletes’ conditions, a flexible solution is vital to present large volumes of accurate, shareable data within an intuitive user interface, and to recognize markers in real time.

Mizuno now uses two systems equipped with Kestrel 2200 cameras to facilitate motion capture for sports, housed in the larger 6,500 m² Mizuno facility. The established R&D center includes a laboratory to measure product durability in controlled environments, and motion capture data can be collected from athletes actively testing prototypes on a running track or in a gym.

On the running track, the system gathers movement data from eight force plates buried in the track. The 3D motion cameras are mounted along a lane, enabling the setup to be moved to another indoor area quickly. The Kestrel 2200 system is also used with a Bertec instrumented treadmill, purpose-built to obtain specialized running and walking data. The treadmill has high rigidity to maintain a natural-feeling environment for product testers, with the aim of gathering data as close to a normal running situation as possible.

The need for speed, propelled by innovative technology

Before working with Motion Analysis, Mizuno’s previous 3D movement analysis system only received images from two high-speed cameras, and its lack of accuracy meant it was used for material testing rather than applications involving real people. Product performance was also measured subjectively in the past.

Now, Mizuno uses the Kestrel system to produce fast, quantitative data that can more precisely prove R&D methodology than the original camera images. It helps to understand how kinematics relate to sport performance, and highlights individual features which need to be practically improved according to the preferences of test subjects. 

Prototypes are developed on the second floor of the facility and can immediately be tested on-site by R&D teams using the mocap systems on the first floor. Previously, prototyping took place in overseas factories. The development process has been sped up, particularly for shoes, where performance is easily affected by their materials.

The future of motion capture for sports

Up to 50 research efforts have already been undertaken at the new Mizuno facility, including the successful testing of track spikes, walking shoes, and sporting apparel.

Mizuno next aims to utilize FBX data from Cortex’s Calcium Solver optimization tool to work with its 3D fashion design software. Resulting motion data can also be used to expand various product development methods including musculoskeletal simulation, computer-aided apparel design, and motion classification across a range of sports.

Research and development in sport is rapidly picking up speed with motion capture. If you’re looking to achieve mocap success like Mizuno, book a demo to see how we can assist your team today.

What is optical motion capture?

Motion capture’s light-speed development has seen it branch out into more unexpected paths than anyone could have anticipated. Since its initial use for biomechanics research and clinical gait analysis in universities and hospitals, that same technology would soon go on to animate the world’s most memorable characters in film and gaming, revolutionize industrial practices, develop military hardware, and even help to build out virtual reality worlds including the metaverse.

The mocap world’s list of technical terminology has also grown exponentially. While it can be tough to keep up, it’s worth going back to basics to the most widely practiced format: optical motion capture. In this blog, we’ll delve into what optical motion capture means, and how it brings human movement to virtual life across a range of industries.

The importance of marker sets in optical motion capture

Motion capture is an example of photogrammetry, the practice of using photography for surveying purposes. In this case, cameras measure small, bright dots of light emitted by markers that are carefully attached to a person or object within the capture space. Optical motion capture, also referred to as ‘marker-based tracking’, uses a set of cameras to track the coordinates of these markers to construct a detailed three-dimensional view of a moving subject.

The majority of mocap systems use passive markers, which ‘bounce’ light emitted from infrared LEDs circled around the cameras’ lenses, while other marker sets may use active LEDs, which give off their own light. The brightness of these markers ensures that they are the only images the cameras pick up, rather than the test subject or any background “noise”.

Passive markers are usually retro-reflective and spherical, making it easier for a computer to work out their central points. When these central points are tracked by multiple cameras from different angles, they can be triangulated to produce 3D coordinates of the motion being performed. The resulting data can then be transposed onto a model or skeleton using mocap software.
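
At its simplest, triangulation finds the point where two camera rays nearly meet. The sketch below computes the midpoint of their closest approach, a minimal stand-in for the multi-camera solve that mocap software performs:

    import numpy as np

    def triangulate(o1, d1, o2, d2):
        # Midpoint of the closest approach of two rays (origin o, unit direction d):
        # solve for the ray parameters minimizing |(o1 + t1*d1) - (o2 + t2*d2)|.
        b = o2 - o1
        A = np.array([[d1 @ d1, -(d1 @ d2)],
                      [d1 @ d2, -(d2 @ d2)]])
        t1, t2 = np.linalg.solve(A, np.array([d1 @ b, d2 @ b]))
        return ((o1 + t1 * d1) + (o2 + t2 * d2)) / 2   # split any remaining gap

    # Two cameras 2 m apart, both sighting a marker 2 m above their midpoint:
    unit = lambda v: np.asarray(v, float) / np.linalg.norm(v)
    print(triangulate(np.array([-1., 0., 0.]), unit([1, 0, 2]),
                      np.array([ 1., 0., 0.]), unit([-1, 0, 2])))   # -> [0. 0. 2.]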

Where to spot optical motion capture in action

Given the great level of detail gained by optical motion capture, using high-resolution cameras and involving minimal data cleanup, it is usually reserved for large-scale projects. It underpins the 3D animated characters featured in many big-budget films and TV shows such as Lord of the Rings, Avatar and Stargate SG1, as well as ‘Triple A’ computer games. These highly flexible systems can be used in large-scale indoor or outdoor spaces where a range of cameras can operate, such as a movie set or a laboratory. Biomechanics researchers, for example, can use optical motion capture to precisely measure the athletic movements of certain joints or muscles, or test the effectiveness of sports equipment. 

Optical motion capture is a common method of marker-based tracking, and the quality of the capture depends on the number and placement of cameras. Some practitioners, however, use other mocap methods depending on their use case or project. Markerless systems, for instance, aim to complete the same task using software alone, rather than specialized tracking devices. However, they may not be as accurate as optical motion capture’s marker-based tracking for mapping high-resolution human movement.

Optical motion capture also differs from inertial motion capture, where subjects wear inertial measurement units (IMUs): sensors strapped to the body or embedded in wearables that measure accelerations and rotation rates. Several processing stages then translate that movement into animation data. It’s a smaller setup, suitable for quick and easy motion capture, but it is limited in quality because it does not measure position directly.
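
That limitation is easy to demonstrate: recovering position means double-integrating acceleration, so even a tiny sensor bias grows quadratically into meters of drift. The numbers below are purely illustrative:

    import numpy as np

    # Position from an IMU comes from double-integrating acceleration, so even a
    # tiny bias grows quadratically with time. Values are illustrative only.
    rng = np.random.default_rng(0)
    dt, duration = 0.01, 60.0                       # 100 Hz for one minute
    n = int(duration / dt)
    accel = rng.normal(0.0, 0.02, n) + 0.001        # noise plus a 1 mm/s^2 bias; true motion is zero
    velocity = np.cumsum(accel) * dt
    position = np.cumsum(velocity) * dt
    print(f"drift after {duration:.0f} s: {position[-1]:.2f} m")   # roughly bias * t^2 / 2 = 1.8 m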

Other methods include mechanical motion capture systems, which consist of an exo-skeleton structure attached to the test subject to approximate joint angles. Magnetic motion capture systems, less common nowadays, use sensors attached to the subject, which act as receivers to measure the low-frequency magnetic field generated by a transmitter. A computer then correlates the field strength within the capture space to calculate position, a method susceptible to errors caused by metal in the capture space.

While there is not a one-size-fits-all method, optical motion capture is an effective option for a range of use cases.

Getting started with optical motion capture

Recording motion data using optical motion capture requires multiple cameras for tracking purposes, a marker set, and processing software.

Cortex is our flagship motion capture processing software that uses optical systems for biomechanics, character animation, VFX, robotics, broadcasting, and more. Its compatibility with our Kestrel cameras allows for complex optical motion capture in large areas where robust equipment is needed for precise marker tracking. Alternatively, BaSix Go offers animators and other mocap artists a more affordable, lightweight optical motion capture option. Its range of accurate, upgradable cameras is cross-compatible with various systems and works with active marker rigs.

Optical motion capture extends to every facet of movement analysis no matter the industry, letting filmmakers, visual artists, clinicians, sports coaches and more track motion, record mocap data, and construct valuable skeleton models for post-production and further research.

If you’re feeling inspired to find out more or explore our optical motion capture solutions, get in touch with our team today.

How to set up the seamless Noraxon integration with Cortex

You may have seen from our recent rollout of Cortex 9.2 that we’re focused on delivering better digital integrations for our customers. In this blog, we’re going to talk more about the intuitive Noraxon integration with Cortex for easy and accurate motion capture to better understand complex biomechanics data.

The Noraxon integration explained

Noraxon is a leader in the field of biomechanics research and human movement metrics, offering a combination of software and hardware to record and measure both 2D and 3D human motion.

Electromyography (EMG) measures underlying muscle responses in relation to nerve stimulation during movement. Noraxon’s Ultium EMG sensors monitor muscle activation and synchronize with inertial measurement units (IMUs), which can map a dynamic range of motion during exercise. IMUs add another real-time method to measure 3D movement, used for validation studies and particularly to track high-velocity activities (such as baseball pitching).
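
One practical detail in combining such streams is that EMG and mocap are sampled at different rates. The sketch below resamples a faster EMG signal onto mocap frame times; the rates and signal are invented for illustration, not Noraxon or Cortex specifications:

    import numpy as np

    # Assumed rates for illustration: EMG at 2000 Hz, mocap at 200 Hz.
    emg_rate, mocap_rate = 2000, 200
    t_emg   = np.arange(0, 1.0, 1 / emg_rate)
    emg     = np.abs(np.sin(2 * np.pi * 5 * t_emg))        # stand-in EMG envelope
    t_mocap = np.arange(0, 1.0, 1 / mocap_rate)
    emg_on_frames = np.interp(t_mocap, t_emg, emg)          # one EMG value per frame
    print(emg_on_frames.shape)                              # (200,) matches the frame count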

Noraxon’s myoRESEARCH® gathers biomechanics data from various sensory devices, then feeds it into one easy-to-use interface. In our case, Cortex takes data from Noraxon’s EMG and IMU sensors, which can be recorded alongside the 3D motion capture data provided by the Cortex system. This seamless integration provides you with a holistic high-level overview of why movements in the human body occur the way they do.

From the lab to the field

This workflow can be applied in a range of use cases, from sports training and improving athletic performance to rehabilitation and physiotherapy, or gait pattern analysis. 

Both sensor types are wearables attached to a test subject. The Ultium EMG sensor is applied to the skin using an electrode, which then monitors how and when muscles get activated during exercise. The data can be used to identify if there are certain imbalances or weaknesses on one side of a body, or to understand whether the intended muscle is being activated by tracking electric signals between the brain and subsequent bodily motion.

The other sensory device, the Ultium Motion System, uses wearable IMU-based sensors to see how a body moves around in a 3D space, where the tracked data can be visualized in graphs or as a skeletal avatar. Noraxon’s IMU sensors are useful in that they are portable, able to perform motion capture in any space outside of a laboratory setting and without using cameras. This could highlight the biomechanical differences from a subject performing in indoor or outdoor environments.

Get the complete Noraxon and Cortex setup

To use the Noraxon integration, the only requirement is to be up to date with Cortex 9.2, which has been tested with the latest version of Noraxon’s myoRESEARCH® software. 

Once the Ultium EMG and IMU sensors are added to the Motion Analysis system and the cameras are connected and calibrated, Cortex starts data collection already linked to the connected EMG and IMU sensors. It takes the push of a button to record motion capture and get a comprehensive, singular view of both Noraxon and Cortex data. The integration aims to make initial setup and motion capture as fluid as possible, with the software guiding you throughout the process.

If you’re feeling inspired to get to grips with the Noraxon integration, or if you are interested in a demo of Cortex software from our team, talk to us to learn more.

See Cortex in action this August at the American Society of Biomechanics

With every passing month, the biomechanics industry undertakes all-new projects requiring advanced motion capture technology. We get to experience more and more demonstrations of innovations first-hand now that the conference season is in full swing, with more upcoming events in August 2023 and beyond.

Next up in the mocap events calendar, we will be attending the American Society of Biomechanics 2023 conference hosted in Knoxville, Tennessee from August 8 to 11.

What you can expect at this year’s event

The American Society of Biomechanics fosters an inclusive community of like-minded researchers, with around 850 members representing the field across five main disciplines: biological sciences; exercise and sports science; health sciences; ergonomics and human factors; and engineering and applied science.

Furthering our understanding of human movement and recovery, the society brings together students, academics and clinicians in smaller regional events and an annual meeting, with the 2023 conference taking place in the historic creative hub of Knoxville. While exploring motion capture techniques and biomechanics trends in depth, timely topics of discussion will cover stroke rehabilitation, gait analysis, anterior cruciate ligament (ACL) reconstruction, imaging for bone and joint health and much more.

We will be exhibiting at booth #9, where our Vice President of Global Sales Steve Soltis is excited to meet you and showcase helpful features of our Cortex motion capture software. 

See the next tech frontier in action

The event hosts experts from universities around the United States, acting as an open forum to encourage the adoption of brand new mocap technologies by biomechanics professionals. 

Featured keynotes come courtesy of mechanical engineering leaders from the University of Michigan, the Fischell Department of Bioengineering at the University of Maryland and Vanderbilt University. There are also opportunities to get interactive in practical workshops and investigate large-scale biomechanical data sharing, looking to predict and prevent injury, and highlight disease progression.

We are continually seeing projects that utilize mocap cameras and software to gain objective data to inform clinical decisions—data can identify patterns in movement-related conditions that can help researchers understand why injuries occur, leading to preventative solutions.

We hope to see you at the event

The 2023 conference promises to showcase more smart and practical uses of mocap, as well as a range of data-focused equipment looking to positively change every subsector of the biomechanics industry. 

Steve and the team will be on hand to outline software tips and tricks for Cortex 9.2, including automating tasks through our Workflows panel and all-new digital integrations. We are particularly excited to share Cortex’s real-time feedback capabilities, which provide instant, custom cues to actors during motion capture to change their movements and improve performance.
 
We are looking forward to catching up with you at this year’s American Society of Biomechanics event in August 2023. Stop by booth #9 to say hello, and be sure to follow us on LinkedIn and Twitter for updates from the day.

In the field: a chat with Thomas Kernozek, Professor, University of Wisconsin, LaCrosse

After a long-running fascination with athletics and injury mechanisms, Prof. Thomas Kernozek has implemented many motion capture systems to fuel his work in physical therapy and the study of movement-related conditions. Using two systems at the University of Wisconsin, LaCrosse, where he is a Professor in the Health Professions—Physical Therapy faculty, Thomas’s teaching gives students valuable experience with advanced motion capture technology, while also gaining evidence-based data for his own clinical research. 

We caught up with Thomas to discover more about his specializations; his experience using real-time feedback and which future mocap features can help nurture the next generation of talent for biomechanics in sports medicine. 

How did you get into biomechanics in human movement, and what inspires your work? 

Like many people who grew up being active and enjoying many forms of sport and exercise—or becoming injured!—I was driven to understand why some injuries occur and how they are examined in a clinical setting. That led to a career in biomechanics, where my research specializes in some common lower extremity injury types: anterior cruciate ligament (ACL) injury, patellofemoral joint and Achilles tendon injuries.

Physical therapy was once a Bachelor’s degree here in the US, but the professional knowledge base has changed drastically since. It became a Master’s degree when I was hired at LaCrosse in 1996, and I now teach and work alongside entry-level clinical students in the doctoral program in physical therapy. Our university laboratory spaces allow our students to engage fully with robust technology, which really helps them develop their own perspectives on how they understand and treat movement-related injuries. I always aim to inspire students to become scholarly clinicians by using our mocap systems in my teaching and scholarship.

How did you discover Motion Analysis, and why did you choose it for your own clinical research?

I discovered Motion Analysis while visiting other universities and medical institutions during a sabbatical. When I was “growing up as a biomechanist”, video technology was just in its beginning stages and the use of high-speed film was phasing out. I’d used an earlier video-based motion capture system before joining LaCrosse that did not have the same capabilities as the Motion Analysis system, so I jumped at the chance to implement this equipment once we had opened the Strzelczyk Clinical Biomechanics Laboratory in our new Health Science Center.

Its compatibility is a huge plus, as the software and hardware can be upgraded and integrated with existing systems easily. Older Motion Analysis camera models we purchased are still operational and compatible with our software but the overall evolution of these systems has been great to see. We now use mostly Kestrel cameras and Cortex for both systems we have set up in two laboratories—one surrounding an instrumented treadmill—for examining physical activities with human subjects and using data gathered to inform computer models to estimate joint and soft tissue loading.  

Your work at the university covers many roles, including Director of the LaCrosse Institute for Movement Science, so how do Motion Analysis systems help you practically achieve your goals? 

We work with collegiate athletes in jumping sports here at the university, including volleyball and basketball. We’ve also targeted female athletes because we see ACL injuries and related maladies being more prevalent in those performers. We also study a lot of runners. Ultimately, we want to prevent these athletes from getting hurt.  

Our students get practical first-hand access to advanced mocap in classes, so it is used in teaching and research, which is somewhat unique to our physical therapy curriculum. The mocap cameras help identify, measure and track movement, which supplies evidence to inform answers to clinical research questions related to physiotherapy.

One thing we’ve done with Motion Analysis systems is use musculoskeletal models to measure Achilles tendon stress or patellofemoral stress related to running performance. These data are particularly useful for clinical research, as we attempt to drill down to the anatomical structures and tissues to examine how varied athletic movements (such as stride patterns) affect loading. Excessive loading may be associated with the performer’s pain symptoms. We have also used biomechanics within a motor control paradigm to provide augmented feedback to participants to alter their movement performance.

What are your favorite projects involving Motion Analysis technology?

A notable project involved test subjects with patellofemoral pain (pain around the knee cap) performing squats. After a physical therapist made sure that these test participants met certain criteria following a clinical assessment for patellofemoral pain, we streamed their motion capture data into a musculoskeletal model while they were performing squats. The load data between the patella and the femur during the exercise was displayed as augmented feedback. Participants were able to respond to this augmented feedback and alter their squat performance to reduce loading.

And finally… What excites you most about the future of biomechanics in sports medicine? 

Our capabilities are still evolving, and mocap technology not only shapes our understanding of therapeutic exercise and injury, but contributes to medical literature in the physical therapy profession. Computer modeling approaches informed by Motion Analysis data help to build a clearer picture of injury mechanisms during movement, and we’re excited to see modeling and motor control capabilities grow quite rapidly.

Wearables and other portable systems are another exciting market to inform clinical practice and provide testing opportunities outside the lab. From a teaching point of view, we’re proud to inform our clinical students on the power of these new technologies and how they may open opportunities for them. We’ve had our students go on to study PhDs or work in residency or clinical practice where they are adept at using motion capture.
 
If Thomas’ use of innovative mocap technology has inspired your own biomechanics testing, talk to our team to find out how Motion Analysis can help you achieve your own goals.

Tech tips: How to do camera calibration for Cortex and BaSix

Before any motion capture project begins, a thorough calibration process must take place. No matter which cameras you use, making sure that they are receptive to markers and synchronized properly has a distinct effect on the accuracy of your captured data. Plus, when the initial basics are completed successfully, it smooths the next stages of using mocap software.

Calibrating cameras for our Cortex and BaSix software is a quick step-by-step process. Here’s how it works, with some handy insider insights about our advanced features. 

Simple setup, rapid results

Given the individual specifications of multiple cameras, mocap system operators are required to align them properly in order to track movement effectively. Lenses may get readjusted, and cameras situated in places where they are likely to get knocked may need to be repositioned, so it’s best to perform an all-new calibration process afterwards to ensure high-quality data capture.

Luckily, calibration typically takes only a couple of minutes, although this can depend on the number of cameras you are using, the capture volume, and whether the cameras are fixed. The precision of the captured movement data is also best when the system is used shortly after completing the calibration.

Camera calibration explained in two simple steps

Calibrating cameras for Cortex and BaSix is a two-stage process requiring just a couple of pieces of equipment.

  1. Map the space using the L-frame

This is a simple L-shaped apparatus complete with four markers used to establish the capture space’s coordinate system. 

During initial setup, the corner marker – which defines the volume origin – is typically placed at the center of the intended capture space. If any small adjustments need to be made, it is simple to make “spot checks” of each camera within the software to ensure the cameras can only see the L-frame’s four markers before moving on to the next step.

It’s a common misconception that all cameras have to see the L-frame – it is better if most can, but that may not be possible in an extra-large space.
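
In geometric terms, the L-frame pins down the origin and axes of the capture volume. Here is a minimal sketch of that calculation, with an assumed marker layout (corner marker plus the ends of the two arms):

    import numpy as np

    # Assumed layout: the corner marker becomes the volume origin, the long arm
    # defines +X, the short arm defines +Y, and Z follows the right-hand rule.
    corner        = np.array([1.2, 0.9, 0.0])
    long_arm_end  = np.array([1.7, 0.9, 0.0])
    short_arm_end = np.array([1.2, 1.2, 0.0])

    x = (long_arm_end - corner) / np.linalg.norm(long_arm_end - corner)
    y = (short_arm_end - corner) / np.linalg.norm(short_arm_end - corner)
    z = np.cross(x, y)                      # points up if the L-frame lies flat
    axes = np.column_stack([x, y, z])       # orientation of the capture volume
    print(axes, corner)                     # volume axes and origin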

  2. Standardize measurements with the wand

The second stage involves dynamic calibration using a handheld wand, which has a standard 500 mm length between the markers at each end of the wand. This provides a reference point for the cameras to map out the entire capture space using dynamic calibration.

By waving the wand in the cameras’ field of vision, they can measure precise lengths from the wand’s end markers to the surrounding volume, and then correct themselves according to those measurements.
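
Because the wand’s marker separation is known to be 500 mm, every reconstruction of it gives the system a ruler to correct against. A toy sketch of the resulting scale correction, with invented measurements:

    import numpy as np

    # Every reconstruction of the wand is a ruler: comparing the measured marker
    # separation with the known 500 mm yields a scale correction. Toy numbers:
    measured_lengths_mm = np.array([498.2, 499.1, 500.6, 501.4, 499.8])
    scale = 500.0 / measured_lengths_mm.mean()
    residuals_mm = measured_lengths_mm * scale - 500.0
    print(f"scale correction: {scale:.5f}, max residual: {np.abs(residuals_mm).max():.2f} mm")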

When all the parameters – including focal length, camera orientation, L-frame measurements and wand length – are input correctly and the calibration converges, the mocap system can be used.

Watch our quick how-to video

Advanced Cortex features

Initial setup is available to customers using BaSix software, with the following extra calibration features available within Cortex:

  1. Gain feedback on camera status

For both the L-frame and wand steps above, Cortex’s 2D view identifies how many centroids a camera sees. This makes it easy to check that the cameras can only see the L-frame’s four markers, or the wand’s three markers. When a camera has sufficient wand data for lens calibration, its 2D view in Cortex changes color from white to green as a form of visual feedback. Similarly, for both steps, camera tabs change color to indicate whether a camera is uncalibrated, ‘seeded’ (if it sees the correct markers), or fully calibrated.

  2. Remove the need to restart with Update Calibration

Restarting an entire calibration doesn’t take too long, but the Update Calibration tool requires fewer steps, amending camera calibration information according to pre-calculated values. It is especially helpful when the volume is an odd shape that makes it difficult to place the L-frame properly, or in a bigger capture space where not every camera can see the L-frame’s markers.

  3. Reduce residuals fast using Quick Refine

Cortex processes areas of the space where the wand’s markers are being reconstructed in 3D. The resulting ‘residuals’ are good indicators of calibration success; you are aiming for a low average 3D residual across each camera during calibration.

It is possible for these 3D residuals to increase over time after the initial calibration. For example, knocks can cause camera vibrations that disturb the equipment. If you’re rushed for time, rather than completing the full calibration process again, Cortex allows for Quick Refine. Using any markers in the volume – including those attached to a subject, for instance – you can record the mocap actor covering the whole space while performing a quick refine, and the system will then update the originally saved calibration values accordingly.

  4. Personalize the process using Custom Calibration

Within Cortex’s Custom Calibration wizard in Live Mode, you can toggle both general settings (e.g. frame rate and shutter speed) and individual camera settings (e.g. threshold, brightness, min/max lines).

A check box saves the custom calibration settings; when enabled, the user-defined settings are applied after starting the calibration process and saved at the end, to be applied automatically when the next calibration is started. This is useful when different camera settings are needed for calibration compared to collection, for example when you want to save time and collect less data by using lower frame rates.

Cortex’s settings also allow you to ‘mask’ areas in the 2D view of any given camera, which filters out any ‘noise’ such as bright lights or reflections that may distract from the markers.

  5. Reuse collected Raw Files

Raw Files get saved during the two-step L-frame and wand calibration process as ‘calframe’ and ‘calwand’ for each step.

If any problem causes a diverging calibration (whereby the system cannot resolve the cameras’ spatial positions), these files can be used to recreate the calibration with different settings and achieve a more successful result, even offline.

  6. Track moving cameras with Continuous Calibration

If you use a roving camera (or if the room or volume space is moving) Continuous Calibration utilizes stationary markers in the space for the camera to correct its own position while continuing to track subject marker movements, as shown in this demonstration.

We’re here to support you

It is possible for some small details to get overlooked during the calibration process, but there’s usually a quick fix. It could be as simple as a typo when inputting a lens specification. We’re here to assist you with any troubleshooting that might be needed.

With a range of options for calibrating cameras for Cortex and BaSix, it is simple to prepare your mocap system quickly and efficiently. If you need help with the calibration process, chat to our Customer Support team.

If you’re exploring mocap solutions and would like to find out more about our systems, please book a demo.

Mocap in action: In conversation with Adam Cyr, Biomechanist at Mary Bridge Children’s Hospital

A long-standing client, Mary Bridge Children’s Research and Movement Laboratory (RML) is a multidisciplinary facility that houses a team of engineers and clinicians who conduct research and use the latest technologies to identify, diagnose, and treat individuals with movement challenges.

We caught up with Adam Cyr, a biomechanist at the facility, who has a keen interest in applying engineering principles and techniques to understand how the human body performs. His goal is to improve injury prevention and treatment.

Here, we share what he had to say about his work and how he is using mocap as part of the biomechanics research he does on a daily basis.

Could you give us a quick overview of your background as it relates to the world of biomechanics and biomechanics research?

After completing my studies, I briefly worked at a company doing forensic biomechanics before I found myself at the Research and Movement Lab at Mary Bridge Children’s Hospital. At the RML, we see patients with a wide variety of concerns, including neurological, muscular, and orthopedic disorders. We also see people who are looking to enhance their performance or who suffer from sports-related injuries.

How do you use motion capture technology in the work you do every day?

The more data we can collect, the better. We want to look at kids doing functional tasks. If we see a patient today and collect data on how they move in their preferred way, and then they have some sort of intervention, we have data we can use to assess whether they are moving better than before. Our goal is to inform the clinical providers, whether they’re surgeons or physical therapists, and provide them with objective data so they can make better decisions.

On a typical day, we’ll spend a few hours with a patient either in the morning or the afternoon. We’ll prep the room to make sure that the motion capture system is ready and that the markers are ready to go. We’ll do a subjective history and a physical exam. And then we’ll put the markers on and get the patient to do basic movements. If there’s any particular activity that is causing a problem, we will have them do that activity specifically. After they leave, I compile the data, process it and turn it into graphs and meaningful insights for our therapists to review. It’s great to work this closely with clinicians to see the data and graphs transform into information that means something.  

Can you walk us through your experience using Motion Analysis and share some of the features you find most useful?

The motion capture system I inherited in my current position was an older one. We were very fortunate to be able to upgrade to some newer Motion Analysis cameras recently. The new tech is very impressive. From a size perspective, everything is getting smaller, the optics are better, the speed is better and these cameras can track much smaller markers. 

The cameras are also more advanced, which makes it easier to do things right the first time and not waste time cleaning up the data. This speeds up patient processing times. We want to get a report back to our patients within a couple weeks and if I’m spending a day cleaning up data, that isn’t possible. 

When I do have to clean up data, there are some great features on the backend that make it easier to do so. For example, if a marker dropped off and you didn’t notice, you can use virtual markers to fill in the data gap. I’ve also started to go down the road of playing with what they call the Sky Interface. This allows me to build my own scripts using a batch process. I’ve been working closely with the Motion Analysis team on this and they’ve been hugely helpful. When we collect EMG data, there’s a delay in time so we need to shift the data over for it to line up correctly. With the Sky Interface, I can code something so that I just have to hit one button and it goes through all of my captures and automatically shifts the data over.
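
As a rough sketch of that batch step, the shift itself is a one-liner once the delay and sample rate are known. The function and parameter names here are hypothetical, not the actual Sky Interface API:

    def shift_emg(emg_samples, delay_seconds, sample_rate_hz):
        # Hypothetical helper (not the real Sky Interface API): shift EMG samples
        # earlier by a fixed hardware delay so they line up with mocap frames.
        shift = int(round(delay_seconds * sample_rate_hz))
        return emg_samples[shift:]          # trim the leading, pre-delay samples

    # Applied per capture in a batch loop (delay and rate are example values):
    # for capture in captures:
    #     capture.emg = shift_emg(capture.emg, delay_seconds=0.3, sample_rate_hz=2000)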

We’re also starting to get into real-time feedback using Cortex software. In a clinical setting, we’d use this to better understand upper body motion. For example, we’d put markers on the elbow, the arm and the torso and ask children to reach around so we can see how far they can reach. With real-time feedback, it’s possible to have them reach for virtual markers on a screen, a bit like they are playing a video game. It would all be done in real time using the Motion Analysis workflows I’ve learned. In the work I do, it’s been enormously helpful for me to be able to pick up a phone and connect with the Motion Analysis customer support team or their engineering and technical teams because they are so willing to help out when I have a problem that I need to figure out right away.

If you, like Adam, want to leverage motion capture innovation to better understand movement-related conditions or improve how you monitor the tendencies and patterns of biomechanical movements, we can help. Learn more about how our team can support your mocap needs by scheduling a demo today.