Motion Analysis Corporation Unveils Cortex 9.5 Software Upgrade

November 8, 2023, California – Motion Analysis Corporation is excited to announce the highly anticipated release of Cortex 9.5, the latest edition of its cutting-edge motion capture software. This update is now available for download and is accessible to all customers with active warranties or current software maintenance contracts.

Cortex 9.5 introduces a range of exceptional features and improvements that elevate the motion capture experience to new heights, providing users with greater flexibility, efficiency, and accuracy. Here are the key highlights of this remarkable update:

Quick Files Capture Status: Cortex 9.5 introduces Quick Files Capture Status indicators, simplifying the assessment of dataset status. Users can easily classify captures as “Unedited,” “In Progress,” or “Complete.” Customization options are also available, allowing users to create their own status names and icons, providing a user-friendly experience.

Kestrel Plus Cameras: With Cortex 9.5, Motion Analysis Corporation introduces the Kestrel Plus camera line, featuring the Kestrel Plus 3, Kestrel Plus 22, and Kestrel Plus 42. These new cameras seamlessly integrate with Cortex 9, expanding your capture capabilities and delivering high-quality results.

Trim Capture Modifications: Cortex 9.5 enhances the Trim Capture feature, enabling users to modify names, generate captures on a per-markerset basis, and add timecode support. This streamlined process facilitates the extraction of relevant data from capture files and offers improved post-processing options.

Workflow Improvements: Cortex 9.5 enhances the Workflow feature, making task execution even more efficient. Users can now utilize a search tool and a workflow repository, enabling easy access and management of workflows, optimizing productivity.

Live Detailed Hand Identification: Advanced hand tracking techniques have been integrated into Cortex 9.5, reducing marker swapping during live collection and post-processing of intricate finger movements. Users can contact the support team for a sample markerset to enable this feature.

Automatic Wand Identification for Reference Video Overlay Calibration: In a significant time-saving move, Cortex 9.5 automates the marker selection process for reference video overlay calibration, eliminating manual marker selection and potential user errors. This feature can be applied in both Live Mode and Post Process.

Bertec Digital Integration: Cortex 9.5 now offers support for Bertec AM6800 digital amplifiers, simplifying setup and reducing the number of required components, thus enhancing the overall user experience.

National Instruments New Device Compatibility: Cortex 9.5 continues its support for National Instruments A/D board data collection and expands compatibility to their next generation of DAQs, maintaining flexibility and ensuring compatibility with previously supported devices.

Additional Updates and Features: Several additional updates and features, such as the renaming of the Post Process X panel to Tracks, improved contrast in Dark Mode, and an increased marker slot limit, are included in this feature-rich update.

Cortex 9.5 marks a significant milestone in the field of motion capture, empowering users with advanced tools, enhanced workflows, and improved performance.

To learn more about Cortex 9.5 and take advantage of these exciting new features, download the full release notes here, or contact our sales and support teams for further information and assistance.

Motion Analysis Corporation continues to lead the way in motion capture technology, and Cortex 9.5 is a testament to our commitment to delivering innovative solutions that meet the evolving needs of our customers.

About Motion Analysis Corporation

Motion Analysis Corporation is a leading provider of motion capture technology solutions for various industries, including entertainment, sports, healthcare, and research. With a focus on innovation and customer satisfaction, Motion Analysis Corporation strives to make motion capture more accessible and versatile.

Looking back on Motion Analysis’ highlights of 2022

As one year draws to a close and another one begins, it’s only natural to reflect and think about all the progress that was made in 2022. It’s equally natural to look ahead and make plans for the future.

For Motion Analysis, 2022 was a rather momentous year. We celebrated 40 years of helping our many different clients leverage mocap software to bring their research to life and to enrich their creative projects. Over the course of the four decades we’ve been in business, we’ve had the opportunity, and the incredible privilege, to use motion capture in ways that we could never have imagined when we started out. 

Below, we unpack a few of the other highlights in mocap that made 2022 a great year. 

We moved on from the pandemic

When we look back on 2022, a big Motion Analysis highlight is the fact that things are really starting to return to normal. During the pandemic, most industries experienced supply chain issues, which affected their ability to work without disruption. Because Motion Analysis designs, engineers and builds our products in California – everything we do is proudly Made in America – we were able to limit the impact of these supply chain constraints on our customers. This might be the reason why we’ve seen a strong increase in US sales this year. While we weren’t hugely affected by supply chain issues, we have been very strategic about keeping an eye on our supply chains so that we can address any issues before they affect our ability to build and ship the products our customers want and need.

Looking more broadly, after an understandable slowdown during the peak of the pandemic, it’s great to see that the industry has rebounded to pre-COVID levels. This is a sign that research and development have resumed and appear to be thriving once again.

We refreshed our brand

In May, our website got a much-needed facelift because we wanted it to better align with our commitment to innovation and quality. By streamlining our logo, modernizing the Motion Analysis color palette and adding fresh imagery, we believe we’ve injected new vitality and relevance into our brand, and we hope our new branding conveys the energy and passion that we value so much as a team. If you haven’t seen it already, take a look: https://motionanalysis.com/

We expanded our product line

Keen to make the work we do more affordable, we launched the BaSix range of cameras.

This family of cameras consists of three “light” cameras (in ascending order of capability): the BaseCam, the Icefall, and the Lhotse. These cameras are used with active marker rigs (BaSix markers) and either BaSix Go software or our premium solution, Cortex. This range of cameras is designed to help smaller 3D animation and gaming studios start their own mocap journeys, with the entry-level system (using BaseCams) starting at $20,000.

We also debuted the Firefly Active Marker Kit. A complete active marker tracking set, Firefly is ideal for data scientists, private researchers or anybody carrying out investigative work with drones because these markers are so small and lightweight.

Finally, we launched the latest iteration of our Cortex software – Cortex 9.2. This software update includes new features and has been improved and updated based on insights from our customers and changes in the industry.

We attended a range of industry events

After two years without in-person events and conferences, it was great to get back into the swing of things in 2022. We had the opportunity to attend several events this year and got the chance to interact with our clients and colleagues once again. In March, we hosted a booth at the annual Game Developers Conference (GDC) in San Francisco. We sponsored the Career Award at the American College of Sports Medicine (ACSM) Annual Meeting and World Congresses on Exercise is Medicine and Basic Science of Exercise and Vascular Health, which was held in San Diego. Of course, the North American Congress on Biomechanics (NACOB) in Ottawa was a highlight, as was the International Symposium on 3D Analysis of Human Movement, which took place in Tokyo, Japan.

We worked with new clients

Other Motion Analysis highlights include welcoming several new clients to the Motion Analysis family. In 2022, we worked with some of the world’s top universities, video game businesses, medical facilities, sporting brands and even leading aerospace companies. Each new client brings something different to the table and we love getting the chance to use our hardware and software to bring their ideas to life and answer their research questions. 

We’re ready for what lies ahead in 2023!

For us, mocap in 2023 is all about staying innovative and looking for ways to enhance the work we do so that we can better support the research and the creativity of our clients. As our technology makes its way into other industries, we hope to attend a more varied range of conferences and events this year. We are also excited about what we can achieve under the guidance of our new president, Brian Leedy, who stepped into the role in December. If you want to keep up with all things mocap in 2023, please follow along on our journey in the new year by subscribing to our newsletter. Or you can follow Motion Analysis on LinkedIn and Twitter.

Take a look at our favorite motion capture projects

It has been a wonderful 40 years for Motion Analysis – how time has flown!

To celebrate our milestone birthday, we’re sharing some of our favorite motion capture projects and customer stories from the past four decades – from film and animation to industrial applications, biomechanics, broadcasting and more – while we look forward to what lies next for mocap.

Animating Gollum and receiving an Academy Award

We loved Andy Serkis’ portrayal of Gollum in the Lord of the Rings trilogy. Did you know that his final Mount Doom scene was brought to life using the Eagle Digital System – the award-winning motion capture process developed by four of our engineers, including Ned Phipps?

Ned has pioneered mocap technology in film and animation for more than two decades. He and the Motion Analysis engineering team proudly won an Academy Award for their esteemed mocap work on Peter Jackson’s films.

Image source: https://www.oscars.org/sci-tech/ceremonies/2005/?fid=4151

Assassin’s Creed, Dr Strange, and lots of other studio work with Centroid Motion Capture

We have been working with Centroid Motion Capture since 1996, on high-profile projects like Assassin’s Creed, Dr Strange, and seasons 6 and 7 of Game of Thrones. 

Using the Raptor camera system, Centroid captures high volumes of performance data in all sorts of locations. They also use it to track animal movements. Centroid uses Cortex, our motion capture and editing software, for everything from previsualization to skeleton solving to retargeting.

Image source: Courtesy of Centroid Motion Capture

Helping scientists use drones to detect gas leaks

Our software is also instrumental in boosting mocap for industrial applications. PhD student Chiara Ercolani at the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland used our drone capabilities for 3D motion tracking in a wind tunnel facility.

Drones and active marker tracking can be used for 3D gas source localization, which ultimately detects gas leaks without putting human lives at risk, in a number of challenging environmental conditions. 

Helping CSI Calgary to improve sports team training and performance 

Working with a luge coach at the Canadian Sport Institute Calgary, Pro Stergiou of Sport Product Testing and University of Calgary graduate Luciano Tomaghelli utilized our technology as a “gold standard” for mocap in biomechanics, collecting large volumes of data to research the effect of starting technique on luge performance.

By comparing accelerometer data to that captured by our cameras and markers, and collecting kinematic data to assess a luge athlete’s pull and paddle technique, these researchers helped validate how starting technique analysis could improve a luge athlete’s training and performance.

Image source: Courtesy of Sport Product Testing

Improving the development of police equipment and more with the University of Lincoln

Biomechanics was also at the heart of our research work alongside Dr Franky Mulloy at the University of Lincoln’s MoCap Hub, working on trampoline development and load carriage systems for the police and the military. Designed to let any organization quantify movement, our software enabled Dr Mulloy to track kinematics and integrate with multiple third-party tools.

Our Cortex software assisted in gaining precise kinematic movement data to help identify issues leading to injury, as well as offering solutions to redesign ergonomic products for highly dynamic activities. 

Image source: Courtesy of the University of Lincoln

Taking gaming to the space-age for Respawn and their game, Titanfall

To craft vivid imaginary worlds, video game production relies on mocap for animation to track the bodily and facial movements of actors (in ‘bodysuits’) and enhance them with CGI. In this demonstration, you can see how Respawn Entertainment used MAC cameras and tracking sensors on the actors’ suits to animate alien machinery for their game, Titanfall.

Testing and improving ice hockey skills and equipment with CCM Hockey

Ice hockey equipment brand CCM Hockey uses MAC’s Cortex software for its own biomechanics performance lab. As you can see in this video, their system can be set up on the ice to test equipment, and analyze player performance in slow motion using markers on their hockey sticks. 

Working with Ford to reduce risk and improve training in manufacturing plants

While Ford is a forerunner in industrial history, we have helped the brand continually innovate. Using mocap for biomechanics, manufacturing plants can be designed around the people who work in them: this video details how our mocap bodysuit informs risk assessment decisions based on the movements used on assembly lines, and how virtual reality headsets then train factory workers to handle heavy machinery safely.

Bringing science into the world of advertising with Under Armour

Steph Curry is one of the most jaw-dropping basketball players in the world. The capabilities of his endorsed Under Armour shoe were tried, tested, and improved using mocap for biomechanics, which you can see in this video: Steph gets fitted with mocap markers and analyzed on an interface via Cortex software. 

Broadcast tracking for the BBC’s coverage of the UK general election 2019

We love to see our technology used for broadcast purposes too. During the UK’s general election in 2019, the shots of Jeremy Vine in front of Downing Street were achieved through greenscreen and our system tracking the in-studio cameras.

Image source: https://www.youtube.com/watch?v=-9Zeaol_KXY

Here’s a behind-the-scenes virtual reality tour from Vine himself!

Here’s to another 40 years!

We can’t wait to see what the future holds in this industry! Already, we’re working hard to develop lighter, more accessible systems that we hope will add a new level of application to this list of amazing motion capture projects.

Why Intel partners with Motion Analysis to bring technology to Olympic athletes

THE CUSTOMER

The Intel Olympic Technology Group (OTG) is a division of Intel focused on bringing cutting-edge technology to Olympic athletes and helping them to better prepare for the Olympic Games.  

THE PROBLEM

The Intel OTG wanted to create a smart coaching application using computer vision pose estimation models. These models use key point locations on the body, like joints, to calculate biomechanical attributes relevant to athletes, such as velocity, acceleration, and posture.
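To illustrate the kind of calculation involved (a simplified sketch with made-up sample values, not Intel’s actual pipeline), attributes like velocity and acceleration can be estimated from tracked key point positions using finite differences:

```python
# Simplified sketch: estimating velocity and acceleration from
# tracked 2D key point positions with central finite differences.
# Illustrative only; not Intel OTG's or Motion Analysis' actual code.

def derivative(samples, dt):
    """Central-difference derivative of a list of (x, y) samples."""
    out = []
    for i in range(1, len(samples) - 1):
        dx = (samples[i + 1][0] - samples[i - 1][0]) / (2 * dt)
        dy = (samples[i + 1][1] - samples[i - 1][1]) / (2 * dt)
        out.append((dx, dy))
    return out

# Hypothetical ankle key point positions (meters), sampled at 100 Hz
dt = 0.01
positions = [(0.00, 0.0), (0.05, 0.0), (0.11, 0.0), (0.18, 0.0), (0.26, 0.0)]

velocities = derivative(positions, dt)       # m/s at each interior frame
accelerations = derivative(velocities, dt)   # m/s^2 at each interior frame
```

A production system would work on 3D positions at much higher sample rates and filter the signal before differentiating, but the core idea is the same: positions over time yield velocity, and velocity over time yields acceleration.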

THE SOLUTION

Motion capture was first used in biomechanics in the late 1970s to analyze a subject’s gait. But a lot has changed since then. Today, this technology is being used across the increasingly data-driven sports industry.

The information generated using motion capture software empowers coaches to identify issues that may be preventing a player from improving their performance.

This technology can also be used to prevent injuries. When physiotherapists use motion capture software to analyze the kinematics of a particular movement, it is feasible to identify any range of motion (ROM) issues and determine if these are linked to pain or injury in the athlete. When it comes to movement analysis, the accuracy of data is key. The precise data collection and instant translation of this data makes it viable for coaches and physiotherapists to identify areas where re-injury might occur, determine an appropriate recovery time and provide evidence-based recommendations for rehabilitation.

3D motion capture software can also be used to track the movements of an entire team. This data can be used by coaches to strategize better because it is possible to track a range of player performance factors – like accelerations or decelerations.

Benjamin Hansen, Product Engineering Lead in AI & Sports Technology for Intel OTG, has been using motion capture to do just this. Describing himself as “a lifetime customer of Motion Analysis”, he has utilized Motion Analysis systems to provide athlete testing services to elite athletes and professional baseball teams. And now, he’s using Cortex to validate and benchmark the smart coaching application described above.

Cortex is Motion Analysis’ most powerful motion capture software, managing motion capture and measurement for all applications, from biomechanics, broadcasting, and engineering to sports performance, game production, and film.

One of the projects the OTG worked on using Cortex was 3D Athlete Tracking (3DAT). Intel’s motion tracking platform, 3DAT, creates scalable technology that advances the understanding of human health and performance. Crucially, it relies on a Motion Analysis system, which includes Kestrel cameras and Cortex software, to benchmark data accuracy in order to inform the necessary algorithms. Developed over four years for athletes competing at the Tokyo and Beijing Olympics, 3DAT is now being commercialized as a camera-agnostic motion capture software development kit (SDK) that developers can use to create biomechanics solutions for the sports, health, and fitness industries.
 
Want to find out more about Intel’s journey with Motion Analysis? Download the full case study here to learn more.

Deepfake technology for entertainment: the pros and cons

As cinema predicted, the robots are taking over! 

Perhaps not the entire world just yet, but they spring up in all areas of entertainment. Artificial intelligence is rife, not just on the internet but in television shows, music videos, and on the silver screen. And it’s all about deepfake technology.

Deepfake technology is changing the way that artificial intelligence can be used to create realistic moments without real-life actors. 

But can such artificial intelligence become the next big thing in entertainment when it’s so fraught with controversy? Let’s take a look.

How does deepfake work?

Deepfake is a type of image manipulation that uses machine learning techniques to alter facial features or expressions. It can also be used to mimic voices. 

Its origins can be traced to the work of Ian Goodfellow, who pioneered generative adversarial networks (GANs). A GAN generates realistic images by pitting two neural networks against each other: a generator creates candidate images, while a discriminator tries to tell the generated images apart from real ones, each improving as it tries to outwit the other.

This all sounds rather complicated, but deepfake technology has come a long way and can now be practiced by any adept digital artist. A GAN can already recreate a realistic human face from a single image, with the potential to ‘rebuild’ an uncanny likeness of a whole human being in the future.

How AI enhances performance

Deepfake has gained worldwide fame in many forms of artistic expression. Most notably, a deepfake artist superimposed faces of actors including Tom Cruise and Seth Rogen in an interview with comedian Bill Hader to enhance his impressions. Rapper Kendrick Lamar used deepfake in a 2022 music video to ‘become’ influential celebrities conveying the meanings of his lyrical content. Elsewhere, the revitalised Star Wars franchise very cleverly implemented deepfake to bring back characters such as Princess Leia and Moff Tarkin despite their respective actors having passed away. The technique can similarly ‘age’ faces, giving a whole new dimension to realistic storytelling through cinema. 

Elsewhere, deepfake technology has been utilised by a malaria charity to alter David Beckham’s voice to speak nine languages, and has made it easy to generate AI avatars without the need for actors during lockdowns in the Covid-19 pandemic.

The problems posed by deepfake tech 

Then again, there are problems that deepfake technology can cause. Taking the ‘fake’ term very literally, the images are not real but can easily be construed as such, often with nefarious intent.

Take, for instance, the rising cases of ‘fake news’ that promote fabricated events, often through social media; deepfake can be used to place real people’s faces into pictures to tell a make-believe news story. Politicians have found themselves mimicked through the technology – potential voters could easily be duped. There are also cases of identity misuse when deepfaked faces of celebrities are put into pornographic contexts. This has extended to artificially generated nude models of real people, created without their permission.

The technology can also be used for identity fraud through facial recognition or voice activation services. The viral enthusiasm for the technology, and the means by which it is spread, make it tough for social media networks to regulate.

Deepfake vs mocap: what is the future of film?

Deepfake could be mistaken for a type of motion capture technology, but this is not quite true. It remains a form of artificial intelligence or machine learning that creates manipulated images and voices. Motion capture, by contrast, is a visual effects technology: it captures the real-time movements, facial expressions and voices of an actor and can transpose computer-generated imagery (CGI) onto them.

Using the examples of de-aging or regenerating late actors, deepfakes look uncanny as they are not able to track actual movement. Mocap on the other hand is advanced in its capturing of an actor’s movements and expressions, and uses that data to animate CGI with realistic precision. 

Perhaps there will be a time when deepfake’s cinematic advantages can be used in conjunction with visual effects and motion capture. Mocap could serve as the foundation, recording and analysing an actor’s movement in performance, with help from deepfake technology to recreate faces or age actors for continuity purposes.

Deepfake is certainly useful as an artistic endeavour, despite its unfortunate record of misuse. While there are ways to use GANs to detect fake images and curb the technology’s more negative applications, nothing can replicate the real; deepfakes are currently far from producing the fidelity that can be gained from mocap.

At Motion Analysis, we think mocap remains the future of the film industry to create unforgettable animated experiences. Explore our Cortex software to find out why.

Our brand has had a facelift. What do you think?

If you visit our website often, you’ll have noticed that it’s got a fresh new look and feel. We have just refreshed our brand, and we would love to know what you think of it. 

Tag us in a post on LinkedIn if you like it!

Why the brand refresh?

It’s been a while since we focused on our brand (since we usually dedicate all our time and resources to product development) and we felt that it was time to ensure that the brand is truly representative of our firm commitment to innovation and quality – especially as we ramp up in our mission to create world-class motion capture technology. 

We want to convey the intelligence, energy, and flexibility that we value as a team. And of course, we want to make sure that our company appeals to our target market of scientists, engineers, researchers, 3D animators, mocap studios, and broadcasters. 

By streamlining the logo design, modernizing our color palette, and adding new imagery, we have added new vitality and relevance to the brand.

Our website’s had a refresh, too

We have implemented the new brand across our corporate website and we’ve taken it a step further by taking the time to reassess the user journey and make changes to improve the site’s speed and functionality. We have also added new product photography to make it more engaging.

Take a look if you haven’t already: https://motionanalysis.com/ 

Watch this space for more updates and innovations

We’re working on a few new product and website developments that will help make our customers’ lives easier. Watch this space and LinkedIn and Twitter for more information!

Making inventory and logistics fun with Ramon Contreras

Ramon Contreras is a fan of solving puzzles. It’s this enthusiasm that piqued his interest in inventory management and logistics. While employed at a large-scale electronics retailer, he was given the chance to work across a range of different departments, and he quickly realized that he liked the flow and occasional complexity of logistics and materials handling.

For Ramon, having to fit different pieces together to achieve a desired outcome is his definition of fun. Similarly, when you are packing a crate or pallet, he believes that being a Tetris expert is an absolute essential. On a less physical level, his job demands that he solves puzzles around how to get a product from point A to point B; making sure that all of the necessary boxes are ticked and that the right people are involved to get the product where it needs to go on time. 

As a Logistics Supervisor at Motion Analysis, he works closely with vendors and fellow employees, handling everything from purchase orders and shipping to pick kit assembly, inventory, and warehouse supply management. A typical day starts with emails, checking for any issues or handling quick questions; then he moves on to processing new orders, receiving inbound goods, keeping tabs on inventory and prebuilding sub-assemblies. As such, he sometimes thinks of himself as more of a “Professional Materials Handler”.

The value of this work was showcased during the pandemic, when global supply chains stuttered, leaving businesses without stock. Good supply and logistics management is a very important tool in creating value for customers because it ensures that products are available to more people, which is exactly what Ramon delivers every day.

The decision to work at Motion Analysis was driven by the particulars of the role itself, he explains. These job specifications were also a good fit for his work experience. Running his own department, he doesn’t have to report to anyone, which he believes gives him an incredible opportunity to learn and grow. 

“Having to wear so many hats keeps everything fresh. I can use what I have learned in the past and learn more as I go along. This also opens up opportunities to create new and improved processes. Once I better understand exactly what we are trying to achieve, I can create the roadmap to reach that goal.”

Understanding the value of the work he does, Ramon is looking forward to being impactful; figuring out how to streamline the warehouse and remove unneeded and/or wasteful processes. 

“This is not just a warehouse job. It requires a lot of thought, planning, spatial awareness and organizational skills. You need to be able to understand priorities and know what’s available so that you can make the best and most cost-effective decisions. With this knowledge, you know exactly when and who to ask for assistance when confronted with a challenge. I realize that it might be surprising for some, but this really is my version of fun.”

How Steve Soltis measures success after 20 years in mocap

He may be a new face in the Motion Analysis family, but Steve Soltis is certainly not new to the world of motion capture. Our latest US Sales Manager has spent over 20 years working in the industry, and shares a common passion with our company: a love for the customers. 

“It’s always really exciting, and such a learning experience, to work with customers who are experts in their field. One of the things I am really looking forward to in my new position at Motion Analysis is getting to help our customers advance the research and applications they are wanting to pursue by matching them with the most accurate and economical mocap solutions. Every day and every customer brings a unique set of circumstances and challenges to the table. I enjoy using the years of experience I have in motion capture to help tackle these challenges, while still being inspired by the exciting work the customers are doing in the process.”

But the customers are not the only thing that drew Steve into this industry, or into the Motion Analysis world.

“Motion Analysis is a leader in the industry. I previously worked as a distributor for them so I have been able to experience their knowledge and professionalism firsthand, and let me tell you, it’s impressive. I consider myself very fortunate to work in such a respected company. A big part of why I love working in this industry so much is the opportunity we get to help verify and prove what is otherwise only speculated or theorized. The world of motion capture is constantly changing and improving, and I get to work at the forefront of proving to customers what our solutions can do.”

As is probably obvious, Steve is a problem-solver. He doesn’t just want to sell systems, but wants to present our customers with solutions. This is very in line with the values of Motion Analysis as a company – that we offer our customers more than just a product – and is why it’s so important for us to hire those who share our same values. So, what is Steve’s measure of success in his work? 

“I don’t see success as a finish line, but rather as a goal post that is constantly moving. I believe that in order to be successful you also need to keep moving towards that goal post. The moment you stop moving forward is the moment you stop being successful. With an eye on the past, strive to think outside the box to pioneer new applications and solutions to further technology and research.”

Meet one of the biggest brains behind our mocap dev

For Ned Phipps, a seemingly impossible problem is the best kind to solve. Many would feel overwhelmed when facing this sort of task, but this is where Ned thrives. Ned is one of four brilliant Motion Analysis engineers who developed the award-winning motion capture process, the Eagle Digital System, which brought the Lord of the Rings character Gollum to life in the scene on Mount Doom. It was later also used for the facial animation of King Kong in King Kong and all of the robot motion in I, Robot.

When he is not experimenting creatively with algorithms and techniques as the Motion Analysis Senior Software Engineer in Rohnert Park, California, he can be found playing the violin in his local orchestra, doing Tai Chi in the park, or sailing a 31-foot sloop on San Francisco Bay. Whatever his secret is to a calm mindset, it seems to be working, because he has been successfully helping Motion Analysis tackle seemingly unsolvable software issues for over two decades and even took home an Academy Award in the process!

“I’ve been a primary developer of our motion tracking software since 1997. My background with math, physics, and programming was a perfect match for the job. Over the years I’ve had the opportunity to assess and improve every aspect of our systems, primarily working on the host machine and core software, but also developing the camera software, reworking the camera FPGA hardware, and writing the first two versions of our SDK.” 

Motion Analysis has always encouraged its software engineers to be self-starting. This attitude has worked well for Ned, since his role rarely involves assigned tasks and mostly revolves around being presented with problems and brainstorming solutions for them. Having opportunities to fix or improve complex systems is what Ned loves so much about the industry, and it has led to him playing a key part in many software successes.

“The number one success story would have to be making motion capture real-time. Our video boards had a ‘test’ mode that was able to be transformed into a continuous data stream. The tracking process was turned into a set of threads that hand off data and the tracking process speed was improved sufficiently to keep up with the data stream.”

So how does Ned approach issues that seem unsolvable? It takes a whole lot of patience, an eagerness to try, try, and try again, and, of course, a passion for what he does. This attitude is clear to see in the immense contribution he has made to Motion Analysis in the time he has been with us.

“The general speeding up of triangulating markers was another success that happened over many years. I had predicted a 25% improvement on my first attempt at speeding up this process, excited to be making an early contribution to Motion Analysis, however, testing it with data from files showed zero improvement. I was shocked. But I didn’t let that stop me. Over the years I’ve reworked all of the marker tracking, benchmarking multiple improvements. One that stands out was an over 20X speed improvement. Currently, my basic benchmark is from an 8-person capture that could easily be processed at over 2.5X faster than its capture rate, on the same hardware! So we’re ahead of the game already but we have ideas to get even better.”

Ned’s advice to future software engineers comes down to a simple catchphrase: 

“Got an impossible problem to solve? Eager to help!”

Tips on templating and how it can reduce post-processing time

You could save a lot of time by identifying your marker data in Live Mode instead of Post-Process Mode. All it takes is a little feature we call “templating”.

Surprisingly, many customers don’t even know what templating is. And those who do, often don’t realize the benefits it offers to their motion capture process. 

Cortex uses a template to identify markers. The template is a collection of links, and those links define the allowable distances between markers. Using the links, the list of markers, the marker order, and the relative locations of unnamed markers in the volume, Cortex assigns an identity to each unnamed marker the cameras see, producing usable data for post-processing.
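Cortex’s actual identification algorithm is proprietary and far more sophisticated, but the core idea of links as allowable distance ranges can be sketched in a few lines. Everything below – the marker names, the distance ranges, and the `identify` function – is invented purely for illustration:

```python
import itertools
from math import dist  # Euclidean distance (Python 3.8+)

# Toy template: each link names two markers and an allowable
# distance range between them (units are arbitrary here).
TEMPLATE_LINKS = [
    ("HIP", "KNEE", (440.0, 460.0)),
    ("KNEE", "ANKLE", (380.0, 400.0)),
]
MARKER_ORDER = ["HIP", "KNEE", "ANKLE"]

def identify(points):
    """Assign names to unnamed 3D points by finding a labeling
    whose pairwise distances satisfy every link in the template."""
    for perm in itertools.permutations(points):
        labeled = dict(zip(MARKER_ORDER, perm))
        if all(lo <= dist(labeled[a], labeled[b]) <= hi
               for a, b, (lo, hi) in TEMPLATE_LINKS):
            return labeled
    return None  # no valid labeling: leave for manual cleanup

# Unnamed points in arbitrary order, as cameras might report them
unnamed = [(0, 0, 0), (0, 0, 390), (0, 0, 840)]
print(identify(unnamed))
```

Even this toy version shows why a well-built template matters: the tighter and more representative the link ranges, the fewer labelings can satisfy them, and the less ambiguity is left for post-processing.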

If you take the time to build a great template, you can apply it during a live recording, which saves you from having to identify markers in Post-Process Mode.

When using a template in Live Mode, you should always use “New Subject”, a tool in Cortex that works in both Live and Post-Process Mode. Creating a robust template and using the New Subject feature scales the previously created template to fit new subjects, which eliminates the need to recreate the template in post-process for each subject.

Being able to use New Subject to fit the template in Live Mode also saves the user from spending a large amount of time identifying markers in Post-Process Mode.

Let’s say you have two people you’re recording data on – one of them is 5ft and one is 6ft. You would use New Subject to fit the “Robust Template” or “Golden Template” to the 5ft person. The beauty is, the same Robust/Golden Template will work on the 6ft person.
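Conceptually, fitting one golden template to subjects of different sizes comes down to rescaling its link-length ranges. Cortex’s real New Subject fitting is proprietary and more sophisticated than a single uniform factor; the names `GOLDEN_LINKS` and `fit_to_subject` below are invented for this sketch:

```python
# Toy sketch of the "New Subject" idea: rescale a golden template's
# link-length ranges by one uniform factor so it fits a new subject.
GOLDEN_LINKS = {
    ("HIP", "KNEE"): (380.0, 460.0),
    ("KNEE", "ANKLE"): (360.0, 440.0),
}

def fit_to_subject(links, scale):
    """Scale every allowable distance range, e.g. a 6ft subject
    against a 5ft reference gives scale = 6 / 5 = 1.2."""
    return {pair: (lo * scale, hi * scale)
            for pair, (lo, hi) in links.items()}

taller = fit_to_subject(GOLDEN_LINKS, 6 / 5)
print(taller[("HIP", "KNE" "E")])
```

The point of the sketch is that the template’s structure (which markers are linked) never changes between subjects – only the distances do – which is why one robust template can serve your whole sample.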

Templating offers Cortex users a huge benefit in that you can get identified data as soon as you’ve recorded it in Live Mode, which will reduce the amount of work you need to do in Post-Processing Mode.

Here are our top three tips for building a template:

  1. Make sure you’re starting with a static capture and then extending the template with a range-of-motion capture that’s representative of the dataset you’ll be using the markerset on. For example, if you’re using it for gait analysis, make sure you extend the template using gait data.

  2. Use “New Subject” between each subject you extend the template on. Scaling the template to fit a new subject first ensures that any extensions made afterwards encompass only that subject’s range of motion, not the difference in size between the current and previous subject.

  3. Extend the template for multiple subjects, because different subjects will move in different ways; in gait analysis, for example, different people will have different gaits. Make sure the template encompasses the range of motion you would expect from your full sample size.

And remember: building a template takes time (but is well worth the effort), so if at first you don’t succeed, you can always contact one of our legendary customer support team members for help.

To find out more about templating, read our thorough and insightful guide here.