Motion Analysis Corporation Unveils Cortex 9.5 Software Upgrade

November 8, 2023, California – Motion Analysis Corporation is excited to announce the highly anticipated release of Cortex 9.5, the latest edition of its cutting-edge motion capture software. This update is now available for download and is accessible to all customers with active warranties or current software maintenance contracts.

Cortex 9.5 introduces a range of exceptional features and improvements that elevate the motion capture experience to new heights, providing users with greater flexibility, efficiency, and accuracy. Here are the key highlights of this remarkable update:

Quick Files Capture Status: Cortex 9.5 introduces Quick Files Capture Status indicators, simplifying the assessment of dataset status. Users can easily classify captures as “Unedited,” “In Progress,” or “Complete.” Customization options are also available, allowing users to create their own status names and icons, providing a user-friendly experience.

Kestrel Plus Cameras: With Cortex 9.5, Motion Analysis Corporation introduces the Kestrel Plus camera line, featuring the Kestrel Plus 3, Kestrel Plus 22, and Kestrel Plus 42. These new cameras seamlessly integrate with Cortex 9, expanding your capture capabilities and delivering high-quality results.

Trim Capture Modifications: Cortex 9.5 enhances the Trim Capture feature, enabling users to modify names, generate captures on a per-markerset basis, and add timecode support. This streamlined process facilitates the extraction of relevant data from capture files and offers improved post-processing options.

Workflow Improvements: Cortex 9.5 enhances the Workflow feature, making task execution even more efficient. Users can now utilize a search tool and a workflow repository, enabling easy access and management of workflows, optimizing productivity.

Live Detailed Hand Identification: Advanced hand tracking techniques have been integrated into Cortex 9.5, reducing marker swapping during live collection and post-processing of intricate finger movements. Users can contact the support team for a sample markerset to enable this feature.

Automatic Wand Identification for Reference Video Overlay Calibration: In a significant time-saving move, Cortex 9.5 automates the marker selection process for reference video overlay calibration, eliminating manual marker selection and potential user errors. This feature can be applied in both Live Mode and Post Process.

Bertec Digital Integration: Cortex 9.5 now offers support for Bertec AM6800 digital amplifiers, simplifying setup and reducing the number of required components, thus enhancing the overall user experience.

National Instruments New Device Compatibility: Cortex 9.5 continues its support for National Instruments A/D board data collection and expands compatibility to their next generation of DAQs, maintaining flexibility and ensuring compatibility with previously supported devices.

Additional Updates and Features: Several additional updates and features, such as the renaming of the Post Process X panel to Tracks, improved contrast in Dark Mode, and an increased marker slot limit, are included in this feature-rich update.

Cortex 9.5 marks a significant milestone in the field of motion capture, empowering users with advanced tools, enhanced workflows, and improved performance.

To learn more about Cortex 9.5 and take advantage of these exciting new features, download the full release notes here, or contact our sales and support teams for further information and assistance.

Motion Analysis Corporation continues to lead the way in motion capture technology, and Cortex 9.5 is a testament to our commitment to delivering innovative solutions that meet the evolving needs of our customers.

About Motion Analysis Corporation

Motion Analysis Corporation is a leading provider of motion capture technology solutions for various industries, including entertainment, sports, healthcare, and research. With a focus on innovation and customer satisfaction, Motion Analysis Corporation strives to make motion capture more accessible and versatile.

Motion capture suit, camera & action! What goes into a mocap performance?

There’s more to mocap than rolling around in a lycra suit!

We’ve already looked at the acting skills needed for a successful mocap performance, now let’s dive into the technical side of things to better understand each piece of tech that makes a performance work. 

1. The motion capture suit

The motion capture suit is really just a lycra outfit that holds the markers onto the actor’s body so they can move naturally without feeling inhibited. But the markers attached to these suits are the real stars of the show.

These retro-reflective 3D tracking dots are small spheres positioned strategically on the performer to record their real-life movements. Imagine the markers as computerized puppet strings – pulling the skeleton of the character through frames that create animated motion. 

2. The cameras 

The retro-reflective markers are tracked by specialized motion capture cameras. The more cameras you use, the more complete and accurate the outcome will be.

Cameras such as the Kestrel produce marker coordinate data rather than an image. They detect only infrared or near-infrared light and are able to pass information at a much higher frame rate than a typical television camera could. 
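To make the idea of "marker coordinate data" concrete, here is a hypothetical sketch (not Cortex’s actual algorithm) of how two calibrated cameras’ detections combine into one 3D marker position: each camera contributes a ray from its center toward the detected marker, and the marker sits where the rays (nearly) meet. The function names and example values are illustrative only.

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def triangulate(p1, d1, p2, d2):
    """Return the midpoint of the shortest segment between two rays.

    Each ray is a camera center (p1, p2) plus a direction (d1, d2)
    toward a detected marker; with noise-free data the rays intersect
    exactly at the marker's 3D position.
    """
    w0 = [p1[i] - p2[i] for i in range(3)]
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b  # zero only for parallel rays
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    q1 = [p1[i] + s * d1[i] for i in range(3)]  # closest point on ray 1
    q2 = [p2[i] + t * d2[i] for i in range(3)]  # closest point on ray 2
    return tuple((q1[i] + q2[i]) / 2 for i in range(3))

# Two rays that cross at (1, 1, 0):
marker = triangulate((0, 0, 0), (1, 1, 0), (2, 0, 0), (-1, 1, 0))
```

With noisy real-world detections the rays never meet exactly, which is why adding more cameras (and a least-squares solve over all of them) improves accuracy.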

The Kestrel 4200 is one of the best pieces of hardware out there when it comes to mocap tech, and is an excellent investment for large and complex mocap systems. But if you’re working on a limited budget, the Kestrel 300 will still deliver high-quality motion capture.

Related: Choose the motion capture hardware that’s best suited for you

3. The software

An animation studio, game maker or filmmaker will use professional 3D animation software – Autodesk’s Maya is one of the more popular options – which provides all the modeling, rendering, simulation, texturing, and animation tools needed once motion is captured.

4. The rig

Before tracking movement for animation, animators need to have a basic skeleton mapped out for the character they are creating. This skeleton will help them to determine how many markers they need to use, and what levels of movement they need to track. For example, an acrobatic dancer who is going to be doing backflips will require more markers than a rigid-limbed robot that stomps around. 

The cameras and markers capture the motion, and the data driving the character’s skeleton rig is sent back to the animation program, where it’s transformed with fur, clothing, or skin.

Our Cortex system is capable of solving the skeletons of any structure with any number of segments, including bipeds, quadrupeds, props, facial animation and more.

Because most humanoid characters have similar skeletons and move in similar ways, it’s possible to develop marker sets that can be used on a number of skeletons. 

Our BaSix Go software has a built-in, constrained and tracked human skeleton at its core, which works for almost all humanoid characters. The six active markers strapped to the performer’s waist, feet, hands and head are enough to track a human’s motion accurately and precisely. Then, within our software (or in the receiving package), this rig can be mapped to the creator’s humanoid skeleton.

Having this built-in solver-skeleton ready to be tracked means our BaSix system setup time is minimal compared to traditional mocap systems. You simply walk into the studio once the cameras are set up, strap on your six markers, stand in a “T” pose, press “reset skeleton” in the software, and voila – you’re tracking movement, with data streamed live into your animation package, ready to be recorded.

Interested in finding out more about our motion capture suits and technology? Find out more about our systems and book a demo today.

Exhibitor’s diary: Behind the scenes at SIGGRAPH 2023

We have returned from an insightful few days as an exhibitor at SIGGRAPH 2023. Hosted by the Association for Computing Machinery (ACM), it’s the largest exhibition of its kind, showcasing products and services in the computer graphics and interactive techniques market. 

Celebrating SIGGRAPH’s 50th year, we were delighted to see exciting breakthroughs for motion capture in the animation industry and to present our latest dedicated animation software, Rig Solver, a stand-alone flexible skeleton solver module. 

Conferences are still smaller than they were pre-pandemic, but nothing beats the experience of meeting the next generation of character animators in person, gaining feedback, and catching up with industry friends. Here’s a glimpse of the behind-the-scenes happenings for the Motion Analysis team at SIGGRAPH 2023.

A hive of activity

The ACM event took our team to Downtown Los Angeles, the metropolitan hub that brings the world together from a whole range of creative backgrounds. That description certainly suited SIGGRAPH 2023, where the conference hall was abuzz with flashing screens and vibrant booths premiering bold animation and VFX innovations to excited event-goers. As fast-paced as the nearby Sunset Strip, this space for high-tech exhibitors would somehow transform to host a Taekwondo Championship only a few days later.

This was not our first time exhibiting at SIGGRAPH, so we knew that only a minimal setup was necessary to showcase our Rig Solver software. On other occasions we might need a mighty truss to support our camera system, but this time a few monitors for video tutorials were more than enough. Arriving in LA, our team got straight into planning mode after finding our materials hadn’t arrived the day before. As they say in Hollywood, the show must go on!

Never ones to be deterred by a challenge, a few emails and phone calls later we were up and running before the deadline, thankful for the shipping team’s excellent service. We’re still not completely sure where the equipment went!

On-the-floor opportunities

The day for an exhibitor at SIGGRAPH starts bright and early. We greeted everyone at Booth 245 in the vast hall with our complimentary freebies of candy, stationery, and our popular back-scratchers. Since the return of similar in-person conferences, we bump into recognizable friendly faces from all across the animation industry that bring great community spirit to every event we visit.

This year, we were lucky enough to meet both customers of our software and those only just discovering the world of mocap. It gave us the rare opportunity to go in-depth with experienced users face to face, and then to discuss the history of mocap, its use cases for gaming, film and more, and the work our company carries out. It’s always refreshing to inspire newcomers to become motion capture practitioners, whose paths may cross ours again at future animation events.

Advancements in technology were everywhere—even simple QR code scanning gave us far more time to interact with everybody that stopped by our booth, without the need to print and hand out hundreds of brochures. 

New experiences for all

It was especially exciting for us to have Rig Solver as a brand new product offering. Having both a large monitor and a laptop worked perfectly: we ran an introductory Rig Solver explanation on the former, while the tech-focused crowd was able to interact with the details on the small screen.

We could delve into more advanced features when needed and field a range of useful software-specific questions. We found that, while Rig Solver is a complex piece of software for the tricky task of skeleton calculation, its approachable demonstration and easy-to-use interface made it simple for everyone to understand and engage with.

Rig Solver works as a flexible skeleton solver for animation, able to reposition, translate, scale, and rotate each part of a tricky bone or joint movement within a rig to fit marker trajectories gathered from motion capture data. Developed and released due to popular demand for our Calcium skeleton solving tool, Rig Solver is a stand-alone feature also able to clean data from multiple mocap cameras and marker systems to simplify the post process workflow of character animators.

We hope to be at SIGGRAPH again soon, with 2023’s edition providing even more invaluable first-hand looks into the current technologies and trends fueling the animation industry. It was brilliant to be a part of the festivities and catch up with friends, colleagues, and partners, old and new.

If you’d like to discover more about our Rig Solver module, or if you were at SIGGRAPH 2023 and want to get in touch, please contact the Motion Analysis team today.

Discover new frontiers and our latest launch for animation studios at SIGGRAPH 2023

Animation events come no bigger than SIGGRAPH. As the world’s premier computer graphics and interactive techniques conference, SIGGRAPH 2023 is shaping up to be a major force in moving the needle for motion capture in visual effects and production across the board—from film to broadcasting, gaming, research, art, and design. 

The three-day Los Angeles exhibition is just around the corner, with the full event taking place from August 6 to 10. Our team is ready to embrace exciting breakthroughs within animation mocap, where we will be exhibiting our software’s latest features, as well as our stand-alone flexible skeleton solver module, Rig Solver, to help improve post-production for character animators. 

Experience the grounds of innovation

At industry events of this scale, character animators have the chance to discover even more advanced mocap software, techniques, and applications to render life-like virtual worlds the likes of which we haven’t seen before, and this is the premier event to be a part of the action.

As the driving force behind computer graphics and animation events, SIGGRAPH is celebrating its 50th year in style, chronicling the global community’s past and showcasing the creative minds and technologies fueling the industry’s future. As part of that worldwide community, your event ticket gives you access to invaluable keynote talks, VR experiences, and forums covering hot topics such as augmented reality and the metaverse, AI graphics, 3D animation, and data visualization, including talks from famed studios such as Weta Workshop. 

Alongside a job fair for aspiring visual artists, the animation event provides you with networking opportunities to interact with leading talents in their associated fields in the exhibition hall, and to try out new mocap software for yourself.

Bringing Rig Solver to the stage

We will be exhibiting at Booth 245, ready to showcase our easy-to-use Rig Solver module for post-production capabilities.

As you will know, constructing realistic movement is a challenge requiring speedy and accurate skeleton solving capabilities. Due to popular demand for our ‘Calcium Solver’ skeleton calculation tool within our mocap system, we have now launched Rig Solver as a stand-alone module to simplify post process workflows. 

While skeletons are traditionally moved using keyframing, motion capture records and tracks the realistic movements of actors in real-time onto a mapped rig. Rig Solver works as a flexible skeleton solver that can reposition, translate, scale, and rotate each part of a tricky bone or joint movement within a rig to fit marker trajectories gathered from motion capture data.

Rig Solver’s functionality means marker sets can be replaced and created, and the resulting movement data can be exported in the industry-leading FBX file format, with HTR and C3D file types also supported. Also able to clean up data imported from a range of mocap cameras, setups and marker systems, Rig Solver easily fits into a post-processing pipeline as a complete, cost-effective solution for character animators.

See you there!

Find us at our stand, Booth 245, in the exhibition hall for a chat and to learn all about our motion capture cameras and solutions. We look forward to seeing you there and connecting with the worldwide animation and VFX community, and you can follow all of our updates during the event across our social channels.

If you’re not able to catch us at the event, be sure to explore Rig Solver here.

Introducing Rig Solver: the flexible post-processing skeleton solver for animators

For character animators everywhere, moving a character in realistic ways is a challenge. Optical motion capture (mocap) is the gold standard for speed and quality, where markers are situated on the outside of the subject. The animator, however, wants the motion of the underlying bone structure; skeleton solving is a tried-and-tested way to create high quality movements of animated characters. 

Customers using our mocap system have enjoyed the best-in-business global optimization tool ‘Calcium Solver’, and other animators have often inquired about the ability to apply our tools to other 3D-trajectory marker data. Due to popular demand, we are excited to announce the launch of our new interoperable post-process tool Rig Solver: the data cleanup and mocap rig fitting software.

Mocap Rig Solver is a stand-alone version of our core functions, including Calcium Solver (as well as other skeleton engines), to enhance post-production capabilities for character animation. The software can help you label and clean up data, and contains multiple functions to minimize time-consuming tasks. Simply import data obtained from a range of motion capture cameras, setups and marker systems. Let’s dive into how it works.

Readily generating realistic moving characters

Animators are looking to enhance the detail of human or human-like figures for feature films and animated shorts, or to appear as playable characters in games. Once a static character has been designed and created, the skeleton (or rig) transposed onto it using animation software helps to build out a realistic, moving computer-generated humanoid.

While the skeleton, made up of ‘bones’, has traditionally been moved using keyframing (still the preferred method for some studios), motion capture systems make it possible to move those bones using the real-time actions of actors, rather than frame by frame. This speeds up the process and makes those movements easier to record, while making the resulting characters more realistic. A practical, in-action example could be a character’s ‘signature move’ in a video game, where sports stars are fitted with mocap body suits to track their unique, recognizable movements for their in-game avatar. Once graphics and a mapped rig are aligned, later post-production texture art adds a detailed skin for even greater realism.

We understand that a main difficulty for character animation is solving the positioning of these ‘bones’ in accordance with the trajectories produced by marker systems – essentially directional paths signifying movement through time. Skeleton solving software is necessary to reposition and transform each part of the rig, from bones to difficult joint movements, to fit those marker trajectories.
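As a toy illustration of this fitting problem (a deliberate simplification, not Rig Solver’s actual solver), the simplest case is finding the rigid translation that best aligns a bone’s known marker layout with the markers observed in one frame; the least-squares answer is the difference of the two centroids. The function names and values below are hypothetical.

```python
def centroid(points):
    """Mean position of a list of 3D points."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

def fit_translation(model_markers, observed_markers):
    """Least-squares translation moving a bone's model-space marker
    layout onto the markers observed in one frame of capture data.

    A full solver also estimates rotation (and sometimes scale);
    translation is just the simplest piece of the fitting problem.
    """
    cm = centroid(model_markers)
    co = centroid(observed_markers)
    return tuple(co[i] - cm[i] for i in range(3))

# Two markers on a forearm, observed shifted by (2, 1, 0):
offset = fit_translation([(0, 0, 0), (1, 0, 0)], [(2, 1, 0), (3, 1, 0)])
```

Repeating a solve like this (with rotation included) for every bone on every frame is what turns raw marker trajectories into a moving rig.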

Where Rig Solver steps in

Skeletons can be calculated using the Calcium Solver – a popular core offering ingrained within our Cortex software – but Rig Solver can also create bone structures using properties obtained from a range of other skeleton engines. 

Sometimes, data including marker identifications or trajectories may need to be cleaned up once imported into an interface. There is usually human error in capturing fluid movement for character animation; markers can be accidentally misplaced or knocked loose during performance activities such as simulated fighting, causing gaps in tracked motion from one action to the next.

Using high-quality cameras or multiple markers attached all across an actor’s body can improve the accuracy of obtained data, but Rig Solver software includes functionality to process and clean up data collected from a wide range of motion capture systems with the same power as proprietary solutions. Marker sets can be replaced or created. The resulting movement data can be exported in the preferred FBX file format, industry-leading for humanoid characters, and Rig Solver also supports HTR and C3D file types. 
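One simple piece of this kind of cleanup can be sketched as follows: a hypothetical linear gap-fill for a marker coordinate track where frames were lost to occlusion. Production tools use far more sophisticated methods (spline fitting, rigid-body constraints), so treat this purely as an illustration.

```python
def fill_gaps(trajectory):
    """Fill missing samples (None) in a 1D marker coordinate track
    by linear interpolation between the surrounding known frames."""
    filled = list(trajectory)
    known = [i for i, v in enumerate(filled) if v is not None]
    for a, b in zip(known, known[1:]):
        # Interpolate every missing frame between two known frames.
        for i in range(a + 1, b):
            t = (i - a) / (b - a)
            filled[i] = filled[a] + t * (filled[b] - filled[a])
    return filled

# A marker occluded for two frames mid-capture:
track = fill_gaps([0.0, None, None, 3.0, 4.0])
```

Applied per coordinate axis and per marker, even this naive approach restores a continuous trajectory a skeleton solver can work with.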

Take skeleton solving to the next level

Skeleton engine software has been a popular choice for our animation customers to level up their rig fitting capabilities. We are delighted to offer an adapted form of that core functionality with Rig Solver: a complete, intuitive, time-saving and cost-effective solution which you can implement into your post-processing pipeline today.

If you would like to discover more about our Mocap Rig Solver, we’d be happy to help. Book a demo with our team today.

How to get into a career in game development

Game development is as vast a landscape as the boundless worlds, characters, and globally connected communities that production teams create. Whether online, on mobile, console, or through a VR headset, the process of creating applications for gamers involves a large team of talented engineers, designers, producers, and much more. 

It is a competitive field, but one which requires both technical and creative minds to take initial concepts into a fully realized gaming experience. Breaking into a game development role offers a rare and exciting behind-the-scenes hand in pre-production, and can springboard a career within motion capture: a technology that continues to evolve rapidly and assist multiple industries.

An industry on the lookout for talent

Greater gaming accessibility has driven a rise in players around the world. This makes it a lucrative market; PwC has estimated that the global gaming industry could be worth $321 billion by 2026. Greater demand means greater need for talented production team members.

Much like any video game, a career in game development is engaging, challenging, and an organic learning experience all in one. There are misconceptions about what these jobs entail; knowing the ins and outs of various computer software, applications and website builders, or being well-versed in coding languages, certainly suits those with computer science backgrounds. But companies also need people with artistic or theatrical abilities and interests, who may be unaware of these adjacent ways into the industry.

Many colleges, technical institutes and universities offer skill-development courses that can progress into a degree or career in games development. These environments boost creative endeavors – story building or character development – and teach the practical and technical requirements needed to be a well-rounded asset to any games production company. 

How 3D animation can open doors

These skills, whether individual or combined, are transferable to motion capture jobs. 3D animation is one medium through which game developers utilize mocap to craft imaginary worlds. It goes way beyond the stereotype of a person moving around in a lycra suit, and requires many hands and brains to bring imaginations to life.

Actors and directors

The body suits, covered in 3D tracking sensors, need to be filled by the mocap actor. Responsible for real-time body movements and facial expressions, they provide the actionable human backbone for model skeletons which later become animated graphics. Now, more lightweight and affordable motion capture systems are available without the need for a full suit. 

Much like any film, the mocap director is responsible for ensuring the actors are well informed to perform actions correctly, but also for overseeing all teams during day-to-day set operations, mocap camera setup, and the processing of data for the post-production animation team.

Technical teams

Animation teams require pre-visualization model skeletons for characters before movement tracking can begin, so that computer-generated imagery (CGI) can be superimposed onto them during the post-production stage. Rigs need to be set up to determine how many markers are needed, depending on levels of movement. The mocap technical manager or capture technician is responsible for ensuring that the tracked data is captured correctly, by calibrating the markers, cameras and rig. The Cortex system consists of two skeletons – one tied to the actor’s mocap markers, the other matching the animator’s rig – and can solve skeletons for a number of body or facial structures.

The more motion capture cameras in place, the more accurate the captured movements are. Given the high spec of this kit, the role of mocap camera operator is paramount for handling the equipment safely and efficiently.

Post-production team

Any actor’s sensor-tracked data is transformed from moving geometric shapes to animated special effects in post-production. 

3D animators are responsible for taking the skeletons generated in pre-visualization and building out the 3D graphics – humans, animals or monsters – onto them, bringing them to vivid life. Even more detailed, texture artists are responsible for making CGI surfaces look realistic, whether that be surfaces of an alien world or a bear’s fur. Animation software, such as the popular Maya or MotionBuilder, is used for computer processing techniques: real-time modeling, rendering, and texturing. 

Footage editors are also required to fit animated clips together with the director, creating the cutscenes players can view in-game. 

A practical example of how motion capture for 3D animation works can be seen below, in footage from the creation of animated gameplay for Titanfall. Notice how the actors are fitted with sensors and surrounded by tracking cameras, production workers, tech operators and directors, all working together to create the final product.

Every visual aspect is thought out by specialist game art designers: landscape design, building concepts and architecture, and character voices and outfits. Producers are responsible for the slick, collaborative organization of each department; leadership roles can be learned on the job throughout a career in games development, alongside creative endeavors. Whether applicants bring a practical knowledge and passion for computer animation or for concept art, all contribute to the production.

Of course, this is not limited to a gaming context. Film and television production crews look for similar mocap capabilities for 3D animation. A few prominent examples of this work include Gollum in the Lord of the Rings franchise, or for dragon-riding in Game of Thrones, with the latter building on the first’s cutting-edge mocap techniques. 

The sky’s the limit

Augmented Reality and Virtual Reality require the construction of computer-generated worlds by 3D animators. Creative directors and concept artists are all instrumental in bringing the new frontier of the metaverse to life. Capture technicians and mocap software operators are needed for careful drone tracking, enhancing sport performance, adapting industrial facility training and ensuring safety, crafting virtual environments for broadcasting, and so much more.

There couldn’t be a better time to build a career in game development or motion capture. Equipment and software are becoming more affordable and adaptable to many different industries, opening up endless career options for the future. It may be tough to break into the burgeoning gaming industry, but for technologists and creatives alike, the possibilities really are endless – some of which we cannot even imagine yet.

Ones to watch: the leading motion capture trends to follow in 2023

A man on a horse, then rotoscoping, then Gollum: trends do not so much come and go within motion capture, but continue on an upward trajectory. Movie magic, owed to the growing capability of visual effects since last century, was just the start for 3D animation and mocap’s rapid advancement. 

Since then, high-quality cameras, expansive analytical software, and lightweight autonomous vehicles have all contributed towards innovation in the space; the global 3D motion capture market hit an impressive US$193.1 million in 2022. And now, accurate motion mapping doesn’t only help to craft otherworldly characters and worlds for movies and gaming experiences: healthcare, sport performance, product development, and the military are all sectors growing their mocap abilities to better our understanding of movement through AI and robotics.

Here are prevalent motion capture trends putting cutting-edge technology into practice, looking to spark creative endeavors and boost scientific discovery this year and beyond.

Enhanced drone tracking to enable safe work

Drones are not just remote-controlled airborne craft for filming footage over rugged landscapes or above sports stadiums; they are also autonomous vehicles able to traverse ground-level (or subterranean) environments. Currently used mainly by private researchers, among other critical use cases, drones require precise location accuracy during the operations that researchers and other professionals conduct.

To ensure this precision, mocap can be used in the testing phases of drone tracking, allowing the vehicles to perform remotely via GPS. An operator can follow the movements of the attached emitters using advanced motion cameras, even when they are obscured by surfaces or objects. This is essential when carrying out dangerous safety checks that pose great risks of injury, including disaster relief, identifying leaked gas dispersion, or inspecting faulty equipment. Already used by energy companies, drones and their tracking components are also fast becoming more lightweight and flexible for different engineering needs and maximum performance.

The rise of deepfake in entertainment

Deepfake is often mistaken for a form of motion capture, but it is a machine learning tool, whereas mocap is a visual effects technology that tracks real-time movement. Despite being under fire for nefarious uses, such as superimposing different identities onto real people, deepfake’s positives for the film industry and biometrics can thrive with increasing regulation and with generative adversarial networks (GANs) able to detect fake images, taking it far beyond a facial-mapping trick.

Deepfaking has already been used for de-aging special effects (The Irishman) and for replicating characters performed by late actors (Star Wars). But its future relies on collaborating with motion capture technology, which can enhance these continuity efforts by recording actors’ movements to make whole deepfaked entities more realistic, beyond just facial expressions. Hollywood may adopt this ‘meeting in the middle’ approach, an innovation in motion capture backed by famed bodysuit artist Andy Serkis.

AI and mocap revolutionizing healthcare

Motion capture wearables are by no means limited to acting use. In landmark studies, researchers across University College London and Imperial College London are instead combining data collected by bodysuits with AI algorithms to help understand movement-related conditions, including dementia, muscular dystrophy, stroke, and Parkinson’s. 

Mocap systems help researchers to monitor the tendencies and patterns of biomechanical movements, as the software can create digitally mapped ‘twins’ (rendered representations of patients) for further data analysis. The resulting insights assist in tracking the progress of rehabilitation techniques, or in predicting future detrimental effects across a variety of conditions associated with bodily motion.

Crafting more efficient virtual productions

Filmmaking was rife with problems caused by the pandemic, namely the lack of production equipment supplies and mass crew shortages for shoots worldwide. But the knock-on effect has seen further investment in virtual production: ‘LED volumetric’ capabilities can take mocap-suit actors to any conceivable virtual location using large-scale screens.

Live action can be shot in real time against these high-definition backdrops superimposed with limitless computer generated graphics. Artists are able to craft stunning worlds (on earth or otherwise) in a remote studio for smaller teams, all while curbing logistical issues and reducing carbon emissions associated with the movie industry. 

Mocap to enter the metaverse

Not only is cloud technology seeing 3D character animators working collaboratively and remotely online, but mocap is being used to further virtual and augmented reality. The metaverse marks the next digital frontier, where the captured movements of singers, dancers, actors and other entertainers can populate an interactive virtual platform, a place where avatars (representations of ourselves) work together, shop, or experience live music and dramatic events. It's a reality beyond our current lived reality, and an exciting prospect to see come to life through mocap.

Considering the immense motion capture advances above, the technology must perform reliably across a host of use cases: whether for character animation, drone tracking, or otherwise, accurate motion capture relies on robust cameras and marker kits. Our expanded range of upgradable BaSix mocap cameras provides advantages for various locations and services, integrating with Cortex software. As these mocap trends kick into gear, we're looking forward to seeing how we can assist our customers to revolutionize mocap use across the globe.

See how Bournemouth University puts Motion Analysis’ future-ready mocap into action or get in touch with our team to discover our range of solutions for animation, gaming, broadcasting, industrial work, and more.

Why Bournemouth University uses Motion Analysis to nurture animation’s next generation

The Customer

Bournemouth University is recognized as one of the foremost animation institutions in the United Kingdom. Under the leadership of Zhidong Xiao, Deputy Head of Department at the National Centre for Computer Animation, the department focuses on three main practice areas: teaching the full pipeline of motion capture technology to inspire student animation projects; exploring new mocap usage for research councils; and helping creative filmmakers and artists through studio space and advanced equipment.

The Problem

The university has experimented with motion capture systems since 2003; its original fixed capture space was an ample-sized classroom primarily used for teaching character animation, equipped with Motion Analysis' Raptor 2 optical motion capture system. By 2010, however, the team was developing a new, larger studio facility. The new space could accommodate a greater number of cameras, and there was an increasing need for advanced data processing and motion retargeting to suit larger-scale projects.

The Solution

There is a growing expectation for advanced detail in 3D character animation, which has spurred mocap technology to similarly scale in precision and sophistication. Whether it's creating robots, mythical creatures or cartoon figures in games, films or television shows, facial and bodily movements are becoming increasingly lifelike, creating visual experiences like never before.

To track the movements of actors fitted in bodysuits complete with markers, passive optical systems are one option for accurately capturing motion data. Able to track the simultaneous motions of objects and humans alongside video footage, the marker movements synchronize with Motion Analysis' Cortex software, which helps to map the skeleton that will later become a 3D animated character brought to vivid life. The tracked real-time data makes completing re-dos or small edits in post-production far simpler, keeping a record of the actor's motions before the computer graphics have been superimposed.

For animation educators like Zhidong's team, that cross-collaboration between software and equipment provides a greater advantage in teaching the full scale of animation methods to students. The team now utilizes 16 of our fixed 4K Kestrel cameras for accurate data capture. Student classes remain a focus, but the newer space was also designed to better craft virtual reality sets for television shows, to film music videos, and to develop special effects for the silver screen.

The interoperable mocap system opens the door to brand new experiences in the studio, benefiting community projects, student work and artistic expression. Bournemouth University's media department is set to carry on its work in these areas, using its animation studio space and Motion Analysis system to develop machine learning and training techniques for industries choosing to adopt mocap technology's many advantages.

Want to discover more about Bournemouth University's collaboration with Motion Analysis? Catch up on the full story in our case study.

Looking back on Motion Analysis’ highlights of 2022

As one year draws to a close and another one begins, it’s only natural to reflect and think about all the progress that was made in 2022. It’s equally natural to look ahead and make plans for the future.

For Motion Analysis, 2022 was a rather momentous year. We celebrated 40 years of helping our many different clients leverage mocap software to bring their research to life and to enrich their creative projects. Over the course of the four decades we’ve been in business, we’ve had the opportunity, and the incredible privilege, to use motion capture in ways that we could never have imagined when we started out. 

Below, we unpack a few of the other highlights in mocap that made 2022 a great year. 

We moved on from the pandemic

When we look back on 2022, a big Motion Analysis highlight is the fact that things are really starting to return to normal. During the pandemic, most industries experienced supply chain issues, which affected their ability to work without disruption. As Motion Analysis designs, engineers and builds our products in California – everything we do is proudly Made in America – we were able to limit the impact of these supply chain constraints on our customers. This might be the reason why we've seen a strong increase in US sales this year. While we weren't hugely affected by supply chain issues, we have been very strategic about keeping an eye on our supply chains so that we can address any issues before they affect our ability to build and ship the products our customers want and need.

Looking more broadly, after an understandable slowdown during the peak of the pandemic, it's great to see that the industry has rebounded to pre-COVID levels. This is a sign that research and development have resumed and appear to be thriving once again.

We refreshed our brand

In May, our website got a much-needed facelift because we wanted it to better align with our commitment to innovation and quality. By streamlining our logo, modernizing the Motion Analysis color palette and adding fresh imagery, we believe that we’ve injected new vitality and relevance into our brand and we hope that our new branding conveys the energy and passion that we value so much as a team. If you haven’t seen it already, take a look: 

We expanded our product line

Keen to make the work we do more affordable, we launched the BaSix range of cameras.

This family consists of three "light" cameras (in ascending order of capability): the BaseCam, the Icefall, and the Lhotse. These cameras are used with active marker rigs (BaSix markers) and either BaSix Go software or our premium solution, Cortex. The range is designed to help smaller 3D animation and gaming studios start their own mocap journeys, with the entry-level system (using BaseCams) starting at $20,000.

We also debuted the Firefly Active Marker Kit. A complete active marker tracking set, Firefly is ideal for data scientists, private researchers or anybody carrying out investigative work with drones, because these markers are so small and lightweight.

Finally, we launched the latest iteration of our Cortex software – Cortex 9.2. This update includes new features and has been improved based on insights from our customers and changes in the industry.

We attended a range of industry events

After two years without in-person events and conferences, it was great to get back into the swing of things in 2022. We had the opportunity to attend several events this year and got the chance to interact with our clients and colleagues once again. In March, we hosted a booth at the annual Game Developers Conference (GDC) in San Francisco. We sponsored the Career Award at the American College of Sports Medicine (ACSM) Annual Meeting and World Congresses on Exercise is Medicine and Basic Science of Exercise and Vascular Health, held in San Diego. Of course, the North American Congress on Biomechanics (NACOB) in Ottawa was a highlight, as was the International Symposium on 3D Analysis of Human Movement, which took place in Tokyo, Japan.

We worked with new clients

Other Motion Analysis highlights include welcoming several new clients to the Motion Analysis family. In 2022, we worked with some of the world’s top universities, video game businesses, medical facilities, sporting brands and even leading aerospace companies. Each new client brings something different to the table and we love getting the chance to use our hardware and software to bring their ideas to life and answer their research questions. 

We’re ready for what lies ahead in 2023!

For us, mocap in 2023 is all about staying innovative and looking for ways to enhance the work we do so that we can better support the research and the creativity of our clients. As our technology makes its way into other industries, we hope to attend a more varied range of conferences and events this year. We are also excited about what we can achieve under the guidance of our new president, Brian Leedy, who stepped into the role in December. If you want to keep up with all things mocap in 2023, please follow along on our journey in the new year by subscribing to our newsletter. Or you can follow Motion Analysis on LinkedIn and Twitter.

Five (more) things you might not know about Cortex motion capture software

In July 2021, we debuted Cortex 9, which enabled users across a wide range of industries – from researchers and scientists to engineers, biomechanics professionals and creatives – to use motion capture more efficiently and effectively. 

And now, we're taking things up a notch with Cortex 9.2. The latest version of our Cortex software is all about expanding digital integrations, particularly with the Ultium EMG and IMU systems from Noraxon. Before Cortex 9.2, users needed extra components – including additional hardware, extra cables and a wire for each channel – to collect data from a Noraxon system in Cortex. This could get quite messy and add complexity to a setup. Now, the setup simply requires a USB cable from the Noraxon base unit to the PC.

In addition to this, we have also expanded the types of Delsys sensors that can be used. The latest version of Cortex is compatible with two more sensor types, doubling what was previously available. And we’ve added additional compatibility with reference video cameras and made improvements to digital force plate integrations for TecGihan and Kistler. The latest version of Cortex also includes several bug fixes, as is standard with any new release. 

A little while back, we showcased several Cortex features users might not know about. With the recent release of Cortex 9.2, we’re adding to this list. Take a look.

Capture Inspector 

Captures can consist of many file types covering tracks, raw camera data, analogue data, marker sets and more. In addition, when users analyze their captures, they often copy data from one computer to another. All of this increases the likelihood of 'losing' certain files. With the Capture Inspector assistant panel you can easily identify when data has been lost or mixed up, then clean up and rename files so that they're easier to find down the line.

Golden Templates

If you’re using the same marker set for every subject, you can keep adding information – things like height, size, shape – and the Golden Template will keep learning. This means that rather than having to manually identify each and every subject, Cortex 9.2 will automatically add the subject’s linkage lengths and range from a Golden Template. So users no longer have to name each marker individually, making it possible to automatically label a new subject when they enter. 
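Cortex's actual matching logic is internal to the software, but the general idea described above, identifying a subject by comparing measured linkage lengths against stored templates, can be illustrated with a hypothetical sketch. All names, units and tolerances below are assumptions for illustration, not Cortex's API or algorithm:

```python
def match_template(measured, templates, tolerance=0.05):
    """Hypothetical sketch of template-based subject identification.

    Compares a subject's measured linkage lengths (marker-to-marker
    distances, in meters) against stored "golden" templates and returns
    the best-matching template name within a relative tolerance, or
    None if nothing matches closely enough.
    """
    best_name, best_err = None, float("inf")
    for name, lengths in templates.items():
        if len(lengths) != len(measured):
            continue  # template covers a different marker set
        # Mean relative deviation across all linkages.
        err = sum(abs(a - b) / b for a, b in zip(measured, lengths)) / len(lengths)
        if err < best_err:
            best_name, best_err = name, err
    return best_name if best_err <= tolerance else None

# Example templates (hypothetical segment lengths, e.g. thigh/shank/foot):
templates = {
    "subject_A": [0.42, 0.40, 0.28],
    "subject_B": [0.47, 0.44, 0.30],
}
print(match_template([0.46, 0.44, 0.31], templates))  # subject_B
```

A real system would match against many more linkages and fold in range-of-motion data, but the principle is the same: once the measurements fit a known template, every marker can be labeled automatically.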

Moving Origins

One of our exciting new post process analyzing tools allows you to make adjustments to characters that might not quite be in the volume or at the angle you want them to be in. With Moving Origins, you can simply slide the character into the correct space, adjusting the origin of a live piece of equipment.

Normalization

When you're comparing person A to person B, or someone wearing shoes versus someone barefoot, you need a sense of what is 'normal' to use as your baseline. This entails plotting a movement through a cycle from 0 to 100%. With Cortex 9.2, you can normalize the amount of time it takes to complete a movement so that it's easier to compare one cycle of data to another. Once you've worked with 20 or 30 subjects, you can establish an average and then identify any differences or deviations from the 'normal'.
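Cortex handles this normalization for you, but the underlying idea, resampling each movement cycle onto a common 0–100% timebase so cycles of different durations line up, can be sketched in a few lines of Python. The data here is synthetic and this is not Cortex's actual implementation:

```python
import math

def normalize_cycle(signal, n_points=101):
    """Resample one movement cycle onto a 0-100% timebase.

    `signal` is a sequence of samples covering a single cycle (e.g. one
    gait stride). Linear interpolation yields one sample per percent of
    the cycle, so cycles of different durations can be compared point
    by point.
    """
    m = len(signal)
    out = []
    for i in range(n_points):
        # Fractional position of this output sample within the input.
        pos = i * (m - 1) / (n_points - 1)
        lo = int(pos)
        hi = min(lo + 1, m - 1)
        frac = pos - lo
        out.append(signal[lo] * (1 - frac) + signal[hi] * frac)
    return out

# Two knee-angle-like cycles captured at different frame counts
# (synthetic data for illustration):
fast = [math.sin(math.pi * t / 79) for t in range(80)]    # 80 frames
slow = [math.sin(math.pi * t / 139) for t in range(140)]  # 140 frames

fast_n = normalize_cycle(fast)
slow_n = normalize_cycle(slow)
# Both now have 101 samples (0%..100%), so point-by-point comparison
# or averaging across many subjects is straightforward:
mean_curve = [(a + b) / 2 for a, b in zip(fast_n, slow_n)]
```

With every cycle on the same 101-point timebase, building an average curve across 20 or 30 subjects and flagging deviations from it becomes simple elementwise arithmetic.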

Workflows

With the Workflows panel, introduced with Cortex 9.0, users can quickly automate repetitive tasks. This saves time and delivers greater consistency between users. Workflows can be set up to include any number of functions in Live Mode or Post Process mode and are ideal for users who don’t have scripting or coding experience. Once a workflow is created, it can then be saved and applied to different capture sessions and by different users to maintain a consistent protocol.

If you're in need of a little help, the full Cortex 9.2 manual and QuickStart guide can be launched from the HELP tab. And when using the software, clicking the '?' on a page brings up a context-sensitive Panel Help bar.

Want to see one of the many new Cortex 9.2 motion capture software features in action? Check out this video of a workflow designed to get you ready for subject collection in just a few clicks.