Motion capture suit, camera & action! What goes into a mocap performance?

There’s more to mocap than rolling around in a lycra suit!

We’ve already looked at the acting skills needed for a successful mocap performance. Now let’s dive into the technical side of things to better understand each piece of tech that makes a performance work.

1. The motion capture suit

The motion capture suit is really just a lycra outfit that holds the markers in place so the actor can move naturally without feeling inhibited. But the markers attached to these suits are the real stars of the show.

These retro-reflective 3D tracking dots are small spheres positioned strategically on the performer to record their real-life movements. Imagine the markers as computerized puppet strings – pulling the skeleton of the character through frames that create animated motion. 

2. The cameras 

The retro-reflective markers are tracked by specialized motion capture cameras. The more cameras you use, the more complete and accurate the outcome will be.

Cameras such as the Kestrel produce marker coordinate data rather than an image. They detect only infrared or near-infrared light and are able to pass information at a much higher frame rate than a typical television camera could. 

The Kestrel 4200 is one of the best pieces of hardware out there when it comes to mocap tech, and is an excellent investment for large and complex mocap systems. But if you’re working on a limited budget then the Kestrel 300 will still deliver a high quality motion capture.

Related: Choose the motion capture hardware that’s best suited for you

3. The software

An animation studio, game maker or filmmaker will use professional 3D animation software – Autodesk’s Maya is one of the more popular options – which provides the modeling, rendering, simulation, texturing, and animation tools needed to turn captured motion into a finished character.

4. The rig

Before tracking movement for animation, animators need to have a basic skeleton mapped out for the character they are creating. This skeleton will help them to determine how many markers they need to use, and what levels of movement they need to track. For example, an acrobatic dancer who is going to be doing backflips will require more markers than a rigid-limbed robot that stomps around. 

The cameras and markers capture the motion, and the data driving the character’s skeleton rig is sent back to the animation program, where the character is finished with fur, clothing, or skin.

Our Cortex system is capable of solving the skeletons of any structure with any number of segments, including bipeds, quadrupeds, props, facial animation and more.

Because most humanoid characters have similar skeletons and move in similar ways, it’s possible to develop marker sets that can be used on a number of skeletons. 

Our BaSix Go software has a built-in, constrained and tracked human skeleton at its core, which works for almost all humanoid characters. The six active markers strapped to the performer’s waist, feet, hands and head are enough to track a human’s motion accurately and precisely. Then, within our software (or in the receiving package), this rig can be mapped to the creator’s humanoid skeleton.

Because this built-in solver skeleton is ready to be tracked, our BaSix system’s setup time is minimal compared to traditional mocap systems. Once the cameras are set up, you simply walk into the studio, strap on your six markers, stand in a “T” pose, press “reset skeleton” in the software, and voila – you’re tracking movement, with data streamed live into your animation package in real time, ready to be recorded.
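To make the streaming step concrete, here is a minimal, purely illustrative sketch of what consuming a live skeleton feed could look like. The JSON-over-UDP format, port number and joint names are hypothetical stand-ins for this example only – they are not the actual BaSix or Cortex streaming protocol.

```python
# Minimal illustration of consuming a real-time skeleton stream.
# The JSON-over-UDP format, port, and field names here are hypothetical --
# they are NOT the actual BaSix/Cortex streaming protocol.
import json
import socket

def listen_for_skeleton(port: int = 9000) -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", port))
    while True:
        packet, _ = sock.recvfrom(65535)
        frame = json.loads(packet)          # e.g. {"frame": 1024, "joints": {...}}
        hips = frame["joints"]["hips"]      # hypothetical joint name
        print(f"frame {frame['frame']}: hips at {hips}")

if __name__ == "__main__":
    listen_for_skeleton()
```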

Interested in finding out more about our motion capture suits and technology? Find out more about our systems and book a demo today.

In the field: a chat with Thomas Kernozek, Professor, University of Wisconsin-La Crosse

After a long-running fascination with athletics and injury mechanisms, Prof. Thomas Kernozek has implemented many motion capture systems to fuel his work in physical therapy and the study of movement-related conditions. Using two systems at the University of Wisconsin-La Crosse, where he is a professor in the Health Professions—Physical Therapy faculty, Thomas gives his students valuable experience with advanced motion capture technology while gathering evidence-based data for his own clinical research.

We caught up with Thomas to discover more about his specializations, his experience using real-time feedback, and the future mocap features that can help nurture the next generation of talent in sports medicine biomechanics.

How did you get into biomechanics in human movement, and what inspires your work? 

Like many people who grew up being active and enjoying many forms of sport and exercise—or becoming injured!—I was driven to understand why some injuries occur and how they are examined in a clinical setting. That led to a career in biomechanics, where my research specializes in some common lower extremity injury types: anterior cruciate ligament (ACL) injuries, patellofemoral joint injuries and Achilles tendon injuries.

Physical therapy was once a Bachelor’s degree here in the US, but the professional knowledge base has changed drastically since. It became a Master’s degree when I was hired at La Crosse in 1996, and I now teach and work alongside entry-level clinical students in the doctoral program in physical therapy. Our university laboratory spaces allow our students to engage fully with robust technology, which really helps them develop their own perspectives on how they understand and treat movement-related injuries. By using our mocap systems in my teaching and scholarship, I always aim to shape students into scholarly clinicians.

How did you discover Motion Analysis, and why did you choose it for your own clinical research?

I discovered Motion Analysis while visiting other universities and medical institutions during a sabbatical. When I was “growing up as a biomechanist”, video technology was just in its beginning stages and the use of high-speed film was phasing out. Before joining La Crosse I’d used an earlier video-based motion capture system that did not have the same capabilities as the Motion Analysis system, so I jumped at the chance to implement this equipment once we had opened the Strzelczyk Clinical Biomechanics Laboratory in our new Health Science Center.

Its compatibility is a huge plus, as the software and hardware can be upgraded and integrated with existing systems easily. Older Motion Analysis camera models we purchased are still operational and compatible with our software, and the overall evolution of these systems has been great to see. We now use mostly Kestrel cameras and Cortex for the two systems we have set up in two laboratories—one surrounding an instrumented treadmill—for examining physical activities with human subjects and using the data gathered to inform computer models that estimate joint and soft tissue loading.

Your work at the university covers many roles, including Director of the LaCrosse Institute for Movement Science, so how do Motion Analysis systems help you practically achieve your goals? 

We work with collegiate athletes in jumping sports here at the university, including volleyball and basketball. We’ve also targeted female athletes because ACL injuries and related maladies are more prevalent among those performers. We also study a lot of runners. Ultimately, we want to prevent these athletes from getting hurt.

Our students get practical first-hand access to advanced mocap in classes, so it is used in teaching and research, which is somewhat unique to our physical therapy curriculum. The mocap cameras help identify, measure and track movement, which supplies evidence to inform answers to clinical research questions related to physiotherapy.

One thing we’ve done with Motion Analysis systems is use musculoskeletal models to measure Achilles tendon stress or patellofemoral stress related to running performance. These data are particularly useful for clinical research, as we attempt to drill down to the anatomical structures and tissues to examine how varied athletic movements (such as stride patterns) affect loading. Excessive loading may be associated with the performer’s pain symptoms. We have also used biomechanics within a motor control paradigm to provide augmented feedback to participants to alter their movement performance.

What are your favorite projects involving Motion Analysis technology?

A notable project involved test subjects with patellofemoral pain (pain around the knee cap) performing squats. After a physical therapist made sure that these participants met certain criteria following a clinical assessment for patellofemoral pain, we streamed their motion capture data into a musculoskeletal model while they were performing squats. The load between the patella and the femur during the exercise was displayed as augmented feedback. Participants were able to respond to this feedback and alter their squat performance to reduce loading.

And finally… What excites you most about the future of biomechanics in sports medicine? 

Our capabilities are still evolving, and mocap technology not only shapes our understanding of therapeutic exercise and injury, but contributes to the medical literature in the physical therapy profession. Computer modeling approaches informed by Motion Analysis data help to build a clearer picture of injury mechanisms during movement, and we’re excited to see modeling and motor control capabilities grow quite rapidly.

Wearables and other portable systems are another exciting area that can inform clinical practice and provide testing opportunities outside the lab. From a teaching point of view, we’re proud to show our clinical students the power of these new technologies and how they may open opportunities for them. We’ve had students go on to PhD study, residencies and clinical practice where they are adept at using motion capture.
 
If Thomas’ use of innovative mocap technology has inspired your own biomechanics testing, talk to our team to find out how Motion Analysis can help you achieve your own goals.

Tech tips: How to do camera calibration for Cortex and BaSix

Before any motion capture project begins, a thorough calibration process must take place. No matter which cameras you use, making sure that they can see the markers and are synchronized properly has a direct effect on the accuracy of your captured data. Plus, when these initial basics are completed successfully, the next stages of using mocap software run more smoothly.

Calibrating cameras for our Cortex and BaSix software is a quick step-by-step process. Here’s how it works, with some handy insider insights about our advanced features. 

Simple setup, rapid results

Because every camera has its own specifications, mocap system operators need to align multiple cameras properly in order to track movement effectively. Lenses may be readjusted, and cameras situated in places where they are likely to get knocked may need to be repositioned, so it’s best to perform a fresh calibration whenever this happens to ensure high-quality data capture.

Luckily, calibration typically takes only a couple of minutes, although this can depend on the number of cameras you are using, the size of the capture volume, and whether the cameras are fixed in place. The precision of the captured movement data is also at its best when the system is used shortly after calibration is completed.

Camera calibration explained in two simple steps

Calibrating cameras for Cortex and BaSix is a two-stage process requiring just a couple of pieces of equipment.

  1. Map the space using the L-frame

This is a simple L-shaped apparatus complete with four markers used to establish the capture space’s coordinate system. 

During initial setup, the corner marker – which defines the volume origin – is typically placed at the center of the intended capture space. If minor adjustments need to be made, it is simple to run “spot checks” of each camera within the software to make sure the cameras can see only the L-frame’s four markers before moving on to the next step.

It’s a common misconception that all cameras have to see the L-frame – it is better if most can, but that may not be possible in an extra-large space.
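As a rough illustration of the geometry involved, the sketch below shows how an origin and a set of axes can be derived from an L-frame’s corner and arm markers. The marker layout, units and axis conventions are assumptions for the example only – Cortex handles this internally.

```python
# Sketch of how an L-frame can define the capture volume's coordinate system.
# Marker layout and axis conventions here are illustrative, not Cortex's internals.
import numpy as np

def frame_from_lframe(corner, long_arm_end, short_arm_end):
    """Return the origin and a 3x3 rotation whose columns are the volume's X, Y, Z axes."""
    x = long_arm_end - corner
    x /= np.linalg.norm(x)            # X axis along the long arm
    y_raw = short_arm_end - corner
    z = np.cross(x, y_raw)
    z /= np.linalg.norm(z)            # Z axis perpendicular to the L (pointing up)
    y = np.cross(z, x)                # Y axis completes a right-handed frame
    return corner, np.column_stack([x, y, z])

# Example with made-up marker coordinates (millimetres):
origin, axes = frame_from_lframe(np.array([0.0, 0.0, 0.0]),
                                 np.array([500.0, 0.0, 0.0]),
                                 np.array([0.0, 250.0, 0.0]))
```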

  2. Standardize measurements with the wand

The second stage involves dynamic calibration using a handheld wand, which has a standard 500 mm spacing between the markers at each end. This known length provides a reference for the cameras to map out the entire capture space.

As the wand is waved through the cameras’ field of view, the cameras measure the known length between the wand’s end markers throughout the volume, and then correct themselves according to those measurements.

When all the parameters – including focal length, camera orientation, L-frame measurements and wand lengths – are input correctly and the calibration converges, the mocap system can be used.
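To give a feel for why the known wand length matters, here is a minimal sketch that compares reconstructed wand lengths against the nominal 500 mm spacing. The data and the scale-ratio check are illustrative only and are not how Cortex computes its calibration internally.

```python
# Sketch of the idea behind wand calibration: the known 500 mm marker spacing
# acts as a reference length. Figures and function are illustrative only.
import numpy as np

NOMINAL_WAND_LENGTH_MM = 500.0

def wand_scale_ratio(end_a: np.ndarray, end_b: np.ndarray) -> float:
    """Average reconstructed wand length vs. the nominal length, as a ratio."""
    measured = np.linalg.norm(end_a - end_b, axis=1)   # one length per frame
    return float(measured.mean() / NOMINAL_WAND_LENGTH_MM)

# Example: 3D positions of the two end markers over a few frames (mm).
a = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]])
b = np.array([[499.0, 0.0, 0.0], [511.0, 0.0, 0.0]])
print(f"scale ratio: {wand_scale_ratio(a, b):.4f}")    # ~1.0 means lengths agree
```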

Watch our quick how-to video

Advanced Cortex features

The initial setup above applies to customers using BaSix software, while the following extra calibration features are available within Cortex:

  1. Gain feedback on camera status

For both the L-frame and wand steps above, Cortex’s 2D view identifies how many centroids a camera sees. This makes it easy to check that the cameras can see only the L-frame’s four markers, or the wand’s three markers. When a camera has sufficient wand data for lens calibration, its 2D view in Cortex changes color from white to green as a form of visual feedback. Similarly, for both steps, camera tabs change color to indicate whether a camera is uncalibrated, ‘seeded’ (if it sees the correct markers), or fully calibrated.

  2. Remove the need to restart with Update Calibration

Restarting an entire calibration doesn’t take too long, but the Update Calibration tool requires fewer steps, amending camera calibration information according to pre-calculated values. It is especially helpful when the volume is an odd shape that makes placing the L-frame properly difficult, or in a bigger capture space where not every camera can see the L-frame’s markers.

  3. Reduce residuals fast using Quick Refine

As the wand’s markers are reconstructed in 3D, Cortex reports a ‘residual’ for each camera – a measure of how closely that camera’s view agrees with the reconstructed 3D marker positions. These residuals are good indicators of calibration success: you are looking for a low 3D residual average across each camera during calibration.

These 3D residuals can increase over time after the initial calibration – a knock, for example, can vibrate or shift a camera enough to disturb the equipment. If you’re rushed for time, rather than completing the full calibration process again, Cortex allows for Quick Refine. Using any markers in the volume – including those attached to a subject, for instance – you can record the mocap actor covering the whole space while performing a Quick Refine, and the system will then update the originally saved calibration values accordingly.
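The snippet below is a purely illustrative way to think about this step: average the residuals reported per camera and flag any camera whose average has drifted above a tolerance. The data structure, values and threshold are assumptions, not Cortex output.

```python
# Illustrative only: averaging per-camera residuals and flagging cameras whose
# average has drifted above a tolerance, which is when Quick Refine is handy.
# The data structure and threshold are assumptions, not Cortex output formats.
from statistics import mean

residuals_mm = {                      # hypothetical residuals per camera, per frame
    "cam01": [0.30, 0.28, 0.35],
    "cam02": [0.95, 1.10, 1.05],      # this camera may have been knocked
}
TOLERANCE_MM = 0.8

for camera, values in residuals_mm.items():
    avg = mean(values)
    status = "OK" if avg <= TOLERANCE_MM else "consider Quick Refine"
    print(f"{camera}: average residual {avg:.2f} mm -> {status}")
```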

  4. Personalize the process using Custom Calibration

Within Cortex’s Custom Calibration wizard in Live Mode, you can toggle both general settings (e.g. frame rate and shutter speed) and individual camera settings (e.g. threshold, brightness, min/max lines).

Custom calibration settings are controlled by a checkbox which, when enabled, applies the user-defined settings when the calibration process is started, and saves them at the end so they are applied automatically the next time a calibration is started. This is useful when different camera settings are needed for calibration than for collection – for example, using lower frame rates to save time and collect less data.

Cortex’s settings also allow you to ‘mask’ areas in the 2D view of any given camera, which filters out any ‘noise’ such as bright lights or reflections that may distract from the markers.

  5. Reuse collected Raw Files

Raw Files are saved during the two-step L-frame and wand calibration process, as ‘calframe’ and ‘calwand’ respectively.

If a problem causes the calibration to diverge (meaning the cameras’ spatial positions cannot be resolved), these files can be used to recreate the calibration offline with different settings and obtain a more successful result.

  6. Track moving cameras with Continuous Calibration

If you use a roving camera (or if the room or volume itself is moving), Continuous Calibration uses stationary markers in the space so the camera can correct its own position while continuing to track subject marker movements, as shown in this demonstration.

We’re here to support you

Some small details can get overlooked during the calibration process, but there’s usually a quick fix – it could be as simple as a typo when inputting a lens specification. We’re here to assist you with any troubleshooting that might be needed.

With a range of options for calibrating cameras for Cortex and BaSix, it is simple to prepare your mocap system quickly and efficiently. If you need help with the calibration process, chat with our Customer Support team.

If you’re exploring mocap solutions and would like to find out more about our systems, please book a demo.

Mocap in action: In conversation with Adam Cyr, Biomechanist at Mary Bridge Children’s Hospital

A long-standing client, Mary Bridge Children’s Research and Movement Laboratory (RML) is a multidisciplinary facility that houses a team of engineers and clinicians who conduct research and use the latest technologies to identify, diagnose, and treat individuals with movement challenges.

We caught up with Adam Cyr, a biomechanist at the facility, who has a keen interest in applying engineering principles and techniques to understand how the human body performs. His goal is to improve injury prevention and treatment.

Here, we share what he had to say about his work and how he is using mocap as part of the biomechanics research he does on a daily basis.

Could you give us a quick overview of your background as it relates to the world of biomechanics and biomechanics research?

After completing my studies, I briefly worked at a company doing forensic biomechanics before I found myself at the Research and Movement Lab at Mary Bridge Children’s Hospital. At the RML, we see patients with a wide variety of concerns, including neurological, muscular, and orthopedic disorders. We also see people who are looking to enhance their performance or who suffer from sports-related injuries.

How do you use motion capture technology in the work you do every day?

The more data we can collect, the better. We want to look at kids doing functional tasks. If we see a patient today and collect data on how they move in their preferred way, and then they have some sort of intervention, we have data we can use to assess whether they are moving better than before. Our goal is to inform the clinical providers, whether they’re surgeons or physical therapists, and provide them with objective data so they can make better decisions.

On a typical day, we’ll spend a few hours with a patient either in the morning or the afternoon. We’ll prep the room to make sure that the motion capture system is ready and that the markers are ready to go. We’ll do a subjective history and a physical exam. And then we’ll put the markers on and get the patient to do basic movements. If there’s any particular activity that is causing a problem, we will have them do that activity specifically. After they leave, I compile the data, process it and turn it into graphs and meaningful insights for our therapists to review. It’s great to work this closely with clinicians to see the data and graphs transform into information that means something.  

Can you walk us through your experience using Motion Analysis and share some of the features you find most useful?

The motion capture system I inherited in my current position was an older one. We were very fortunate to be able to upgrade to some newer Motion Analysis cameras recently. The new tech is very impressive. From a size perspective, everything is getting smaller, the optics are better, the speed is better and these cameras can track much smaller markers. 

The cameras are also more advanced, which makes it easier to do things right the first time and not waste time cleaning up the data. This speeds up patient processing times. We want to get a report back to our patients within a couple weeks and if I’m spending a day cleaning up data, that isn’t possible. 

When I do have to clean up data, there are some great features on the backend that make it easier to do so. For example, if a marker dropped off and you didn’t notice, you can use virtual markers to fill in the data gap. I’ve also started to go down the road of playing with what they call the Sky Interface. This allows me to build my own scripts using a batch process. I’ve been working closely with the Motion Analysis team on this and they’ve been hugely helpful. When we collect EMG data, there’s a time delay, so we need to shift the data over for it to line up correctly. With the Sky Interface, I can code something so that I just have to hit one button and it goes through all of my captures and automatically shifts the data over.
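As a generic illustration of the alignment step Adam describes, the sketch below shifts an EMG trace by a known delay so it lines up with the mocap frames. It is plain NumPy rather than the Cortex Sky Interface, and the sample rate and delay values are made up.

```python
# A generic illustration of the alignment step described above: shift EMG
# samples by a known delay so they line up with the mocap frames. This is
# plain NumPy, not the Cortex Sky Interface, and the values are made up.
import numpy as np

def shift_emg(emg: np.ndarray, delay_s: float, sample_rate_hz: float) -> np.ndarray:
    """Shift the EMG trace earlier by `delay_s`, padding the end so length is unchanged."""
    shift = int(round(delay_s * sample_rate_hz))
    return np.concatenate([emg[shift:], np.full(shift, np.nan)])

emg = np.sin(np.linspace(0, 10, 2000))          # stand-in EMG trace
aligned = shift_emg(emg, delay_s=0.048, sample_rate_hz=1000.0)
```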

We’re also starting to get into real-time feedback using Cortex software. In a clinical setting, we’d use this to better understand upper body motion. For example, we’d put markers on the elbow, the arm and the torso and ask children to reach around so we can see how far they can reach. With real-time feedback, it’s possible to have them reach for virtual markers on a screen, a bit like they are playing a video game. It would all be done in real time using the Motion Analysis workflows I’ve learned. In the work I do, it’s been enormously helpful for me to be able to pick up a phone and connect with the Motion Analysis customer support team or their engineering and technical teams because they are so willing to help out when I have a problem that I need to figure out right away.

If you, like Adam, want to leverage motion capture innovation to better understand movement-related conditions or improve how you monitor the tendencies and patterns of biomechanical movements, we can help. Learn more about how our team can support your mocap needs by scheduling a demo today.

Join us at these upcoming biomechanics conferences

As we speed through the year, biomechanics conferences are well underway and buzzing with innovative ideas. Providing a welcome opportunity to network with researchers, practitioners, and clinicians face to face, industry events inspire the sharing of insightful perspectives and findings with anyone involved in biomechanics—or those looking to move into the sector.

Alongside advancements in motion capture technology, forward steps in sport product testing, robotics, medicine, gait analysis, rehabilitation, and data collection are all current trends fueling discussion in 2023 and highlighting the outstanding work of leading and up-and-coming biomechanics academics.

After compiling a list of the year’s best industry events for biomechanics, we are looking forward to meeting valued colleagues old and new at these two US-based conferences in the near future.

ACSM 2023

When: May 30th – June 2nd
Where: Denver, CO, USA

This Tuesday sees the start of the ACSM Annual Meeting and World Congresses. The yearly conference, hosted by the American College of Sports Medicine, is a flagship event for sport fitness, healthcare, and treatment professionals. 

We will be exhibiting at Booth #100, and we are again proud to sponsor the ACSM Biomechanics Interest Group’s Career Achievement Award, which recognizes the great achievements of upcoming scientists in the field of biomechanics. 

The event is an invaluable opportunity for budding students and experts to network, share career advice and watch mocap applications for biomechanics in action. Attendees can immerse themselves in personal workshops, learn from over 1,500 case presentations, or join online to watch 13 hours of unique content and recorded live sessions. 

We are excited to meet you and to explore the future of the industry in Denver very soon. 

Human Movement Variability and Great Plains Biomechanics Conferences

When: June 5th – 6th
Where: Omaha, NE, USA

Shortly after, our team will also be exhibiting at a dual event at the University of Nebraska Omaha.

The university is home to the eighth edition of the Human Movement Variability Conference, an annual event focused on student-centered discussion of human movement research and run by the university’s Center for Research in Human Movement Variability and the Department of Biomechanics.

The event space will also host the Great Plains Biomechanics Conference. Keynote speakers and podium sessions bring together around 100 academics, investigators, and scientists from around the world to explore progressive biomechanics topics including vascular mechanics, bioprinting and much more. Guest speakers include Dr. Beatrix Vereijken from Norwegian University of Science and Technology and Dr. Bill Baltzopoulos from Liverpool John Moores University.

These two Omaha-based biomechanics conferences are being held fully on-premises and in person for the first time since the pandemic, and we are keen to discuss motion capture for biomechanics with the wider community. Come say hi!

See you soon

With plenty more mocap industry events still to come in 2023, there are a range of excellent opportunities to learn new trends, applications and technologies that continue to move the biomechanics world forward.

If you are attending these or any other biomechanics conferences in 2023, let us know via LinkedIn or Twitter.

Here’s how real-time feedback in Cortex works

When conducting real-time motion capture (for training, research, animation or other purposes), it helps to know whether a required movement has been performed correctly for its use case, and to get that insight while the action is being carried out. To do that, a feedback indicator can alert the system operator that the exact motion has been performed, as soon as it happens.

We’ve received a lot of interest from users of our Cortex software who want to better understand how to use the real-time feedback functionality built into the platform. While it is possible to send captured data to third-party applications, BioFeedTrak in Cortex allows you to set up a range of feedback loops to suit your needs without leaving the software.

Real-time streaming in practice

Many of our customers achieve their desired motion capture outcomes during post-process analysis, which is particularly useful for data clean-up and modeling. But a vast number of mocap users in other settings and industries benefit from identifying when correct motions are achieved in real time. Examples include (but are not limited to) researching joint function or the rehabilitation of movement disorders in laboratory settings, where those carrying out this work may need to repeat actions to understand when progress is being made.

Many users know BioFeedTrak as a tool for marking the frame where a particular event occurs – a heel strike in gait analysis, or ball release in a pitching test, for example – but its functionality is far more extensive. BioFeedTrak also gives you the ability to track and measure motions performed by a subject kitted out with markers, and to automate feedback cues that alert the subject to perform an action differently, or to continue in the same way until a certain height, distance, or time, for example, is reached.

These cues could be a pop-up window, a flashing icon, or a sound, but the BioFeedTrak interface is fairly limitless in providing whatever form of feedback best fits the user or environment. 2D and 3D motion capture can be controlled with immediate effect using real-time feedback, with the added benefit that the BioFeedTrak functionality is built into Cortex.

How BioFeedTrak works

BioFeedTrak can simply be found by following Tools>BioFeedTrak Event Editor:

The BioFeedTrak Event Editor creates events, whereas the BioFeedTrak Event Timeline allows users to assign an event to a frame, or discover on which frame a particular event has occurred:

While movement data gets tracked in Cortex, BioFeedTrak can control how and when your feedback will be shown to the motion capture subject. 

The BioFeedTrak Event Editor allows you to provide a description of your feedback, tied to the action it follows. These “Events” can be enabled in “Live” mode, and imported scripts can then instruct the program to trigger the desired feedback. Using a visual feedback example, a script could indicate that a running subject needs to change direction once they have reached a certain marker in the room, tracked by the mocap cameras.

The “Presentation Graphs” functionality in Cortex also provides a real-time graphical representation of a subject’s movement data within the interface. When linked to the script, the program follows this data so that it knows to provide feedback when numerical thresholds are reached on the graph. BioFeedTrak can also train a motion actor to reach a specific point using a “Threshold Event”, which applies the desired feedback once a certain value is reached.

Visual and audio feedback make up the biggest use cases shown to motion subjects: directional arrows like the example above, or beeping sounds. But as the scripts are written by the user, the forms of feedback are open-ended, catering to any environment and measuring a range of variables.

Video example: Tracking distance measurement with audio feedback

Here, BioFeedTrak and Presentation Graphs have both been set up and synchronized to adjust a sound’s pitch based on the distance between a thumb and forefinger. The script tracks the measurement between the digits’ mocap markers and provides the pitch adjustment feedback based on that distance. 
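A rough sketch of the logic behind this example is shown below: measure the thumb-forefinger marker distance each frame and map it linearly onto a pitch range. The function name, distance bounds and frequency range are illustrative assumptions; in practice this logic lives in a user-written BioFeedTrak script.

```python
# Rough sketch of the logic behind this example: measure the thumb-forefinger
# marker distance each frame and map it to a pitch. Names and ranges are
# illustrative; in practice this lives in a BioFeedTrak script inside Cortex.
import numpy as np

def distance_to_pitch_hz(thumb, forefinger, d_min=10.0, d_max=150.0,
                         f_min=220.0, f_max=880.0):
    """Linearly map a marker distance (mm) onto a frequency range (Hz)."""
    d = float(np.linalg.norm(np.asarray(thumb) - np.asarray(forefinger)))
    t = np.clip((d - d_min) / (d_max - d_min), 0.0, 1.0)
    return f_min + t * (f_max - f_min)

print(distance_to_pitch_hz([0, 0, 0], [80, 0, 0]))   # ~550 Hz for an 80 mm gap
```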

Video example: Measuring maximum value with visual feedback

In this video, a BioFeedTrak Event is measuring the maximum ground reaction force according to the peak value seen during the capture. Whenever a greater value is reached, the pop-up message is updated, so by the end of the capture the value displayed in the pop-up is the peak ground reaction force.
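Below is an illustrative sketch of the running-maximum logic described above: keep updating the displayed value whenever a new peak ground reaction force is seen. The force samples are made up, and the print statements stand in for the Cortex pop-up.

```python
# Illustrative running-maximum logic for the example above: keep updating the
# displayed value whenever a new peak ground reaction force is seen. The force
# samples and the prints stand in for live data and the Cortex pop-up.
forces_n = [512.0, 780.5, 1203.2, 1150.0, 1422.8, 1390.1]

peak = 0.0
for frame, force in enumerate(forces_n):
    if force > peak:
        peak = force
        print(f"frame {frame}: new peak GRF {peak:.1f} N")   # update the pop-up
print(f"final peak GRF: {peak:.1f} N")
```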

Video example: Assessing body weight percentage with visual feedback

In another visual example, the script is written to identify when a certain percentage of body weight is applied onto a plate scale. When the applied force reaches the target weight (the “Threshold Event”), the subject is informed by a pop-up text window changing from red to green. This method is especially useful in rehabilitation practice; in this case, to check the percentage of weight exerted onto a prosthetic leg.
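Here is a minimal sketch of a threshold event like the one described: flip the feedback from red to green once the measured force reaches a target percentage of body weight. The body weight, target percentage and color strings are placeholder assumptions standing in for the Cortex pop-up.

```python
# Sketch of a threshold event like the one described: flip the feedback from
# red to green once the force on the plate reaches a target percentage of body
# weight. Values and color strings are placeholders for the Cortex pop-up.
BODY_WEIGHT_N = 700.0
TARGET_PERCENT = 50.0

def feedback_color(plate_force_n: float) -> str:
    percent = 100.0 * plate_force_n / BODY_WEIGHT_N
    return "green" if percent >= TARGET_PERCENT else "red"

for force in (120.0, 260.0, 410.0):
    print(f"{force:.0f} N -> {feedback_color(force)}")
```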

Giving motion actors the ability to realize a full range of motion with instant feedback is a helpful way to improve a range of processes and save time. Using the BioFeedTrak function within Cortex along with motion capture cameras means that rich data is collected and feedback is provided instantly in a live setting, without the need to send data to a third-party application for similar feedback functions. Whether you need real-time feedback for biomechanics research, rehabilitation efforts or workplace training, BioFeedTrak is a handy built-in tool to streamline your motion capture projects.

If you’re a Cortex user and would like to find out more about using BioFeedTrak, chat with our Customer Support team. If you’re exploring mocap solutions and would like to find out more about Cortex, please book a demo.

How mocap can add a human touch to the expanding metaverse

The metaverse used to be a term that wouldn’t seem out of place on a comic book cover. Now, however, thanks to growing public attention, the metaverse is not as intangible as its ‘beyond the universe’ namesake suggests. Instead, it’s a frontier that we’re already exploring: our real lives transferred to a digital space.

Motion capture is a valuable tool to help bring ‘fantasy’ to life, taking the metaverse from a playground of technological opportunity to a practical interactive storefront, cinematic experience, or place to do work. With the metaverse marking itself out as a key mocap trend for 2023, it’s exciting to predict the future of motion capture capabilities in a world that we’re very quickly becoming acquainted with.

From obscure to mainstream

The metaverse remains an enigma to many, its shared name with Meta (formerly the Facebook company) perhaps being an assumed point of reference. The metaverse is simply a 3D computer-generated landscape where we can interact with other users as avatars – representations of ourselves exploring online as we would in real life. Alternatively known as ‘mixed reality’, it is an extension of virtual or augmented reality, a market that amounted to roughly $29 billion worldwide by the end of last year.

Particularly used by gaming communities in its current form, the metaverse is evolving from an advanced work-in-progress into something poised to become as familiar as a Google search. A number of large brands are experimenting with its opportunities – Disney is even looking to build an entire theme park in the metaverse, giving otherworldly entertainment experiences new meaning. By the end of this decade, the metaverse is projected to be used by 700 million people.

Tomorrow comes today

The next speculative iteration of the internet – Web 3.0 – looks to encourage token-based transactions through blockchain technology, useful for digital artists to sell artworks. Those involved in the ever-increasing digital economy welcome the ease and openness of the metaverse, taking social media interaction to new levels. 

But while that could seem like a niche usage, artists and designers from various creative backgrounds have experimented in the metaverse to promote themselves and sell goods in a digital guise. Video games provide a vast, connected worldwide community that best showcases the metaverse in motion; to curb the lack of live experiences during the pandemic, bands and DJs (as avatars) performed at music festivals within Minecraft.

Elsewhere, Gucci has sold digitized handbags via Roblox. Nike acquired virtual sneaker brand RTFKT to allow users to ‘try on’ shoes. Hybrid or remote workplaces may benefit from virtual spaces for formal and informal business meetings. The metaverse has grown from a marketing curiosity into a genuine avenue for interacting with digital versions of real-life consumers.

Giving the metaverse movement

Web 3.0 forerunner Meta announced that avatars will soon have limbs, and while Mark Zuckerberg’s humorous demo was pre-rendered, this was all made possible using motion capture. Integrating mocap technology in the metaverse is a huge step toward giving realistic movement to computer-generated people, with Meta’s ‘legs’ development just the start.

The human experience – taking a 2D webpage into a physical dimension – will rely on adding unique identities and personalities to metaverse participants. While 2020’s in-game concert experiences may have been pre-recorded and streamed to satisfy avatar fans, artists can now use mocap suits to play instruments, sing and dance properly as digital versions of themselves in real time, not dissimilar to a true gig.

In fact, we have already seen motion capture put into practice to render people as characters in virtual environments. The Collaborative Human Immersive Laboratory in Denver provides a state-of-the-art facility for product designers, engineers, and manufacturers to visualize equipment, manipulate 3D objects, or inspect factory floors using a VR headset, interacting with the space around avatar versions of themselves. VFX companies will look to create the same movie magic of cinema within the metaverse; Weta Workshop (famed for its work on Peter Jackson’s The Lord of the Rings franchise) sees the immersive technology as a distinct way to bring new creators into the space, so long as more affordable motion capture systems can be provided.

Adding ‘humanity’ not only makes the metaverse more genuine and adoptable, but could also benefit people’s lives. Metaverse ‘homes’ can currently showcase virtual art collections, but the future of real estate may see properties rendered from their physical form to the web so they can be viewed, bought, and sold. In the same way, holidaymakers could experience a hotel before opting to vacation there. Some healthcare companies are already developing metaverse facilities to simulate physiotherapy treatments, and banks are aiming to curb fraud and money laundering by conducting identification in the metaverse. As the regulation of decentralized blockchain technology, cryptocurrencies, and non-fungible tokens increases, the safety of the online environment also seems to be developing as quickly as the capabilities of artificial intelligence and mixed reality.

Always looking ahead

“The metaverse is increasingly becoming an extension of one’s reality. Participants around the world freely engage and interact in experiences and events that were previously unimaginable. Motion capture has a front row seat to this exciting new realm.”

Not only does the metaverse pose entertainment opportunities, it looks to become an invaluable tool for companies to build connections with people in a transformed way, already expanding to service a ready and burgeoning customer base. We are keeping an eye on the possibilities that mocap can offer the metaverse space, encouraged that its assistance could benefit industries across the world, or indeed in a new dimension.

The metaverse is just one exciting prospect for the future of motion capture. To find out more, talk to our team today.

Introducing Rig Solver: the flexible post-processing skeleton solver for animators

For character animators everywhere, moving a character in realistic ways is a challenge. Optical motion capture (mocap), in which markers are placed on the outside of the subject, is the gold standard for speed and quality. The animator, however, wants the motion of the underlying bone structure, and skeleton solving is a tried-and-tested way to create high-quality movements for animated characters.

Customers using our mocap system have enjoyed the best-in-business global optimization tool ‘Calcium Solver’, and other animators have often inquired about the ability to apply our tools to other 3D-trajectory marker data. Due to popular demand, we are excited to announce the launch of our new interoperable post-process tool Rig Solver: data clean-up and mocap rig fitting software.

Mocap Rig Solver is a standalone version of our core functions, including Calcium Solver (as well as other skeleton engines), designed to enhance post-production capabilities for character animation. The software can help you label and clean up data, and contains multiple functions to minimize time-consuming tasks. Simply import data obtained from a range of motion capture cameras, setups and marker systems. Let’s dive into how it works.

Readily generating realistic moving characters

Animators are looking to enhance the detail of human or human-like figures for feature films and animated shorts, or to appear as playable characters in games. Once a static character has been designed and created, the skeleton (or rig) transposed onto it using animation software helps to build out a realistic, moving computer-generated humanoid.
 
While the skeleton, made up of ‘bones’, has traditionally been moved using keyframing (still the preferred method for some studios), motion capture systems make it possible to move those bones using the actions of real actors in real time, not frame by frame. This speeds up the process and makes those movements easier to record, while making the resulting characters more realistic. A practical, in-action example could be a character’s ‘signature move’ in a video game, where sports stars are fitted with mocap body suits to track their unique, recognizable movements for their in-game avatar. Once graphics and a mapped rig are aligned, later post-production texture art adds a detailed skin for even greater realism.

We understand that a main difficulty in character animation is solving the positioning of these ‘bones’ in accordance with the trajectories produced by marker systems – essentially directional paths signifying movement through time. Skeleton solving software is needed to reposition and transform each part of the rig, bones and difficult joint movements alike, from frame to frame so that it fits those marker trajectories.
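To illustrate the kind of problem a skeleton solver tackles, the sketch below shows one common, generic way to pose a single rigid bone so it best fits its markers in a given frame: a least-squares rigid transform (the Kabsch/SVD method). This is a teaching example built on assumed inputs, not Rig Solver’s Calcium engine, which handles full skeletons, joint constraints and noisy or missing markers.

```python
# One common way to pose a rigid bone so it fits its markers each frame: a
# least-squares rigid transform (the Kabsch/SVD method). This is a generic
# sketch, not Rig Solver's Calcium engine.
import numpy as np

def fit_rigid_transform(local_markers: np.ndarray, world_markers: np.ndarray):
    """Return rotation R and translation t mapping bone-local marker positions
    onto their captured world positions (both arrays are N x 3)."""
    c_local = local_markers.mean(axis=0)
    c_world = world_markers.mean(axis=0)
    H = (local_markers - c_local).T @ (world_markers - c_world)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflected solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = c_world - R @ c_local
    return R, t
```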

Where Rig Solver steps in

Skeletons can be calculated using the Calcium Solver – a popular core offering ingrained within our Cortex software – but Rig Solver can also create bone structures using properties obtained from a range of other skeleton engines. 

Sometimes, data such as marker identifications or trajectories may need to be cleaned up once imported into the interface. There is usually some human error in capturing fluid movement for character animation; markers can be accidentally misplaced or knocked off during performance activities such as simulated fighting, causing gaps in the tracked motion from one action to the next.

Using high-quality cameras or multiple markers attached across an actor’s body can improve the accuracy of the obtained data, but Rig Solver also includes functionality to process and clean up data collected from a wide range of motion capture systems with the same power as proprietary solutions. Marker sets can be replaced or created. The resulting movement data can be exported in the preferred FBX file format, an industry standard for humanoid characters, and Rig Solver also supports HTR and C3D file types.

Take skeleton solving to the next level

Skeleton engine software has been a popular choice for our animation customers to level up their rig fitting capabilities. We are delighted to offer an adapted form of that core functionality with Rig Solver: a complete, intuitive, time-saving and cost-effective solution which you can implement into your post-processing pipeline today.

If you would like to discover more about our Mocap Rig Solver, we’d be happy to help. Book a demo with our team today.

How to get into a career in game development

Game development is as vast a landscape as the boundless worlds, characters, and globally connected communities that production teams create. Whether online, on mobile, console, or through a VR headset, the process of creating applications for gamers involves a large team of talented engineers, designers, producers, and much more. 

It is a competitive field, but one which requires both technical and creative minds to take initial concepts through to a fully realized gaming experience. Breaking into a game development role offers a rare and exciting behind-the-scenes hand in pre-production, and can springboard a career in motion capture: a technology that continues to evolve rapidly and assist multiple industries.

An industry on the lookout for talent

Greater accessibility has seen a rise in the number of players around the world. This makes gaming a lucrative market; PwC has estimated that the global gaming industry could be worth $321 billion by 2026. Greater demand means a greater need for talented production team members.

Much like any video game, a career in game development is engaging, challenging, and an organic learning experience all in one go. There are misconceptions about what these jobs entail; knowing the ins and outs of various software packages, applications, website builders, or being well versed in coding languages certainly suits those with computer science backgrounds. But companies also need people with artistic or theatrical abilities and interests, who may be unaware of adjacent routes into the industry.

Many colleges, technical institutes and universities offer skill-development courses that can progress into a degree or career in games development. These environments boost creative endeavors – story building or character development – and teach the practical and technical requirements needed to be a well-rounded asset to any games production company. 

How 3D animation can open doors

These skills, whether individual or combined, are transferable to motion capture jobs. 3D animation is one medium through which game developers utilize mocap to craft imaginary worlds. It goes way beyond the misconception of a person simply moving around in a lycra suit, and requires a number of hands and brains to bring imaginations to life.

Actors and directors

The body suits, covered in 3D tracking sensors, need to be worn by the mocap actor. Responsible for real-time body movements and facial expressions, the actor provides the human backbone for the model skeletons that later become animated graphics. Now, more lightweight and affordable motion capture systems are available without the need for a full suit.

Much like on any film, the mocap director is responsible for ensuring the actors are well briefed to perform actions correctly, but also for overseeing all teams across day-to-day set operations, mocap camera setup, and the processing of data for the post-production animation team.

Technicians

Animation teams require pre-visualization model skeletons for characters before movement tracking can begin, so that computer-generated imagery (CGI) can be superimposed onto them during the post-production stage. Rigs need to be set up to determine how many markers are needed, depending on the levels of movement. The mocap technical manager or capture technician is responsible for ensuring that the tracked data is captured correctly by calibrating the markers, cameras and rig. The Cortex system works with two skeletons – one tied to the actor’s mocap markers, the other matching the animator’s rig – and can solve skeletons for a range of body and facial structures.

The more motion capture cameras in place, the more accurate the captured movements are. Given the high spec of this kit, the mocap camera operator’s role is paramount in handling the equipment safely and efficiently.

Post-production team

Any actor’s sensor-tracked data is transformed from moving geometric shapes to animated special effects in post-production. 

3D animators are responsible for taking the skeletons generated in pre-visualization and building the 3D graphics – humans, animals or monsters – onto them, bringing them to vivid life. In even finer detail, texture artists are responsible for making CGI surfaces look realistic, whether that be the surface of an alien world or a bear’s fur. Animation software, such as the popular Maya or MotionBuilder, is used for computer processing techniques: real-time modeling, rendering, and texturing.

Footage editors are also required to fit animated clips together with the director, creating the cutscenes players can view in-game. 

A practical example of how motion capture for 3D animation works can be seen below, where it was used to craft animated gameplay for Titanfall. Notice how the actors are fitted with sensors and surrounded by tracking cameras, production workers, tech operators and directors, all working together to create the final product.

Every visual aspect is thought out by specialist game art designers: landscape design, building concepts and architecture, and character voices and outfits. Producers are also responsible for the slick, collaborative organization of each department; leadership roles can be learned on the job throughout a career in games development alongside creative endeavors. Whether applicants in the space have a practical knowledge and passion for computer animation or concept art, all contribute to the production.

Of course, this is not limited to a gaming context. Film and television production crews look for similar mocap capabilities for 3D animation. A few prominent examples of this work include Gollum in the Lord of the Rings franchise and the dragon-riding in Game of Thrones, with the latter building on the former’s cutting-edge mocap techniques.

The sky’s the limit

Augmented reality and virtual reality require the construction of computer-generated worlds by 3D animators. Creative directors and concept artists are all instrumental in bringing the new frontier of the metaverse to life. Capture technicians and mocap software operators are needed for careful drone tracking, enhancing sport performance, adapting industrial facility training and ensuring safety, crafting virtual environments for broadcasting, and so much more.

There couldn’t be a better time to build a career in game development or motion capture. Equipment and software are becoming more affordable and adaptable to many different industries, opening up endless career options for the future. It may be tough to break into the burgeoning gaming industry, but for technologists and creatives alike, the possibilities really are endless – including some we do not even know exist yet.

Ones to watch: the leading motion capture trends to follow in 2023

A man on a horse, then rotoscoping, then Gollum: trends do not so much come and go within motion capture, but continue on an upward trajectory. Movie magic, owed to the growing capability of visual effects since last century, was just the start for 3D animation and mocap’s rapid advancement. 

Since then, high-quality cameras, expansive analytical software, and lightweight autonomous vehicles have all contributed towards innovation in the space; the global 3D motion capture market hit an impressive US$193.1 million in 2022. And now, accurate motion mapping not only helps to craft otherworldly characters and worlds for movies and gaming experiences. Healthcare, sport performance, product development, and the military are all sectors growing their mocap abilities to better our understanding of movement through AI and robotics. 

Here are prevalent motion capture trends putting cutting-edge technology into practice, looking to spark creative endeavors and boost scientific discovery this year and beyond.

Enhanced drone tracking to enable safe work

Drones are not just remote-controlled airborne craft. While reliable for filming footage over rugged landscapes or above sports stadiums, drones can also be autonomous vehicles able to traverse ground-level (or subterranean) environments. Currently used mainly by private researchers, among other critical use cases, drones need precise location accuracy during the operations that researchers and other professionals conduct.

To ensure this precision, mocap can be used in the testing phases of drone tracking, allowing the vehicles to perform remotely via GPS. An operator can follow the movements of the attached emitters using advanced motion cameras, even when they are obscured by surfaces or objects. This is essential when carrying out dangerous safety checks – including disaster relief, identifying leaked gas dispersion, or inspecting faulty equipment – which pose great risks of injury. Already used by energy companies, drones and their tracking components are also fast becoming more lightweight and flexible for different engineering needs and maximum performance.

The rise of deepfake in entertainment

Deepfake is often mistaken for a form of motion capture, but it is a machine learning tool rather than a visual effects technology able to track real-time movement like mocap. And despite being under fire for its nefarious uses – superimposing different identities onto real people – deepfake’s positives for the film industry and biometrics can thrive with increasing regulation and with generative adversarial networks (GANs) able to detect fake images, taking it far beyond a facial-mapping trick.

Deepfaking has already been used for de-aging special effects (The Irishman) and for replicating characters performed by late actors (Star Wars). But its future relies on collaborating with motion capture technology, which can enhance these continuity efforts by recording actors’ movements to make whole deepfaked entities more realistic, beyond just facial expressions. Hollywood may adopt this ‘meeting in the middle’ approach, an innovation in motion capture backed by famed bodysuit artist Andy Serkis.

AI and mocap revolutionizing healthcare

Motion capture wearables are by no means limited to acting. In landmark studies, researchers at University College London and Imperial College London are combining data collected by bodysuits with AI algorithms to help understand movement-related conditions, including dementia, muscular dystrophy, stroke, and Parkinson’s.

Mocap systems help researchers to monitor the tendencies and patterns of biomechanical movements, as the software can create digitally mapped ‘twins’ – rendered representations of patients – for further data analysis. The resulting insights assist in tracking the progress of rehabilitation techniques, or in predicting future detrimental effects across a variety of conditions associated with bodily motion.

Crafting more efficient virtual productions

Filmmaking was rife with problems caused by the pandemic, namely the lack of production equipment supplies and mass crew shortages for shoots worldwide. But the knock-on effect has seen further investment in virtual production: ‘LED volumetric’ capabilities can take mocap-suited actors to any conceivable virtual location using large-scale screens.

Live action can be shot in real time against these high-definition backdrops superimposed with limitless computer generated graphics. Artists are able to craft stunning worlds (on earth or otherwise) in a remote studio for smaller teams, all while curbing logistical issues and reducing carbon emissions associated with the movie industry. 

Mocap to enter the metaverse

Not only is cloud technology seeing 3D character animators working collaboratively and remotely online, but mocap is being used to further virtual and augmented reality. The metaverse marks the next digital frontier, where captured movements of singers, dancers or actors, and other entertainers can populate an interactive virtual platform where avatars (representations of ourselves) work together, shop, or experience live music and dramatic events. It’s a reality beyond our current lived reality, and an exciting prospect to see come to life through mocap. 

Considering the immense motion capture advances above, the technology has to thrive across a host of use cases; whether for character animation, drone tracking, or otherwise, accurate motion capture relies on robust cameras and marker kits. Our expanded range of upgradable BaSix mocap cameras provides advantages for various locations and services, integrating with Cortex software. As these mocap trends kick into gear, we’re looking forward to seeing how we can assist our customers to revolutionize mocap use across the globe.

See how Bournemouth University puts Motion Analysis’ future-ready mocap into action or get in touch with our team to discover our range of solutions for animation, gaming, broadcasting, industrial work, and more.