Meet the Thunderbird

We’re pleased to announce the release of the Thunderbird motion capture camera range. These state-of-the-art cameras are designed to provide unparalleled precision and versatility for motion capture professionals in a range of applications.

Precision and versatility

The Thunderbird range consists of five advanced cameras compatible with both active and passive markers.

Higher resolutions

Featuring resolutions of up to 12MP, Thunderbird ensures clarity and precision in every frame. Ideal for various environments, these cameras guarantee exceptional detail capture, whether in a lab, studio, or other confined space.

A range of lenses

Understanding the need for customization, Thunderbird offers a diverse range of lenses, allowing users to choose the perfect lens to meet their creative vision and specific requirements.

Cutting-edge core technology

Underpinning Thunderbird’s performance is the latest core technology, including communication via the GigE camera standard, advanced field-programmable gate arrays (FPGAs), and built-in Precision Time Protocol (PTP) output synchronization for immediate success and long-term innovation.

Precise timing with PTP

Eliminating the need for timing “windows,” PTP technology ensures seamless integration with your other devices, setting a new standard for precision timing in motion capture.
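For the curious, the clock alignment at the heart of PTP (IEEE 1588) rests on a simple calculation over four exchanged timestamps. A minimal sketch of the standard offset/delay formula (the timestamp values below are illustrative, not from any real camera):

```python
def ptp_offset_delay(t1, t2, t3, t4):
    """Standard IEEE 1588 offset/delay calculation.

    t1: master sends Sync          (master clock)
    t2: slave receives Sync        (slave clock)
    t3: slave sends Delay_Req      (slave clock)
    t4: master receives Delay_Req  (master clock)
    """
    offset = ((t2 - t1) - (t4 - t3)) / 2  # slave clock minus master clock
    delay = ((t2 - t1) + (t4 - t3)) / 2   # one-way path delay (assumed symmetric)
    return offset, delay

# Slave clock runs 100 us ahead of the master; symmetric 50 us path delay.
offset, delay = ptp_offset_delay(t1=0, t2=150, t3=200, t4=150)
# -> offset 100.0, delay 50.0
```

The assumption of a symmetric network path is what makes dedicated hardware timestamping (as in GigE camera networks) so effective for this protocol.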

Built for durability and reliability

Thunderbird’s robust housing, passive cooling, and sealed sensor/FPGA unit are designed to withstand challenging environments, ensuring reliability in any condition. The cameras also bring enhanced environmental protection, new firmware, and a state-of-the-art ring-light design.

Explore the full range here

How to develop a marker set that meets your needs

A marker set is far more than the floating points recorded in a capture space. Curating a full marker set in Cortex is an integral stage in defining each marker’s properties and its relation to the others, so you can develop a model that works across a range of motion capture studies. Markers need to be identified so they can drive underlying skeletons that are reused or modified in live mode or during post-processing.

We run through the various components of a Cortex “MarkerSet” and how to construct them to best suit your motion capture research and project needs for biomechanics, clinical trials, gait analysis, and character animation.  

What makes up a MarkerSet?

MarkerSet components can be found and edited in the right-hand panel of the Cortex platform, titled Properties, before being saved and exported in a comprehensive marker set file. The listed MarkerSet properties are as follows:

Markers are small points attached to a test subject and tracked by cameras to capture movement. When displayed as raw data in Cortex, these markers are unnamed, but you can name them based on their positions on the body to identify them easily. 

Virtual markers define central locations where ‘real’ markers cannot be placed—the middle of a joint, for example. Virtual markers can calculate a location relative to up to three ‘real’ markers or other virtual markers, which is very useful when needing to define the endpoints of a segment.  
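As a concrete illustration, a virtual marker can be expressed as a weighted combination of real marker positions. This is a minimal sketch of the idea (the weights and marker names are illustrative, not Cortex’s internal scheme):

```python
def virtual_marker(markers, weights):
    """Locate a virtual marker as a weighted combination of up to
    three real (or virtual) marker positions."""
    total = sum(weights)
    return tuple(
        sum(w * pos[axis] for w, pos in zip(weights, markers)) / total
        for axis in range(3)
    )

# Knee-joint center approximated as the midpoint of two epicondyle markers.
knee_lat = (0.10, 0.45, 0.50)
knee_med = (0.02, 0.45, 0.50)
center = virtual_marker([knee_lat, knee_med], weights=[0.5, 0.5])
# -> (0.06, 0.45, 0.5)
```

Equal weights give the midpoint; unequal weights let the virtual point sit closer to one anatomical landmark than another.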

Segments represent different portions of the body. Each segment’s movement is driven by the positions of identified markers, which can calculate its rotation across three axes. Segments can be automatically adjusted, or you can manipulate segments saved in the MarkerSet to fit various motion capture subjects.

Links “connect the dots” between markers to map their relative distance. Each link has an allowable distance (how close or far apart the markers can be); markers outside this range cannot be identified. Links are critical to the real-time identification process, where you can elongate or shorten them to fit different test subjects, and they also allow you to identify markers during post-processing using templating tools.
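The allowable-distance check described above boils down to a simple range test on the distance between two marker positions. A minimal sketch (marker names and tolerances are illustrative):

```python
import math

def link_ok(p, q, min_len, max_len):
    """Return True if the distance between two markers falls within
    the link's allowable range."""
    return min_len <= math.dist(p, q) <= max_len

# Thigh link: markers must stay 0.35-0.45 m apart to be identified.
hip = (0.0, 0.9, 0.0)
knee = (0.0, 0.5, 0.0)
link_ok(hip, knee, 0.35, 0.45)  # -> True
link_ok(hip, knee, 0.10, 0.20)  # -> False
```

During identification, a candidate point that violates any of its links can be rejected, which is what makes links so effective at pruning mislabeled markers in real time.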

Rigid Subsets can be created using markers that do not move relative to each other, such as those on a rigid plate attached to a test subject. When a subject first enters the capture space, Cortex tries to identify the rigid subsets in the MarkerSet before the other markers, adding another layer of accuracy to identification in both live mode and post-processing.

The Template allows you to automatically assign the identified markers from one MarkerSet to the raw data’s unnamed points in one go. This part of a MarkerSet also allows you to select a repeatable Model Pose—a “standard position” that can be chosen from a single captured frame that visualizes identified markers.
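One way to picture templating is as a nearest-neighbour assignment from a labeled model pose to the unnamed raw points. This toy sketch is our own illustration, not Cortex’s actual algorithm:

```python
import math

def assign_labels(model_pose, raw_points):
    """Greedily assign each named marker in the model pose to its
    nearest unlabeled raw point (toy version of templating)."""
    remaining = list(raw_points)
    labeled = {}
    for name, pos in model_pose.items():
        nearest = min(remaining, key=lambda p: math.dist(pos, p))
        labeled[name] = nearest
        remaining.remove(nearest)
    return labeled

# Model pose from a single captured frame; raw points from a new capture.
pose = {"head": (0.0, 1.7, 0.0), "waist": (0.0, 1.0, 0.0)}
raw = [(0.02, 1.01, 0.0), (0.01, 1.69, 0.0)]
assign_labels(pose, raw)
# -> {"head": (0.01, 1.69, 0.0), "waist": (0.02, 1.01, 0.0)}
```

This works because the subject stands in roughly the same “standard position” as the model pose, so each raw point lies close to its named counterpart.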

Considerations for biomechanics

For biomechanics motion capture research or clinical analysis, the MarkerSet needs more detailed markers to drive the underlying skeleton and yield precise data.

Markers should be placed on accurate anatomical locations throughout the body based on which physical activity is being evaluated. Studying baseball pitching would require detailed markers on the upper extremities, whereas running or jumping activities may require more markers on the lower extremities. Either way, the marker positions drive the movement of the segments. Without identifying these markers, it’s impossible to work out joint kinematics and the subsequent kinetics for the skeleton. 

Biomechanical work utilizes the Skeleton Builder engine (SkB) to accurately define the movement of every segment. You need at least three markers on a segment (real, virtual, or combinations of both) in order to calculate rotation using a three-point axis. This 3D coordinate system helps to assess limb movements including joint flexion/extension, abduction/adduction, and internal/external rotations.
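The three-point construction can be sketched as building an orthonormal axis system from three marker positions (a generic biomechanics convention for illustration, not necessarily SkB’s exact definition):

```python
import numpy as np

def segment_frame(origin, axis_pt, plane_pt):
    """Build a 3x3 rotation matrix (segment axes as columns) from
    three non-collinear marker positions."""
    o, a, p = (np.asarray(v, float) for v in (origin, axis_pt, plane_pt))
    x = a - o
    x /= np.linalg.norm(x)              # primary axis along two markers
    z = np.cross(x, p - o)
    z /= np.linalg.norm(z)              # normal to the plane of all three markers
    y = np.cross(z, x)                  # completes the right-handed frame
    return np.column_stack([x, y, z])

R = segment_frame([0, 0, 0], [1, 0, 0], [0, 1, 0])
# R is the identity: segment axes aligned with the lab axes.
```

Comparing the frames of two adjacent segments frame by frame is what yields joint angles such as flexion/extension or internal/external rotation.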

Considerations for animators 

Animation generally relies on the same anatomical marker locations as above, but accuracy is not as crucial. For character animators, what matters is that the resulting skeleton mimics the actor’s movements as closely as possible, and that every segment identified in Cortex’s MarkerSet matches the animated character it is driving.

Animators use the Calcium Solver in Cortex, which defines segments differently and more flexibly. Rather than relying on three fixed marker points, it uses a globally optimized solution to drive an underlying skeleton, with joint types and limits constraining the skeleton’s movement. Each marker is tied to a segment by an attachment; these attachments act like springs, telling the software which markers drive the motion so that related segments move together. This solution lets you control the full skeleton according to the segment preferences defined in the MarkerSet.
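The spring analogy has a neat closed form in the simplest case: minimizing the sum of squared, stiffness-weighted distances from a segment to its attached markers gives a stiffness-weighted average. This is our own toy illustration of the principle, far simpler than a full global solver:

```python
def spring_solve(targets, stiffness):
    """Least-squares position of a segment pulled by spring-like
    attachments: minimizes sum of k_i * |x - m_i|^2 per coordinate."""
    total_k = sum(stiffness)
    return tuple(
        sum(k * m[axis] for k, m in zip(stiffness, targets)) / total_k
        for axis in range(3)
    )

# A segment pulled toward two markers, the first twice as strongly.
spring_solve([(0, 0, 0), (3, 0, 0)], stiffness=[2, 1])  # -> (1.0, 0.0, 0.0)
```

In a real solver the same trade-off plays out across the whole skeleton at once, with joint limits as additional constraints.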

The hybrid skeleton builder is also useful when creating a MarkerSet, as it combines the functionality of the two engines above: it uses the SkB engine’s scaling options for the initial stage, then completes the process using Calcium’s globally optimized solution to define a subject’s dynamic movements.

All set for future capture

Cortex displays all the MarkerSet information upfront, allowing you to define its properties as you see fit, with file names, marker names, and even link colors fully customizable.

Once all of a MarkerSet’s components are saved, the resulting template can be viewed during post-processing or be loaded into a live capture and tweaked accordingly to fit different motion capture subjects. Using a defined marker set as the first port of call, motion capture research and analysis can be conducted faster, with marker sets fully adaptable for your specific industry use case.

If you’re inspired to collate your own marker set for a particular motion capture project or if you’d like more info, feel free to reach out to our team today.

Motion Analysis Corporation Unveils Cortex 9.5 Software Upgrade

November 8, 2023, California – Motion Analysis Corporation is excited to announce the highly anticipated release of Cortex 9.5, the latest edition of its cutting-edge motion capture software. This update is now available for download and is accessible to all customers with active warranties or current software maintenance contracts.

Cortex 9.5 introduces a range of exceptional features and improvements that elevate the motion capture experience to new heights, providing users with greater flexibility, efficiency, and accuracy. Here are the key highlights of this remarkable update:

Quick Files Capture Status: Cortex 9.5 introduces Quick Files Capture Status indicators, simplifying the assessment of dataset status. Users can easily classify captures as “Unedited,” “In Progress,” or “Complete.” Customization options are also available, allowing users to create their own status names and icons, providing a user-friendly experience.

Kestrel Plus Cameras: With Cortex 9.5, Motion Analysis Corporation introduces the Kestrel Plus camera line, featuring the Kestrel Plus 3, Kestrel Plus 22, and Kestrel Plus 42. These new cameras seamlessly integrate with Cortex 9, expanding your capture capabilities and delivering high-quality results.

Trim Capture Modifications: Cortex 9.5 enhances the Trim Capture feature, enabling users to modify names, generate captures on a per-markerset basis, and add timecode support. This streamlined process facilitates the extraction of relevant data from capture files and offers improved post-processing options.

Workflow Improvements: Cortex 9.5 enhances the Workflow feature, making task execution even more efficient. Users can now utilize a search tool and a workflow repository, enabling easy access and management of workflows, optimizing productivity.

Live Detailed Hand Identification: Advanced hand tracking techniques have been integrated into Cortex 9.5, reducing marker swapping during live collection and post-processing of intricate finger movements. Users can contact the support team for a sample markerset to enable this feature.

Automatic Wand Identification for Reference Video Overlay Calibration: In a significant time-saving move, Cortex 9.5 automates the marker selection process for reference video overlay calibration, eliminating manual marker selection and potential user errors. This feature can be applied in both Live Mode and Post Process.

Bertec Digital Integration: Cortex 9.5 now offers support for Bertec AM6800 digital amplifiers, simplifying setup and reducing the number of required components, thus enhancing the overall user experience.

National Instruments New Device Compatibility: Cortex 9.5 continues its support for National Instruments A/D board data collection and expands compatibility to their next generation of DAQs, maintaining flexibility and ensuring compatibility with previously supported devices.

Additional Updates and Features: Several additional updates and features, such as the renaming of the Post Process X panel to Tracks, improved contrast in Dark Mode, and an increased marker slot limit, are included in this feature-rich update.

Cortex 9.5 marks a significant milestone in the field of motion capture, empowering users with advanced tools, enhanced workflows, and improved performance.

To learn more about Cortex 9.5 and take advantage of these exciting new features, download the full release notes here, or contact our sales and support teams for further information and assistance.

Motion Analysis Corporation continues to lead the way in motion capture technology, and Cortex 9.5 is a testament to our commitment to delivering innovative solutions that meet the evolving needs of our customers.

About Motion Analysis Corporation

Motion Analysis Corporation is a leading provider of motion capture technology solutions for various industries, including entertainment, sports, healthcare, and research. With a focus on innovation and customer satisfaction, Motion Analysis Corporation strives to make motion capture more accessible and versatile.

Client spotlight: How Mizuno accelerates sport testing with Motion Analysis

Rigorous product testing and research and development (R&D) in sport require two things: human subjects to perform actions, and advanced technology to record and analyze the data. This is why motion capture for sports is so vital – it provides accurately tracked athletic movements for clinicians, apparel and footwear designers, sport coaches, and biomechanics experts to evaluate.

Mizuno, one of the world’s leading sportswear, shoe and equipment manufacturers, has streamlined mocap processes at its new facility, with the Osaka-based company working with us at Motion Analysis to gather the quantitative performance data it needs to launch better sporting goods, faster.

Here’s how Mizuno upgraded its mocap system to advance its capabilities in R&D in sport, a leading trend in the biomechanics space:

A partnership in motion, powering R&D at Mizuno’s new facility

Mizuno, fittingly bearing the brand slogan ‘Reach Beyond’, provides sportspeople with the highest quality equipment and clothing to improve athletic performance. Working within soccer, track and field, golf, volleyball and many other sporting disciplines, Mizuno’s researchers require the ability to track unique movements in both indoor and outdoor environments using prototypes and real athletes.

Having used Motion Analysis’ 3D motion capture for product testing since 2005, Mizuno opened its state-of-the-art innovation center, MIZUNO ENGINE, in 2022—a space for designers and R&D units to create, test, and fine-tune its product range. The company also upgraded its Motion Analysis camera setup, in line with its drive to continuously innovate.

Behind the scenes at the Mizuno facility

Originally, Mizuno used 3D motion capture for computer graphics purposes, applying its data to unique digital models during apparel design, as well as to observe changes in performance before and after a human actor tried out new sportswear or equipment. 

With product testers, R&D specialists and sport coaches diagnosing athletes’ conditions, a flexible solution is vital: one that presents large volumes of accurate, shareable data in an intuitive user interface and recognizes markers in real time.

Mizuno now uses two systems to facilitate motion capture for sports, equipped with Kestrel 2200 cameras housed in the larger 6,500 m² Mizuno facility. The established R&D center includes a laboratory to measure product durability in controlled environments, and motion capture data can be collected from athletes actively testing prototypes on a running track or in a gym. 

On the running track, the system gathers movement data from eight force plates fitted beneath the track surface. 3D motion cameras are mounted along a lane, so research can move to another indoor area quickly. The Kestrel 2200 system is also used with a Bertec instrumented treadmill, purpose-built to obtain specialized running and walking data. Its high rigidity maintains a natural-feeling environment for product testers, with the aim of gathering data as close to a normal running situation as possible.

The need for speed, propelled by innovative technology

Before working with Motion Analysis, Mizuno’s previous 3D movement analysis system received images from only two high-speed cameras; its lack of accuracy limited it to material testing rather than applications involving real people. Product performance was also measured subjectively.

Now, Mizuno uses the Kestrel system to produce fast, quantitative data that proves R&D methodology more precisely than the original camera images. It helps researchers understand how kinematics relate to sport performance, and highlights the individual features that need practical improvement according to the preferences of test subjects.

Prototypes are developed on the second floor of the facility and can immediately be tested on-site by R&D teams using the mocap systems on the first floor. Previously, prototyping took place in overseas factories. The development process has sped up, particularly for shoes, whose performance is easily affected by their materials.

The future of motion capture for sports

Up to 50 research efforts have already been undertaken at the new Mizuno facility, including the successful testing of track spikes, walking shoes, and sporting apparel.

Mizuno next aims to utilize FBX data from Cortex’s Calcium Solver optimization tool to work with its 3D fashion design software. Resulting motion data can also be used to expand various product development methods including musculoskeletal simulation, computer-aided apparel design, and motion classification across a range of sports.

Research and development in sport is rapidly picking up speed with motion capture. If you’re looking to achieve mocap success like Mizuno, book a demo to see how we can assist your team today.

Motion capture suit, camera & action! What goes into a mocap performance?

There’s more to mocap than rolling around in a lycra suit!

We’ve already looked at the acting skills needed for a successful mocap performance; now let’s dive into the technical side of things to better understand each piece of tech that makes a performance work.

1. The motion capture suit

The motion capture suit is really just a lycra outfit that holds the markers onto the actor’s skin so they can move naturally without feeling inhibited. But the markers attached to these suits are the real stars of the show.

These retro-reflective 3D tracking dots are small spheres positioned strategically on the performer to record their real-life movements. Imagine the markers as computerized puppet strings – pulling the skeleton of the character through frames that create animated motion. 

2. The cameras 

The retro-reflective markers are tracked by specialized motion capture cameras. The more cameras you use, the more complete and accurate the outcome will be.

Cameras such as the Kestrel produce marker coordinate data rather than an image. They detect only infrared or near-infrared light and are able to pass information at a much higher frame rate than a typical television camera could. 

The Kestrel 4200 is one of the best pieces of hardware out there when it comes to mocap tech, and is an excellent investment for large and complex mocap systems. But if you’re working on a limited budget, the Kestrel 300 will still deliver high-quality motion capture.

Related: Choose the motion capture hardware that’s best suited for you

3. The software

An animation studio, game maker or filmmaker will use professional 3D animation software – Autodesk’s Maya is one of the more popular ones – which provides all the modeling, rendering, simulation, texturing, and animation tools needed once motion has been captured.

4. The rig

Before tracking movement for animation, animators need to have a basic skeleton mapped out for the character they are creating. This skeleton will help them to determine how many markers they need to use, and what levels of movement they need to track. For example, an acrobatic dancer who is going to be doing backflips will require more markers than a rigid-limbed robot that stomps around. 

The cameras and markers capture the motion, and the data driving the character’s skeleton rig is sent back to the animation program, where the character is finished with fur, clothing, or skin.

Our Cortex system is capable of solving the skeletons of any structure with any number of segments, including bipeds, quadrupeds, props, facial animation and more.

Because most humanoid characters have similar skeletons and move in similar ways, it’s possible to develop marker sets that can be used on a number of skeletons. 

Our BaSix Go software has a built-in, constrained and tracked human skeleton at its core, which works for almost all humanoid characters. Six active markers strapped to the performer’s waist, feet, hands and head are enough to track a human’s motion accurately and precisely. Then, within our software (or in the receiving package), this rig can be mapped to the creator’s humanoid skeleton.

Having this built-in solver-skeleton ready to be tracked means our BaSix system’s setup time is minimal compared to traditional mocap systems. Once the cameras are set up, you simply walk into the studio, strap on your six markers, stand in a “T” pose, press “reset skeleton” in the software, and voila – you’re tracking movement, with data streamed live into your animation package, ready to be recorded.

Interested in finding out more about our motion capture suits and technology? Find out more about our systems and book a demo today.

What is optical motion capture?

Motion capture’s light-speed development has seen it branch out into more unexpected paths than anyone could have anticipated. Since its initial use for biomechanics research and clinical gait analysis in universities and hospitals, the same technology has gone on to animate the world’s most memorable characters in film and gaming, revolutionize industrial practices, develop military hardware, and even help build out virtual reality worlds, including the metaverse.

The mocap world’s list of technical terminology has also grown exponentially. While it can be tough to keep up, it’s worth going back to basics with the most widely practiced format: optical motion capture. In this blog, we’ll delve into what optical motion capture means and how it brings human movement to virtual life across a range of industries.

The importance of marker sets in optical motion capture

Motion capture is an example of photogrammetry: the practice of using photography for surveying purposes. In this case, cameras measure small, bright dots of light within the capture space, emitted by markers carefully attached to a person or object. Optical motion capture, also referred to as ‘marker-based tracking’, uses a set of cameras to track the coordinates of these markers and construct a detailed three-dimensional view of a moving subject.

The majority of mocap systems use passive markers, which ‘bounce’ back light emitted by infrared LEDs ringed around the cameras’ lenses, while other marker sets use active LEDs, which give off their own light. The brightness of these markers ensures that they are the only points the cameras pick up, rather than the test subject or any background “noise”.

Passive markers are usually retro-reflective and spherical, making it easier for a computer to work out their central points. When these central points are tracked by multiple cameras from different angles, they can be triangulated to produce 3D coordinates of the motion being performed. The resulting data can then be transposed onto a model or skeleton using mocap software.
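The triangulation step can be sketched with the classic midpoint method: each camera defines a viewing ray toward the marker center, and the 3D position is the point closest to all rays. A minimal two-camera sketch (real systems use many cameras and a fully calibrated projection model):

```python
import numpy as np

def triangulate_midpoint(o1, d1, o2, d2):
    """Return the midpoint of the shortest segment between two rays,
    each given as a camera center o and a unit direction d."""
    o1, d1, o2, d2 = (np.asarray(v, float) for v in (o1, d1, o2, d2))
    # Solve for ray parameters t1, t2 minimizing |o1 + t1*d1 - (o2 + t2*d2)|
    A = np.array([[d1 @ d1, -d1 @ d2],
                  [d1 @ d2, -d2 @ d2]])
    b = np.array([(o2 - o1) @ d1, (o2 - o1) @ d2])
    t1, t2 = np.linalg.solve(A, b)
    return ((o1 + t1 * d1) + (o2 + t2 * d2)) / 2

# Two cameras on the x-axis, both sighting a marker near (0, 0, 5).
p = triangulate_midpoint([-1, 0, 0], [0.196, 0, 0.981],
                         [1, 0, 0], [-0.196, 0, 0.981])
```

Because real rays never intersect exactly (sensor noise, calibration error), taking the midpoint of the closest approach, or a least-squares solution over all cameras, is the standard way to recover a single 3D coordinate.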

Where to spot optical motion capture in action

Given the great level of detail gained by optical motion capture, using high-resolution cameras and involving minimal data cleanup, it is usually reserved for large-scale projects. It underpins the 3D animated characters featured in many big-budget films and TV shows such as Lord of the Rings, Avatar and Stargate SG1, as well as ‘Triple A’ computer games. These highly flexible systems can be used in large-scale indoor or outdoor spaces where a range of cameras can operate, such as a movie set or a laboratory. Biomechanics researchers, for example, can use optical motion capture to precisely measure the athletic movements of certain joints or muscles, or test the effectiveness of sports equipment. 

Optical motion capture is a common method of marker-based tracking, and capture quality scales with the number of cameras. Some practitioners, however, use other mocap methods depending on their use case or project. Markerless systems, for instance, aim to complete the same task using software alone rather than specialized tracking devices, though they may not match the accuracy of marker-based tracking for mapping high-resolution human movement.

Optical motion capture also differs from inertial motion capture, in which subjects wear inertial measurement units (IMUs): body-worn sensors or wearables that measure accelerations, with several processing stages translating movement into animation data. It’s a smaller setup suited to quick and easy motion capture, but quality is limited because position is not measured directly.

Other methods include mechanical motion capture systems, which consist of an exo-skeleton structure attached to the test subject to approximate joint angles. Magnetic motion capture systems, less common nowadays, use sensors attached to the subject, which act as receivers to measure the low-frequency magnetic field generated by a transmitter. A computer then correlates the field strength within the capture space to calculate position, which is susceptible to errors caused by metal in the capture space.

While there is not a one-size-fits-all method, optical motion capture is an effective option for a range of use cases.

Getting started with optical motion capture

Recording motion data using optical motion capture requires multiple cameras for tracking purposes, a marker set, and processing software.

Cortex is our flagship motion capture processing software, used with optical systems for biomechanics, character animation, VFX, robotics, broadcasting, and more. Its compatibility with our Kestrel cameras allows for complex optical motion capture in large areas where robust equipment is needed for precise marker tracking. Alternatively, BaSix Go offers animators and other mocap artists a more affordable, lightweight optical motion capture option. Its range of accurate, upgradable cameras is cross-compatible with various systems and works with active marker rigs.

Optical motion capture extends to every facet of movement analysis no matter the industry, letting filmmakers, visual artists, clinicians, sports coaches and more track motion, record mocap data, and construct valuable skeleton models for post-production and further research.

If you’re feeling inspired to find out more or explore our optical motion capture solutions, get in touch with our team today.

How to set up the seamless Noraxon integration with Cortex

You may have seen from our recent rollout of Cortex 9.2 that we’re focused on delivering better digital integrations for our customers. In this blog, we’re going to talk more about the intuitive Noraxon integration with Cortex for easy and accurate motion capture to better understand complex biomechanics data.

The Noraxon integration explained

Noraxon is a leader in the field of biomechanics research and human movement metrics, offering a combination of software and hardware to record and measure both 2D and 3D human motion.

Electromyography (EMG) measures underlying muscle responses in relation to nerve stimulation during movement. Noraxon’s Ultium EMG sensors monitor muscle activation and synchronize with inertial measurement units (IMUs), which can map a dynamic range of motion during exercise. IMUs add another real-time method of measuring 3D movement, used for validation studies and particularly for tracking high-velocity activities (such as baseball pitching).

Noraxon’s myoRESEARCH® gathers biomechanics data from various sensory devices, then feeds it into one easy-to-use interface. In our case, Cortex takes data from Noraxon’s EMG and IMU sensors, which can be recorded alongside the 3D motion capture data provided by the Cortex system. This seamless integration provides you with a holistic high-level overview of why movements in the human body occur the way they do.

From the lab to the field

This workflow can be applied in a range of use cases, from sports training and improving athletic performance to rehabilitation and physiotherapy, or gait pattern analysis. 

Both sensor types are wearables attached to a test subject. The Ultium EMG sensor is applied to the skin using an electrode, which then monitors how and when muscles get activated during exercise. The data can be used to identify if there are certain imbalances or weaknesses on one side of a body, or to understand whether the intended muscle is being activated by tracking electric signals between the brain and subsequent bodily motion.
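As a toy illustration of how activation can be flagged, one can rectify the EMG signal and compare it against a threshold. Real pipelines band-pass filter and smooth the signal first; the values and threshold below are purely illustrative:

```python
def active_windows(emg, threshold):
    """Flag samples where the rectified EMG amplitude exceeds an
    activation threshold (toy onset detector; real pipelines
    filter, rectify, and smooth the raw signal first)."""
    return [abs(sample) > threshold for sample in emg]

# Middle sample shows a burst of muscle activity above the threshold.
active_windows([0.1, -0.5, 0.05], threshold=0.2)  # -> [False, True, False]
```

Comparing which muscles cross their thresholds, and when, is what lets researchers spot left/right imbalances or confirm that the intended muscle fires during a movement.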

The other sensory device, the Ultium Motion System, uses wearable IMU-based sensors to see how a body moves around in a 3D space, where the tracked data can be visualized in graphs or as a skeletal avatar. Noraxon’s IMU sensors are useful in that they are portable, able to perform motion capture in any space outside of a laboratory setting and without using cameras. This could highlight the biomechanical differences from a subject performing in indoor or outdoor environments.

Get the complete Noraxon and Cortex setup

To use the Noraxon integration, the only requirement is to be up to date with Cortex 9.2, which has been tested with the latest version of Noraxon’s myoRESEARCH® software. 

Once Ultium EMG and IMU sensors are added to the Motion Analysis system, and the cameras are connected and calibrated, Cortex starts data collection already linked to the EMG and IMU sensors present. Recording motion capture and getting a comprehensive, singular view of both Noraxon and Cortex data takes the push of a button; the integration aims to make initial setup and capture as fluid as possible, with the software guiding you throughout the process.

If you’re feeling inspired to get to grips with the Noraxon integration, or if you are interested in a demo of Cortex software from our team, talk to us to learn more.

Exhibitor’s diary: Behind the scenes at SIGGRAPH 2023

We have returned from an insightful few days as an exhibitor at SIGGRAPH 2023. Hosted by the Association for Computing Machinery (ACM), it’s the largest exhibition of its kind, showcasing products and services in the computer graphics and interactive techniques market. 

Celebrating SIGGRAPH’s 50th year, we were delighted to see exciting breakthroughs for motion capture in the animation industry and to present our latest dedicated animation software, Rig Solver, a stand-alone flexible skeleton solver module. 

Conferences are still smaller than they were pre-pandemic, but nothing beats the experience of meeting the next generation of character animators in person, gaining feedback, and catching up with industry friends. Here’s a glimpse of the behind-the-scenes happenings for the Motion Analysis team at SIGGRAPH 2023.

A hive of activity

The ACM event took our team to Downtown Los Angeles, the metropolitan hub that brings together creatives from a whole range of backgrounds. That description certainly suited SIGGRAPH 2023, where the conference hall was abuzz with flashing screens and vibrant booths premiering bold animation and VFX innovations to excited event-goers. As fast-paced as the nearby Sunset Strip, this space for high-tech exhibitors would somehow transform to host a Taekwondo championship only a few days later.

This was not our first time exhibiting at SIGGRAPH, so we knew that only a minimal setup was necessary to showcase our Rig Solver software. On other occasions we might need a mighty truss to support our camera system, but this time a few monitors for video tutorials were more than enough. Arriving in LA, our team went straight into planning mode after finding our materials hadn’t arrived the day before. As they say in Hollywood, the show must go on!

Never ones to be deterred by a challenge, we were up and running before the deadline after a few emails and phone calls, thankful for the shipping team’s excellent service. We’re still not completely sure where the equipment went!

On-the-floor opportunities

The day for an exhibitor at SIGGRAPH starts bright and early. We greeted everyone at Booth 245 in the vast hall with complimentary freebies of candy, stationery, and our popular back-scratchers. Since the return of in-person conferences, we've bumped into friendly, familiar faces from across the animation industry who bring great community spirit to every event we visit.

This year, we were lucky enough to meet both existing customers of our software and those just discovering the world of mocap. It gave us the rare opportunity to go in-depth with experienced users face to face, and to discuss the history of mocap, its use cases in gaming, film, and more, and the work our company carries out. It's always refreshing to inspire newcomers to become motion capture practitioners; with any luck, our paths will cross again at future animation events.

Advancements in technology were everywhere—even simple QR code scanning gave us far more time to interact with everyone who stopped by our booth, without the need to print and hand out hundreds of brochures.

New experiences for all

It was especially exciting for us to have Rig Solver as a brand-new product offering. Having both a large monitor and a laptop worked perfectly: an introductory Rig Solver explanation ran on the former, while the tech-focused crowd could interact with the details on the smaller screen.

When needed, we could delve into more advanced features and work through the range of useful software-specific questions. We found that, while Rig Solver is a complex piece of software for the tricky task of skeleton calculation, its approachable demonstration and intuitive interface made it easy for everyone to understand and engage with.

Rig Solver works as a flexible skeleton solver for animation, able to reposition, translate, scale, and rotate each part of a tricky bone or joint movement within a rig to fit marker trajectories gathered from motion capture data. Developed and released due to popular demand for our Calcium skeleton solving tool, Rig Solver is a stand-alone module also able to clean data from multiple mocap cameras and marker systems, simplifying the post-processing workflow of character animators.

We hope to be back at SIGGRAPH soon; the 2023 edition provided invaluable first-hand looks into the current technologies and trends fueling the animation industry. It was brilliant to be a part of the festivities and catch up with friends, colleagues, and partners, old and new.
 
If you’d like to discover more about our Rig Solver module, or if you were at SIGGRAPH 2023 and want to get in touch, please contact the Motion Analysis team today.

See Cortex in action this August at the American Society of Biomechanics

With every passing month, the biomechanics industry undertakes all-new projects requiring advanced motion capture technology. We get to experience more and more demonstrations of innovations first-hand now that the conference season is in full swing, with more upcoming events in August 2023 and beyond.

Next up in the mocap events calendar, we will be attending the American Society of Biomechanics 2023 conference hosted in Knoxville, Tennessee from August 8 to 11.

What you can expect at this year’s event

The American Society of Biomechanics fosters an inclusive community of like-minded researchers, with around 850 members representing the field across five main disciplines: biological sciences; exercise and sports science; health sciences; ergonomics and human factors; and engineering and applied science.

Furthering our understanding of human movement and recovery, the society brings together students, academics and clinicians in smaller regional events and an annual meeting, with the 2023 conference taking place in the historic creative hub of Knoxville. While exploring motion capture techniques and biomechanics trends in depth, timely topics of discussion will cover stroke rehabilitation, gait analysis, anterior cruciate ligament (ACL) reconstruction, imaging for bone and joint health and much more.

We will be exhibiting at booth #9, where our Vice President of Global Sales Steve Soltis is excited to meet you and showcase helpful features of our Cortex motion capture software. 

See the next tech frontier in action

The event hosts experts from universities around the United States, acting as an open forum to encourage the adoption of brand new mocap technologies by biomechanics professionals. 

Featured keynotes come courtesy of mechanical engineering leaders from the University of Michigan, the Fischell Department of Bioengineering at the University of Maryland, and Vanderbilt University. There are also opportunities to get hands-on in practical workshops and to investigate large-scale biomechanical data sharing aimed at predicting and preventing injury and highlighting disease progression.

We are continually seeing projects that utilize mocap cameras and software to gain objective data to inform clinical decisions. Such data can identify patterns in movement-related conditions, helping researchers understand why injuries occur and leading to preventative solutions.

We hope to see you at the event

The 2023 conference promises to showcase more smart and practical uses of mocap, as well as a range of data-focused equipment looking to positively change every subsector of the biomechanics industry. 

Steve and the team will be on hand to outline software tips and tricks for Cortex 9.2, including automating tasks through our Workflows panel and all-new digital integrations. We are particularly excited to share Cortex's real-time feedback capabilities, which provide instant, custom cues to actors during motion capture to adjust their movements and improve performance.
 
We are looking forward to catching up with you at this year’s American Society of Biomechanics event in August 2023. Stop by booth #9 to say hello, and be sure to follow us on LinkedIn and Twitter for updates from the day.

Discover new frontiers and our latest launch for animation studios at SIGGRAPH 2023

Animation events come no bigger than SIGGRAPH. As the world’s premier computer graphics and interactive techniques conference, SIGGRAPH 2023 is shaping up to be a major force in moving the needle for motion capture in visual effects and production across the board—from film to broadcasting, gaming, research, art, and design. 

The three-day Los Angeles exhibition is just around the corner, with the full event taking place from August 6 to 10. Our team is ready to embrace exciting breakthroughs within animation mocap, where we will be exhibiting our software’s latest features, as well as our stand-alone flexible skeleton solver module, Rig Solver, to help improve post-production for character animators. 

Experience the grounds of innovation

At industry events of this scale, character animators have the chance to discover even more advanced mocap software, techniques, and applications to render lifelike virtual worlds the likes of which we haven't seen before, and this is the premier event for being a part of the action.

As the driving force behind computer graphics and animation events, SIGGRAPH is celebrating its 50th year in style, chronicling the global community’s past and showcasing the creative minds and technologies fueling the industry’s future. As part of that worldwide community, your event ticket gives you access to invaluable keynote talks, VR experiences, and forums covering hot topics such as augmented reality and the metaverse, AI graphics, 3D animation, and data visualization, including talks from famed studios such as Weta Workshop. 

Alongside a job fair for aspiring visual artists, the animation event provides you with networking opportunities to interact with leading talents in their associated fields in the exhibition hall, and to try out new mocap software for yourself.

Bringing Rig Solver to the stage

We will be exhibiting at Booth 245, ready to showcase our easy-to-use Rig Solver module and its post-production capabilities.

As many animators know, constructing realistic movement is a challenge requiring fast, accurate skeleton solving. Due to popular demand for our Calcium Solver skeleton calculation tool within our mocap system, we have now launched Rig Solver as a stand-alone module to simplify post-processing workflows.

While skeletons are traditionally moved using keyframing, motion capture records and tracks the realistic movements of actors in real-time onto a mapped rig. Rig Solver works as a flexible skeleton solver that can reposition, translate, scale, and rotate each part of a tricky bone or joint movement within a rig to fit marker trajectories gathered from motion capture data.
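Rig Solver's internals aren't public, so as a rough illustration only, here is a minimal sketch of what "fitting a rig segment to marker trajectories" can mean in practice. It uses the classic Kabsch algorithm to find the rotation and translation that best align a bone's known local marker layout with the marker positions observed in a capture frame; the function name and marker layout are hypothetical, not part of Rig Solver.

```python
import numpy as np

def fit_bone_transform(local_markers, observed_markers):
    """Least-squares rigid alignment (Kabsch algorithm): find the
    rotation R and translation t that best map a bone's local marker
    layout onto the markers observed in one capture frame."""
    # Center both point clouds on their centroids.
    local_centroid = local_markers.mean(axis=0)
    obs_centroid = observed_markers.mean(axis=0)
    P = local_markers - local_centroid
    Q = observed_markers - obs_centroid
    # Optimal rotation from the SVD of the cross-covariance matrix.
    H = P.T @ Q
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    # Translation mapping the rotated local centroid onto the observed one.
    t = obs_centroid - R @ local_centroid
    return R, t

# Example: four markers on a bone, rotated 90 degrees about Z and shifted.
local = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
theta = np.pi / 2
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
observed = local @ Rz.T + np.array([0.5, 0.2, 0.0])
R, t = fit_bone_transform(local, observed)
assert np.allclose(R, Rz) and np.allclose(t, [0.5, 0.2, 0.0])
```

Repeating this fit per bone, per frame, subject to the rig's joint constraints, is the general shape of the skeleton-solving problem that tools in this space address.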

Rig Solver's functionality means marker sets can be created and replaced, and resulting movement data can be exported in the industry-standard FBX file format, with HTR and C3D file types also supported. Also able to clean up data imported from a range of mocap cameras, setups, and marker systems, Rig Solver fits easily into a post-processing pipeline as a complete, cost-effective solution for character animators.

See you there!

Find us at our stand, Booth 245, in the exhibition hall for a chat and to learn all about our motion capture cameras and solutions. We look forward to seeing you there and connecting with the worldwide animation and VFX community, and you can follow all of our updates during the event across our social channels.

If you’re not able to catch us at the event, be sure to explore Rig Solver here.