Year: 2019

Timeline: 4 Months 

Hardware: Desktop VR 

Industry: Pharmaceuticals

Services: Bespoke Development


Founded in 1781, Takeda is the largest pharmaceutical company in Asia and one of the top 20 largest pharmaceutical companies in the world by revenue. Following its core value of “Better Health and Better Future”, Takeda is committed to innovation and development. In fact, Takeda uses new technological and digital solutions and resources, including robotics and virtual reality, in its production phases.


Like most pharmaceutical laboratories, Takeda involves its chemists in column chromatography, a purification technique, using the General Electric Chromaflow 400-1000 column.

Slurry methods using the Chromaflow are complex and involve multiple steps. Failure to follow them can lead to contamination from the particles in the slurry mixture, which may be carcinogenic, highly flammable and volatile (and therefore damaging to the lungs). If the area gets contaminated, the chemists in the lab need to start over, wasting both the materials and the cost of obtaining them. With each slurry tank worth around $2M, millions of dollars could be lost to a moment’s inattention. Takeda therefore needs its employees to follow the correct protocol and risk assessment procedures while working around the Chromaflow.


To increase productivity and job performance, and to combat hazardous negligence, we proposed a virtual reality (VR) training solution that lets Takeda’s employees safely learn how to use the Chromaflow column packing station. Training muscle memory in VR for the required tasks reduces mistakes, safety-related accidents and the long-term financial risk of wasted materials.

VR training offers a safe space where learners can focus on the steps in a more natural flow. Unlike traditional test-based learning, VR allows for monitoring and assessment without interrupting the learner’s experience. For more on this, head to the ASSESSMENT AND RESULTS section at the bottom of the page.

The possibility of practising the Chromaflow steps at different levels of difficulty allows learners to test themselves and challenge their own understanding of the technique, developing new ways of thinking and new attitudes.


How best to replicate the movement necessary for slurry mixing? With a realistic paddle integration, of course! We fitted a telescopic mop stick with an HTC Vive Tracker (puck) to match the exact dimensions of a slurry mixing paddle, allowing the user to feel fully immersed. This solution also meant that the user’s gestures and movements could be tracked, creating meaningful analytics to improve performance and reduce mistakes.

Tracking this real-world prop in VR ensured users learned the correct movements for slurry mixing, focusing mostly on muscle memory, since pressure feedback is difficult to replicate in VR. Maximising learning outcomes under this constraint was therefore a challenge for our team. With hindsight, we now know that we could have integrated, for example, a bucket of water to create a mixed reality (MR) experience and simulate real-life pressure feedback as closely as possible.

If you’ve read our case study on Mooring Operations VR Simulator, you’ll know ropes, pipes and connecting lines are usually hard to work with when developing a VR experience. Since they move in very unexpected ways, they tend to interfere with other objects in the scene, causing some confusion in the app. This project was no different: it took attention to detail and a deep knowledge of the hardware and software from our team to solve this issue.  


This project was developed in early 2019, and if you know anything about immersive technologies, you’ll know how fast new advancements come about: even within a single year, regular hardware improvements are announced almost every month.

What would we do differently today with the hardware and software available?   


At the time of creation, this application could only have been built for Desktop VR. Why? Because Mobile VR (the most famous example being Oculus) was not quite there yet.

In fact, Mobile VR at the time only offered 3DoF (three degrees of freedom), whereas it now supports 6DoF (six degrees of freedom). 3DoF on Mobile VR would have meant that if the user, for example, walked forward, the field of vision would not follow them; the same goes for bending down. As you can imagine from the actions required during the Chromaflow set-up process, allowing natural movement was crucial to the simulation. At the time, only Desktop VR could provide this.
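The difference is easy to see in code. The sketch below (an illustrative simplification, not taken from the actual project) shows how a 3DoF headset ignores the user’s real-world movement, while a 6DoF headset feeds it into the virtual camera:

```python
import numpy as np

def virtual_camera_offset(head_displacement, dof):
    """Positional offset applied to the virtual camera for a head pose.

    head_displacement: real-world head movement in metres (x, y, z).
    dof: 3 (orientation-only tracking) or 6 (orientation + position).
    """
    if dof == 3:
        # 3DoF: only head rotation is tracked, so walking forward or
        # bending down produces no movement of the virtual camera.
        return np.zeros(3)
    # 6DoF: head position is tracked too; the camera follows the user.
    return np.asarray(head_displacement, dtype=float)

# A trainee steps 0.5 m forward and crouches 0.3 m to reach a valve.
move = [0.0, -0.3, 0.5]
print(virtual_camera_offset(move, dof=3))  # the view stays put
print(virtual_camera_offset(move, dof=6))  # the view follows the trainee
```

With 3DoF, crouching to inspect a connection under the Chromaflow would leave the trainee’s viewpoint fixed in place, which is exactly why Desktop VR was the only viable option in 2019.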


With the experience we have today, we would have recommended using less UI (User Interface) text, because a user in VR isn’t meant to read text but to experience and interact with the environment in the simulation as they would in real life. It is also now widely known within the field that VR learners tend to dismiss text quickly, unless it is short and tailored to the experience.

A solution to this problem is having guided assistance in the simulation – for example, AI guides such as TOVI (Mooring Operations) or C.R.A.I.G. (CONVERT Simulator) from other projects of ours – instead of text upon text. The advantage of guides in the experience is that they create a more natural interaction and a more immersive experience in which the user can listen to advice and act spontaneously, resulting in memorable learning and better UX (User Experience).  


The experience today, with the new hardware (e.g., headsets with eye tracking, hand tracking and six degrees of freedom), would create an immersive experience favouring natural learning flows and actions. For example, the user could move within the virtual room with no limitations on where and how they could move. The level of detail possible in VR today means the fine detail between 3 mm and 1 mm required by the Chromaflow process could be visualised much more clearly, making the experience even more realistic and productive.


In each scenario within the training, after a task is completed, the results are presented to the user along with a list of good and bad practices. This offers the opportunity to reflect on choices made while also giving direct and instantaneous feedback for better knowledge retention. Upon completion, the user is also given the chance to continue or repeat the exercise. 

We used learning analytics (xAPI) to record every choice, mistake and specific practice in Learning Locker, a type of data repository designed to store learning activity. This is particularly beneficial for monitoring and understanding employees’ behaviour on the job, including possible costly negligence. It also guarantees a natural, uninterrupted learning experience for the user.
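An xAPI record is a JSON “statement” of the form actor–verb–object. The sketch below builds a minimal statement of the kind an LRS such as Learning Locker accepts; the verb and activity IRIs and the trainee address are illustrative placeholders, not the ones used in the actual Takeda project:

```python
import json
import uuid
from datetime import datetime, timezone

def make_statement(user_email, verb_id, verb_name, activity_id, activity_name):
    """Build a minimal xAPI statement recording one training action."""
    return {
        "id": str(uuid.uuid4()),
        "actor": {"objectType": "Agent", "mbox": f"mailto:{user_email}"},
        "verb": {"id": verb_id, "display": {"en-US": verb_name}},
        "object": {
            "objectType": "Activity",
            "id": activity_id,
            "definition": {"name": {"en-US": activity_name}},
        },
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

# Example: record that a trainee dragged a pipe on the floor (a mistake).
stmt = make_statement(
    "trainee@example.com",
    "https://example.com/verbs/failed",
    "failed",
    "https://example.com/activities/chromaflow/pipe-handling",
    "Chromaflow pipe handling",
)
print(json.dumps(stmt, indent=2))
```

Statements like this are then POSTed to the LRS’s `statements` endpoint (with the `X-Experience-API-Version` header), where they can be queried later for the kind of behavioural analysis described below.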


Finally, the performance monitoring possible in VR led to one big discovery: Takeda’s executives predicted that most of their staff’s mistakes would concern which cables to put where, or even how to use the Chromaflow machine correctly. In the simulation, however, employees actually kept dragging the pipes on the floor, a sign of negligence in the process that represents a massive financial loss for the company. If it weren’t for VR, there would have been no way to test this sort of assessment in the real world, another example of how VR is truly a safe place to train and practise without consequences.


“We’re very excited that the Takeda project has now been delivered, and the operating team at Takeda is looking to get going on use of the LRS (Learning Record Store). Digitalnauts did some great work thinking through how to use xAPI to track learning pathways. This gets the users thinking more into their processes, which is exactly what we needed. ”
General Manager at Adept XR Learning (The Glimpse Group)