Structure of the system
The virtual training system comprises key elements on both the hardware and the software side. Its most important part is a PC-connected VR headset – primarily an Oculus Rift or HTC Vive – worn by the user during training. The PC must be powerful enough to maintain a sufficiently high frame rate (preferably 90 Hz or more) while rendering virtual-reality content; otherwise users may experience motion sickness.
VR headsets are usually used with controller interaction, but in most cases this method is not immersive enough. The main drawback is that controllers are designed to control computers and cannot represent everyday actions and movements naturally. In real life, people do not push buttons or grab joysticks to assemble or disassemble machines, and they cannot learn or practise the real movements of a procedure if they have to do so. Immersion is a critical aspect of virtual reality, which means that interaction methods also have to be as lifelike and accurate as possible. For practising assembly work, precise, low-latency (real-time) motion detection is essential. Many different devices are available on the market; for our application, however, LEAP Motion provides the best solution, as its small, non-contact optical motion sensor can be fixed onto the VR headset itself and does not disturb the free movement of the user. The sensor recognizes features of the human hand and builds up a skeleton from the positions of the user's real hand and fingers. The software side of the platform relies on the Unity game engine, in which this hand model is transferred into the virtual space with the help of LEAP Motion's SDK.
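Transferring tracked hand joints into the virtual scene essentially amounts to a rigid-body transform from the head-mounted sensor's coordinate frame into world space (the Unity/LEAP pipeline performs this internally). The following minimal sketch illustrates the idea; the matrix, joint position and headset pose are invented values, not the platform's actual data:

```python
# Illustrative sketch: mapping a sensor-frame joint position into
# world space using the headset's pose (rotation R, translation t).

def mat_vec(m, v):
    """Multiply a 3x3 rotation matrix by a 3-vector."""
    return tuple(sum(m[r][c] * v[c] for c in range(3)) for r in range(3))

def to_world(joint_sensor, headset_rotation, headset_position):
    """Rigid transform: p_world = R * p_sensor + t."""
    rotated = mat_vec(headset_rotation, joint_sensor)
    return tuple(rotated[i] + headset_position[i] for i in range(3))

# Identity rotation, headset 1.7 m above the floor (illustrative values)
R = ((1, 0, 0), (0, 1, 0), (0, 0, 1))
t = (0.0, 1.7, 0.0)

index_tip_sensor = (0.05, -0.10, 0.30)     # metres, in the sensor frame
print(to_world(index_tip_sensor, R, t))    # approx. (0.05, 1.6, 0.3)
```

In practice every joint of the tracked skeleton is pushed through the same transform each frame, so the virtual hand follows the real one.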
However, rendering models of the user's own hands in the virtual space and capturing their motion is not enough to fully replace controllers. If the aim is not merely to overlap virtual objects but to be able to touch and grab them, an interaction engine is also necessary. Initially, we used the default gesture-based model provided by the LEAP SDK for this purpose, the biggest disadvantage of which is that it does not take the physical qualities of objects into account.
The user can grab the nearest object whenever the “pinch” gesture is performed. Later, we began to develop our own, more precise interaction method, which decides whether an object is grabbed by considering the outline, mass and size of touchable objects and the angles of the touching fingers.
Using this method, users can not only see their own real hands but can also work with them confidently in virtual reality, without the distraction of any other devices.
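A grab test of this kind can be sketched as a simple heuristic: an object counts as grabbed when enough fingertips touch its surface and at least one pair of them presses from roughly opposing directions. The thresholds, object shape and fingertip data below are illustrative assumptions, not the platform's actual parameters:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def norm(v):
    length = math.sqrt(dot(v, v))
    return tuple(x / length for x in v)

def is_grabbed(fingertips, centre, radius, touch_tolerance=0.01):
    """Heuristic grab test against a spherical object (illustrative).

    fingertips: fingertip positions in metres. The object is 'grabbed'
    when at least two fingertips touch its surface and some pair pushes
    from opposing sides (contact normals pointing against each other).
    """
    touching = []
    for tip in fingertips:
        offset = tuple(t - c for t, c in zip(tip, centre))
        if abs(math.sqrt(dot(offset, offset)) - radius) <= touch_tolerance:
            touching.append(norm(offset))
    if len(touching) < 2:
        return False
    # Opposing contact normals have a strongly negative dot product
    return any(dot(a, b) < -0.5
               for i, a in enumerate(touching)
               for b in touching[i + 1:])

# Thumb and index finger on opposite sides of a 4 cm sphere
tips = [(0.02, 0.0, 0.0), (-0.02, 0.0, 0.0)]
print(is_grabbed(tips, centre=(0.0, 0.0, 0.0), radius=0.02))  # True
```

A production interaction engine would of course test against arbitrary collider shapes and track contact forces over time, but the opposing-contact idea is the core of the decision.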
Another issue in virtual reality is getting around large virtual spaces, which is also relevant to nuclear power plant maintenance. The platform has multiple solutions for this. On the one hand, workers can use a special omnidirectional “walker” made by Cyberith, which uses optical sensors to determine the direction and intensity of foot movement while users walk in place. On the other hand, the popular “teleport” mechanism can also be utilized: users walk in the real area, but when a door or special barrier is reached, they are teleported to another spot, so there is no risk of outrunning the real space.
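The teleport-at-barrier idea can be illustrated in a few lines: while the user's tracked position stays inside the current room cell nothing happens, and crossing a designated barrier remaps the position into the next virtual area. The one-dimensional layout and coordinates below are invented for the example:

```python
def update_position(tracked_x, barriers):
    """Map a 1-D tracked position onto the virtual corridor.

    barriers: list of (trigger_x, destination_offset) pairs. When the
    user walks past trigger_x, the virtual world shifts by
    destination_offset, so the real walking area is never outrun.
    Purely illustrative: a real implementation works in 3-D and
    typically fades the screen during the jump.
    """
    offset = 0.0
    for trigger_x, destination_offset in barriers:
        if tracked_x >= trigger_x:
            offset = destination_offset
    return tracked_x + offset

# A door at x = 4 m teleports the user 20 m ahead in the virtual plant
barriers = [(4.0, 20.0)]
print(update_position(3.0, barriers))  # 3.0  (still in the first room)
print(update_position(4.5, barriers))  # 24.5 (teleported past the door)
```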
The advantage of the treadmill is that the operator can travel anywhere while staying in the same position in physical space. However, the Cyberith device we use does not provide full immersion when it comes to simulating natural walking. Its step-detecting optical sensors do not sense the elevation of the foot but rather a sliding motion, so the device behaves more like a controller: its special use has to be learned and become accustomed to. Because of these artefacts, users may develop muscle memory that does not correspond to reality.
Another solution for implementing motion in virtual reality is free movement. In this case, the operator walks in physical space on their own legs, just as in reality, and does not need to relearn walking in virtual reality as on the treadmill. This method is much closer to real spatial motion. To maximize freedom, we used a backpack computer: it is wireless – with two hours of battery life – so the operator is not restricted by cables. For motion tracking, we used the Stereolabs ZED stereo depth-sensing camera and its inertial sensor, which allow us to map our environment. We implemented a SLAM (Simultaneous Localization and Mapping) algorithm to map the environment and determine the user's actual position; SLAM is widely used in navigation and robotics in addition to VR and AR applications.
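A full SLAM system is beyond a short snippet, but its localization half essentially chains the camera/IMU's incremental motion estimates into a pose. A minimal 2-D sketch of that integration step follows; the motion increments are invented, and a real system also closes loops against the map to cancel the drift that pure integration accumulates:

```python
import math

def integrate(pose, increments):
    """Chain relative (dx, dy, dtheta) motion estimates into a pose.

    pose: (x, y, heading in radians). Each increment is expressed in
    the user's body frame, so it is rotated into the world frame
    before being accumulated.
    """
    x, y, theta = pose
    for dx, dy, dtheta in increments:
        x += dx * math.cos(theta) - dy * math.sin(theta)
        y += dx * math.sin(theta) + dy * math.cos(theta)
        theta += dtheta
    return x, y, theta

# Walk 1 m forward, turn 90 degrees left, walk 1 m forward again
steps = [(1.0, 0.0, math.pi / 2), (1.0, 0.0, 0.0)]
x, y, theta = integrate((0.0, 0.0, 0.0), steps)
print(round(x, 3), round(y, 3))  # 1.0 1.0
```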
The disadvantage of the free-movement solution used for the VR training platform is the limitation of the physical space: the boundaries of a platform set up in a room are determined by the physical dimensions of the real environment. To work around this shortcoming, we implemented teleportation.
Advantages in training
As stated earlier, the main purpose of the above-mentioned technologies is to make the training of maintenance workers more efficient and flexible. A simulation model is a great tool for training the workforce, because training can take place anywhere – even in a training room before the production line is built. Software training with real data offers many benefits: if the control software is integrated into the simulation model, the operator encounters the same user interface as in real life, thereby gaining a holistic view of the production system. This allows them to study system parameters, weaknesses, operator reactions and early problems in order to correct them.
In contrast to traditional procedure instructions and video training, the virtual training platform can make every moment of practice effective, regardless of location and time. There is no need to build or rent expensive simulation halls, as virtually any environment can be built with ease, and individual elements can later be replaced and rearranged, making the construction work cost-effective.
Other possible use-case scenarios
Aside from training, there are several other efficient use cases for VR, some of which we would like to introduce and discuss further.
In order for engineering teams to work in parallel phases, 3D visualization tools are needed to improve communication. The initial planning and design is always done in front of monitors, but once the base parameters of the facility and the list of objects to be placed are available, the conceptual design can be constructed and tested in virtual reality. Rapid prototyping is beneficial in any industry, and VR can make it considerably more efficient.
The VR platform can also be beneficial in production simulation if the concept is constructed in virtual reality before real construction begins. In this field, we would like to determine and test how our preliminary plans and material flows would work, whether our control principles are appropriate, whether the size and location of buffers are well estimated, and where the bottlenecks are. If the data we work with is based on real data and comes from a similar product family or from the same versions, we can turn it to our advantage in further applications. This is an iterative analysis in which engineers have to examine the system from its most basic elements to determine which parameters require further analysis or changes, for example to reduce cycle times. An important requirement is that the simulation should be able to validate our measurements and ideas, for which an easily parameterizable and flexible model is essential.
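The bottleneck hunt described above can be expressed as a tiny throughput calculation: in a serial line, the station with the longest cycle time caps the output of the whole line. The station names and cycle times below are invented for illustration, and the model deliberately ignores buffers, failures and changeovers:

```python
def bottleneck(cycle_times):
    """Return the bottleneck station and the line's hourly throughput.

    cycle_times: dict of station name -> cycle time in seconds.
    In a serial line the slowest station dictates throughput,
    i.e. 3600 s divided by the longest cycle time.
    """
    station = max(cycle_times, key=cycle_times.get)
    throughput_per_hour = 3600.0 / cycle_times[station]
    return station, throughput_per_hour

# Hypothetical line: welding is the slowest step
line = {"pressing": 42.0, "welding": 60.0, "assembly": 55.0}
station, rate = bottleneck(line)
print(station, rate)  # welding 60.0
```

Rerunning such a calculation after each proposed change (for example, after shortening the welding cycle in the simulated line) is exactly the kind of iteration the paragraph above describes.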
Dr. Szabolcs SZÁVAI, acting division director, Engineering Division, Department of Structural Integrity and Production Technologies
Address: H-3528 Miskolc, Iglói u. 2