Presentation of mixed reality software with a HoloLens headset for a nutrition workshop

Microsoft has recently released a mixed reality headset called HoloLens. This semi-transparent-visor headset allows the wearer to view 3D virtual objects projected into his or her real environment. The user can also grasp these 3D objects, which can in turn interact with each other. The framework of this new technology, which takes this physical approach (interactions, collisions) into account, is called mixed reality. We had the opportunity to digitally transform a conventional nutrition workshop for patients awaiting bariatric surgery by designing mixed reality software using the HoloLens headset. This software is called HOLO_NUTRI. In this paper, we present this software and its assessment (acceptance of this technology) by a cohort of thirty patients.

In Section 1.4, we will introduce mixed reality; in Section 1.5, we will review the state of the art of software used in nutritional therapeutic education; in Section 1.6, we will present the software implemented for this mixed reality experience.

Terminology
Bariatric surgery aims at the patient's weight loss. As part of our research, we are interested in the following two surgical solutions: the reduction of the volume of the stomach, or "sleeve" surgery (Fig. 1(a)), and the setting up of a deviation from the normal path of food progression, or "bypass" surgery (Fig. 1(b)).

Conventional workshop of nutritional therapeutic education
The conventional nutritional therapeutic education workshop, hereafter called CONV WORK, is organized around the following nine steps (also summarized in Table 1 for a more global view):
- introduction of the session (Step 1);
- debate around the question "To feed, to eat, when one has undergone bariatric surgery, what does it evoke for you?" (Step 2);
- beginning of the workshop, to discuss with the patients the daily diet to adopt after the intervention (Step 3);
- patients are then invited to compose their menu by choosing printed cards representing different foods in different quantities; the images come from the SU.VI.MAX study [23,24] (Step 4);
- collection by the clinical team of the menus composed by the patients (Step 5);
- collegial discussion of the choices made by each patient (Step 6);
- the dietician then gives each patient advice according to the foods and quantities he or she chose (Step 7);
- dissemination of quantitative and qualitative messages (Step 8): qualitative messages focus on the need to vary the number of dishes, the expected duration of the meal and the importance of the chewing time, while quantitative messages focus on reducing the amount of food compared to a meal before surgery;
- finally, patients complete forms to evaluate the session (assessment of knowledge and satisfaction) (Step 9).

Interests in digitalizing the conventional workshop
We have proposed the digitalization of the conventional workshop CONV WORK described in Section 1.2. This new workshop, hereafter named DIGIT WORK, has the following main objectives:
- a digital version of the menu composition, using a software application that should be rather playful; this application is called HOLO NUTRI;
- a more detailed analysis of the composed menu (several quantities per food, estimation of the meal duration, consideration of the chewing time, etc.); the larger number of foods and quantities available in this digitalized workshop makes it possible to compose more personalized menus;
- an analog, in the computer application, of the patient's gesture when he or she takes a card representing a food and the associated quantity in the CONV WORK workshop.
The first two items above can be achieved using the conventional computer/keyboard/screen/mouse set and standard software development. However, we searched for the most appropriate solution to implement the gesture of catching the digital model corresponding to the printed food card, a gesture which should reinforce this learning [5]. To do so, we chose a solution based on a mixed reality framework.

Mixed reality
To justify the framework of our study, we briefly recall the virtual, augmented and mixed reality paradigms and the hardware most commonly used in each context.

Fig. 2 (a) The HTC Vive virtual reality headset with its controllers. (b) Augmented reality application superimposing virtual objects using markers. (c) Augmented reality application using a tablet to scan a painting; the device must be held with one or two hands. (d) View from a user of the Google Glass device; the projected information is mainly 2D and the user cannot interact with his or her environment. (e) The PokemonGO application on a mobile phone; occlusion problems persist and there is no interaction with the user's environment. Subfigure (a) is from [53], subfigure (b) from [54], subfigure (c) from [55], subfigure (d) from [56] and subfigure (e) from [57].

In the context of virtual reality, for individual use, opaque headsets are the most common. The best-known headsets are the HTC Vive [25] (Fig. 2(a)) and the Oculus Rift [44]. The main advantage of a virtual reality application is the ability to simulate an unrealistic environment (large-scale environments, etc.) and/or non-real actors (avatars, etc.) and to provide immersive visualization (unrelated to the real environment). The main drawbacks are the discomfort associated with virtual reality headsets (cybersickness) [30] and a relatively high latency (the time between the triggering of an action and its visual result). The wearer of this type of headset mainly interacts through handles (controllers) or gloves (Fig. 2(a)). Other, rather collective, devices exist, such as CAVEs (Cave Automatic Virtual Environments) [11]. In brief, virtual reality headsets have an opaque visor that prevents users from seeing their real environment, and the content of the application is manipulated through controllers.
With regard to augmented reality [4], the interest is to superimpose the visualization of 3D virtual objects on that of the real environment. Most applications use markers to position the virtual objects (Fig. 2(b)). The most common devices are smartphones and tablets, for which the real environment is filmed by an embedded camera, as well as glasses (for example, Epson Moverio [15], Google Glass [21], etc.) and headsets, for which the environment is perceived through transparent glasses or a semi-transparent visor [37,41]. The main downside of tablets and smartphones is the need to hold them (Fig. 2(c)). If the application does not use markers, more sophisticated methods are necessary (such as registration), which are not always exploitable on, or are still under development for, light or inexpensive devices (or low-quality cameras). This is why most of these devices (tablets, glasses, etc.) mainly deal with 2D information (Fig. 2(d)). Note also that until very recently, occlusion problems related to the real environment were not solved on smartphones [64]; in other words, there was no real feeling of depth (Fig. 2(e)). Interactions are performed using these same devices or external cable-connected controllers.

Fig. 3 (a) Example of the environment scanning performed by the HoloLens; the environment is decomposed into multiple triangles in order to create a 3D model of the scene where interactions will be allowed. (b) Photo montage illustrating a user wearing a HoloLens mixed reality headset and the content as seen by him in his own environment (boxed). The blue circle indicates the HoloLens cursor as seen by the user in the scene. The blue arrow indicates the gaze direction of the user. Subfigure (a) is from [58].

Microsoft recently released the HoloLens headset [38]. In addition to scanning the real environment around the user (Fig. 3(a)), this headset displays virtual 3D objects (called holograms) superimposed on the visualization of the real environment, and lets the user interact with these objects thanks to the headset sensors (Fig. 3(b)).
It is therefore the consideration of physics in the manipulation of holograms (collisions, etc.) that mainly distinguishes mixed reality from augmented reality. The user can interact with his or her environment using events from the Gaze, Gesture and Voice (GGV) triplet [20], which trigger actions programmed in the application to respond to these events. Indeed, the integrated gyroscope gives the movements of the head (Gaze); we highlight that this notion of gaze is defined by the orientation of the head (Fig. 3(b)), the eyes not being tracked. The depth cameras of the headset allow it to follow some of the user's gestures (fingers pressed, hands opened, etc.) (Gesture), and a microphone acquires the words or sounds emitted by the user (Voice).
Let us give more detail about gaze interactions. When the user looks at an object and keeps his or her gaze on it (represented by the blue arrow in Fig. 3(b)), and if an interaction with this object has been designed in the application, it becomes possible to interact with it; a blue cursor then appears on the facet of the 3D object closest to the headset along the user's gaze direction (thanks to the environment scanning, Fig. 3(a)).
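In Unity, this gaze-and-cursor mechanism is typically implemented as a ray cast from the camera, which the HoloLens drives with the head pose. The following script is an illustrative sketch of this behavior, not the actual HOLO_NUTRI source; the class name, fields and distance value are our own assumptions.

```csharp
using UnityEngine;

// Illustrative sketch: place a cursor on the surface the user is looking at.
// On HoloLens, Camera.main follows the head, so a ray cast from the camera
// along its forward direction approximates the "gaze" described above.
public class GazeCursor : MonoBehaviour
{
    [SerializeField] private GameObject cursor;   // small disc rendered in the scene
    [SerializeField] private float maxDistance = 5.0f; // assumed gaze range (meters)

    private void Update()
    {
        Transform head = Camera.main.transform;
        RaycastHit hit;

        // The spatial mapping mesh (Fig. 3(a)) and the holograms both carry
        // colliders, so the ray stops on the closest facet along the gaze.
        if (Physics.Raycast(head.position, head.forward, out hit, maxDistance))
        {
            cursor.SetActive(true);
            cursor.transform.position = hit.point;
            // Orient the cursor flat against the facet that was hit.
            cursor.transform.rotation = Quaternion.LookRotation(hit.normal);
        }
        else
        {
            cursor.SetActive(false);   // nothing gazed at: hide the cursor
        }
    }
}
```

The HoloToolkit library provides ready-made gaze and cursor components built on the same principle; this sketch only makes the underlying mechanism explicit.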

State of the art of software in nutritional therapeutic education
We will review some applications in relation to nutritional therapeutic education, according to the three frameworks of virtual, augmented and mixed realities.

Virtual reality experiences
Virtual reality experiences mainly consist of simulating meals or more specific situations (feeling of satiety, choice of sweet foods, etc.).
The simulation of meals may have more or less specific therapeutic aims. In [29], a meal simulation is proposed with, in addition to a virtual reality headset, an odour diffuser, a hearing device (sound of chewing) and a gripper (for grasping the food); the objectives are to facilitate the management of nutritional disorders or to circumvent food allergies. Another meal simulation [12] uses an electrical probe (salty sensation), a thermal probe (spicy or menthol sensation) and an electrode (chewing sensation) in addition to a virtual reality headset.
As for more specific situations, a serious game has been proposed in [48], with a virtual reality headset, to take into account the differences in human body reactions depending on sugar consumption (see also [49]). We can list other works: for example, modifying the visualization, inside the headset, of the volume of food ingested in order to accelerate the impression of satiety [3]; a simulation focusing on food selection [60]; or the modification of the body shape [18]. In [46], the validity of using virtual reality for assessing parents' child-feeding behavior is questioned. Another research work, in a neuroscience framework, aims to decouple the desire to eat from the "intrinsic survival" notion of feeding [65]. Other experiments using the term virtual reality are in fact carried out without immersion: for the treatment of obesity in an individual way in front of a screen [45], or for a (food) shopping experience in a 3D supermarket [35].

Experiences in augmented reality
Augmented reality experiments mainly consist of providing nutritional information on food products, either occasionally and individually or in the form of a medical follow-up.
We can list projects that give information about the composition of meals, using scanners (camera sensors of smartphones and/or tablets) to visualize nutritional information [6], to recognize a fruit and its composition [26], to provide a nutritional response according to a predefined diet (preferences or intolerances) [17] or, more specifically, for sweet drinks [16]. Other hardware also provides access to this information, such as the connected watches in [9], which read information on food packaging through RFID (Radio Frequency IDentification) tags. In [14], a system for helping with the purchase of food products in a supermarket is proposed, based on individual or nutritional criteria. Finally, other projects aimed at nutritional education in general are presented in [36], such as managing children's appetite for vegetables.
Regarding medical follow-up, works have been proposed for different categories of people (diabetic patients, pregnant women, etc.): analysis of the management of care for diabetic patients through mobile applications [13,50], follow-up of the meals taken over several days by pregnant women [2], and estimation of the delivered portions [51,52].

Mixed reality experiences
The HoloLens headset is used in the medical context mainly for two types of applications: teaching [10, 22,63] or visual assistance in surgery (arterial network [47], prostate [59], shoulder [8]). To our knowledge, only the reference [43] (nutritional diseases -overweight management) makes a link between HoloLens and the context related to our work.
In the experience described in this paper, as previously stated, the main objective of our application is to compose a meal while reinforcing the choice of foods. To do so, we have chosen to provide gesture-based learning reinforcement, which justifies the mixed reality framework. In addition, the qualitative and quantitative messages do not focus on the nutritional composition of each food (unlike nutritional scanners) but on the composition of the entire meal. Here, the patient-specific follow-up is limited to verifying that the qualitative and quantitative messages have been retained at different times after the experiment.

Technological justification
It is therefore this mixed reality technology, using the HoloLens headset, that we have chosen for the digital transformation of the CONV WORK workshop, for the following reasons:
- patients continue to see the real environment around them through the semi-transparent visor, which avoids the additional stress that can occur with opaque virtual reality headsets (Fig. 3(b));
- patients must wear the headset for about thirty minutes in a one-and-a-half-hour session, which seems to us too long for users uninitiated to virtual reality.
We also assumed that mixed reality would reinforce learning in the DIGIT WORK digital workshop. Indeed, unlike with virtual reality headsets, here the user acts and sees the movement of his or her arm, i.e., the action of catching a food with the hand. Unlike phones and tablets, which require at least one hand of the user or of an assistant (augmented reality framework), here the user can perform the gesture freely with the HoloLens headset on his or her head (although the gesture has to be performed in an area defined by the headset sensors) (Fig. 3(b)).

Plan of the article
In Section 2, we present the developed application and its features. Section 3 presents the framework in which this application was used and the results of the evaluation of its use by a cohort of thirty patients. In Section 4, we discuss the first conclusions of this study.

List of expected features
The specification of our application is composed of two parts. The first part must allow the patient to clearly understand the type of operation he or she will undergo (by visualizing the stomach before and after surgery). The second part must provide an interactive composition of a standard menu in two iterations:
- the first iteration gives little guidance to the patient and displays a first analysis (qualitative and quantitative messages) at the end;
- the second iteration displays more information (visualization of the stomach in transparency, showing its filling as food is ingested, and display of the number of portions), which should allow the patient to better compose his or her menu.
During both parts of the application, the user is requested to be as active as possible: for the first part, the user must remove a portion of the stomach depending on the envisaged intervention; for the second part, the patient must compose his or her meal by choosing foods one after the other, and their ingestion is then simulated by placing each food in a "virtual" jaw, which triggers a chewing animation and prevents the patient from selecting a new food until the chewing time has elapsed.
We have chosen to handle this type of interaction with mixed reality in order to have maximum impact on the patient (learning by gesture): it is with his or her hands that the patient will interact, both to remove a part of the stomach (first part) and to choose foods and place them into the virtual jaw (second part).

Development environment
Whatever the type of reality envisaged, the development of computer applications for this hardware relies either on proprietary SDKs (Software Development Kits) or on interactive 3D graphical application development engines such as Unity [61] and Unreal Engine [62], originally designed for video game development but now also addressing wider professional sectors (automotive; AEC, for Architecture, Engineering and Construction; etc.).
For our application, we first designed several 3D resources (3D models and animations made with the Blender software [7], see Section 2.3). The development was carried out with the Unity engine [61] (import of 2D images, 3D models and animations). Several scripts were designed to describe the desired user interactions with the 3D objects; these scripts (integrated in our Unity project) were written in the C# language within the Microsoft Visual Studio programming environment [42], using the HoloToolKit library [39] specialized in mixed reality for the HoloLens headset. Compiling all these assets produces our software HOLO NUTRI (which can be tested on a PC through an emulator). By plugging the HoloLens headset into the PC, the application can then be deployed into the headset. The user, wearing the headset, can then run HOLO NUTRI from the HoloLens main menu without the headset being connected to a PC (by either cable or WiFi).
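To give an idea of what such a HoloToolKit interaction script looks like, here is a minimal sketch (illustrative only, not the HOLO_NUTRI source; the class name is our own). HoloToolKit dispatches gaze-targeted gesture events to components implementing its input interfaces:

```csharp
using UnityEngine;
using HoloToolkit.Unity.InputModule;

// Illustrative sketch of a HoloToolKit interaction script: a hologram that
// reacts to the "air tap" gesture when it is targeted by the user's gaze.
// Attach this component to a 3D object that carries a collider.
public class TappableHologram : MonoBehaviour, IInputClickHandler
{
    public void OnInputClicked(InputClickedEventData eventData)
    {
        // Called when the user gazes at this object and performs the tap
        // gesture (thumb and forefinger pressed together, then released).
        Debug.Log(gameObject.name + " selected");
        eventData.Use();   // mark the event as handled
    }
}
```

The application logic of HOLO NUTRI (food selection, stomach manipulation, etc.) is built from handlers of this kind attached to the relevant 3D objects.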

Digital contents
The different 3D models (stomach and jaw) were made with the Blender software [7]; these models must not have too many polygons, so as to maintain the smooth running of the software [34]. For the food selection stage, foods are represented by 2D images from the SU.VI.MAX study [23,24]. Once a food is selected, it appears as a cube, which is easier for the patient to handle than a 2D image. The size of this cube reflects the quantity selected for that food.
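The mapping from a selected portion to a cube can be sketched as follows (an illustrative reconstruction, assuming hypothetical portion categories and edge lengths; the actual HOLO_NUTRI values are not published here):

```csharp
using UnityEngine;

// Illustrative sketch: a selected food is represented by a cube whose
// edge length reflects the chosen portion size, and whose faces show
// the SU.VI.MAX 2D image of the food. Sizes below are assumptions.
public class FoodCube : MonoBehaviour
{
    public enum Portion { Small, Medium, Large }

    public void Configure(Texture foodImage, Portion portion)
    {
        // Map the portion to an edge length (Unity world units, i.e. meters).
        float edge;
        switch (portion)
        {
            case Portion.Small:  edge = 0.05f; break;
            case Portion.Medium: edge = 0.08f; break;
            default:             edge = 0.12f; break;
        }
        transform.localScale = new Vector3(edge, edge, edge);

        // Display the food image on the cube's material.
        GetComponent<Renderer>().material.mainTexture = foodImage;
    }
}
```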

First part: basic simulation of the surgical intervention
The first part proposes that the user choose the type of intervention he or she will undergo, either by using buttons in the application (Fig. 4(a)) or by scanning a leaflet with the HoloLens (Fig. 4(b)), knowing that there is one leaflet per type of intervention.
Then a virtual stomach is displayed, with which the user will interact (Fig. 5). For example, in the sleeve operation (Section 1.1 and Fig. 1(a)), the user, by pinching his or her thumb and forefinger, selects the part of the stomach that is supposed to be removed and moves it aside from the main part of the stomach. This step was not present in the conventional workshop CONV WORK. Once this action is done, the second part of HOLO NUTRI starts.

Second part: meal composition
The second part displays a virtual buffet of about thirty different foods. Given the amount of information, and in order to render it in the most visible way, this buffet is arranged as a vertical half-cylinder, each column presenting three different portion sizes (small, medium, large) of the same food (Figs. 6 and 7).
Fig. 5 User interaction with the two parts (red and gray) of the virtual stomach during the second step of HOLO NUTRI. The user virtually picks the gray shape of the stomach to take it away from the main red shape, which represents the stomach part retained during surgery. From the user's point of view, his or her hand and the object to be moved are aligned (the photo, taken with the HoloLens camera, induces a misalignment).

The user chooses a food by looking at it through the transparent visor (HoloLens' gaze); the user can then grab it by pinching his or her fingers together and releasing them afterwards (HoloLens' gesture). The selected food appears as a cube in front of the 3D jaw (Fig. 8(a)). The user grabs the food once again and brings it to his or her own mouth; an animation is then triggered (an opening and closing cycle of the jaw, Fig. 8(a) and (b)), forbidding any other action from the user. This essential step is designed to draw attention to the importance of the chewing time. The user repeats this procedure as long as he or she believes it necessary to add foods to complete the meal. When the user is done, a first recap is shown with the number of foods selected and the associated portion sizes (Fig. 9). In the next step, qualitative (Fig. 10(a)) and quantitative (Fig. 10(b)) messages are displayed (matching those delivered in the conventional workshop). The qualitative messages stress the necessity of having a diet as varied as possible and of observing the chewing time (Fig. 10(a)). The quantitative messages point out the significant stomach volume reduction (Fig. 10(b)). The user is then allowed to perform a second iteration of this part of the workshop. For this second iteration, supplementary indicators (number of swallowed foods (Fig. 11), transparent stomach (Fig. 12), etc.) may help him or her to better compose the meal.
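The blocking behavior during chewing (forbidding any further selection until the chewing time has elapsed) maps naturally onto a Unity coroutine. The sketch below is an illustrative reconstruction; the script name, animator parameter and duration are assumptions, not the HOLO_NUTRI values:

```csharp
using System.Collections;
using UnityEngine;

// Illustrative sketch: when a food cube is brought to the virtual jaw,
// a chewing animation plays and any further selection is blocked until
// the chewing time has elapsed (the duration below is an assumption).
public class ChewingController : MonoBehaviour
{
    [SerializeField] private Animator jawAnimator;     // open/close cycle
    [SerializeField] private float chewingTime = 10f;  // seconds, assumed

    public bool IsChewing { get; private set; }

    public void Swallow(GameObject foodCube)
    {
        if (IsChewing) return;        // ignore foods while chewing
        StartCoroutine(Chew(foodCube));
    }

    private IEnumerator Chew(GameObject foodCube)
    {
        IsChewing = true;             // selection scripts test this flag
        jawAnimator.SetBool("Chewing", true);
        Destroy(foodCube);            // the food is "ingested"

        yield return new WaitForSeconds(chewingTime);

        jawAnimator.SetBool("Chewing", false);
        IsChewing = false;            // the buffet becomes selectable again
    }
}
```

The food-selection handlers simply check `IsChewing` before accepting a new grab, which enforces the pedagogical pause on chewing time described above.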

Digital workshop presentation
The HOLO NUTRI software has been used by patients during the DIGIT WORK workshop, whose goal was to make patients aware of the post-surgery diet modifications induced by the bariatric surgery they will undergo. This workshop is composed of nine steps (also depicted in Table 2 for a more global understanding):
- session introduction (Step 1), identical to Step 1 of the CONV WORK workshop;
- debate around the central question "To feed, to eat, when one has undergone bariatric surgery, what does it evoke for you?" (Step 2), identical to Step 2 of the CONV WORK workshop;
- presentation of the computer scientist team (Step 3);
- presentation of the hardware and its use (Step 4), through a video; a dedicated software application named LEARN EX has been developed by us to introduce the basic gestures needed to control the HoloLens, essential to the good use of HOLO NUTRI (Fig. 13);
- the computer scientist team gives information about this LEARN EX learning exercise (Step 5);
- discussion between all patients about the post-surgery daily diet (Step 6), identical to Step 5 of the CONV WORK workshop;
- patients are then invited to compose their own meal with the HOLO NUTRI software (Step 7);
- the nutritionist delivers advice depending on the foods and quantities selected by each patient (Step 8);
- feedback forms are then filled in by patients in order to evaluate the session (Step 9), identical to Step 9 of the CONV WORK workshop.
We have set up a specific experimental procedure, described below.

Experimental procedure
The software presented in this article has been used in a hospital-hosted workshop aimed at helping patients prepare for bariatric surgery (Centre Hospitalier Émile Roux, Le Puy-en-Velay, France); several sessions of one hour and a half were organized. For each of them, a group of three patients simultaneously handled first the HoloLens headset and then the HOLO NUTRI application. We purposely limited the number of participants to three for the following reasons: the size of the experimental room (Fig. 14), the availability of only three HoloLens headsets, and the involvement of three researchers to help patients in the use of both the equipment and the software. To allow this simultaneous participation of three patients, we used the HoloLens App application [40] from the Microsoft Store to connect (over WiFi) the three HoloLens headsets to three different computers, giving access to a video feedback of each patient so as to better help them in their use of both the hardware and the software (Fig. 14).

Application evaluation by the patients
HOLO NUTRI has been used and assessed over 10 sessions by a total of 30 patients (27 women and 3 men) evenly distributed among those sessions. We provided a series of 12 questions to patients in order to get their feedback on this mixed reality experience. These questions have been gathered into five categories, and the answers are shown in Tables 3 to 7. The questions focused on the patients' technological knowledge (Table 3, Q1), on the physical feeling during the experience (Table 4, Q2), on the assessment of the software use (Table 5, Q3-Q7), on the assessment of this mixed reality technology compared to a more conventional one with the computer/keyboard/screen/mouse set (Table 6, Q8-Q11) and on the expected educational benefit (Table 7, Q12).
We present these five groups of results below. • (Group 1, Table 3). 80% of patients state that they have no technological skill at all in the use of augmented reality hardware or applications. We can conclude that, for most of them, this experience is their first with this kind of mixed reality equipment. • (Group 2, Table 4). Half of our patient cohort did not feel any general discomfort (Q2a). Only 2 patients among 29 (1 missing datum) reported general tiredness following the experience (Q2b). 16 patients declared feeling a slight to severe ocular tiredness (Q2c). Only one patient felt nauseous (Q2d). Of the 30 patients, 3 found the headset heavy and 2 reported headaches, of whom 1 declared a severe ocular tiredness (Q2c). We can conclude that the physical feeling of patients is rather satisfactory, considering that this experience is the first with mixed reality for most of them.
• (Group 3, Table 5). 28 patients report not finding the workload too tiring (Q3). 24 patients felt rather relaxed, satisfied and self-confident while using the HOLO NUTRI software (Q4). The duration of the experience was perceived as rather satisfying by 28 patients (as a reminder, the session was planned to last one and a half hours) (Q5). Three patients stated they would not repeat this experience (Q7): two of them for the reason previously stated in the question about recommending the experience to another person (Q6), and the last patient gave us no feedback. In summary, this experience was rather tough for three patients (difficulties in performing the required gestures, leading to a certain annoyance and preventing them from fulfilling the workshop goals, i.e., the complete run of two iterations of the second part of HOLO NUTRI). For the rest of the cohort, this experience was perceived as satisfying to very satisfying, which is rather encouraging for repeating this kind of mixed reality experience in a medical context, despite technologies that could be seen as too modern and not user-friendly. This possible bias seems to be refuted by the results we present on the assessment of this new technology compared to a more conventional approach (Table 6).
• (Group 4, Table 6). 28 patients were not too disturbed by the superimposition of virtual elements onto their real environment (Q8). To our great surprise, a majority of patients believed they were handling real objects (14 "yes absolutely" and 4 "yes rather") during the use of HOLO NUTRI (Q9). Regarding the patients' focus on the workshop goal, more than 75% of them stated they were not troubled by the use of this technology (17 "not at all" and 8 "not really"); 5 patients stated that the equipment disturbed their focus (Q10). Finally, 60% of the patient cohort declared being rather interested in having a workshop with this kind of equipment and setup rather than with a more conventional approach (the computer/keyboard/screen/mouse set, or a tablet) (Q11). In summary, according to the feedback we got from the patients themselves, the use of mixed reality in this awareness workshop on post-bariatric-surgery nutrition was rather well perceived and well received.
• (Group 5, Table 7). A question related to the digitalization of this workshop is whether we can expect a better educational benefit with a mixed reality headset and an associated application than with a classic information technology approach (computer/keyboard/screen/mouse set). We only touch on this question in this communication; it is presented in more detail in [5]. The large majority of the patient cohort, 93% (all but one patient, with one missing datum for this question), found a real motivation for learning the delivered nutritional messages in the handling of virtual objects (i.e., meal composition with 3D foods).

Discussion and conclusion
We will start by listing the positive and negative aspects we have noted during this mixed reality experience in a medical context. We will consider five criteria: 1. hardware, 2. software, 3. developer, 4. clinical, 5. patient. We will conclude this section with a discussion and with perspectives for this application and its framework. 4. In the clinical context, the HoloLens headset is currently used in two main ways:
- a purely educative use (e.g., the headset may work as a camera broadcasting a surgeon's point of view to an audience of students, in order to provide a better understanding of the surgical gestures to perform);
- during a surgical operation, a visualization of pre-operative data given to the surgeon (e.g., the patient's clinical file, 3D models of the patient's organs, etc.); this may represent a real help for the surgeon in his or her practice [19,27,28,31,47]. This visualization is mainly provided without any registration.
5. This new technology allows researchers to address new problems by bringing efficient and innovative answers; in our experience, we can note a reinforced learning from the gesture [5], which ultimately brings a real benefit to the patient.
Negative aspects: 1. The hardware carries some limitations, such as its high financial cost, a low display resolution, a narrow field of view and a rather heavy weight. 2. The release of the HoloLens 2 less than two and a half years after the first version in France raises the need to update the HOLO NUTRI application. 3. For the developer, the learning curve is steep, because it requires knowledge of Unity, of several libraries still under development and of their dedicated documentation, which is not kept up-to-date on a regular basis. A specific development effort is required to take into consideration the specificity of each environment where the application will be used: in our case, a specific distance was tuned to allow patients to easily grab the virtual 3D foods, and the size of the semi-cylinder was also tuned to fit the HoloLens field of view [34]. 4. Non-computer-scientist members of this experience would not feel comfortable enough with this kind of hardware to perform the whole experience autonomously. 5. On our side, it was not possible to leave this kind of hardware at the free disposal of the patients; therefore, we could not consider performing this experience at home (because of the technical difficulties of handling this hardware without assistance), and even less a long-term follow-up, unlike augmented reality based on the sole use of a smartphone (as described in Section 1.5), which would give more autonomy to the patients.
To conclude, we carried out a translational research study (from the clinical needs towards the patient) with limited means (one student, three months of development, one Unity licence and one computer for programming), not forgetting the purchase of the HoloLens headsets. Within the frame of this nutritional workshop, we recall that 60% of the patients who participated in this experience assessed that this mixed reality approach was preferable to a more conventional one (computer/keyboard/screen/mouse set). Thus, we can easily extend this kind of work to other pathologies or to similar prevention workshops. We would like to insist on the fact that it would have been really hard to design the same application without the HoloLens hardware. The use of 3D models in our application is not limited to mere visualization, spatial positioning in a virtual environment or scaling of these models; it tends to make the patients as active as possible by giving them the opportunity to handle the models directly. This gesture-reinforced learning is reviewed in [5]. Our future work will be to add nutritional information for each food, in order to give a more detailed analysis of the meal composed by the patient than the one currently proposed (only the number and size of the selected portions are given). Finally, our experience with this kind of application development for the HoloLens in a medical context has also allowed us to develop a prototype of needle insertion assistance during hemodialysis sessions [32] and another one to assist surgeons during trocar placement [33].