Interaction between Real and Virtual Worlds

Virtual reality (VR) technology, long associated solely with video games, has since experienced a major boom, particularly in the field of training. The "democratisation" of VR headsets has played no small part in this: the number of headsets sold exploded from 5 million units in 2014 to 68 million in 2020, their cost has dropped and the technology itself has evolved. We now speak of immersive technologies, encompassing virtual, augmented and mixed realities (VR/AR/MR). UTC has been a pioneer since it introduced teaching in virtual reality as early as 2001 and launched, within its Heudiasyc laboratory, research at both the fundamental and application levels. The interaction between the real and virtual worlds opens up immense fields of application, particularly in relation to robotics. For example, we can interact with a drone that maps the damage caused by a natural disaster in places that have become inaccessible. Obviously, these new possibilities can also be used for malicious purposes, which raises several ethical issues. UTC's academics are well aware of this.

Professor Philippe Bonnifait has been director of the UTC-Heudiasyc Laboratory, created in 1981, since January 2018. It is a cutting-edge laboratory dedicated to the digital sciences, specialising in scientific methods related to artificial intelligence, robotics, data analysis, automation and virtual reality.


Created in 1981 and associated with the CNRS since its foundation, the UTC-Heudiasyc Laboratory (Heuristics and Diagnosis of Complex Systems) is attached to the INS2I (Institute of Information Sciences and their Interactions), one of the ten CNRS national institutes. It was previously headed by Prof. Ali Charara, the former director.

Can we single out a particularity of UTC-Heudiasyc? "It is a laboratory that brings together research scientists in computer science and computer engineering, two specialities that are often addressed separately; we are one of the first laboratories in France to have had this vision," explains Philippe Bonnifait.

As the world of computer science is, by nature, constantly evolving, the themes addressed by the laboratory's lecturer-cum-research scientists have themselves evolved, and quite considerably so. The proof? The development of research in the field of virtual reality. "A development that coincided with the arrival at UTC of Indira Thouvenin in 1995, a personality who is very well known in her field. The projects led by Indira involved strong interaction with cognition and the human and social sciences, as well as numerous collaborations with other laboratories, notably UTC-Costech," emphasises Prof. Bonnifait.

Since the restructuring of the laboratory in January 2018 and the downsizing from four to three teams, namely CID (Knowledge, Uncertainty, Data), SCOP (Safety, Communication, Optimisation) and SyRI (Interacting Robotic Systems), various interdisciplinary themes have been identified. This is the case of VR (virtual reality), an eminently cross-disciplinary subject, which thus 'straddles' the SyRI and CID teams. "Within CID, we are interested in adaptive systems and the personalisation of systems, themes where we find a number of elements linked to immersive environments, one of Domitile Lourdeaux's research fields. At SyRI, where Indira Thouvenin is based, we are particularly interested in robotics, with subjects such as robot autonomy (smart autonomous vehicles and drones, in our case), control, perception and data fusion, and interacting multi-robot systems," explains Philippe Bonnifait. In short? "In the case of multi-robot systems, there are three types of interaction: with the environment, with other robots and, finally, with humans. In my opinion, it is very important to put the human being back at the centre of all the projects we develop. Take the concrete case of the windscreen of the future, one of Indira's projects: the aim is to create mixed reality on the windscreen with head-up displays, an application intended in the long term for the autonomous vehicle," he explains.

These interactions are evolving as we talk more and more about symbiotic autonomous systems. "We are moving towards the symbiosis of intelligent machines with humans in a large number of areas. One example? The Xenex robot, an American invention, which has been deployed in a number of hospitals for disinfecting rooms using UV light. This is very useful, especially in these times of Covid," he says. While in this case we are dealing with a truly autonomous machine, this is not the case in all sectors. "Let's take drones. They always need a remote pilot, and what we are trying to do at Heudiasyc is to have a remote pilot for several drones; a remote pilot who can take control in the event of a problem, because a drone in a crowd, for example, can cause a lot of damage. This is also the case for the autonomous vehicle, which, even in the long term, will need the vigilance of the driver. However, there is a middle way open to us: moving towards this human-machine symbiosis so that we can imagine new ways to drive," adds Philippe Bonnifait.

The interaction between the real and virtual worlds opens up huge fields of application. For the better (the drone that monitors the state of drought in certain areas) and for the worse (certain military applications). This poses eminently ethical problems. "For a long time, ethics remained rather remote from our concerns, but the technological developments of the last ten years, with dangerous or even lethal uses, have put ethics back at the centre of reflections on AI (artificial intelligence)."

As a lecturer-cum-research scientist, Indira Thouvenin set up a virtual reality (VR) and, more recently, an augmented reality (AR) activity when she arrived at UTC in 2001, an activity that spans course work as much as research.


Initially attached to the UTC-Roberval laboratory, Indira Thouvenin joined the UMR CNRS UTC-Heudiasyc Lab in 2005. Among her fields of research? "I am working on informed interaction in a virtual environment, in other words 'intelligent' interaction where VR adapts to the user. This involves analysing the user's gestures, behaviour and gaze. We then define descriptors for his or her level of attention, intention, concentration or understanding, for example. I also work on interaction in an augmented environment. In this case, the aim is to design explicit and implicit assistance, the latter being invisible to the user," she explains.

This research is leading to applications in the fields of training, health, education and industry, among others. "The descriptors allow us to define the user's attitude, in particular their level of concentration or distraction, and their skills: do they already have experience, or are they a beginner? From all this data, we make a selection of adaptive sensory feedback, since VR uses visual, sound, tactile or haptic feedback. This sensory feedback helps the user to understand the environment without imposing every cue, only those that are most appropriate to them. In short, it is a highly personalised modelling of interaction in a virtual environment," explains Indira Thouvenin.
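
To make the idea concrete, here is a minimal sketch of what such adaptive feedback selection could look like; the descriptor names, thresholds and feedback channels are invented for illustration and do not come from the laboratory's actual models.

```python
# Hypothetical sketch: choosing which sensory feedback channels to activate
# from a few user descriptors. Names and thresholds are illustrative only.
from dataclasses import dataclass

@dataclass
class UserDescriptors:
    attention: float   # 0 (distracted) .. 1 (fully focused)
    expertise: float   # 0 (beginner)   .. 1 (expert)
    workload: float    # 0 (idle)       .. 1 (overloaded)

def select_feedback(d: UserDescriptors) -> list:
    """Return the feedback channels judged most appropriate for this user."""
    channels = []
    if d.attention < 0.4:
        channels.append("visual_highlight")   # draw the gaze back to the task
    if d.expertise < 0.5:
        channels.append("audio_guidance")     # step-by-step hints for beginners
    if d.workload < 0.7:
        channels.append("haptic_cue")         # extra cues only if not overloaded
    return channels or ["none"]

print(select_feedback(UserDescriptors(attention=0.3, expertise=0.2, workload=0.5)))
# ['visual_highlight', 'audio_guidance', 'haptic_cue']
```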

A task that calls for lots of simulation and modelling? "For my part, I work on three simulation platforms dedicated respectively to the railway, the autonomous vehicle and VR. The latter consists of two main pieces of equipment: VR headsets intended, in particular, for teaching, and the Cave (cave automatic virtual environment, i.e. an immersive room), in which you can visualise at a 1:1 scale. We have a Cave with four sides, each supporting a screen with a rear-projection system. The screens are displayed in stereoscopy, or 3D. The Cave allows you to see your own body, whereas in a headset you have to reconstruct a virtual body," explains Yohan Bouvet, head of Heudiasyc's simulation & modelling department.

Today, virtual reality and augmented reality are booming in various fields, hence the multiplication of projects at both the academic and application levels. A flagship augmented reality project? "We are working with Voxar, a Brazilian laboratory specialising in augmented reality, to develop an augmented windscreen for semi-autonomous cars. The project is the subject of Baptiste Wojtkowski's doctorate, as part of the chair of excellence on intelligent surfaces for the vehicle of the future, financed by Saint-Gobain, the UTC foundation, the FEDER (European Regional Development Fund) and the Hauts-de-France region," she points out.

What does this mean in concrete terms? "It will be a question of defining what we will visualise on this 'future window'. Should there be visual feedback from AR? How and when? When does the user need to understand the state of the robot (the vehicle), and what does the vehicle understand about the user's actions? In real time, we will not display the same feedback all the time but take into account the user's fatigue, concentration or lack of concentration," explains Indira Thouvenin.

Other projects in progress? "In particular, there is a project on 'social touch' between a virtual agent embodied in the Cave and a human. It is funded by the ANR, led by Télécom Paris and brings together UTC-Heudiasyc and ISIR (the UPMC robotics laboratory). In this project, called 'Socialtouch', Fabien Boucaud is doing his PhD and developing the agent's decision model. Finally, the last project underway is Continuum. The idea? To bring together the majority of VR platforms in France, thirty in all, in order to advance interdisciplinary research between human-computer interaction, virtual reality and the human and social sciences," she concludes.

Pedro Castillo is a CNRS research scientist at the UTC-Heudiasyc Lab and a member of the Robotic Systems in Interaction (SyRI) team. He is specialised in automatic control applied to robotics. His research focuses, in particular, on the automatic control of UAVs: autonomous UAVs but also, more recently, virtual UAVs.


Arriving from Mexico with a scholarship, Pedro Castillo started a PhD thesis at UTC in 2000 in automatic control, more particularly on the automatic control of drones.

His thesis won him the national prize for the best thesis in automatic control in early 2004. During those years, he did a series of post-docs, in the United States at the Massachusetts Institute of Technology (MIT), in Australia and in Spain, then applied for a position with the CNRS, which he joined in 2005. He was appointed to the Heudiasyc joint research unit, where he continued his research on the control of miniature drones. "As early as 2002, during my thesis, we conducted the initial tests. We were the first team in France to work on this subject at that time and one of the first to develop an autonomous four-rotor drone," he explains.

This explains the recognition that the laboratory enjoys in this field, in terms of both theoretical and experimental research. "UTC-Heudiasyc is known for having developed fundamental platforms. It was during my thesis that we started to develop experimental platforms dedicated to flight modes in order to validate the theoretical research we were carrying out. As early as 2005, we worked on setting up a common platform for the validation of aerial drone control systems, which was completed in 2009," he explains.

While many researchers are placing their bets on autonomous drones, Heudiasyc has made a different wager. "We realised that we cannot leave all the work to a drone. There may be situations that it will not be able to handle. In this case, the human must be able to take over. In robotics, we talk about a control loop in which the human can interact with the robot at any time," says Pedro Castillo.
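
As a rough illustration of that idea, here is a minimal sketch of a human-in-the-loop control cycle; the class and function names are hypothetical stand-ins and not the laboratory's actual drone software.

```python
# Illustrative sketch of a control loop where the remote pilot can take over
# at any time. DroneState, autopilot_command and the thresholds are invented.
from dataclasses import dataclass
from typing import Optional

@dataclass
class DroneState:
    altitude: float   # metres
    battery: float    # remaining charge, 0..1
    fault: bool = False

    def is_nominal(self) -> bool:
        return not self.fault and self.battery > 0.2

def autopilot_command(state: DroneState) -> str:
    # Placeholder autonomous policy: hold altitude while everything is nominal.
    return "hold_altitude"

def control_step(state: DroneState, operator_override: Optional[str]) -> str:
    """One loop iteration: the human command, when present, always wins."""
    if operator_override is not None:
        return operator_override      # manual take-over
    if not state.is_nominal():
        return "return_to_home"       # safe fallback pending human input
    return autopilot_command(state)

print(control_step(DroneState(altitude=10.0, battery=0.8), None))        # hold_altitude
print(control_step(DroneState(altitude=10.0, battery=0.1), "land_now"))  # land_now
```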

Until recently, rather classical approaches dominated the field of human-machine interaction: those using joysticks or remote operation, which, thanks to feedback from the system to the operator, allow a remote robot to be controlled. The idea of introducing virtual reality? "It is to add visual and audio feedback. In a word: to see what the robot sees by equipping it with cameras, and to be able to hear, for example, in the event of a motor problem. This makes the drone easier to control and therefore to navigate," he explains.

But Pedro Castillo and Indira Thouvenin decided to go further and explore a new theme: virtual robotics. "We decided to represent our aerial drone test room in the Cave, i.e. with highly immersive technology. We also created a small virtual drone that can be manipulated, given different trajectories and sent on different missions. It is a sort of assistant to the real drone, since the latter will then carry out all the tasks that the operator indicates to the virtual drone," explains Pedro Castillo.
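
A minimal sketch of this "virtual drone as assistant" pattern, assuming hypothetical VirtualDrone and RealDrone classes (the real platform is of course far more involved):

```python
# Hedged sketch: the operator edits a mission on a virtual drone in the
# immersive room, and the real drone then executes the validated waypoints.
# Class and method names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class VirtualDrone:
    waypoints: list = field(default_factory=list)

    def add_waypoint(self, x: float, y: float, z: float) -> None:
        self.waypoints.append((x, y, z))

@dataclass
class RealDrone:
    position: tuple = (0.0, 0.0, 0.0)

    def execute(self, waypoints: list) -> None:
        for wp in waypoints:
            # In reality this would send control commands; here we just move.
            self.position = wp
            print(f"reached {wp}")

# The operator sketches the mission on the virtual drone...
virtual = VirtualDrone()
virtual.add_waypoint(0.0, 0.0, 1.5)
virtual.add_waypoint(2.0, 1.0, 1.5)

# ...and the real drone replays the validated trajectory.
RealDrone().execute(virtual.waypoints)
```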

Concrete applications? "We are interested in civil applications, and there are many. We can mention the inspection of buildings, for example, or the aftermath of a natural disaster. There may be inaccessible places, and drones allow us to take stock of the material or human damage and act quickly and in the right place," he concludes.

Sébastien Destercke is a CNRS research scientist and head of the Knowledge, Uncertainty and Data (CID) team at UTC-Heudiasyc. His field of research concerns modelling and reasoning under uncertain conditions, in particular in the presence of high or severe levels of uncertainty.


Concretely? "Strong uncertainties are defined as missing or imprecise data, or poor or qualitative information."

What are the underlying ideas? "The main idea is to model this type of information in a mathematical language in order to carry out reasoning tasks. This can be machine learning, i.e. learning from examples, or making decisions in the face of uncertainty," explains Sébastien Destercke.

Is there a transition to virtual reality? "At Heudiasyc, we have strong expertise in theories that generalise probabilities, such as the theory of evidence or imprecise probabilities. These are rich mathematical languages that allow uncertainty and incompleteness of information to be modelled very accurately. Such expressiveness is particularly useful in certain applications of virtual reality, especially in the design of training aids. Among the uncertainties requiring fine modelling, we can cite those concerning the learner's competence profile or even their emotional states. Taking uncertainty into account in the reasoning makes it possible to better adapt training scenarios, which can then be better personalised for each profile," he adds.
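
By way of illustration, here is a small, self-contained example of Dempster's rule of combination, the core operation of the theory of evidence mentioned above; the "competence" frame and the two mass functions are invented for the example.

```python
# Minimal illustration of Dempster's rule of combination (evidence theory).
from itertools import product

def combine(m1: dict, m2: dict) -> dict:
    """Combine two mass functions whose keys are frozensets of hypotheses."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb                     # mass assigned to contradiction
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

FRAME = frozenset({"beginner", "intermediate", "expert"})
# Two imprecise sources of evidence about a learner's competence profile:
gesture_evidence = {frozenset({"beginner", "intermediate"}): 0.7, FRAME: 0.3}
quiz_evidence    = {frozenset({"intermediate", "expert"}): 0.6, FRAME: 0.4}

print(combine(gesture_evidence, quiz_evidence))
```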

This work on uncertainty has, among other things, led to the Kiva project, built around an "informed virtual environment for training in technical gestures in the field of aluminium cylinder-head manufacturing"; the project won an award in the "Training and Education" category at the Laval Virtual trade fair. What is Kiva's objective? "We focused on a specific training gesture: how to blow impurities off the surface of aluminium-alloy cylinder heads that have just been cast. The aim is to get the trainee to reproduce the 'expert' gesture. However, in this situation, we have at least two sources of uncertainty: on the one hand, the 'expert' gesture can vary from one expert to another; on the other, the recognition of the gesture cannot be done perfectly. The trainee is equipped with sensors; he or she will make movements that are not necessarily regular or collected in a continuous manner. This gives us partial information on the basis of which we must define the recognition of the trainee's gesture and try to measure how well it matches the 'expert' gesture. The system that guides the learner must therefore include these sources of uncertainty," adds Sébastien Destercke.
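
To give a flavour of what keeping that uncertainty explicit can mean, here is a minimal sketch that scores a trainee gesture against several expert references and returns an interval rather than a single number; the trajectories, distance measure and noise allowance are invented for illustration and are not the Kiva project's actual method.

```python
# Hedged sketch: interval-valued matching of a trainee gesture against
# several expert references, reflecting expert variability and sensor noise.
import math

def distance(traj_a, traj_b):
    """Mean Euclidean distance between two equally sampled 2-D trajectories."""
    return sum(math.dist(p, q) for p, q in zip(traj_a, traj_b)) / len(traj_a)

def match_interval(trainee, experts, sensor_noise=0.05):
    """Return a [low, high] similarity interval over all expert references,
    widened by an allowance for imperfect gesture capture."""
    dists = [distance(trainee, e) for e in experts]
    low = max(0.0, 1.0 - (max(dists) + sensor_noise))
    high = min(1.0, 1.0 - (min(dists) - sensor_noise))
    return low, high

experts = [
    [(0.0, 0.0), (0.5, 0.1), (1.0, 0.0)],   # expert A's blowing gesture
    [(0.0, 0.0), (0.5, 0.2), (1.0, 0.1)],   # expert B's variant
]
trainee = [(0.0, 0.1), (0.5, 0.15), (1.0, 0.05)]
print(match_interval(trainee, experts))      # an interval, roughly (0.88, 0.98)
```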

"It is this 'expert' gesture that the learner is asked to reproduce in a Cave. The use of virtual reality in companies for training purposes is recent. Until now, it has often been limited to improving the ergonomics of workstations, where issues related to human-machine interaction are less critical. With these training objectives, such issues have become fundamental," he concludes.

Domitile Lourdeaux, a university lecturer at UTC, is a member of the Knowledge, Uncertainty, Data (CID) research team at the UTC-Heudiasyc Lab. Her investigations focus on personalised adaptive systems in VR (virtual reality).


VR is a field of research that she has been exploring since gaining her PhD. "I am interested in the adaptation of scripted content according to a dynamic user profile and, more particularly, in virtual environments dedicated to training. Inasmuch as we are dealing with learners, I am working in particular on the adaptation of educational content and narration. In a word: how to stage learning situations in virtual reality (VR)," explains Domitile Lourdeaux.

The fields of application are varied, and past or current projects attest to this. There was Victeams, on the training of 'medical leaders', a project funded by the ANR and the French Directorate General of Armaments (DGA), and there are the ongoing programmes Orchestraa and Infinity¹, the latter a European project involving eleven countries and twenty partners, including Manzalab, coordinated by Airbus Defense & Space. Orchestraa was launched in November 2019, thanks to funding from the DGA, and counts Réviatech, Thalès and CASPOA, a NATO centre of excellence, among its partners; Infinity started in June 2020. Both run for a duration of three years.

"Orchestraa aims to train air command leaders in military air operations centres. There are about forty people in the room coordinating military operations. The learner is given a VR headset and interacts with his or her team of autonomous virtual characters, who are linked to fighter pilots, helicopters, drones, etc. Together they play out a scenario planned several days in advance, but on the day there may be unforeseen circumstances that disrupt the original scenario and that they must be able to manage. These may be unforeseen requests for air support or a plane crashing, in which case the pilots must be rescued in the field. These hazards require a reallocation of resources, ultimately creating a chain reaction impacting the entire operation. In this case, the adaptation concerns the difficulty of the scenario, which increases according to what the learner is capable of managing. This requires them to demonstrate progressively greater skills," she explains.
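
A minimal sketch of that kind of difficulty adaptation, with invented events, thresholds and performance measure (none of which come from Orchestraa itself):

```python
# Illustrative sketch: raise or lower scenario difficulty from a rough
# performance score. Events, thresholds and the score are hypothetical.
import random

UNFORESEEN_EVENTS = ["air_support_request", "aircraft_crash", "weather_change"]

def adapt_scenario(performance: float, active_events: list) -> list:
    """Add a disruptive event when the learner copes well, remove one otherwise."""
    events = list(active_events)
    if performance > 0.75 and len(events) < len(UNFORESEEN_EVENTS):
        remaining = [e for e in UNFORESEEN_EVENTS if e not in events]
        events.append(random.choice(remaining))   # raise the difficulty
    elif performance < 0.4 and events:
        events.pop()                              # ease the pressure
    return events

print(adapt_scenario(performance=0.9, active_events=["air_support_request"]))
```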

Infinity concerns a completely different field. This large-scale European project aims to provide tools, AI (artificial intelligence) and VR for data visualisation and analysis, to improve the collaboration of European police forces in investigations against cybercrime and terrorism. "We have three use cases: analysis of the behaviour of cyberattacks during an ongoing event, rapid analysis in the aftermath of a terrorist attack and, finally, hybrid threats resulting from the convergence of cyberterrorism and terrorism," explains Domitile Lourdeaux.

Infinity is a project in which UTC-Heudiasyc leads the development of a monitoring tool based on recommendations to safeguard the well-being of police officers using VR, a technology that does indeed cause side effects due to the headsets (cybersickness, visual fatigue) and can also generate cognitive overload and stress related to the tasks to be performed in VR. "In this project, we are focusing on the side effects. We are trying to measure them in real time, while the users have the headset on, in order to diagnose their condition. We are interested in three states in particular: the cybersickness common to all VR headset users, the mental load related to the complexity of the task, and stress and its multiple causes. To detect these side effects, we use physiological sensors (electrocardiograms (ECGs), electrodermal activity, oculometry and pupillometry), behavioural data (task efficiency) and questionnaires," stresses Alexis Souchet, a post-doctoral researcher, also at the UTC-Heudiasyc Lab.
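
As a purely illustrative sketch of that kind of real-time monitoring, the snippet below combines a few physiological and behavioural features into rough scores for the three states; the feature names, weights and scales are invented and are not the project's actual indicators.

```python
# Hedged sketch: turning a few physiological and behavioural features into
# rough scores for stress, mental load and cybersickness.
from dataclasses import dataclass

def clamp(x: float) -> float:
    return max(0.0, min(1.0, x))

@dataclass
class Features:
    heart_rate: float        # beats per minute (from the ECG)
    skin_conductance: float  # microsiemens (electrodermal activity)
    pupil_diameter: float    # millimetres (pupillometry)
    task_errors: int         # behavioural data over the last minute

def monitor(f: Features) -> dict:
    """Return rough scores in [0, 1] for the three states of interest."""
    stress = clamp(0.5 * (f.heart_rate - 60) / 60 + 0.5 * f.skin_conductance / 20)
    mental_load = clamp(0.6 * (f.pupil_diameter - 3) / 3 + 0.4 * f.task_errors / 10)
    cybersickness = clamp(0.3 * stress + 0.7 * f.task_errors / 10)
    return {"stress": round(stress, 2),
            "mental_load": round(mental_load, 2),
            "cybersickness": round(cybersickness, 2)}

print(monitor(Features(heart_rate=95, skin_conductance=8.0,
                       pupil_diameter=4.5, task_errors=3)))
```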

Virtual reality does, however, raise various legal and ethical problems. "Manufacturers are going to make VR tools available to people without taking into account the harmful impacts on their health. This is a real problem when you consider the explosion in the sale of headsets, which has risen from 5 million units in 2014 to 68 million in 2020," concludes Domitile Lourdeaux.

¹ https://cordis.europa.eu/project/id/883293
