Research


Past projects

OpenInterface – A component-based software platform for the rapid development of multimodal interactive systems. It is an open-source platform that connects signal-processing modules to build multimodal human–computer interactions.

Técnicas de interação com realidade aumentada para aplicações médicas (Augmented reality interaction techniques for medical applications) – Project submitted as a requirement for a junior post-doctoral fellowship (PDJ – CNPq). In general terms, this project aims to create "augmented interfaces" for computer graphics applications in medicine. The research seeks a better understanding of augmented reality as applied to the medical field; a more concrete goal of the project is the development of systems that guide the user through the execution of specific tasks using augmented reality equipment.


PhD Thesis – Design, Implementation and Evaluation for Continuous Interaction in Image-guided Surgery. The main objective of this thesis is to identify the theoretical and practical foundations for how mixed reality interfaces can support and augment the continuity of interaction. We first propose a set of design principles, organized in a design space, that allows continuity-of-interaction properties to be identified at an early stage of system development. Once the abstract design possibilities have been identified and a concrete design decision has been taken, an implementation strategy can be developed. Two approaches were investigated: markerless and marker-based tracking. The latter is used to guide surgeons through an osteotomy task in maxillo-facial surgery. The evaluation applies usability tests with users to validate the augmented guidance in different scenarios and to study the influence of different design variables on the final user interaction.


eNTERFACE – The SIMILAR NoE Workshop on Multimodal Interfaces – Multimodal Focus Attention Detection in an Augmented Driver Simulator. This project develops a driver simulator that takes into account the user's state of mind (attention level, fatigue, and stress). The analysis of the user's state of mind is based on video data and physiological signals. Facial movements such as eye blinking, yawning, and head rotations are detected in the video data and used to evaluate the driver's fatigue and attention levels. The user's electrocardiogram and galvanic skin response are recorded and analyzed to evaluate the driver's stress level. The driver-simulator software is modified so that it can react appropriately to these critical situations of fatigue and stress: visual messages are shown to the driver, wheel vibrations are generated, and the driver is expected to react to the alertness messages.
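The fusion step described above can be sketched roughly as follows. This is a minimal illustration, not the project's actual implementation: the function name, indicator inputs, thresholds, and alert names are all hypothetical, standing in for the real detectors that process the video and physiological streams.

```python
# Hypothetical sketch of multimodal fusion for the driver simulator:
# video-derived indicators feed a fatigue score, physiological indicators
# feed a stress score, and the scores trigger simulator alerts.
# All thresholds and names are illustrative assumptions.

def fuse_driver_state(blink_rate_hz, yawns_per_min, heart_rate_bpm, gsr_delta):
    """Map raw per-channel indicators to coarse scores and alert decisions."""
    # Video channel: frequent blinking and yawning indicate fatigue.
    fatigue = 0.0
    if blink_rate_hz > 0.5:        # blinking more than once every 2 s
        fatigue += 0.5
    if yawns_per_min >= 2:         # repeated yawning
        fatigue += 0.5

    # Physiological channel: elevated heart rate and rising skin
    # conductance (GSR) indicate stress.
    stress = 0.0
    if heart_rate_bpm > 100:
        stress += 0.5
    if gsr_delta > 0.2:
        stress += 0.5

    # Simulator reactions: visual message first, wheel vibration when
    # either score reaches its maximum.
    alerts = []
    if fatigue >= 0.5:
        alerts.append("visual_message")
    if fatigue >= 1.0 or stress >= 1.0:
        alerts.append("wheel_vibration")
    return {"fatigue": fatigue, "stress": stress, "alerts": alerts}
```

In the real system the inputs would come from continuous video and biosignal processing rather than scalar arguments, but the overall shape (per-channel scoring followed by a joint alert decision) matches the pipeline described.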

HEROL – Design of a global assistance device for maxillo-facial surgery – Provides visual and intuitive guidance in the intra-operative scenario using optical tracking.