Research


Perception Simulation with Social Planning and Abductive Reasoning for Emergent Storytelling

In the context of story generation, one of the most widely used methods involves the simulation of virtual worlds inhabited by agents that reason and act as characters. Arguably, stories contain elements of unpredictability, misunderstanding and failure. However, the generation works available in the literature have been based on correct and perfect reasoning, and as such they are not likely to make scenarios based on mistakes emerge.

My work proposes a perception simulation process that allows characters to make wrong but coherent choices and to consider each other's mistakes. This is achieved by modeling knowledge with support for theory of mind and uncertainty. Since reasoning about the actions of others also requires access to their knowledge, I also propose the adaptation and combination of abductive reasoning processes, such as goal and plan recognition, so that characters can try to infer what others have in mind from their actions.
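
As a loose illustration of the goal-recognition side of this idea, the sketch below ranks hypothetical candidate goals of another character by how well they explain the actions observed so far; the goal structure, priors and likelihoods are invented for the example and are not the system's actual model.

```python
# Hypothetical sketch: rank candidate goals of another character by how well
# they explain the actions observed so far (abductive goal recognition).

def action_likelihood(action, goal):
    """Toy likelihood: how plausible 'action' is if the observed character
    pursues 'goal'. A real system would derive this from planning knowledge."""
    return 0.9 if action in goal["typical_actions"] else 0.1

def rank_goals(observed_actions, candidate_goals):
    scores = {}
    for goal in candidate_goals:
        score = goal["prior"]
        for action in observed_actions:
            score *= action_likelihood(action, goal)
        scores[goal["name"]] = score
    total = sum(scores.values()) or 1.0
    return {name: s / total for name, s in scores.items()}

candidate_goals = [
    {"name": "steal_treasure", "prior": 0.3, "typical_actions": {"sneak", "pick_lock"}},
    {"name": "visit_friend",   "prior": 0.7, "typical_actions": {"walk", "knock"}},
]
print(rank_goals(["sneak", "pick_lock"], candidate_goals))
```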

Provenance In Games

The outcome of a game session is derived from a series of events, decisions, and interactions made during the game. Understanding and extracting data from these sessions is important for analyzing the gameplay, understanding the player's profile, and even validating the business model applied in the game. Many tools and techniques have been developed by the game industry to track and store data from gaming sessions.

One successful method is game analytics, which aims at understanding player behavior patterns to improve game quality and enhance the player experience. However, current analytics methods are not sufficient to capture the underlying cause-and-effect influences that shaped the outcome of a game session. These relationships allow developers and designers to better identify possible mistakes in gameplay design or to fine-tune their games. Thus, in our previous work we proposed a novel approach based on provenance to track and record these causal relationships, providing the necessary groundwork for using provenance information in game analytics. The current work extends our original approach by providing an improved method for tracking provenance data during game sessions and by introducing georeferencing capabilities during analysis. Through this work, we can plot the provenance data on the game level map to improve the data mining process, allowing developers and analysts to know exactly where each action or event occurred in the game, along with the previous temporal notion of when it happened. Furthermore, we take provenance analysis to a new level, allowing multiple provenance graphs to be analyzed simultaneously by generating a summarized provenance graph. This summarized graph is useful for game designers, aiding the detection of patterns in players' behavior, identifying issues not reported by game testers, confirming hypotheses formulated by the development team, and even testing monetization issues.
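
The sketch below illustrates the general idea in miniature, assuming a toy event log per session; the class `ProvenanceGraph` and the `summarize` function are illustrative names for this example, not the actual tool's API.

```python
from collections import defaultdict

class ProvenanceGraph:
    """Toy provenance graph: vertices are game events, edges are
    cause -> effect influences recorded during one session."""
    def __init__(self):
        self.edges = set()

    def record(self, cause, effect):
        self.edges.add((cause, effect))

def summarize(graphs):
    """Merge several session graphs, counting how often each causal
    relationship appears; frequent edges hint at recurring player behavior."""
    counts = defaultdict(int)
    for g in graphs:
        for edge in g.edges:
            counts[edge] += 1
    return counts

session1 = ProvenanceGraph()
session1.record("picked_up_sword", "killed_boss")
session2 = ProvenanceGraph()
session2.record("picked_up_sword", "killed_boss")
session2.record("skipped_tutorial", "died_in_level_1")

for (cause, effect), n in summarize([session1, session2]).items():
    print(f"{cause} -> {effect}: seen in {n} session(s)")
```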
 



Dynamic Intelligent Kernel Assignment in Heterogeneous Multi-GPU Systems

Designing software to take advantage of heterogeneous hardware resources for efficient High Performance Computing (HPC) is complex and has many associated trade-offs, but it also provides an opportunity to solve huge problems that a single device cannot handle. Depending on the way tasks are mapped to the hardware, vastly different execution times for the same tasks are possible. This work exploits a task programming library for hybrid architectures, StarPU, with the objective of optimizing execution time in heterogeneous environments.

We explore two abstraction concepts provided by StarPU: contexts and schedulers. Contexts map StarPU tasks to hardware resources defined by the programmer, and schedulers define the policies used to execute tasks within given contexts. This research involves the creation of two engines with distinct strategies for creating, combining and testing execution on different context and scheduler combinations, aiming to achieve gains in execution time and energy consumption compared to statically and dynamically mapped tasks.
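
A minimal sketch of the kind of dynamic assignment explored here, assuming hypothetical per-device time estimates; this is not the StarPU API, only the underlying greedy idea of picking the device with the smallest estimated finish time for each kernel.

```python
# Hypothetical greedy assignment: pick, for each kernel, the device whose
# estimated finish time (current load + predicted kernel time) is smallest.

def assign_kernels(kernels, devices, estimate):
    """kernels: list of kernel names; devices: list of device names;
    estimate(kernel, device): predicted execution time in seconds."""
    load = {d: 0.0 for d in devices}
    plan = {}
    for k in kernels:
        best = min(devices, key=lambda d: load[d] + estimate(k, d))
        plan[k] = best
        load[best] += estimate(k, best)
    return plan, load

# Illustrative estimates: GPUs are faster for 'stencil', the CPU for 'reduce'.
times = {("stencil", "gpu0"): 1.0, ("stencil", "gpu1"): 1.2, ("stencil", "cpu"): 5.0,
         ("reduce", "gpu0"): 2.0, ("reduce", "gpu1"): 2.0, ("reduce", "cpu"): 1.5}
plan, load = assign_kernels(["stencil", "reduce"],
                            ["cpu", "gpu0", "gpu1"],
                            lambda k, d: times[(k, d)])
print(plan, load)
```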

An Architecture for Multi-layer and Multi-Resolution Real Time Video Streaming for Cloud Gaming

Advances in cloud computing have enabled many application types to be accessed remotely, with dedicated servers performing most of the processing load and delivering the result to the user through the network. Increased computer network reliability and bandwidth availability have enabled cloud gaming systems. In those systems, game logic and rendering are processed remotely on a cloud server, and the audio and video outputs are encoded and streamed to a thin client with limited processing power.

There are still many challenges involved in the task of providing cloud games, especially when trying to reduce the encoding time and complexity on the server and the video bitrate. In this work, we propose techniques based on video segmentation into separate layers, or image planes, and on layer caching, to reduce the processing load on the server, which comes from encoding every video frame, and to reduce the video bitrate. To achieve this, we segment and group game objects into different video layers that are cached and reused when no changes happen. We also propose the use of image processing and image-based rendering techniques to reuse the layer cache even when camera parameters change within a threshold.
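
A minimal sketch of the caching decision described above, with an illustrative camera-change threshold; the class and field names are assumptions for the example, not the actual system.

```python
import math

class LayerCache:
    """Toy cache entry for one video layer: the encoded frame plus the
    camera parameters it was rendered with."""
    def __init__(self, encoded_frame, cam_position, cam_yaw):
        self.encoded_frame = encoded_frame
        self.cam_position = cam_position
        self.cam_yaw = cam_yaw

def reusable(cache, cam_position, cam_yaw, pos_threshold=0.5, yaw_threshold=2.0):
    """Reuse the cached layer if the camera moved/rotated less than the thresholds;
    otherwise the server must re-render and re-encode this layer."""
    if cache is None:
        return False
    dist = math.dist(cache.cam_position, cam_position)
    dyaw = abs(cache.cam_yaw - cam_yaw)
    return dist <= pos_threshold and dyaw <= yaw_threshold

cache = LayerCache(b"...encoded background...", (0.0, 1.7, 0.0), 90.0)
print(reusable(cache, (0.1, 1.7, 0.0), 90.5))   # small change -> reuse cache
print(reusable(cache, (4.0, 1.7, 0.0), 130.0))  # large change -> re-encode
```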



Integration of the Myo Armband and Leap Motion to create full 3D arm motion

The Myo Armband is a sensor that can be attached to the forearm and sends, to devices equipped with Bluetooth, the muscular activation patterns of arm movements triggered by hand gestures. It also produces rotation data from an embedded gyroscope, accelerometer and magnetometer. The Leap Motion is a sensor based on cameras and infrared lights that accurately tracks hand motion, including the fingers.

Therefore, the objective of this research is to use techniques and algorithms to combine the data from both sensors in order to create a real-time 3D virtual simulation of arm motion, including the arm, forearm, hand and fingers. The combination also aims to improve performance and to overcome the main limitations of each sensor when used individually.
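
A minimal sketch of one possible way to combine the two data streams, assuming the Myo contributes forearm orientation and the Leap Motion contributes finger positions relative to the wrist; the function names and data layout are illustrative, not the actual fusion pipeline.

```python
import numpy as np

def yaw_rotation(yaw_radians):
    """Rotation matrix around the vertical axis, standing in for the full
    orientation (quaternion) reported by the Myo's IMU."""
    c, s = np.cos(yaw_radians), np.sin(yaw_radians)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def fuse(elbow_position, myo_yaw, forearm_length, leap_finger_offsets):
    """Place the wrist using the Myo orientation, then attach the Leap
    Motion finger positions (given relative to the wrist) to it."""
    forearm_dir = yaw_rotation(myo_yaw) @ np.array([0.0, 0.0, 1.0])
    wrist = np.asarray(elbow_position) + forearm_length * forearm_dir
    fingers = [wrist + np.asarray(offset) for offset in leap_finger_offsets]
    return wrist, fingers

wrist, fingers = fuse(elbow_position=(0.0, 1.0, 0.0), myo_yaw=np.pi / 4,
                      forearm_length=0.27,
                      leap_finger_offsets=[(0.02, 0.0, 0.08), (0.0, 0.0, 0.09)])
print(wrist, fingers)
```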

GameLoop (PUC Partnership)

Nowadays, distributed computing and multithreaded hardware architectures like the multi-core CPUs and GPUs found on PCs and game consoles (such as the Microsoft Xbox 360 and Sony PlayStation 3) are a trend. Hence, real-time simulation and visualization systems, such as scientific visualization, games and virtual reality environments, will not get the best performance on such systems when running sequentially in a single-thread loop.

For this reason, multithreaded real-time loop models that take advantage of such systems are gaining importance. This project developed a new architecture for real-time loops that can detect and analyze the user's hardware in order to adapt itself to a specific loop model, achieving the best performance for a given hardware and application.
The latest article explaining this project can be seen at http://sbgames.org/sbgames2010/proceedings/computing/full/full9.pdf
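
A minimal sketch of the kind of hardware-driven adaptation described above; the loop model names and the selection rule are illustrative assumptions, not the published architecture.

```python
import os

def choose_loop_model(cpu_cores, has_gpu):
    """Toy selection rule: pick a real-time loop model according to the detected hardware."""
    if cpu_cores == 1:
        return "single-thread loop"
    if has_gpu:
        return "multithread loop with GPU-offloaded rendering"
    return "multithread loop (update and render on separate threads)"

cores = os.cpu_count() or 1
print(choose_loop_model(cores, has_gpu=True))
```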



JPlay

The JPlay framework was built using the Java language, and its objective is to be easy to use for students who are beginning computer programming.

The framework gives students an easy way to build 2D games in little time.

For more information about JPlay, visit:
www.ic.uff.br/jplay.

Simulation of Acoustic Wave Propagation for Oil Prospecting (CENPES – Petrobras)

The scattering of acoustic waves is of interest in many areas; relevant works have been reported in geophysics, medical imaging, structural damage identification, oil prospecting, and others. Working with CENPES (Petrobras), we are developing a high-performance, heterogeneous computer architecture for acoustic wave simulation based on the finite difference method.

Acoustic wave simulation requires high computational effort, which in some cases makes the proposed simulation impractical. Thus, the approach relies on tools such as MPI (Message Passing Interface) and GPUs. These tools are used to guarantee scalability and speedup, making any desired domain size viable.
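
A minimal single-node sketch of the finite-difference time stepping at the core of such simulators, for a 1D acoustic wave with constant velocity; the real system works on much larger 2D/3D domains split across MPI ranks and GPUs, and all constants below are illustrative.

```python
import numpy as np

# 1D acoustic wave equation u_tt = c^2 * u_xx, explicit 2nd-order finite differences.
nx, nt = 200, 500
dx, dt, c = 1.0, 0.001, 500.0      # grid spacing, time step, velocity (illustrative units)
r2 = (c * dt / dx) ** 2            # squared CFL number; must stay <= 1 for stability

u_prev = np.zeros(nx)
u_curr = np.zeros(nx)
u_curr[nx // 2] = 1.0              # initial pulse in the middle of the domain

for _ in range(nt):
    u_next = np.zeros(nx)
    u_next[1:-1] = (2 * u_curr[1:-1] - u_prev[1:-1]
                    + r2 * (u_curr[2:] - 2 * u_curr[1:-1] + u_curr[:-2]))
    u_prev, u_curr = u_curr, u_next

print("max amplitude after", nt, "steps:", float(np.max(np.abs(u_curr))))
```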



Crowd Simulation on GPUs

In a typical natural environment it is common to find a huge number of animals, plants and small dynamic particles. This is also the case in other densely populated systems, such as sports arenas, communities of ants, bees and other insects, or even streams of blood cells in our circulatory system. Computer simulations of these systems usually present a very limited number of independent entities, mostly with very predictable behavior.

To achieve a massive number of entities with emergent behavior, this project researches and develops new algorithms for crowd simulation using graphics cards running CUDA, with which it can simulate and render more than one million interactive entities on a common commercial graphics card.
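
A minimal CPU sketch of the per-entity update that such simulations parallelize; on the GPU each entity would be handled by one CUDA thread and neighbors found through a spatial grid, while the separation-only steering rule below is an illustrative simplification.

```python
import numpy as np

def step(positions, velocities, dt=0.05, radius=1.0, repulsion=0.5, max_speed=2.0):
    """One simulation step: each entity steers away from close neighbors.
    O(n^2) pairwise version for clarity; GPU versions use spatial grids."""
    diff = positions[:, None, :] - positions[None, :, :]        # (n, n, 2)
    dist = np.linalg.norm(diff, axis=-1) + 1e-6
    close = dist < radius
    push = np.where(close[..., None], diff / dist[..., None], 0.0).sum(axis=1)
    velocities = velocities + repulsion * push * dt
    speed = np.linalg.norm(velocities, axis=-1, keepdims=True) + 1e-6
    velocities = np.where(speed > max_speed, velocities / speed * max_speed, velocities)
    return positions + velocities * dt, velocities

rng = np.random.default_rng(0)
pos = rng.uniform(0, 10, size=(1000, 2))
vel = np.zeros((1000, 2))
for _ in range(10):
    pos, vel = step(pos, vel)
print("bounding box after 10 steps:", pos.min(axis=0), pos.max(axis=0))
```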

A Simple Architecture for Digital Games On Demand using low Performance Resources under a Cloud Computing Paradigm

“Cloud computing is becoming an increasingly viable source of low cost computing power for developers and users in diverse areas. Services from the creation of presentations, spreadsheets and text processing, to picture and video editing, and more recently high performance scientific computing are some examples of systems currently available in the cloud.

While these applications are typically executed in the form of batch jobs, responsiveness or timeliness is not usually an issue. Executing interactive applications, i.e. applications that require real-time responsiveness, in the cloud, however, is more challenging and still not so common in this environment. This work proposes and evaluates the feasibility of building a simple off-the-shelf architecture for an on-demand gaming service. Our proposal consists of running an appropriate remote server in the cloud so that the client need only perform a few basic tasks, such as reading user input and displaying the resulting game screens. Consequently, even low-power computing systems, such as mobile devices or digital TV set-top boxes, can have access to sophisticated graphics-rich or complex games, since most of the processing is performed remotely.”



Generating Emergent Behaviors using Machine Learning for Strategy Games

Our work proposes the use of machine learning to create a basic library of experiences, which is used to generate emergent behaviors for characters in a strategy game. In order to create high diversification of the agents' story elements, the characteristics of the agents are manipulated based on their adaptation to the environment and their interaction with enemies. We define important requirements that should be observed when modeling the instances. Then, we propose a new architecture paradigm and suggest the most appropriate classification algorithm for this architecture.

Children's Gesture Recognition for imitation game tasks

This research was motivated by the Jecripe Project, which originated a game for children of preschool age with Down syndrome. In this game there is an activity for stimulating the cognitive ability of imitation, but the exercise must be assisted by a speech therapist, parent or teacher, who judges whether the child's imitation is correct. In order to make the exercise unassisted, we are developing a computer vision system to recognize the movements. In this project we consider shape matching, Shape Context and Temporal Self-Similarities.
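
A minimal sketch of the shape-matching ingredient, comparing two silhouette contours with OpenCV's Hu-moment-based cv2.matchShapes; the synthetic shapes below stand in for the child's and the reference character's poses.

```python
import cv2
import numpy as np

def largest_contour(binary_image):
    """Return the largest external contour of a binary silhouette image."""
    contours, _ = cv2.findContours(binary_image, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return max(contours, key=cv2.contourArea)

# Synthetic silhouettes: a filled circle vs. a filled square.
circle = np.zeros((200, 200), np.uint8)
cv2.circle(circle, (100, 100), 60, 255, -1)
square = np.zeros((200, 200), np.uint8)
cv2.rectangle(square, (40, 40), (160, 160), 255, -1)

# Lower score means more similar shapes (0 would be identical up to a similarity transform).
score = cv2.matchShapes(largest_contour(circle), largest_contour(square),
                        cv2.CONTOURS_MATCH_I1, 0.0)
print("dissimilarity:", score)
```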



PNF-based architecture for Storytelling

Storytelling is an important feature in games and also in other types of (semi-)automated entertainment systems such as machinima and digital TV. The majority of current research in storytelling uses precedence-based directed acyclic graphs, or even linear sequences, to model the ordering of events in a story. This approach makes it easier to plan, recognize and perform these events in real time, but it is also too simple to represent complex human actions, which form the basis of the most interesting stories in this niche. PNF-Networks and Interval Scripting are frameworks to represent, recognize and perform human action that were proposed in the context of computer-aided theatre. In this project we study extensions to these frameworks designed and developed to enable their use in larger-scale storytelling systems.

 

Emotion simulation on human faces using Bump Mapping and Morphing algorithms implemented on the GPU

The main objective of this research is the development of a facial animation system for simulating emotions in real time. The big challenge is to produce virtual facial expressions with the highest possible level of realism using as few polygons as possible. The contribution is that the wrinkles of each facial expression are modeled by normal maps used in the bump mapping algorithm, which was implemented on the GPU. The simulation of emotions is then accomplished through the animation of wrinkles in real time, driven by a morphing algorithm also implemented on the GPU.

 



A tool for developing games and applications for iPad.

The research aims to develop a tool that aids communication between the iPad and the iMac, allowing a wide range of new applications for tablets.

The main focuses of this project are:
-Capturing touches on the screen and the movements made by the user on the iPad
-Sending data packets to the server and receiving them back
-Processing the received information.

Polygonization of Volumetric Models

This research proposes a new method for polygonization and texture map extraction from volumetric objects. The overall aim of this work is to propose a simple and efficient methodology that allows game artists to use only geometric and voxel manipulation techniques to produce highly detailed 3D polygonal meshes, with several kinds of texture maps, for use in their games, without requiring anyone to master advanced computer graphics, mathematics or even physics concepts. Our current method has produced very promising results, enabling us to extract smooth textured models that preserve the features of the original volumetric object at interactive rates, and it has been published at SBGAMES 2010. The next steps of this research include the auto-generation of tangent-space normal maps, the development of mesh optimization techniques, and improvements to the quality of diffuse maps and the texture atlas.
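
A minimal sketch of the basic extraction step that such pipelines build on, using the marching cubes implementation from scikit-image on a synthetic volume; this illustrates the general idea of polygonizing a voxel model, not the published method itself.

```python
import numpy as np
from skimage import measure

# Synthetic volumetric object: a solid sphere stored as a scalar field on a voxel grid.
grid = np.mgrid[-1:1:64j, -1:1:64j, -1:1:64j]
volume = np.sqrt((grid ** 2).sum(axis=0))        # distance from the center

# Polygonize the isosurface at radius 0.6 into a triangle mesh.
verts, faces, normals, values = measure.marching_cubes(volume, level=0.6)
print(f"{len(verts)} vertices, {len(faces)} triangles")
```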

 



Two-way real-time fluid simulation with heterogeneous multicore CPU and GPU

This project is related to solving real-time fluid and rigid body simulation with two-way interaction, using a scalable heterogeneous multicore CPU and GPU architecture.

In this research, fluids are simulated using Smoothed Particle Hydrodynamics (SPH), a meshfree Lagrangian method.

Preliminary results show that this architecture provides a speedup of almost 7x in relation to a GPU-bound architecture.
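
A minimal sketch of the SPH density estimation step, using the standard poly6 smoothing kernel; the positions and constants are illustrative, and a real solver adds pressure, viscosity and the two-way coupling with rigid bodies.

```python
import numpy as np

def sph_densities(positions, mass=0.02, h=0.1):
    """Estimate the density at each particle with the poly6 kernel:
    W(r, h) = 315 / (64 * pi * h^9) * (h^2 - r^2)^3 for r < h."""
    coef = 315.0 / (64.0 * np.pi * h ** 9)
    diff = positions[:, None, :] - positions[None, :, :]
    r2 = (diff ** 2).sum(axis=-1)
    contrib = np.where(r2 < h ** 2, (h ** 2 - r2) ** 3, 0.0)
    return mass * coef * contrib.sum(axis=1)

rng = np.random.default_rng(1)
particles = rng.uniform(0.0, 0.5, size=(500, 3))
rho = sph_densities(particles)
print("density range:", float(rho.min()), "to", float(rho.max()))
```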

Using Real Time Hardware Tessellation and Displacement Mapping as Topographic Representations in Games

Tessellation and displacement mapping are well-known methods in the computer graphics field, but they are little used in real-time applications due to their high processing cost, which directly influences performance. With recent advances in GPU architectures, which added new stages to the graphics pipeline, these techniques are gaining strength and conquering their space in real-time applications, especially digital games. This work proposes a strategy for applying this technology to topographic representation in video games.
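
A minimal CPU sketch of the displacement step performed per generated vertex: sampling a heightmap and pushing each vertex of a flat terrain grid along its normal. The heightmap here is synthetic, and on the GPU this happens in the tessellation evaluation (domain) shader.

```python
import numpy as np

def displace_grid(heightmap, scale=10.0):
    """Turn a 2D heightmap into displaced 3D vertices of a flat grid whose
    normal is the +Y axis (so displacement is simply added to Y)."""
    h, w = heightmap.shape
    xs, zs = np.meshgrid(np.arange(w, dtype=float), np.arange(h, dtype=float))
    ys = heightmap * scale
    return np.stack([xs, ys, zs], axis=-1)      # shape (h, w, 3)

# Synthetic rolling-hills heightmap in [0, 1].
u, v = np.meshgrid(np.linspace(0, 4 * np.pi, 64), np.linspace(0, 4 * np.pi, 64))
heightmap = 0.5 + 0.5 * np.sin(u) * np.cos(v)

vertices = displace_grid(heightmap)
print("terrain elevation range:", float(vertices[..., 1].min()), "to", float(vertices[..., 1].max()))
```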

 



gRmobile

Mobile phone games are usually designed to be playable using the handset's traditional number pad. This makes user interaction, and consequently game design, quite difficult. Because of that, one of the most desired features of a mobile game is to use as few buttons as possible. Nowadays, with the evolution of mobile phones, more types of user interaction are appearing, like touch and accelerometer input. With these features, game developers have new ways of exploring user input, making it necessary to adapt or create new kinds of gameplay. With mobile phones equipped with 3D accelerometers, developers can use simple motions of the device to control the game or use complex accelerated gestures. With mobile phones equipped with touch input, they can use simple touches or complex touch-gesture recognition. For a gesture to be recognized, one can use different methods, from simple brute-force matching, which only works well for simple gestures, to more complex pattern recognition techniques like hidden Markov models, fuzzy logic and neural networks. This project developed a novel framework for touch/accelerometer gesture recognition that uses hidden Markov models to recognize the gestures. The framework can also be used for the development of mobile applications that use gestures.
An article explaining this project can be seen at: http://www.sbgames.org/papers/sbgames09/computing/full/cp18_09.pdf
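
A minimal sketch of the recognition idea: each gesture class gets its own small hidden Markov model, the sensor stream is quantized into discrete symbols, and the class whose model gives the observation sequence the highest forward probability wins. The two toy models below are illustrative, not the framework's trained parameters.

```python
import numpy as np

def forward_probability(obs, start, trans, emit):
    """Standard HMM forward algorithm: probability of the observation
    sequence 'obs' (discrete symbols) under the model (start, trans, emit)."""
    alpha = start * emit[:, obs[0]]
    for symbol in obs[1:]:
        alpha = (alpha @ trans) * emit[:, symbol]
    return float(alpha.sum())

# Two toy 2-state models over 3 quantized accelerometer symbols (0, 1, 2).
swipe = (np.array([1.0, 0.0]),
         np.array([[0.7, 0.3], [0.0, 1.0]]),
         np.array([[0.8, 0.1, 0.1], [0.1, 0.1, 0.8]]))
shake = (np.array([0.5, 0.5]),
         np.array([[0.5, 0.5], [0.5, 0.5]]),
         np.array([[0.1, 0.8, 0.1], [0.1, 0.8, 0.1]]))

observed = [0, 0, 2, 2]   # quantized sensor readings for one gesture
scores = {"swipe": forward_probability(observed, *swipe),
          "shake": forward_probability(observed, *shake)}
print(max(scores, key=scores.get), scores)
```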

GPUEngine

GPUs (Graphics Processing Units) have evolved into extremely powerful and flexible processors, allowing their use for processing different kinds of data. This advantage can be used in game development to optimize the game loop. Most GPGPU works deal only with some steps of the game loop, leaving most of the game logic to the CPU. This work differs from the traditional approach by presenting and implementing practically the entire game loop inside the GPU. This is a big step for game development, since CPUs are evolving toward multi-core and future games will need parallelism similar to that of GPU programs.
An article explaining this project can be seen at: http://portal.acm.org/citation.cfm?id=1803364



SAGE (FIU Partnership)

We tested a tiled-display wall platform for use as a general-purpose collaboration and learning platform. The main scenario of emphasis for this work is online learning by users in different countries. We empirically evaluate the efficacy of this platform for our purposes and discuss both its advantages and the shortcomings that we found. We also describe an enhancement made to make it more viable for our target usage scenario by implementing an interface for a modern human interface device.
An article explaining this project can be seen at: http://portal.acm.org/citation.cfm?id=1536294

Brand Recognition using Mellin Transform and Fourier Descriptors in Mobile architecture

The objective of this research is to allow mobile devices, in our case smartphones, to process exhaustive methods such as brand recognition. Our main goal is to produce a low-cost algorithm capable of detecting and identifying any brand, based on previously pre-processed information, running in real time on smartphones.
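
A minimal sketch of the Fourier-descriptor part of such a pipeline: the contour of a logo is treated as a complex signal, its FFT coefficients are normalized to obtain a translation- and scale-tolerant signature, and signatures are compared by distance. The synthetic contour below stands in for a segmented brand mark.

```python
import numpy as np

def fourier_descriptor(contour_xy, n_coeffs=16):
    """Contour points (N, 2) -> normalized Fourier descriptor.
    Dropping coefficient 0 removes translation; dividing by the first magnitude
    removes scale; taking magnitudes discards phase (rotation/start point)."""
    z = contour_xy[:, 0] + 1j * contour_xy[:, 1]
    coeffs = np.fft.fft(z)
    mags = np.abs(coeffs[1:n_coeffs + 1])
    return mags / (mags[0] + 1e-12)

def contour_of_circle(n=128, radius=1.0):
    t = np.linspace(0, 2 * np.pi, n, endpoint=False)
    return np.stack([radius * np.cos(t), radius * np.sin(t)], axis=1)

reference = fourier_descriptor(contour_of_circle(radius=1.0))
candidate = fourier_descriptor(contour_of_circle(radius=3.0))   # same shape, bigger
print("distance:", float(np.linalg.norm(reference - candidate)))  # near zero
```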

Relief Mapping using Relaxed Cone Stepping for Unity 3.0

This project integrated Fabio Policarpo's implementation of Relaxed Cone Stepping for Relief Mapping into the Unity 3.0 game engine.
Some features, such as the shadow map, have not been implemented yet.



Stereoscopy using Depth of Field

My research plan is to develop a stereoscopy method to be applied to the robotic arm simulator project.
To achieve this, I intend to use depth-of-field methods, which are one of the ways to give people 3D perception.

Metaheuristics on GPU

A major goal of research in optimization is to develop methods and techniques to obtain good-quality solutions to difficult problems in reasonable time. One way to achieve this is by exploiting the evolution of hardware technology. Recently, the GPU has evolved into a high-performance hardware architecture whose computational power has become comparable to that of some grids and clusters, at a much lower cost. This project aims to study optimization methods and techniques and to adapt them to the GPU, using its highly parallel programming interface.

Real Time Visualization and Geometry Reconstruction of Large Oil and Gas Boreholes based on Caliper Database

“The evaluation of technical and economic viability before starting the drilling process of a gas or oil reserve is very important and strategic. Among other attributes, the soil structure around the borehole must be analyzed in order to minimize the risks of a collapse. This stability analysis of a gas or oil reserve is a challenge for specialists in this area, and a good result at this stage could have a deep impact on the reduction of drilling costs and bring major gains in safety.

A tool known as a caliper is inserted into the drilling spot to perform a series of measurements used to evaluate the well’s viability. For each position along the borehole, information such as the sensors’ position, orientation, the well’s resistivity and acoustic data is obtained and recorded. These data allow the user to find flaws in the soil, leaving to the geologist the decision of whether the well is feasible or not, and help them to study possible actions to minimize its usage risk. Currently, the data obtained by the caliper are used for the visualization of individual sections of the well, projected onto a bi-dimensional plane, considering a cylinder projection. However, an overview of the borehole’s entire structure is necessary for a higher-quality analysis. This work proposes a novel technique for a precise geometry reconstruction of the borehole from these data, allowing the geologist to visualize the borehole, making it easier to find possible critical points and allowing an intuitive visualization of a large set of associated data. The three-dimensional geometry reconstruction is made from data collected from the caliper log, which includes the tool orientation and the sensors’ measures for each section. These measures are used as control points for the construction of smooth layers, through spline interpolation. Finally, the sections are joined in sequence to form a polygonal mesh that represents a reliable view of the borehole in three dimensions.”
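
A minimal sketch of the per-section step described above: the caliper measurements of one cross-section are treated as control points and interpolated with a periodic spline to obtain a smooth closed layer; stacking such layers along the well path yields the polygonal mesh. The synthetic radii below are illustrative, and scipy is assumed to be available.

```python
import numpy as np
from scipy.interpolate import splprep, splev

def smooth_section(radii, n_points=100):
    """Caliper radii of one borehole cross-section (one value per sensor angle)
    -> smooth closed 2D curve obtained by periodic spline interpolation."""
    angles = np.linspace(0, 2 * np.pi, len(radii), endpoint=False)
    x = radii * np.cos(angles)
    y = radii * np.sin(angles)
    x = np.append(x, x[0])                           # close the curve so the
    y = np.append(y, y[0])                           # periodic spline endpoints match
    tck, _ = splprep([x, y], per=True, s=0)          # periodic spline through the sensors
    u = np.linspace(0, 1, n_points)
    sx, sy = splev(u, tck)
    return np.stack([sx, sy], axis=1)

# One synthetic section: a nominally circular borehole with a small breakout.
radii = np.full(16, 0.30)
radii[4:6] = 0.36                                    # enlarged region (possible flaw)
section = smooth_section(radii)
print(section.shape, "points; max radius:",
      float(np.max(np.linalg.norm(section, axis=1))))
```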


Application of a Data Structure on GPU for a Raytracing Lighting Algorithm

Using the concept of SIMD (Single Instruction, Multiple Data), in this project we study the application of a GPU data structure for space partitioning in a ray tracing lighting algorithm. The ray tracing algorithm runs in parallel, taking advantage of modern programmable graphics hardware, which already adds a significant performance boost compared to CPU implementations. But we also try to speed up the rendering process using an octree for scene partitioning. This octree is implemented on the GPU, avoiding data traffic between the GPU and CPU during the render stage, and it has special pointers that allow fast neighborhood navigation, making it easier for the raycaster to traverse the scene graph.

 


A parallel approach for GPU Ray Tracing using CUDA architecture and octree based data structures

Ray tracing is a technique that aims at the generation of photorealistic images. The main idea is to cast rays from the camera through the scene in order to follow the path of light. One of the biggest problems with ray tracing is its high computational cost. The problem, however, is naturally parallel and can be solved in a straightforward way using highly parallel processors like GPUs. The goal of the project is to develop ray tracing to the point where it can be bound to the traditional graphics pipeline in order to achieve visual effects such as global illumination, caustics, physically correct lighting and so on.
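
A minimal sketch of the core operation that gets parallelized per ray on the GPU: casting a camera ray and testing it against a sphere. In a CUDA version each pixel's ray would be handled by one thread; the single hard-coded sphere here is purely illustrative.

```python
import numpy as np

def ray_sphere_hit(origin, direction, center, radius):
    """Return the distance along the ray to the nearest sphere hit, or None.
    Solves |origin + t*direction - center|^2 = radius^2 for t."""
    oc = origin - center
    b = 2.0 * np.dot(oc, direction)
    c = np.dot(oc, oc) - radius ** 2
    disc = b * b - 4.0 * c                       # direction assumed normalized (a = 1)
    if disc < 0:
        return None
    t = (-b - np.sqrt(disc)) / 2.0
    return t if t > 0 else None

origin = np.array([0.0, 0.0, 0.0])
direction = np.array([0.0, 0.0, 1.0])            # one camera ray, already normalized
print(ray_sphere_hit(origin, direction, center=np.array([0.0, 0.0, 5.0]), radius=1.0))
```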

 

Object Tracking

With the growth of technology and the constant improvement of computer use, there is an inherent need to explore new, more natural and efficient methods of interaction between humans and machines. Thus, the study of human-computer interaction is regaining visibility after a long time in the background. Aiming to contribute to the evolution of this field, the purpose of this project is to create an object tracking library with a simple implementation and low processing cost, based on solid HCI and software engineering concepts, using the detection and tracking of real-world objects to enable richer navigation through interfaces and virtual environments in RIA applications.

 



Generating real-time anaglyphs using stereoscopic vision and the Microsoft Kinect

Anaglyph images are used to provide a stereoscopic 3D effect when viewed with glasses whose two lenses have different (usually chromatically opposite) colors, such as red and cyan. This project presents an application that generates anaglyph images in real time which can be viewed using regular red/cyan glasses. The application uses the depth information from the Microsoft Kinect's camera as a parameter to shift the red and cyan channels based on each pixel's distance to the focus point.
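
A minimal sketch of the channel-shift idea: each pixel's red channel is displaced horizontally by a disparity proportional to how far its depth is from the focus plane. The synthetic image and depth map stand in for the Kinect's color and depth streams, and the constants are illustrative.

```python
import numpy as np

def anaglyph(color, depth, focus=2.0, strength=8.0):
    """color: (H, W, 3) uint8 image; depth: (H, W) depth in meters.
    Shift the red channel left/right per pixel according to depth,
    keep green/blue in place, producing a red/cyan anaglyph."""
    h, w, _ = color.shape
    disparity = np.clip((depth - focus) * strength, -20, 20).astype(int)
    cols = np.arange(w)[None, :]                     # (1, W) column indices
    src = np.clip(cols + disparity, 0, w - 1)        # where to sample red from
    rows = np.arange(h)[:, None]
    out = color.copy()
    out[..., 0] = color[rows, src, 0]                # displaced red channel
    return out

color = np.full((4, 16, 3), 128, dtype=np.uint8)
color[:, 8, :] = 255                                  # a bright vertical stripe
depth = np.full((4, 16), 3.0)                         # 1 m behind the focus plane
result = anaglyph(color, depth)
print(np.argwhere(result[0, :, 0] == 255).ravel())    # the red stripe has shifted
```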

 

Three-dimensional viewer of wave intensity.

This project aims at the development of a three-dimensional viewer in which the intensity of a wave can be seen at each point in time. The viewer turns data previously stored in binary files into visual and interactive data, providing a better view of the work being done with such data. Initially, the objective of the project is to support another MediaLab project. The project uses the C++ programming language with OpenGL to generate the three-dimensional models.

 



Image morphing on GPU

Texture animation based on morphing is an important feature in games and can also be applied in computer graphics in general. Therefore, we are working on a real-time, parallel, feature-based 2D image morphing and warping algorithm fully implemented on the GPU (Graphics Processing Unit). We applied the proposed algorithm to animate the appearance of a 3D character's face by morphing its texture map.
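
A minimal sketch of the morphing step itself, reduced to a cross-dissolve between a warped source and destination texture. A feature-based method additionally warps each image toward interpolated feature lines before blending (here the warps default to the identity), and on the GPU the same per-pixel blend runs in a fragment shader or CUDA kernel.

```python
import numpy as np

def morph(src, dst, t, warp_src=lambda img: img, warp_dst=lambda img: img):
    """Blend two textures at morph parameter t in [0, 1].
    In a feature-based morph the two warp functions move pixels toward the
    interpolated feature lines; here they default to the identity."""
    a = warp_src(src).astype(float)
    b = warp_dst(dst).astype(float)
    return ((1.0 - t) * a + t * b).astype(np.uint8)

neutral = np.zeros((64, 64, 3), dtype=np.uint8)          # stand-in for the neutral face texture
smiling = np.full((64, 64, 3), 200, dtype=np.uint8)      # stand-in for the smiling face texture
for t in (0.0, 0.5, 1.0):
    frame = morph(neutral, smiling, t)
    print(f"t={t}: mean intensity {frame.mean():.1f}")
```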

 

Hybrid of plot-based and character-based storytelling for manipulation of stories

The objective of this work is to demonstrate a hybrid of plot-based and character-based storytelling for the manipulation of stories with pre-defined plots that still allows flexibility in the performance of the characters. To that end, a classic story (Little Red Riding Hood) was modified to obtain these characteristics. Its principal characters, scenarios and other narrative-coordinating elements were implemented as BDI agents with the JADEX agent system. Through the occasional use of random decisions and of parameters that are not directly controlled by the agents, such as execution time, it is possible to create properly controlled variations in the development of the story that do not compromise its conclusion.

Collaborations:

  • VisionLab – PUC Rio
  • Lab3D – COPPE / UFRJ
  • Florida International University
  • Universidade de Coimbra
  • Medialab MIT
  • Instituto Tecnologico de Buenos Aires
  • Instituto de Matemática e Computação da Universidade de São Paulo
  • Universidade Estadual da Bahia
  • Universidade Federal de Santa Maria – Rio Grande do Sul
  • Universidade Federal de Mato Grosso do Sul
  • Departamento de Artes – PUC-Rio