Research & Interests
I am currently interested in both computer graphics and virtual reality. A goal of the EventLab is to measure the PRESENCE of people immersed in Virtual Reality (VR) and to obtain realistic responses when people are confronted with virtual environments.
The research at the EventLab focuses on the theme of PRESENCE in VR, i.e., how to make people react as if real (RAIR) when they are immersed in Virtual Environments.
In order to obtain RAIR, we focus on two main notions, namely:
Place Illusion: a perceptual-level illusion that covers the sensorimotor contingencies of humans. Basically, this is achieved when the visual display is updated in correspondence with the movements made by the subject in the Virtual Environment. In short, the visual feedback should match the tracked motion of the subject (see the sketch after this list).
Plausibility: the illusion that the events in VR are really happening. This covers the following aspects:
Correlation of actions with responses,
Events relate to you,
Credibility, i.e., the events correspond to expectations.
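As a minimal sketch of what matching the tracked motion means in practice (hypothetical code, not the EventLab system; function names and conventions are my own), the view matrix fed to the renderer can be rebuilt every frame from the tracked head pose:

```python
import numpy as np

# Minimal sketch (hypothetical, not the EventLab pipeline): rebuild the
# renderer's view matrix every frame from the tracked head pose, so that
# the visual display is updated in correspondence with the subject's movements.
def view_matrix(head_position, head_rotation):
    """head_position: 3-vector; head_rotation: 3x3 rotation matrix,
    both provided by the tracking system."""
    view = np.eye(4)
    view[:3, :3] = head_rotation.T                  # inverse rotation
    view[:3, 3] = -head_rotation.T @ head_position  # inverse translation
    return view

# Each frame: read the tracker, rebuild the camera, render.
head_pos = np.array([0.1, 1.7, 0.0])  # metres, as reported by the tracker
head_rot = np.eye(3)                  # subject looking straight ahead
print(view_matrix(head_pos, head_rot))
```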
These two key notions, when they occur at the same time, i.e., when we can induce both Place Illusion and Plausibility, can alter the body perception of the participant.
That means we can give participants the illusion that the virtual body is their own body. If this illusion is achieved, then VR can be used in a wide range of applications.
Previously, during my PhD, I was interested in both Computer Graphics and Constraint Programming. These two research areas came together in my PhD in the problem of controlling a camera in virtual (i.e., 3D) environments. A user specified constraints on camera movements and/or on features of the images produced by the camera, and a constraint solver then automatically computed positions and orientations of the camera that fulfil these constraints.
Camera control and Constraint Programming
My research interests can be roughly divided into three different "areas" that are connected through my main research interest, constrained virtual camera control. The underlying idea of constrained camera control is to position and orient a virtual camera wrt. a set of properties given by the user. The properties are defined either on a single object of the 3D scene or on pairs of objects. Properties are for example "see object A's right profile" or "shoot a close-up of object B".
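To make this concrete, here is a small hedged sketch (illustrative only; the Camera class and close_up predicate are hypothetical names, not taken from the thesis) showing how such a property can be expressed as a predicate over the seven camera parameters (3D position, 3D orientation, focal length):

```python
import numpy as np

# Hypothetical sketch: a virtual camera with seven degrees of freedom
# (3D position, 3D orientation, focal length), and a "close-up" property
# expressed as a predicate over those parameters.
class Camera:
    def __init__(self, position, orientation, focal):
        self.position = np.asarray(position, dtype=float)
        self.orientation = np.asarray(orientation, dtype=float)  # yaw, pitch, roll
        self.focal = float(focal)

def close_up(camera, obj_center, obj_radius, min_coverage=0.5):
    """'Shoot a close-up of object B': the object's apparent angular size
    must cover at least min_coverage of the camera's field of view."""
    distance = np.linalg.norm(camera.position - obj_center)
    apparent = 2.0 * np.arctan2(obj_radius, distance)  # angular size of object
    fov = 2.0 * np.arctan2(1.0, camera.focal)          # for a sensor of half-size 1
    return apparent / fov >= min_coverage

cam = Camera(position=[0.0, 1.7, 5.0], orientation=[0.0, 0.0, 0.0], focal=1.0)
print(close_up(cam, obj_center=np.array([0.0, 1.7, 0.0]), obj_radius=0.5))
```

A solver then searches the continuous parameter space for camera configurations on which all such predicates hold.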
One of the most difficult properties to handle in camera control is occlusion, which consists in preventing objects from overlapping on the screen. This can be really tedious given the arbitrary complexity of the objects composing the scene, as well as the relations between them. We therefore propose a method to prevent occlusions while tracking objects in a dynamic scene.
Finally, in order to solve the problems mentioned above, we need to rely on solving techniques over continuous domains, since the parameters of a virtual camera (namely 3D position, 3D orientation and focal length) are by nature continuous. We have therefore been interested in developing a Local Search method for continuous domains.
More detailed information on each of these research interests is available through these links:
Constrained Virtual Camera Control
Continuous Constraints and Local Search
Constrained Virtual Camera Composition:
My main research theme was constrained virtual camera composition. It consists in positioning and orienting a virtual 3D camera so that the resulting image fulfils a set of user-defined properties concerning the objects of the 3D scene.
We illustrate the approach with an example: a 3D scene composed of 5 objects (characters), 3 of them in front of the other 2. The user specifies that they want to see those objects on the screen without any occlusion between them (no object overlaps the projection of any other one). The following scheme is a 2D top view of the scene with the occlusions depicted as dotted lines between the objects:
The idea is to partition the 3D search space wrt. user properties in order to generate the solution areas (striped zones in the previous scheme). The following image is an illustration of the approach in 3D:
Unlike existing approaches, the Semantic Space Partitioning (SSP) approach makes it possible to further characterize the solution areas wrt. properties on the objects of the scene, even if these properties have not been defined by the user. The above scheme presents such a characterization of the solution space. The purple areas are solutions of the non-occlusion problem, but we can qualify those solutions wrt. other properties, such as the orientation of the objects. A white arrow illustrates the orientation of each object; with this property we can characterize the purple areas more precisely and help the user choose between the solutions.
Moreover, one can note that each connected component of the search space (each purple area) corresponds to a different class of solutions to the original problem.
Here is a screenshot of the SSP approach showing that each connected component corresponds to a different solution; indeed:
in the left one, object A is between objects D and E on the screen,
in the middle one, object B is between D and E,
while in the right one, it is object C that is between D and E.
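To make the connected-component idea concrete, here is an illustrative sketch (not the actual SSP implementation; the grid sampling and the two toy property predicates are assumptions for the example): candidate camera positions are labeled with the set of properties they satisfy, and identically labeled adjacent cells are grouped into components, each corresponding to a class of solutions.

```python
import numpy as np

# Illustrative sketch of the SSP idea (not the actual implementation):
# label each candidate camera position with the set of properties it
# satisfies, then group identically labeled, adjacent cells into connected
# components; each component is a distinct class of solutions.
def label_grid(positions, properties):
    """positions: (H, W, 2) array of 2D camera positions;
    properties: dict mapping a property name to a predicate on a position."""
    h, w = positions.shape[:2]
    labels = np.empty((h, w), dtype=object)
    for i in range(h):
        for j in range(w):
            labels[i, j] = frozenset(
                name for name, pred in properties.items() if pred(positions[i, j]))
    return labels

def connected_components(labels):
    """4-connected flood fill over cells sharing the same label set."""
    h, w = labels.shape
    comp = -np.ones((h, w), dtype=int)
    count = 0
    for i in range(h):
        for j in range(w):
            if comp[i, j] != -1:
                continue
            comp[i, j] = count
            stack = [(i, j)]
            while stack:
                y, x = stack.pop()
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < h and 0 <= nx < w and comp[ny, nx] == -1
                            and labels[ny, nx] == labels[y, x]):
                        comp[ny, nx] = count
                        stack.append((ny, nx))
            count += 1
    return comp, count

# Toy example: two properties over a 10x10 grid of candidate positions.
xs, ys = np.meshgrid(np.linspace(-5, 5, 10), np.linspace(-5, 5, 10))
grid = np.dstack([xs, ys])
props = {"left_of_scene": lambda p: p[0] < 0,
         "close_to_scene": lambda p: np.hypot(p[0], p[1]) < 3}
comp, count = connected_components(label_grid(grid, props))
print(f"{count} solution classes found")
```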
Occlusion consists in the overlap of the projections of objects on the screen; it can be considered at different levels:
an object is occluded as soon as a single pixel of its projection is covered by a pixel of any other object,
an object is considered occluded only if one cannot recognize its shape (although this definition is subject to interpretation),
an object is considered occluded if a percentage (threshold) of its projection is covered on the screen.
Each definition can be considered valid and leads to a different kind of occlusion management.
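As a small sketch of the first and third definitions (hypothetical code; the mask-based formulation is an assumption for illustration, not the thesis implementation), both can be computed from per-object binary masks of the screen:

```python
import numpy as np

# Hypothetical sketch: per-object binary masks (True where the object's
# projection would land on screen, ignoring depth) and a depth-resolved
# visibility mask of what actually ends up visible.
def occlusion_ratio(projection_mask, visible_mask):
    """Fraction of the object's projection hidden by other objects."""
    projected = projection_mask.sum()
    if projected == 0:
        return 0.0  # object is off-screen, not occluded
    hidden = np.logical_and(projection_mask, ~visible_mask).sum()
    return hidden / projected

def is_occluded_pixel_level(projection_mask, visible_mask):
    # Definition 1: occluded as soon as a single pixel is covered.
    return occlusion_ratio(projection_mask, visible_mask) > 0.0

def is_occluded_threshold(projection_mask, visible_mask, threshold=0.5):
    # Definition 3: occluded if a given percentage of the projection is covered.
    return occlusion_ratio(projection_mask, visible_mask) >= threshold

# Toy 8x8 screen: the object covers a 4x4 square, half of it hidden.
proj = np.zeros((8, 8), dtype=bool); proj[2:6, 2:6] = True
vis = proj.copy(); vis[2:6, 4:6] = False  # right half covered by another object
print(occlusion_ratio(proj, vis))             # 0.5
print(is_occluded_pixel_level(proj, vis))     # True
print(is_occluded_threshold(proj, vis, 0.5))  # True
```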
The following image shows an example of occlusion between 4 objects composing a 3D scene:
Occlusion is particularly difficult to handle when considering complex objects (in terms of shape and/or number of faces). In order to simplify the problem, objects are often abstracted by bounding volumes with much simpler shapes and far fewer faces. Another way to ease occlusion handling is to take advantage of graphics card capabilities and use real-time hardware rendering techniques to draw the scene into low-resolution buffers and decide on the occlusion of the current scene.
We propose an approach based on hardware rendering that renders the scene from the objects of interest, pointing towards the current camera position. Indeed, an object is not occluded if, when a camera is placed on this object pointing towards the current camera position, the resulting pixel is visible; see the following picture for more details:
The advantage of this approach is that the rendering from the objects does not yield a single pixel but a whole image; we can thus decide on the occlusion of many possible camera positions around the current one:
In the previous image we can see the rays cast from the objects towards a screen centered on the current camera position. A green circle corresponds to a 3D position that is unoccluded for both objects, a yellow circle to a 3D position that is occluded for one object, while a red circle corresponds to a 3D position occluded for both objects, which must not be chosen.
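A hedged sketch of this classification step, assuming we already have, for each object of interest, a low-resolution boolean visibility buffer rendered from the object towards the current camera position (the buffer layout and names are assumptions):

```python
import numpy as np

# Hypothetical sketch: each object of interest has a low-resolution boolean
# buffer rendered from the object towards a screen centered on the current
# camera position. buffer[i, j] is True if the corresponding candidate
# camera position is visible (unoccluded) from that object.
def classify_candidates(visibility_buffers):
    """Return 'green' (visible from all objects), 'yellow' (from some),
    or 'red' (from none) for every candidate camera position."""
    stack = np.stack(visibility_buffers)  # (num_objects, H, W)
    visible_count = stack.sum(axis=0)
    classes = np.full(stack.shape[1:], "yellow", dtype=object)
    classes[visible_count == len(visibility_buffers)] = "green"
    classes[visible_count == 0] = "red"
    return classes

# Toy example with two objects and a 4x4 screen of candidate positions.
rng = np.random.default_rng(0)
buf_a = rng.random((4, 4)) > 0.3
buf_b = rng.random((4, 4)) > 0.3
print(classify_candidates([buf_a, buf_b]))
```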
Continuous Constraints and Local Search:
I am also interested in meta-heuristics and, more precisely, in combining Local Search with continuous domains. Local Search (LS) is a meta-heuristic that solves optimization or constraint satisfaction problems by adding randomization during the solving process. LS has proven very efficient when applied to discrete CSPs; however, it requires a fine tuning of its parameters that can be tedious.
Applying LS to continuous domains (where variables' domains are intervals instead of floating-point values) has only been tackled by a few papers and generally consists in "discretizing" the continuous domains, that is to say reducing intervals to a finite subset of floating-point values.
Our approach is somewhat orthogonal, since we want to maintain the interval nature of continuous domains by relying on interval arithmetic. We want to give new definitions to the basic components of LS when applied to continuous domains, since their continuous specifications raise new questions on:
the definition of a continuous configuration (how do we specify it?),
the definition of the neighbourhood of a configuration,
the management of moves between configurations,
as well as the definition of penalty functions.
We propose to use a Cartesian product of intervals (also called a box) as a configuration, to choose as neighbours boxes drawn randomly around the current configuration, to define a move as changing both the current configuration and its associated neighbourhood, and to reduce this neighbourhood wrt. the previous move in order to enforce intensification.
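The following is a minimal sketch of these components (illustrative only, not the published algorithm; for simplicity the penalty is evaluated at the box midpoint rather than with interval arithmetic):

```python
import random

# Hedged sketch of Local Search over boxes (Cartesian products of intervals).
# A configuration is a list of (lo, hi) intervals; a move replaces the
# current box by a randomly shifted neighbour, and the neighbourhood radius
# shrinks after each improving move to enforce intensification.
def midpoint(box):
    return [(lo + hi) / 2.0 for lo, hi in box]

def random_neighbour(box, radius, bounds):
    """A neighbour is a box of the same widths, shifted randomly around
    the current one and clipped to the variables' global domains."""
    neighbour = []
    for (lo, hi), (blo, bhi) in zip(box, bounds):
        width = hi - lo
        shift = random.uniform(-radius, radius)
        new_lo = min(max(lo + shift, blo), bhi - width)
        neighbour.append((new_lo, new_lo + width))
    return neighbour

def local_search(penalty, bounds, box_width=0.5, radius=2.0,
                 shrink=0.9, max_moves=1000):
    # Start from a random box inside the global domains.
    box = []
    for blo, bhi in bounds:
        lo = random.uniform(blo, bhi - box_width)
        box.append((lo, lo + box_width))
    best = penalty(midpoint(box))
    for _ in range(max_moves):
        cand = random_neighbour(box, radius, bounds)
        p = penalty(midpoint(cand))
        if p < best:
            box, best = cand, p
            radius *= shrink  # intensify: shrink the neighbourhood after a move
        if best == 0.0:
            break
    return box, best

# Toy penalty: distance of (x, y) to the circle x^2 + y^2 = 4.
penalty = lambda v: abs(v[0] ** 2 + v[1] ** 2 - 4.0)
box, p = local_search(penalty, bounds=[(-5.0, 5.0), (-5.0, 5.0)])
print(box, p)
```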
I am currently working on the BEAMING project, and in 2009 I worked on both the PRESENCCIA and IMMERSENCE European projects.
BEAMING Being in Augmented Multi-Modal Naturally-Networked Gatherings.
Today, in spite of advanced video conferencing, shared virtual environments, and gaming environments such as Second Life, it is still simply much more efficient to physically travel to a remote location for business, scientific or family meetings, even if at a huge environmental, energetic and opportunity cost. The science and technology developed in BEAMING will for the first time give people a real sense of physically being in a remote location with other people, and vice versa, without actually physically travelling. BEAMING is a four-year FP7 EU collaborative project (#248620) which started on Jan 1st 2010.
PRESENCCIA Research Encompassing Sensory Enhancement, Neuroscience, Cerebral-Computer Interfaces and Applications – a 4-year EU Integrated Project, total project budget 6.5M euro, 903,677 € at UPC. (Principal Investigator: Mel Slater, who is also coordinator of the 14-partner PRESENCCIA project.)
IMMERSENCE Immersive Multi-Modal Interactive Presence – a 4-year EU Integrated Project, funding at UPC 240,562 €, partner. (Principal Investigator: Mel Slater.)
Jean-Marie Normand obtained his PhD in Computer Science from the University of Nantes, France in January 2008. The subject of the thesis was: Virtual Camera Control.
Previously, he obtained both BSc and MSc degrees in Computer Science at the University of Nantes, France.
Before starting his PhD, Jean-Marie Normand received a Master in Computer Graphics from the University Claude Bernard Lyon 1, France, and another Master in Computer Science from the University of Nantes.
Jean-Marie was a PhD student and Research and Teaching Assistant at LINA, the Computer Science Laboratory of Nantes (University of Nantes, France). During his PhD, his research interests were both the control of virtual cameras and constraint programming.