CRS4
Founded in the early 1990s, CRS4 is an interdisciplinary research centre that promotes the study, development, and application of innovative solutions to problems arising from natural, social, and industrial environments. These developments and solutions are based on Information Science and Technology and on high-performance digital computing. Its primary objective is innovation.
The Centre's mission is to help Sardinia create and grow a fabric of hi-tech companies essential to its economic and cultural development.
Since 2010 the Centre has employed about 200 people, including researchers, technologists, and support staff, working in four strategic areas of scientific research: Biomedicine, Data Fusion, Energy and Environment, and the Information Society. CRS4 also operates one of Italy's main computing centres, the first platform in Italy dedicated to genotyping and massive DNA sequencing, and a state-of-the-art Visual Computing laboratory.
CRS4 institutional website
Browse
Showing CRS4 content for Subject "3D interaction"
Now showing 1 - 19 of 19
- A medical volume visualization system supporting head-tracked stereoscopic viewing and direct 3D interaction (1997). Zorcolo, Antonio; Pili, Piero; Gobbetti, Enrico.
  We have developed an experimental medical volume visualization system supporting head-tracked stereoscopic viewing registered with direct 3D interaction. Our aim is to assess the suitability of these techniques for surgical planning tasks in real medical settings. In particular, vascular surgeons examining the distal site of the aneurysmatic sack are assisted by visualizing the artery aneurysm in depth. A better understanding of such complex spatial structures is achieved by incorporating motion parallax and stereoscopic cues to depth perception that are not available from static images. Our display, positioned like a surgical table, provides the impression of looking down at the patient in a naturalistic way. With simple head motions, good positions for observing the pathology are quickly established.
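The head-tracked stereoscopic viewing described in this abstract ultimately comes down to rendering the scene from two eye positions derived from the tracked head pose. As a minimal illustrative sketch (the function name, coordinate convention, and the 65 mm interocular distance are assumptions, not taken from the paper):

```python
# Hypothetical sketch: deriving per-eye viewpoints from a tracked head pose,
# the basic ingredient of head-tracked stereoscopic viewing.

def eye_positions(head_pos, right_dir, ipd=0.065):
    """Return (left_eye, right_eye) world positions.

    head_pos  -- (x, y, z) midpoint between the eyes, from the head tracker
    right_dir -- unit vector pointing from the left eye toward the right eye
    ipd       -- interocular distance in metres (illustrative default)
    """
    half = ipd / 2.0
    left = tuple(p - half * d for p, d in zip(head_pos, right_dir))
    right = tuple(p + half * d for p, d in zip(head_pos, right_dir))
    return left, right

# Head at 1.6 m height, half a metre in front of the display, looking along -z:
left, right = eye_positions((0.0, 1.6, 0.5), (1.0, 0.0, 0.0))
```

Each eye position would then feed an off-axis projection so that rendered objects stay registered with the physical workspace as the head moves.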
- A virtual reality cookbook. Tutorial notes (1993-06). Balaguer, Jean-Francis; Gobbetti, Enrico.
  This file contains the handouts of a half-day tutorial given at the Computer Graphics International Conference held in Lausanne in 1993.
- A volumetric virtual environment for catheter insertion simulation (2000-06). Zorcolo, Antonio; Gobbetti, Enrico; Zanetti, Gianluigi; Tuveri, Massimiliano.
  We present an experimental catheter insertion simulation system that provides users with co-registered haptic and head-tracked stereoscopic visual feedback. The system works on patient-specific volumetric data acquired using standard medical imaging modalities. The actual needle insertion operation is simulated for individual patients, rather than being an example of a model surgical procedure on standard anatomy. Patient-specific features may thus be studied in detail by the trainees, overcoming one of the major limitations of current training techniques.
- An Integrated Environment to Visually Construct 3D Animations (ACM, 1995). Gobbetti, Enrico; Balaguer, Jean-Francis.
  In this paper, we present an expressive 3D animation environment that enables users to rapidly and visually prototype animated worlds with a fully 3D user-interface. A 3D device allows the specification of complex 3D motion, while virtual tools are visible mediators that live in the same 3D space as application objects and supply the interaction metaphors to control them. In our environment, there is no intrinsic difference between user-interface and application objects. Multi-way constraints provide the necessary tight coupling among components that makes it possible to seamlessly compose interactive and animated behaviors. By recording the effects of manipulations, all the expressive power of the 3D user-interface is exploited to define animations. Effective editing of recorded manipulations is made possible by compacting all continuous parameter evolutions with an incremental data-reduction algorithm, designed to preserve both geometry and timing. The automatic generation of editable representations of interactive performances overcomes one of the major limitations of current performance animation systems. Novel interactive solutions to animation problems are made possible by the tight integration of all system components. In particular, animations can be synchronized by using constrained manipulation during playback. The accompanying videotape illustrates our approach with interactive sequences showing the visual construction of 3D animated worlds. All the demonstrations were recorded live and were not edited.
- Animating Spaceland (IEEE, 1996-08-29). Balaguer, Jean-Francis; Gobbetti, Enrico.
  Modern 3D animation systems let a growing number of people generate increasingly sophisticated animated movies, frequently for tutorials or multimedia documents. However, although these tasks are inherently three dimensional, these systems' user interfaces are still predominantly two dimensional. This makes it difficult to interactively input complex animated 3D movements. We have developed Virtual Studio, an inexpensive and easy-to-use 3D animation environment in which animators can perform all interaction directly in three dimensions. Animators can use 3D devices to specify complex 3D motions. Virtual tools are visible mediators that provide interaction metaphors to control application objects. An underlying constraint solver lets animators tightly couple application and interface objects. Users define animation by recording the effect of their manipulations on models. Virtual Studio applies data-reduction techniques to generate editable representations of each animated element that is manipulated.
- Building an interactive 3D animation system (Prentice Hall, 1993). Gobbetti, Enrico; Balaguer, Jean-Francis; Mangili, Angelo; Turner, Russell.
  The continued improvement and proliferation of graphics hardware for workstations and personal computers has brought increasing prominence to a newer style of software application program. This style relies on fast, high-quality graphics displays coupled with expressive input devices to achieve real-time animation and direct-manipulation interaction metaphors. Such applications impose a rather different conceptual approach, on both the user and the programmer, than more traditional software. The application program can be thought of increasingly as a virtual machine, with a tangible two- or three-dimensional appearance, behavior, and tactile response. Dynamic graphics techniques are now considered essential for making computers easier to use, and interactive, graphical interfaces that allow the presentation and direct manipulation of information in pictorial form are now an important part of most modern graphics software tools.
- Catheter insertion simulation with co-registered direct volume rendering and haptic feedback (IOS, 2000-01). Gobbetti, Enrico; Tuveri, Massimiliano; Zanetti, Gianluigi; Zorcolo, Antonio.
  We have developed an experimental catheter insertion simulation system supporting head-tracked stereoscopic viewing of volumetric anatomic reconstructions registered with direct haptic 3D interaction. The system takes as input data acquired with standard medical imaging modalities and regards it as a visual and haptic environment whose parameters are interactively defined using look-up tables. The system's display, positioned like a surgical table, provides a realistic impression of looking down at the patient. By measuring head motion with a six-degrees-of-freedom head tracker, good positions to observe the anatomy and identify the catheter insertion point are quickly established with simple head movements. By generating appropriate stereoscopic images and co-registering physical and virtual spaces beforehand, volumes appear at fixed physical positions, and catheter insertion can be controlled through direct interaction with a PHANToM haptic device. During the insertion procedure, the system provides perception of the effort of penetration and of deviation inside the traversed tissues. Semi-transparent volumetric rendering augments the sensory feedback with a visual indication of the position of the inserted catheter inside the body.
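The haptic feedback described above maps tissue properties sampled from the volume to a resisting force. The abstracts do not give the actual force model; purely as an illustrative assumption, a minimal stand-in could weight a stiffness constant by the densities sampled along the inserted needle segment:

```python
# Hypothetical sketch of force feedback during simulated needle insertion:
# resistance modelled as a stiffness-weighted sum of tissue-density samples
# along the inserted segment. All names and numbers are illustrative
# assumptions, not the papers' actual model.

def insertion_force(densities, step, stiffness=50.0):
    """Approximate resisting force (N) for a needle traversing the given
    density samples (each in 0..1), spaced `step` metres apart."""
    return stiffness * step * sum(densities)

# The needle has crossed soft tissue (low density) into denser material:
samples = [0.1, 0.1, 0.4, 0.8, 0.9]
force = insertion_force(samples, step=0.001)
```

In a real system such a value would be recomputed at the haptic update rate (typically around 1 kHz) and sent to the device, with separate terms for penetration effort and lateral deviation.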
- Catheter insertion simulation with combined visual and haptic feedback (1999-05). Zorcolo, Antonio; Gobbetti, Enrico; Pili, Piero; Tuveri, Massimiliano.
  We have developed an experimental catheter insertion system supporting head-tracked stereoscopic viewing of volumetric reconstructions registered with direct haptic 3D interaction. The system takes as input patient data acquired with standard medical imaging modalities and regards it as a visual and haptic environment whose parameters are defined using look-up tables. By means of a mirror, the screen appears to be positioned like a surgical table, providing the impression of looking down at the patient in a natural way. Co-registering physical and virtual spaces beforehand means that the patient appears at a fixed physical position on the surgical table and inside the workspace of the PHANToM device, which controls catheter insertion. During the insertion procedure the system provides perception of the force of penetration and of the positional deviation of the inserted catheter.
- Collaborative holographic environments for networked tasks (2004-01-01). Gobbetti, Enrico. Partners: Holografika Kft, Hungary (coordinator); CRS4, Italy (contractor); Istituto Superiore di Sanità (ISS), Italy (contractor); Peugeot Citroen Automobiles SA, France (contractor); Glasgow School of Art, United Kingdom (contractor); CS Systemes d'Information, France (contractor); Rheinische Friedrich-Wilhelms-Universitaet Bonn, Germany (contractor).
  Rapidly evolving advances in networked audiovisual communication technology are facilitating the emergence of computer-supported collaborative work (CSCW) systems. These systems strive to seamlessly support collaboration between geographically distant teams for the purpose of achieving higher levels of participation, productivity, and creativity, and therefore address a major societal and economic challenge. Since visualisation is one of the most natural and intuitive ways to exchange information between humans, it has become the principal medium used in co-operative and multi-user situations. At present, however, state-of-the-art collaborative real-time audiovisual systems typically rely on essentially 2D environments (traditional flat screens) to share information. For many professional applications, however, the main goal is to share the physical 3D object of common interest. These applications typically include clinical discussions among teams of medical specialists, multi-disciplinary scientific debate, and design reviews between OEMs and suppliers using computer-aided design (CAD), where the objects may be anatomical, molecular, and product models respectively. Since these are almost exclusively very complex 3D objects, providing collaborative environments able to process, transmit, and display 3D data in ways that match human perceptual abilities is of primary importance and would represent a significant technology breakthrough. However, at present the only computer displays able to provide all the depth cues processed by the human brain to reconstruct a three-dimensional scene are limited to single-user configurations. Quite ironically, these limitations have led to networked solutions that facilitate remote collaboration only at the expense of the isolation of each participant from their local physical environment.
  In the COHERENT project, six leading European organisations in their respective fields provide complementary competencies to create a new networked holographic audio-visual platform striving to seamlessly support real-time collaborative 3D interaction between geographically distributed teams. The display component will be based on innovative holographic techniques that can present, at natural human interaction scale, realistic animated 3D images to an unlimited number of freely moving simultaneous viewers. The design of the basic networked audiovisual components will be driven by two innovative, demanding applications - a collaborative medical visualisation system and a collaborative design review system for the automotive industry - that will by themselves constitute an advancement of the state of the art in their specific domains. Both applications will provide intuitive access and interaction with shared 3D models through a sensory-rich 3D user interface based on non-intrusive wireless interaction devices and offering 3D audio cues. Research will strongly concentrate on enabling technology for intuitive multi-user access and interaction with complex 3D signals and objects.
  The technical feasibility of the proposed holographic display solution has recently been demonstrated with the development of a "small scale" proof of concept using a white-light-based, 24-bit true-colour holographic 3D display. This project proposes to build on this earlier success to produce a working high-resolution display in the one-metre size range that, thanks to its human-scale work area, will be ideally suited for multi-user collaborative working in true 3D. The challenge of providing the large visualisation data flow needed to drive such a device will be met using a cost-effective parallel solution based on commercial-off-the-shelf graphics and computing technology. Using GEANT, the pan-European Gigabit Research Network, the project will conduct distributed testing and validation of the system concepts for the two representative application scenarios. The driving applications have been chosen in two important sectors where collaborative 3D technology and networked audiovisual communication have a clear potential impact and provide a sizeable market for the future exploitation of the project results. Moreover, the need for distant teams to work together toward a collaborative goal is becoming increasingly common in many industrial and social situations, so the best practice and methods opened up by this project will have implications in other application domains, in particular high-potential, industry-driven domains such as next-generation 3D TV, electronic cinema, virtual and tele-presence, and future mixed-reality-based communication services. The consortium has centred the project workplan around continuous and detailed end-user involvement in the research, development, evaluation, and validation activities. The end-users will also play an instrumental role in reaching their larger community as part of the dissemination and exploitation strategy. The research will be conducted against an ambitious but achievable 30-month schedule, to guarantee early delivery, evaluation, and demonstration of tangible results.
- FOX: The Focus Sliding Surface Metaphor for Natural Exploration of Massive Models on Large-scale Light Field Displays (ACM, 2011-12). Marton, Fabio; Agus, Marco; Pintore, Giovanni; Gobbetti, Enrico.
  We report on a virtual environment for natural immersive exploration of extremely detailed surface models on light field displays. Our specialized 3D user interface allows casual users to inspect 3D objects at various scales, integrating panning, rotating, and zooming controls into a single low-degree-of-freedom operation, while taking into account the requirements for comfortable viewing on light field display hardware. Specialized multiresolution structures, embedding a fine-grained per-patch spatial index within a coarse-grained patch-based mesh structure, are exploited for fast batched I/O, GPU-accelerated rendering, and the geometric queries required by the user-interaction system. The capabilities of the system are demonstrated by the interactive inspection of a giga-triangle dataset on a large-scale 35MPixel light field display controlled by wired or vision-based devices.
- Interactive Scene Walkthrough Using a Physically-Based Virtual Camera (Vieweg, 1991). Turner, Russell; Balaguer, Jean-Francis; Gobbetti, Enrico; Thalmann, Daniel.
  One of the most powerful results of recent advances in graphics hardware is the ability of a computer user to interactively explore a virtual building or landscape. The newest three-dimensional input devices, together with high-speed 3D graphics workstations, make it possible to view and move through a 3D scene by interactively controlling the motion of a virtual camera. In this paper, we describe how natural and intuitive control of a building walkthrough can be achieved by using a physically-based model of the virtual camera's behavior. Using the laws of classical mechanics to create an abstract physical model of the camera, we then simulate the virtual camera motion in real time in response to force data from the various 3D input devices (e.g. the Spaceball and Polhemus 3Space Digitizer). The resulting interactive behavior of the model is determined by several physical parameters such as mass, moment of inertia, and various friction coefficients, which can all be varied interactively, and by constraints on the camera's degrees of freedom. This allows us to explore a continuous range of physically-based metaphors for controlling the camera motion. We present the results of experiments using several of these metaphors for virtual camera motion and describe the effects of the various physical parameters.
- Object-oriented design of dynamic graphics applications (Wiley, 1992). Gobbetti, Enrico; Turner, Russell.
- Physically-based interactive camera motion control using 3D input devices (Springer, 1991). Turner, Russell; Balaguer, Jean-Francis; Gobbetti, Enrico; Thalmann, Daniel.
  The newest three-dimensional input devices, together with high-speed graphics workstations, make it possible to interactively specify virtual camera motions for animation in real time. In this paper, we describe how naturalistic interaction and realistic-looking motion can be achieved by using a physically-based model of the camera's behavior. Our approach is to create an abstract physical model of the camera, using the laws of classical mechanics, which is used to simulate the virtual camera motion in real time in response to force data from the various 3D input devices (e.g. the Spaceball, Polhemus and DataGlove). The behavior of the model is determined by several physical parameters such as mass, moment of inertia, and various friction coefficients, which can all be varied interactively, and by constraints on the camera's degrees of freedom, which can be simulated by setting certain friction parameters to very high values. This allows us to explore a continuous range of physically-based metaphors for controlling the camera motion. We present the results of experiments with several of these metaphors and contrast them with existing ones.
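The physically-based camera model in the two abstracts above treats the camera as a mass driven by device forces and damped by friction. The papers do not publish code; as a minimal sketch under stated assumptions (point mass only, explicit Euler integration, illustrative parameter values, no rotational inertia):

```python
# Minimal sketch of the physically-based camera idea: a point mass driven by
# input-device forces and slowed by viscous friction. Integration scheme and
# all parameter values are illustrative assumptions.

def step_camera(pos, vel, force, mass=1.0, friction=2.0, dt=0.01):
    """Advance one simulation step; returns (new_pos, new_vel)."""
    # Newton's second law with a viscous friction term: m*a = F - k*v
    acc = [(f - friction * v) / mass for f, v in zip(force, vel)]
    vel = [v + a * dt for v, a in zip(vel, acc)]
    pos = [p + v * dt for p, v in zip(pos, vel)]
    return pos, vel

# Push the camera along +x for 100 steps, then let friction bring it to rest.
pos, vel = [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]
for _ in range(100):
    pos, vel = step_camera(pos, vel, [1.0, 0.0, 0.0])
for _ in range(500):
    pos, vel = step_camera(pos, vel, [0.0, 0.0, 0.0])
```

Raising the friction coefficient makes the camera feel heavier and more controlled; setting it very high on a given axis effectively freezes that degree of freedom, which is the constraint mechanism the second abstract mentions.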
- Sketching 3D animations (Wiley, 1995-09). Balaguer, Jean-Francis; Gobbetti, Enrico.
  We are interested in providing animators with a general-purpose tool allowing them to create animations using straight-ahead actions as well as pose-to-pose techniques. Our approach seeks to bring the expressiveness of real-time motion capture systems into a general-purpose multi-track system running on a graphics workstation. We emphasize the use of high-bandwidth interaction with 3D objects together with specific data reduction techniques for the automatic construction of editable representations of interactively sketched continuous parameter evolution. In this paper, we concentrate on providing a solution to the problem of applying data reduction techniques in an animation context. The requirements that must be fulfilled by the data reduction algorithm are analyzed. From the Lyche and Moerken knot removal strategy, we derive an incremental algorithm that computes a B-spline approximation to the original curve by considering only a small piece of the total curve at any time. This algorithm allows the processing of the user's captured motion in parallel with its specification, and guarantees constant latency time and memory needs for input motions composed of any number of samples. After showing the results obtained by applying our incremental algorithm to 3D animation paths, we describe an integrated environment to visually construct 3D animations, where all interaction is done directly in three dimensions. By recording the effects of the user's manipulations and taking into account the temporal aspect of the interaction, straight-ahead animations can be defined. Our algorithm is automatically applied to continuous parameter evolution in order to obtain editable representations. The paper concludes with a presentation of future work.
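The paper's incremental B-spline knot-removal algorithm is involved; as a much simpler stand-in that conveys the same goal of reducing a densely sampled parameter curve while preserving geometry and timing, here is a Douglas-Peucker-style pass over (time, value) pairs. This is purely illustrative and is not the authors' algorithm:

```python
# Simplified stand-in for motion data reduction: recursively keep only the
# samples needed to stay within `tol` of the sketched curve. Timing is
# preserved because surviving points keep their original sample times.

def reduce_curve(samples, tol):
    """samples: list of (t, v) pairs; returns a reduced list within `tol`."""
    if len(samples) < 3:
        return list(samples)
    (t0, v0), (t1, v1) = samples[0], samples[-1]

    def dev(s):
        # Deviation of a sample from linear interpolation of the endpoints.
        t, v = s
        return abs(v - (v0 + (v1 - v0) * (t - t0) / (t1 - t0)))

    worst = max(range(1, len(samples) - 1), key=lambda i: dev(samples[i]))
    if dev(samples[worst]) <= tol:
        return [samples[0], samples[-1]]
    left = reduce_curve(samples[: worst + 1], tol)
    right = reduce_curve(samples[worst:], tol)
    return left[:-1] + right  # avoid duplicating the split point

# A flat track with one spike: only the spike's neighbourhood survives.
data = [(t, 0.0) for t in range(10)]
data[5] = (5, 1.0)
reduced = reduce_curve(data, tol=0.1)
```

Unlike the paper's method, this version is not incremental (it needs the whole curve) and produces a polyline rather than a B-spline, but it shows why editable reduced representations are feasible: most captured samples are redundant.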
- Supporting interactive animation using multi-way constraints (Springer, 1995). Balaguer, Jean-Francis; Gobbetti, Enrico.
  This paper presents how the animation subsystem of an interactive environment for the visual construction of 3D animations has been modeled on top of an object-oriented constraint imperative architecture. In our architecture, there is no intrinsic difference between user-interface and application objects. Multi-way dataflow constraints provide the necessary tight coupling among components that makes it possible to seamlessly compose animated and interactive behaviors. Indirect paths allow an effective use of the constraint model in the context of dynamic applications. The ability of the underlying constraint solver to deal with hierarchies of multi-way, multi-output dataflow constraints, together with the ability of the central state manager to handle indirect constraints, is exploited to define most of the behaviors of the modeling and animation components in a declarative way. The ease of integration between all of the system's components opens the door to novel interactive solutions to modeling and animation problems. By recording the effects of the user's manipulations on the models, all the expressive power of the 3D user interface is exploited when defining animations. This performance-based approach complements standard key-framing systems by providing the ability to create animations with straight-ahead actions. At the end of the recording session, animation tracks are automatically updated to integrate the new piece of animation. Animation components can be easily synchronized using constrained manipulation during playback. The system demonstrates that, although they are limited to expressing acyclic conflict-free graphs, multi-way dataflow constraints are general enough to model a large variety of behaviors while remaining efficient enough to ensure the responsiveness of large interactive 3D graphics applications.
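The defining property of a multi-way dataflow constraint, as used in this paper and in the VB2 items below, is that one relation carries several methods, one per possible output variable, and the solver picks a method whose output is not the variable the user just set. A real solver such as SkyBlue handles whole constraint graphs and hierarchies; the following deliberately minimal sketch (all names are illustrative assumptions) reduces the idea to a single a + b = c constraint:

```python
# Toy multi-way constraint: maintains a + b = c by recomputing whichever
# variable was NOT just edited. Illustrative sketch only; a real solver
# plans method selection over an entire constraint graph.

class SumConstraint:
    def __init__(self, values):
        self.values = values  # dict with keys 'a', 'b', 'c'

    def set(self, name, value):
        self.values[name] = value
        # Choose an output variable different from the one the user edited.
        if name in ('a', 'b'):
            self.values['c'] = self.values['a'] + self.values['b']
        else:
            self.values['b'] = self.values['c'] - self.values['a']

vals = {'a': 1, 'b': 2, 'c': 3}
solver = SumConstraint(vals)
solver.set('a', 10)   # constraint re-solved by recomputing c
solver.set('c', 20)   # constraint re-solved by recomputing b
```

Because the same relation can flow in either direction, interface widgets and application objects can be coupled symmetrically: dragging a handle updates the model, and animating the model moves the handle, with no special-case code for either direction.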
- Tecniche di visualizzazione volumetrica di carotaggi (1997). Gobbetti, Enrico; Pili, Piero; Scateni, Riccardo.
- VB2: an architecture for interaction in synthetic worlds (ACM, 1993). Gobbetti, Enrico; Balaguer, Jean-Francis.
  This paper describes the VB2 architecture for the construction of three-dimensional interactive applications. The system's state and behavior are uniformly represented as a network of interrelated objects. Dynamic components are modeled by active variables, while multi-way relations are modeled by hierarchical constraints. Daemons are used to sequence between system states in reaction to changes in variable values. The constraint network is efficiently maintained by an incremental constraint solver based on an enhancement of SkyBlue. Multiple devices are used to interact with the synthetic world through the use of various interaction paradigms, including immersive environments with visual and audio feedback. Interaction techniques range from direct manipulation, to gestural input and three-dimensional virtual tools. Adaptive pattern recognition is used to increase input device expressiveness by enhancing sensor data with classification information. Virtual tools, which are encapsulations of visual appearance and behavior, present a selective view of manipulated models' information and offer an interaction metaphor to control it. Since virtual tools are first-class objects, they can be assembled into more complex tools, much in the same way that simple tools are built on top of a modeling hierarchy. The architecture is currently being used to build a virtual reality animation system.
- View-dependent Exploration of Massive Volumetric Models on Large Scale Light Field Displays (Springer, 2010-06). Iglesias Guitián, José Antonio; Gobbetti, Enrico; Marton, Fabio.
  We report on a light-field-display-based virtual environment enabling multiple naked-eye users to perceive detailed multi-gigavoxel volumetric models as floating in space, responsive to their actions, and delivering different information in different areas of the workspace. Our contributions include a set of specialized interactive illustrative techniques able to provide different contextual information in different areas of the display, as well as an out-of-core CUDA-based raycasting engine with a number of improvements over current GPU volume raycasters. The possibilities of the system are demonstrated by the multi-user interactive exploration of 64-gigavoxel datasets on a 35MPixel light field display driven by a cluster of PCs.
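The engine described above is an out-of-core CUDA raycaster; its innermost operation, common to essentially all volume raycasters, is front-to-back compositing of samples along each ray with early ray termination. A tiny language-agnostic illustration (the sample values and transfer function below are assumptions, not from the paper):

```python
# Front-to-back compositing along one ray, the core loop of a volume
# raycaster. Single scalar "colour" channel for brevity; a real renderer
# composites RGB and runs this per pixel on the GPU.

def composite(samples, transfer, alpha_cutoff=0.99):
    """samples  -- scalar field values along the ray, front first
    transfer -- maps a scalar to a (colour, alpha) pair"""
    colour, alpha = 0.0, 0.0
    for s in samples:
        c, a = transfer(s)
        colour += (1.0 - alpha) * a * c
        alpha += (1.0 - alpha) * a
        if alpha >= alpha_cutoff:  # early ray termination: ray is opaque
            break
    return colour, alpha

# Illustrative transfer function: brighter and more opaque with density.
result = composite([0.2, 0.5, 0.9], lambda s: (s, s / 2.0))
```

Early ray termination matters at these data scales: once accumulated opacity approaches one, deeper (possibly not-yet-loaded) bricks of the out-of-core dataset never need to be fetched for that ray.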
- Virtuality Builder II: on the topic of 3D interaction (1993). Gobbetti, Enrico; Balaguer, Jean-Francis.
  Most of today's user interfaces for 3D graphics systems still predominantly use 2D widgets, even though current graphical hardware should make it possible to create applications in which the user directly manipulates aspects of three-dimensional synthetic worlds. The difficulties associated with achieving the key goal of immersion have led research in virtual environments to concentrate far more on the development of new input and display devices than on higher-level techniques for 3D interaction. It is only recently that interaction with synthetic worlds has tried to go beyond straightforward interpretation of physical device data. The design space for 3D interaction tools and techniques remains mostly unexplored, while being far larger than in standard 2D applications. Moreover, as stated by Myers, "the only reliable way to generate quality interfaces is to test prototypes with users and modify the design based on their comments". The creation of complex interactive applications is an inherently iterative process that requires user interface tools, such as toolkits or frameworks. The lack of experience in 3D interfaces makes it extremely difficult to design 3D interface toolkits or frameworks. We believe that offering the possibility to rapidly prototype and test novel interaction techniques should be the primary goal of such tools. It is therefore more important for these tools to provide a wide range of interaction components than to enforce a particular interface style. In this paper we present the Virtuality Builder II (VB2) framework, developed at the Swiss Federal Institute of Technology, for the construction of 3D interactive applications. First, we give an overview of the design concepts of VB2. Next, we concentrate on how users interact with dynamic models through direct manipulation, gestures, and virtual tools.