CRS4
Founded in the early 1990s, CRS4 is an interdisciplinary research centre that promotes the study, development, and application of innovative solutions to problems arising from natural, social, and industrial environments. These developments and solutions are based on Information Science and Technology and on high-performance digital computing. Its primary goal is innovation.
The Centre's mission is to help Sardinia create and grow a fabric of high-tech enterprises that are essential to its economic and cultural development.
As of 2010, the Centre employs about 200 people, including researchers, technologists, and support staff, working in four strategic areas of scientific research: Biomedicine, Data Fusion, Energy and Environment, and Information Society. CRS4 also operates one of the main Italian computing centres, the first platform in Italy dedicated to genotyping and massive DNA sequencing, and a state-of-the-art Visual Computing laboratory.
CRS4 institutional website
Browse
Browsing CRS4 by Author "Agus, Marco"
Now showing 1 - 20 of 24
- A multiprocessor decoupled system for the simulation of temporal bone surgery (Springer, 2002-07). Agus, Marco; Giachetti, Andrea; Gobbetti, Enrico; Zanetti, Gianluigi; Zorcolo, Antonio.
  A training system for simulating temporal bone surgery is presented. The system is based on patient-specific volumetric object models derived from 3D CT and MR imaging data. Real-time feedback is provided to the trainees via real-time volume rendering and haptic feedback. The performance constraints dictated by the human perceptual system are met by exploiting parallelism via a decoupled simulation approach on a multi-processor PC platform. In this paper, system components are detailed and the current state of the integrated system is presented. (A minimal sketch of the decoupled simulation idea is given after this listing.)
- Adaptive techniques for real-time haptic and visual simulation of bone dissection (IEEE, 2003-03). Agus, Marco; Giachetti, Andrea; Gobbetti, Enrico; Zanetti, Gianluigi; Zorcolo, Antonio.
  Bone dissection is an important component of many surgical procedures. In this paper we discuss adaptive techniques for providing real-time haptic and visual feedback during a virtual bone dissection simulation. The simulator is being developed as a component of a training system for temporal bone surgery. We harness the difference in complexity and frequency requirements of the visual and haptic simulations by modeling the system as a collection of loosely coupled concurrent components. The haptic component exploits a multi-resolution representation of the first two moments of the bone characteristic function to rapidly compute contact forces and determine bone erosion. The visual component uses a time-critical particle system evolution method to simulate secondary visual effects, such as bone debris accumulation, bleeding, irrigation, and suction.
- Advances in massive model visualization in the CYBERSAR project (Consorzio COMETA, 2009-02). Agus, Marco; Bettio, Fabio; Marton, Fabio; Zorcolo, Antonio; Pintore, Giovanni.
  We provide a survey of the major results obtained within the CYBERSAR project in the area of massive data visualization. Despite the impressive improvements in graphics and computational hardware performance, interactive visualization of massive models still remains a challenging problem. To address this problem, we developed methods that exploit the programmability of latest-generation graphics hardware, and combine coarse-grained multiresolution models, chunk-based data management with compression, incremental view-dependent level-of-detail selection, and visibility culling. The models that can be interactively rendered with our methods range from multi-gigabyte datasets for general 3D meshes or scalar volumes, to terabyte-sized datasets in the restricted 2.5D case of digital terrain models. Such performance enables novel ways of exploring massive datasets. In particular, we have demonstrated the capability of driving innovative light field displays able to give multiple freely moving naked-eye viewers the illusion of seeing and manipulating massive 3D objects with continuous viewer-independent parallax.
- An integrated environment for stereoscopic acquisition, off-line 3D elaboration, and visual presentation of biological actions (IOS, 2001-01). Agus, Marco; Bettio, Fabio; Gobbetti, Enrico; Fadiga, Luciano.
  We present an integrated environment for stereoscopic acquisition, off-line 3D elaboration, and visual presentation of biological hand actions. The system is used in neurophysiological experiments aimed at investigating which parameters of the external stimuli mirror neurons visually extract and match on their movement-related activity.
- An interactive 3D medical visualization system based on a light field display (Springer-Verlag, 2009-09-01). Agus, Marco; Bettio, Fabio; Giachetti, Andrea; Gobbetti, Enrico; Iglesias Guitián, José Antonio; Marton, Fabio; Nilsson, Jonas; Pintore, Giovanni.
  We present a prototype medical data visualization system exploiting a light field display and custom direct volume rendering techniques to enhance understanding of massive volumetric data, such as CT, MRI, and PET scans. The system can be integrated with standard medical image archives and extends the capabilities of current radiology workstations by supporting real-time rendering of volumes of potentially unlimited size on light field displays generating dynamic observer-independent light fields. The system allows multiple untracked naked-eye users in a sufficiently large interaction area to coherently perceive rendered volumes as real objects, with stereo and motion parallax cues. In this way, an effective collaborative analysis of volumetric data can be achieved. Evaluation tests demonstrate the usefulness of the generated depth cues and the improved performance in understanding complex spatial structures with respect to standard techniques.
- An interactive 3D medical visualization system based on a light field display (Springer, 2009-09). Agus, Marco; Bettio, Fabio; Giachetti, Andrea; Gobbetti, Enrico; Iglesias Guitián, José Antonio; Marton, Fabio; Nilsson, Jonas; Pintore, Giovanni.
  This paper presents a prototype medical data visualization system exploiting a light field display and custom direct volume rendering techniques to enhance understanding of massive volumetric data, such as CT, MRI, and PET scans. The system can be integrated with standard medical image archives and extends the capabilities of current radiology workstations by supporting real-time rendering of volumes of potentially unlimited size on light field displays generating dynamic observer-independent light fields. The system allows multiple untracked naked-eye users in a sufficiently large interaction area to coherently perceive rendered volumes as real objects, with stereo and motion parallax cues. In this way, an effective collaborative analysis of volumetric data can be achieved. Evaluation tests demonstrate the usefulness of the generated depth cues and the improved performance in understanding complex spatial structures with respect to standard techniques.
- Creating and presenting real and artificial visual stimuli for the neurophysiological investigation of the observation/execution matching system (2000-06). Agus, Marco; Bettio, Fabio; Gobbetti, Enrico.
  Recent neurophysiological experiments have shown that the visual stimuli that trigger a particular kind of neuron located in the ventral premotor cortex of monkeys and humans are very selective. These mirror neurons are activated when the hand of another individual interacts with an object, but are not activated when the actions, identical in purpose, are made by manipulated mechanical tools. A Human Frontiers Science Program project is investigating which parameters of the external stimuli mirror neurons visually extract and match on their movement-related activity. The planned neurophysiological experiments will require the presentation of digital stimuli of different kinds, including video sequences showing meaningful actions made by human hands, synthetic reproductions of the same actions made by realistic virtual hands, as well as variations of the same actions obtained by controlled modifications of hand geometry and/or action kinematics. This paper presents the specialized animation system we have developed for the project.
- Exploring virtual prototypes using time-critical rendering (1999-11). Gobbetti, Enrico; Scateni, Riccardo; Agus, Marco.
  We present an application of our time-critical multiresolution rendering algorithm to the visual and possibly collaborative exploration of large digital mock-ups. Our technique relies upon a scene description in which objects are represented as multiresolution meshes. We perform a constrained optimization at each frame to choose the resolution of each potentially visible object that generates the best quality image while meeting timing constraints. We present numerical and pictorial results of the experiments performed that support our claim that we can maintain a fixed frame rate even when rendering very large datasets on low-end graphics PCs. (A sketch of this kind of per-frame budgeted level-of-detail selection is given after this listing.)
- FOX: The Focus Sliding Surface Metaphor for Natural Exploration of Massive Models on Large-scale Light Field Displays (ACM, 2011-12). Marton, Fabio; Agus, Marco; Pintore, Giovanni; Gobbetti, Enrico.
  We report on a virtual environment for natural immersive exploration of extremely detailed surface models on light field displays. Our specialized 3D user interface allows casual users to inspect 3D objects at various scales, integrating panning, rotating, and zooming controls into a single low-degree-of-freedom operation, while taking into account the requirements for comfortable viewing on light field display hardware. Specialized multiresolution structures, embedding a fine-grained per-patch spatial index within a coarse-grained patch-based mesh structure, are exploited for fast batched I/O, GPU-accelerated rendering, and the geometric queries required by the interaction system. The capabilities of the system are demonstrated by the interactive inspection of a giga-triangle dataset on a large-scale 35-megapixel light field display controlled by wired or vision-based devices.
- Haptic and visual simulation of bone dissection (2004-03). Agus, Marco; Picasso, Bruno; Zanetti, Gianluigi; Gobbetti, Enrico.
  In bone dissection virtual simulation, force restitution represents the key to realistically mimicking a patient-specific operating environment. The force is rendered using haptic devices controlled by parametrized mathematical models that represent the bone-burr contact. This dissertation presents and discusses a haptic simulation of a bone-cutting burr, which is being developed as a component of a training system for temporal bone surgery. A physically based model was used to describe the burr-bone interaction, including haptic force evaluation, the bone erosion process, and the resulting debris. The model was experimentally validated and calibrated by employing a custom experimental set-up consisting of a force-controlled robot arm holding a high-speed rotating tool and a contact force measuring apparatus. Psychophysical testing was also carried out to assess individual reactions to the haptic environment. The results suggest that the simulator is capable of rendering the basic material differences required for bone burring tasks. The current implementation, directly operating on a voxel discretization of patient-specific 3D CT and MR imaging data, is efficient enough to provide real-time haptic and visual feedback on a low-end multi-processing PC platform.
- Hardware-accelerated dynamic volume rendering for real-time surgical simulation (2004-09). Agus, Marco; Giachetti, Andrea; Gobbetti, Enrico; Zanetti, Gianluigi; Zorcolo, Antonio.
  We developed a direct volume rendering technique that supports low-latency real-time visual feedback in parallel with physical simulation on commodity graphics platforms. In our approach, a fast approximation of the diffuse shading equation is computed on the fly by the graphics pipeline directly from the scalar data. We do this by exploiting the possibilities offered by multi-texturing with the register combiner OpenGL extension, which provides a configurable means of determining per-pixel fragment coloring. The effectiveness of our approach, which supports a full decoupling of simulation and rendering, is demonstrated in a training system for temporal bone surgery. (A sketch of gradient-based diffuse shading of scalar data is given after this listing.)
- Hierarchical higher order face cluster radiosity (2002-03-29). Gobbetti, Enrico; Spanò, Leonardo; Agus, Marco.
  We present an algorithm for simulating diffuse interreflection in scenes composed of highly tessellated objects. The method is a higher order extension of the face cluster radiosity technique. It combines face clustering, multiresolution visibility, vector radiosity, and higher order bases with a modified progressive shooting iteration to rapidly produce visually continuous solutions with limited memory requirements. The output of the method is a vector irradiance map that partitions input models into areas where global illumination is well approximated using the selected basis. The OpenGL register combiners extension can be used to render illuminated models directly from the vector irradiance map, exploiting hardware acceleration for computing vertex radiosity on commodity graphics boards.
- IERAPSI. Petrous bone surgical simulation platform (2003-01). Agus, Marco; Giachetti, Andrea; Gobbetti, Enrico; Zanetti, Gianluigi; Zorcolo, Antonio.
  This report has been prepared in fulfilment of Deliverable D4.2, required as a result of Work Package 4 (Real-time physically based surgical simulators) of the EU Framework V Project IERAPSI, An Integrated Environment for the Rehearsal and Planning of Surgical Interventions (IST-1999-12175). Deliverable D4.2 relates to the Petrous bone surgical simulation platform, the second and last of the two main expected results of Work Package 4. The present document provides a technical description of the software system produced. The report concludes with a bibliography of cited reference work. An accompanying video (available on the deliverable CD-ROM) further illustrates the petrous bone surgical simulation platform with live sequences comparing a real and a simulated surgical procedure performed on the temporal bone.
- IERAPSI. Surgical simulator software kernel (2002-02). Agus, Marco; Giachetti, Andrea; Gobbetti, Enrico; Zanetti, Gianluigi.
  This report has been prepared in fulfilment of Deliverable D4.1, required as an intermediate result of Work Package 4 (Real-time physically based surgical simulators) of the EU Framework V Project IERAPSI, An Integrated Environment for the Rehearsal and Planning of Surgical Interventions (IST-1999-12175). Deliverable D4.1 relates to the Surgical simulation software kernel, the first of the two main expected results of Work Package 4. The "Surgical simulation software kernel" will be used as the foundation upon which task T4.6 "Surgical simulator prototypes" will build deliverable D4.2, Petrous bone surgical simulation platform. The present document provides a technical description of the software system produced.
- Interfacce uomo-macchina nella realtà virtuale [Human-machine interfaces in virtual reality] (Polimetrica, 2008). Iglesias Guitián, José Antonio; Agus, Marco.
  This chapter describes the main elements that influence human-machine interaction in virtual reality, as they are configured today and as they are expected to evolve in the near future. The chapter is organized as follows: Section 1.1 introduces the concept of virtual reality, particularly with respect to the possibilities it offers for human-machine interaction and to next-generation applications. The following section describes the main requirements and constraints that a virtual reality system must satisfy in order to give the user a convincing impression and truly immersive experiences. Section 1.3 focuses on the primary sensory feedback, describing the main next-generation technologies for building devices capable of delivering extremely realistic visual and tactile sensations. Finally, Section 1.4 briefly describes some examples of virtual reality applications developed by the authors, in the fields of surgical simulation, virtual museums, and multi-user autostereoscopic visualization systems, and Section 1.5 briefly discusses the current state and the future potential of the discipline.
- Mastoidectomy simulation with combined visual and haptic feedback (IOS, 2002-01). Agus, Marco; Giachetti, Andrea; Gobbetti, Enrico; Zanetti, Gianluigi; Zorcolo, Antonio; John, Nigel W.; Stone, Robert J.
  Mastoidectomy is one of the most common surgical procedures relating to the petrous bone. In this paper we describe our preliminary results in the realization of a virtual reality mastoidectomy simulator. Our system is designed to work on patient-specific volumetric object models directly derived from 3D CT and MRI images. The paper summarizes the detailed task analysis performed in order to define the system requirements, introduces the architecture of the prototype simulator, and discusses the initial feedback received from selected end users.
- Pseudo-holographic device elicits rapid depth cues despite random-dot surface masking (Pion, 2007). Brelstaff, Gavin; Agus, Marco; Gobbetti, Enrico; Zanetti, Gianluigi.
  Experiments with random-dot masking demonstrate that, in the absence of cues mundanely available to 2-D displays (object occlusion, surface shading, perspective foreshortening, and texture gradients), Holografika's large-screen multi-projector video system (COHERENT-IST-FP6-510166) elicits useful stereoscopic and motion-parallax depth cues, and does so in under 2 s. We employed a simplified version of Julesz's (c. 1971) famous spiral ramp surface: a 3-layer cylindrical wedding cake, via an OpenGL model that subjects viewed along its concentric axis. By adjusting its parameters, two sets of model stimuli were rendered: one with a uniform large field of depth and one where the field was effectively flat. Each of eleven pre-screened subjects completed four experiments, each consisting of eight trials in a 2IFC design whereby they indicated in which interval they perceived the greater field of depth. The experiments tested one-eye static, one-eye head-swaying, two-eye static, and two-eye head-swaying observation, in that order. Scores improved in that order as well.
- Real-time haptic and visual simulation of bone dissection (MIT, 2003-02). Agus, Marco; Giachetti, Andrea; Gobbetti, Enrico; Zanetti, Gianluigi; Zorcolo, Antonio.
  Bone dissection is an important component of many surgical procedures. In this paper, we discuss a haptic and visual simulation of a bone-cutting burr that is being developed as a component of a training system for temporal bone surgery. We use a physically motivated model to describe the burr-bone interaction, which includes haptic force evaluation, the bone erosion process, and the resulting debris. The current implementation, directly operating on a voxel discretization of patient-specific 3D CT and MR imaging data, is efficient enough to provide real-time feedback on a low-end multiprocessing PC platform. (A sketch of voxel-based burr-bone contact force evaluation is given after this listing.)
- Real-time haptic and visual simulation of bone dissection (IEEE, 2002-02). Agus, Marco; Giachetti, Andrea; Gobbetti, Enrico; Zanetti, Gianluigi.
  Bone dissection is an important component of many surgical procedures. In this paper, we discuss a haptic and visual implementation of a bone-cutting burr that is being developed as a component of a training system for temporal bone surgery. We use a physically motivated model to describe the burr-bone interaction, which includes haptic force evaluation, the bone erosion process, and the resulting debris. The current implementation, directly operating on a voxel discretization of patient-specific 3D CT and MRI data, is efficient enough to provide real-time feedback on a low-end multiprocessing PC platform.
- Recent results in rendering massive models on horizontal parallax-only light field displays (Consorzio COMETA, 2009-02). Agus, Marco; Bettio, Fabio; Marton, Fabio; Zorcolo, Antonio.
  In this contribution, we report on specialized out-of-core multiresolution real-time rendering systems able to render massive surface and volume models on a special class of horizontal parallax-only light field displays. The displays are based on a specially arranged array of projectors emitting light beams onto a holographic screen, which then makes the necessary optical transformation to compose these beams into a continuous 3D view. The rendering methods employ state-of-the-art out-of-core multiresolution techniques able to correctly project geometries onto the display and to dynamically adapt model resolution by taking into account the particular spatial accuracy characteristics of the display. The programmability of latest-generation graphics architectures is exploited to achieve interactive performance. As a result, multiple freely moving naked-eye viewers can inspect and manipulate virtual 3D objects that appear to float at fixed physical locations. The approach provides rapid visual understanding of complex multi-gigabyte surface models and volumetric data sets. (A sketch of accuracy-driven view-dependent level-of-detail selection is given after this listing.)
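
Illustrative code sketches

The decoupled simulation entries above ("A multiprocessor decoupled system for the simulation of temporal bone surgery" and the adaptive-techniques paper) describe running the haptic and visual simulations as loosely coupled concurrent components with very different update rates. The sketch below illustrates that idea only; the loop rates, the SharedState structure, and the haptic_step/render_frame placeholders are assumptions, not the published CRS4 code.

```python
# Minimal sketch of a decoupled haptic/visual simulation (illustrative only).
# The haptic loop runs at ~1 kHz, the visual loop at ~20 Hz; both share state
# behind a lock. Names and rates are assumptions, not CRS4 code.
import threading
import time

class SharedState:
    def __init__(self):
        self.lock = threading.Lock()
        self.tool_position = (0.0, 0.0, 0.0)   # written by the haptic loop
        self.eroded_voxels = []                # consumed by the visual loop

def haptic_step(state):
    # Placeholder: read the device, compute contact force, erode voxels.
    with state.lock:
        state.eroded_voxels.append(state.tool_position)

def render_frame(state):
    # Placeholder: update the volume rendering from accumulated erosion.
    with state.lock:
        pending, state.eroded_voxels = state.eroded_voxels, []
    return len(pending)

def loop(fn, state, rate_hz, stop):
    period = 1.0 / rate_hz
    while not stop.is_set():
        fn(state)
        time.sleep(period)       # a real system uses tighter scheduling

if __name__ == "__main__":
    state, stop = SharedState(), threading.Event()
    threads = [
        threading.Thread(target=loop, args=(haptic_step, state, 1000, stop)),
        threading.Thread(target=loop, args=(render_frame, state, 20, stop)),
    ]
    for t in threads:
        t.start()
    time.sleep(1.0)
    stop.set()
    for t in threads:
        t.join()
```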
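
The time-critical rendering entry ("Exploring virtual prototypes using time-critical rendering") describes a per-frame constrained optimization that picks a resolution for each potentially visible object so that image quality is maximized within a frame-time budget. The following is a hedged sketch of one common way to realize such a selection, a greedy benefit-per-cost refinement; the cost model, benefit values, and data layout are assumptions rather than the paper's algorithm.

```python
# Sketch of per-frame level-of-detail selection under a time budget
# (greedy benefit/cost refinement). Cost and benefit models are assumptions.
from dataclasses import dataclass
import heapq

@dataclass
class LODObject:
    name: str
    costs: list      # estimated render cost (ms) per LOD, coarse to fine
    benefits: list   # estimated visual benefit per LOD, coarse to fine

def select_lods(objects, budget_ms):
    """Return ({object name: chosen LOD index}, total cost), within budget_ms."""
    chosen = {o.name: 0 for o in objects}                  # start from coarsest LODs
    spent = sum(o.costs[0] for o in objects)
    heap = []                                              # max-heap on benefit gained per extra ms
    for o in objects:
        if len(o.costs) > 1:
            gain = (o.benefits[1] - o.benefits[0]) / (o.costs[1] - o.costs[0])
            heapq.heappush(heap, (-gain, id(o), 1, o))
    while heap:
        _, _, lod, o = heapq.heappop(heap)
        extra = o.costs[lod] - o.costs[lod - 1]
        if spent + extra > budget_ms:
            continue                                       # this refinement does not fit
        chosen[o.name], spent = lod, spent + extra
        if lod + 1 < len(o.costs):                         # queue the next refinement step
            gain = (o.benefits[lod + 1] - o.benefits[lod]) / (o.costs[lod + 1] - o.costs[lod])
            heapq.heappush(heap, (-gain, id(o), lod + 1, o))
    return chosen, spent

if __name__ == "__main__":
    scene = [LODObject("engine", [1.0, 3.0, 8.0], [0.2, 0.6, 0.9]),
             LODObject("chassis", [0.5, 2.0, 6.0], [0.1, 0.5, 0.8])]
    print(select_lods(scene, budget_ms=10.0))
```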
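
The bone-dissection entries (the IEEE and MIT Press papers and the 2004 dissertation) describe a physically motivated burr-bone interaction evaluated directly on a voxel discretization of CT data, with force evaluation and erosion. The sketch below only illustrates the general idea of sampling bone occupancy inside the burr sphere to obtain an opposing force and eroding the touched voxels; the force law, constants, and array layout are assumptions, not the published model.

```python
# Illustrative sketch: voxel-based burr-bone contact force and erosion.
# Bone is a 3D density grid; the burr is a sphere at tool_pos.
# The force law and constants are assumptions, not the published CRS4 model.
import numpy as np

def burr_step(bone, tool_pos, radius, stiffness=50.0, erosion_rate=0.05):
    """Return a reaction force (3-vector) and erode bone inside the burr sphere."""
    lo = np.maximum(np.floor(tool_pos - radius).astype(int), 0)
    hi = np.minimum(np.ceil(tool_pos + radius).astype(int) + 1, bone.shape)
    force = np.zeros(3)
    for idx in np.ndindex(*(hi - lo)):
        voxel = lo + np.array(idx)
        offset = voxel + 0.5 - tool_pos          # voxel center relative to burr center
        dist = np.linalg.norm(offset)
        density = bone[tuple(voxel)]
        if dist < radius and density > 0.0:
            penetration = radius - dist
            # Push the tool away from occupied material, weighted by penetration.
            force -= stiffness * penetration * density * offset / max(dist, 1e-6)
            # Erode the voxel proportionally to penetration.
            bone[tuple(voxel)] = max(0.0, density - erosion_rate * penetration)
    return force

if __name__ == "__main__":
    bone = np.ones((32, 32, 32))                 # solid block of bone, density 1.0
    f = burr_step(bone, tool_pos=np.array([16.0, 16.0, 2.0]), radius=3.0)
    print("reaction force:", f)
```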
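
The hardware-accelerated dynamic volume rendering entry computes a fast approximation of the diffuse shading equation on the fly from the scalar data, originally via multi-texturing and the OpenGL register combiner extension. The CPU sketch below shows only the underlying idea, approximating the gradient with central differences and using it as a normal in a Lambertian term; it is not the GPU implementation, and the helper names are assumptions.

```python
# CPU sketch of gradient-based diffuse shading of a scalar volume.
# The original work evaluates a comparable approximation per fragment on the
# GPU; this only shows the underlying math.
import numpy as np

def central_gradient(volume):
    """Central-difference gradient of a 3D scalar field, shape (3, X, Y, Z)."""
    gx, gy, gz = np.gradient(volume.astype(float))
    return np.stack([gx, gy, gz])

def diffuse_shade(volume, light_dir, ambient=0.1):
    """Lambertian shading factor per voxel: ambient + max(0, N . L)."""
    g = central_gradient(volume)
    norm = np.linalg.norm(g, axis=0)
    normals = g / np.maximum(norm, 1e-6)   # sign convention depends on the data
    light = np.asarray(light_dir, dtype=float)
    light = light / np.linalg.norm(light)
    ndotl = np.einsum("i,ixyz->xyz", light, normals)
    return ambient + np.clip(ndotl, 0.0, 1.0)

if __name__ == "__main__":
    # Test volume: a solid sphere of density 1 inside a 64^3 grid.
    coords = np.indices((64, 64, 64)) - 32
    volume = (np.sum(coords**2, axis=0) < 20**2).astype(float)
    shading = diffuse_shade(volume, light_dir=(1.0, 1.0, 0.5))
    print(shading.shape, float(shading.min()), float(shading.max()))
```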
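
The massive-model entries (the CYBERSAR survey and the horizontal parallax-only light field report) rely on view-dependent multiresolution refinement: a node of the hierarchy is refined only while its projected geometric error exceeds what the output device can resolve. The sketch below illustrates that refinement pattern on a toy node tree; the Node layout, error metric, and tolerance are assumptions rather than the published data structures.

```python
# Sketch of view-dependent refinement of a multiresolution hierarchy:
# refine a node only while its projected error exceeds the display tolerance.
# Node layout and error metric are assumptions, not the published systems.
from dataclasses import dataclass, field
import math

@dataclass
class Node:
    error: float                  # object-space geometric error of this node
    center: tuple                 # bounding-sphere center
    radius: float
    children: list = field(default_factory=list)

def projected_error(node, viewpoint, fov_y_rad, viewport_height_px):
    """Approximate screen-space error (pixels) of rendering this node as-is."""
    dist = max(math.dist(node.center, viewpoint) - node.radius, 1e-3)
    pixels_per_unit = viewport_height_px / (2.0 * dist * math.tan(fov_y_rad / 2.0))
    return node.error * pixels_per_unit

def select_nodes(root, viewpoint, tolerance_px=1.0,
                 fov_y_rad=math.radians(60), viewport_height_px=1080):
    """Collect the coarsest set of nodes whose projected error is within tolerance."""
    selected, stack = [], [root]
    while stack:
        node = stack.pop()
        err = projected_error(node, viewpoint, fov_y_rad, viewport_height_px)
        if err <= tolerance_px or not node.children:
            selected.append(node)          # accurate enough (or a leaf): render it
        else:
            stack.extend(node.children)    # too coarse from here: refine
    return selected

if __name__ == "__main__":
    leaves = [Node(0.01, (x, 0.0, 0.0), 0.5) for x in (-0.5, 0.5)]
    root = Node(0.2, (0.0, 0.0, 0.0), 1.0, children=leaves)
    print(len(select_nodes(root, viewpoint=(0.0, 0.0, 2.0))), "nodes selected near")
    print(len(select_nodes(root, viewpoint=(0.0, 0.0, 200.0))), "nodes selected far")
```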