PHILIPPE BURLINA

home | bioimaging | machine intelligence | papers | collaborators | contact



 

Associate research professor, Johns Hopkins University, Department of Computer Science

Section lead, machine intelligence section, JHU/APL/REDD

Joint faculty, Johns Hopkins University School of Medicine, Wilmer Eye Institute






BIOIMAGING

Automated Retinal Image Analysis

We are developing automated pre-screening tools that use machine learning and machine vision techniques to find individuals at risk of developing age-related macular degeneration (AMD).

AMD is the leading cause of blindness for individuals over 50. With the demographic shift of the baby boomer generation into this age category, AMD is poised to become the leading cause of blindness in the years ahead, exceeding diabetic retinopathy. One particular interest is to find individuals with the intermediate stage of AMD, who are asymptomatic for vision loss and therefore have no incentive to seek an ophthalmology exam. Prescreening these individuals before they develop the advanced stage of AMD would allow referral to a physician for follow-up and management. Additionally, individuals with the neovascular form of AMD are candidates for anti-vascular endothelial growth factor (VEGF) therapy, which can stabilize vision. The NIH-funded AREDS study has also shown the benefits of certain supplements. Our work on automated prescreening tools has followed several strategies. One is to cast the problem as fundus image classification and use techniques such as visual words; this approach was tested on the NIH AREDS dataset and led to very promising performance. Another is to look for retinal abnormalities, detected with machine learning techniques such as the support vector data description (SVDD). Examples of detected anomalies, which include drusen and other abnormal pigmentations, are shown below.

anomaly detection in retinal images
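As a rough illustration of the anomaly-detection idea, here is a minimal sketch that flags abnormal retinal patches with a one-class SVM (scikit-learn's OneClassSVM; with an RBF kernel this formulation is equivalent to SVDD). The toy patch descriptor, simulated patches, and threshold are illustrative assumptions, not our actual features, data, or pipeline.

# Minimal sketch: flag abnormal retinal patches with a one-class SVM
# (with an RBF kernel this is equivalent to SVDD). The feature extraction
# here is a toy stand-in for the descriptors used in an actual system.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

def patch_features(patch):
    """Toy descriptor: mean/std intensity plus a coarse 8-bin histogram."""
    hist, _ = np.histogram(patch, bins=8, range=(0.0, 1.0), density=True)
    return np.concatenate(([patch.mean(), patch.std()], hist))

# Simulated training set: patches from healthy retinas only.
healthy = [rng.normal(0.55, 0.05, size=(16, 16)).clip(0, 1) for _ in range(200)]
X_train = np.stack([patch_features(p) for p in healthy])

# nu upper-bounds the fraction of training patches treated as outliers.
model = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(X_train)

# Test patches: one healthy-looking, one brighter blob mimicking a drusen-like anomaly.
normal_patch = rng.normal(0.55, 0.05, size=(16, 16)).clip(0, 1)
drusen_like = rng.normal(0.85, 0.05, size=(16, 16)).clip(0, 1)

for name, patch in [("normal", normal_patch), ("drusen-like", drusen_like)]:
    score = model.decision_function(patch_features(patch).reshape(1, -1))[0]
    label = "anomalous" if score < 0 else "normal"
    print(f"{name}: score={score:+.3f} -> {label}")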




Preoperative surgical planning / 3D patient-specific modeling

We are designing techniques that combine machine vision and biomechanical modeling to turn real-time 3D echocardiographic images into accurate, patient-specific computational models of the heart, allowing surgeons to preoperatively test and simulate reconstructive cardiovascular surgical interventions.

preoperative cardiac surgical planning

Cardiovascular disease (CVD) is one of the leading causes of death among Americans. The primary treatment for many types of CVD entails some form of cardiac surgical reconstruction. The complex physiology and 3D anatomy of the heart present substantial challenges when performing many of these reconstructive operations, which are generally carried out on an arrested heart under cardiopulmonary bypass. Because of the complexity and critical nature of this problem, it is important to develop computer-aided methods that allow surgeons to preoperatively create more precise surgical plans for a given patient. An important use case is mitral valve disease because of its significance among CVDs and its clinical relevance. Our efforts have been to develop 3D and 3D+time echocardiographic image analysis tools for computing patient-specific 3D motion and anatomical information through automated segmentation, mesh generation, velocity flow estimation, and dynamic tracking. Using these primitives, modifiable computational biomechanical models can be developed to accurately predict the mitral valve closure behavior resulting from a virtual surgical reconstruction. Our work has also looked at careful validation of the anatomical and computational models. The example below shows an original 3D image obtained using real-time 3D transesophageal echocardiography (3D TEE), along with the corresponding segmented endocardial walls and the reconstructed mitral valve model.



automated segmentation and modeling of the mitral valve
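To make one of the pipeline steps above concrete, here is a short, self-contained sketch of mesh generation: scikit-image's marching cubes turns a binary segmentation into a triangle surface mesh. The synthetic ellipsoidal shell below is a stand-in for a segmented endocardial wall, and scikit-image is an assumed dependency; this is not output from, or code of, our actual tools.

# Sketch of one pipeline step (mesh generation): convert a binary
# segmentation into a triangle surface mesh with marching cubes.
import numpy as np
from skimage import measure

# Synthetic binary segmentation: a hollow ellipsoid standing in for a chamber wall.
z, y, x = np.mgrid[-1:1:64j, -1:1:64j, -1:1:64j]
r = np.sqrt((x / 0.8) ** 2 + (y / 0.6) ** 2 + (z / 0.9) ** 2)
segmentation = ((r > 0.75) & (r < 0.95)).astype(np.uint8)

# Marching cubes extracts the surface (vertices + triangular faces) at the 0.5 level.
verts, faces, normals, values = measure.marching_cubes(segmentation.astype(float), level=0.5)
print(f"mesh: {len(verts)} vertices, {len(faces)} triangles")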


4D Ultrasound Image Exploitation

Real-time 3D ultrasound (also called 4D ultrasound) is an imaging modality that is unique in its ability to image the very fast motion of certain structures in the heart, such as the mitral and aortic valves. It also has benefits over other 3D imaging modalities in terms of cost, form factor, and safety, but certain drawbacks in image quality (artifacts, obscuration, resolution) make the automatic segmentation of anatomical structures more challenging. We are developing tools to automatically segment cardiac structures and compute cardiac motion in 3D.
 


Patient-Specific Biomechanical Modeling

Our effort has been to develop computational and physical models of the mitral valve and the heart complex to predict the outcome of a candidate surgical procedure, answering the question "will the reconstructed valve close competently?" The approach takes as its starting point the patient-specific valve anatomy recovered from 3D echocardiography, modified to incorporate the surgeon's proposed reconstruction. It then performs a shape-finding procedure using a finite-element approach to predict the closed valve configuration based on physical modeling of the closure. The valve is assumed to be subject to several forces, including blood pressure, internal hyperelastic forces, tethering forces from the chordae tendineae, and collision forces that prevent leaflet interpenetration. The closed valve configuration is then predicted by an energy-minimization process that finds the valve's equilibrium position at systolic closure. The figure below shows a set of intermediate steps in the computation of the closed valve configuration from an assumed open configuration. We have also developed dynamical models that aim to evaluate the behavior of the valve when immersed in patient-specific hemodynamic conditions.

 

modeling of the mitral valve closure based on 3D ultrasound input
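To convey the shape-finding idea, here is a drastically simplified sketch: a 2D chain of nodes stands in for a leaflet, clamped at an "annulus" point, pushed by a pressure term, restrained by elastic and chordae-like tether terms, and brought to equilibrium by energy minimization with SciPy. All geometry, stiffness, and pressure values are arbitrary illustrative numbers; the actual system uses a 3D finite-element model with hyperelastic material laws and collision handling.

# Toy shape finding: minimize elastic + pressure + tether energy to find
# the equilibrium ("closed") configuration of a 2D leaflet-like chain.
import numpy as np
from scipy.optimize import minimize

n = 20                     # nodes along the leaflet (toy resolution)
rest_len = 1.0 / (n - 1)   # rest length of each segment
k_elastic = 200.0          # spring stiffness (stand-in for hyperelastic response)
k_tether = 50.0            # chordae-like tether stiffness
pressure = 2.0             # pressure-like load pushing the leaflet upward (+y)
annulus = np.array([0.0, 0.0])       # fixed attachment point (annulus)
papillary = np.array([1.0, -0.5])    # fixed tether anchor (papillary-like point)

def energy(flat):
    pts = np.vstack([annulus, flat.reshape(-1, 2)])        # node 0 is clamped
    seg = np.diff(pts, axis=0)
    lengths = np.linalg.norm(seg, axis=1)
    e_elastic = 0.5 * k_elastic * np.sum((lengths - rest_len) ** 2)
    e_pressure = -pressure * np.sum(pts[1:, 1]) * rest_len  # work done by the pressure load
    e_tether = 0.5 * k_tether * np.sum((pts[-1] - papillary) ** 2)  # free edge tied to anchor
    return e_elastic + e_pressure + e_tether

# Initial (open) configuration: a straight horizontal leaflet.
x0 = np.column_stack([np.linspace(rest_len, 1.0, n - 1), np.zeros(n - 1)]).ravel()
res = minimize(energy, x0, method="L-BFGS-B")
closed = res.x.reshape(-1, 2)
print("free-edge position at equilibrium:", closed[-1])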


Hemodynamics

In addition to modeling, we have been pursuing validation and are interested in ways of combining echographic imagery with echogenic contrast agents to tease out information on myocardial and blood velocity fields. The images below show the computation of blood flow through the ventricular chamber during the diastolic and systolic phases using contrast-enhanced ultrasound. This information can be exploited with machine learning classification techniques to provide automated diagnostics. It is also important from a scientific perspective for elucidating the mechanisms that accompany certain heart pathologies such as hypertrophic cardiomyopathy.
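As a rough sketch of how a velocity field can be estimated between two consecutive frames, the example below runs a basic block-matching (sum-of-squared-differences) search on synthetic frames; this is a generic illustration of frame-to-frame motion estimation, not the specific contrast-enhanced ultrasound method used in our work.

# Toy block-matching flow estimate between two consecutive frames.
import numpy as np

rng = np.random.default_rng(1)
frame0 = rng.random((64, 64))
frame1 = np.roll(frame0, shift=(2, 3), axis=(0, 1))   # ground-truth motion: 2 px down, 3 px right

def block_match(f0, f1, block=8, search=4):
    """Return per-block displacement (dy, dx) minimizing the sum of squared differences."""
    h, w = f0.shape
    flow = np.zeros((h // block, w // block, 2), dtype=int)
    for by in range(h // block):
        for bx in range(w // block):
            y0, x0 = by * block, bx * block
            ref = f0[y0:y0 + block, x0:x0 + block]
            best, best_d = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y1, x1 = y0 + dy, x0 + dx
                    if y1 < 0 or x1 < 0 or y1 + block > h or x1 + block > w:
                        continue
                    ssd = np.sum((ref - f1[y1:y1 + block, x1:x1 + block]) ** 2)
                    if ssd < best:
                        best, best_d = ssd, (dy, dx)
            flow[by, bx] = best_d
    return flow

flow = block_match(frame0, frame1)
print("median estimated displacement (dy, dx):", np.median(flow[..., 0]), np.median(flow[..., 1]))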

 


Dynamic Cell Microscopy

We are interested in methods for tracking multiple movers in challenging multi-target environments, such as cells and their lineage in microfluidic chips. Our work has dealt with the design of methods based on the Gaussian Mixture Probability Hypothesis Density (GM-PHD) filter, a multi-target tracking algorithm, to track the motion of multiple cells over time and to maintain the lineage of cells as they divide.
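The sketch below is a compact, didactic version of one GM-PHD recursion (position-only state, random-walk motion, a single fixed birth component, and no spawning term) run on a few simulated detections; it illustrates the predict/update structure of the filter, not the full cell-lineage tracker.

# Didactic GM-PHD filter for 2D point targets with random-walk dynamics.
import numpy as np

P_SURVIVE, P_DETECT = 0.99, 0.95
Q = 0.5 * np.eye(2)        # process noise
R = 0.5 * np.eye(2)        # measurement noise
CLUTTER = 1e-4             # clutter intensity per unit area
BIRTH = [(0.1, np.array([50.0, 50.0]), 100.0 * np.eye(2))]   # (weight, mean, cov)

def gaussian(x, m, P):
    d = x - m
    return np.exp(-0.5 * d @ np.linalg.solve(P, d)) / np.sqrt((2 * np.pi) ** 2 * np.linalg.det(P))

def predict(mix):
    out = [(P_SURVIVE * w, m.copy(), P + Q) for (w, m, P) in mix]
    return out + [(w, m.copy(), P.copy()) for (w, m, P) in BIRTH]

def update(mix, measurements):
    updated = [((1.0 - P_DETECT) * w, m, P) for (w, m, P) in mix]   # missed-detection terms
    for z in measurements:
        comps = []
        for (w, m, P) in mix:
            S = P + R                              # innovation covariance (H = I)
            K = P @ np.linalg.inv(S)               # Kalman gain
            comps.append((P_DETECT * w * gaussian(z, m, S),
                          m + K @ (z - m),
                          (np.eye(2) - K) @ P))
        norm = CLUTTER + sum(c[0] for c in comps)
        updated += [(w / norm, m, P) for (w, m, P) in comps]
    return [(w, m, P) for (w, m, P) in updated if w > 1e-5]          # crude pruning

# One predict/update cycle on two simulated cells plus one clutter point.
mix = [(1.0, np.array([20.0, 30.0]), 4.0 * np.eye(2)),
       (1.0, np.array([70.0, 60.0]), 4.0 * np.eye(2))]
measurements = [np.array([20.5, 30.8]), np.array([69.0, 61.2]), np.array([5.0, 90.0])]
mix = update(predict(mix), measurements)
print("estimated number of cells:", round(sum(w for w, _, _ in mix), 2))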





MACHINE INTELLIGENCE / MACHINE VISION / MACHINE LEARNING

Visual Tracking

Our efforts have looked at novel methods for performing visual tracking and Bayesian filtering. We have developed kernel-based methods that perform efficient Bayesian filtering using particle filters. Particle filters (PFs) are Bayesian filters capable of modeling non-linear, non-Gaussian, and non-stationary dynamical systems. Recent work on PFs has looked at ways to appropriately sample from the posterior distribution, maintain multiple hypotheses, and alleviate computational costs while preserving tracking accuracy. To address these issues, we have designed a filter that leverages Support Vector Data Description (SVDD) density estimation within the particle filtering framework. The SVDD density estimate can be integrated into various types of PFs and has several benefits. It gives a sparse representation of the posterior density that reduces the computational complexity of the PF. It also provides an analytical expression for the posterior distribution that can be used to identify its modes (for maintaining multiple hypotheses and computing the MAP estimate) and to sample directly from the posterior. We have also been investigating ways to perform Bayesian filtering in the presence of multiple movers while avoiding the need to establish correspondences in time between tokens, by exploiting recent developments in unlabeled tracking such as the Probability Hypothesis Density (PHD) filter.


tracking of video using kernel-based PF
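For context, here is a minimal bootstrap (sequential importance resampling) particle filter on a classic 1D nonlinear benchmark; it is the standard baseline that the kernel/SVDD-based variants described above build on, not the SVDD formulation itself, and the model parameters are illustrative.

# Minimal bootstrap particle filter on a 1D nonlinear, non-Gaussian system.
import numpy as np

rng = np.random.default_rng(2)
N = 500                      # number of particles
T = 30                       # time steps
proc_std, meas_std = 1.0, 1.0

def dynamics(x, t):          # classic nonlinear benchmark dynamics
    return 0.5 * x + 25 * x / (1 + x ** 2) + 8 * np.cos(1.2 * t)

def observe(x):              # quadratic, non-invertible measurement
    return x ** 2 / 20.0

# Simulate a ground-truth trajectory and noisy measurements.
x_true, truth, zs = 0.0, [], []
for t in range(T):
    x_true = dynamics(x_true, t) + rng.normal(0, proc_std)
    truth.append(x_true)
    zs.append(observe(x_true) + rng.normal(0, meas_std))

# Bootstrap filter: propagate, weight by likelihood, resample.
particles = rng.normal(0, 2, N)
estimates = []
for t, z in enumerate(zs):
    particles = dynamics(particles, t) + rng.normal(0, proc_std, N)
    weights = np.exp(-0.5 * ((z - observe(particles)) / meas_std) ** 2)
    weights /= weights.sum()
    estimates.append(np.sum(weights * particles))              # posterior-mean estimate
    particles = rng.choice(particles, size=N, p=weights)       # multinomial resampling

rmse = np.sqrt(np.mean((np.array(estimates) - np.array(truth)) ** 2))
print(f"RMSE over {T} steps: {rmse:.2f}")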

Distributed Vision

The traditional machine vision paradigm for handling visual information acquired by multiple cameras has been to collect and process all of the visual input at a single centralized processing node. The alternative scenario we have worked on involves multiple cameras trying to reach a consensus about what they see by locally processing partial information while communicating with only a few neighboring cameras over limited-bandwidth channels. This work has examined how such processing cameras can best reach a consensus about the pose of an object they see when each camera knows a model of the object, defined by a set of world point coordinates, but may only see a subset of these points amid clutter points from the background, knowing at first neither which image points match which object points nor which points belong to the object and which to the background. We have shown that the cameras can reach the most accurate pose consensus by exchanging the parameters characterizing the object's pose, generated from 3D world coordinates that are penalized to agree with the input model. The cameras use these parameters and their knowledge of the model to reconstruct the object's world coordinates, and perform consensus updates on these world coordinates.




distributed camera network
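The consensus step can be illustrated in isolation: in the toy sketch below, each camera holds a noisy estimate of the object's 3D point coordinates and repeatedly averages with its ring neighbors until the estimates agree. The pose-estimation and point-matching stages are omitted, and the topology, noise levels, and step size are arbitrary assumptions.

# Toy average-consensus on 3D world-point estimates over a ring of cameras.
import numpy as np

rng = np.random.default_rng(3)
n_cams, n_pts = 5, 8
true_points = rng.uniform(-1, 1, (n_pts, 3))

# Each camera starts from its own noisy reconstruction of the world points.
estimates = np.stack([true_points + rng.normal(0, 0.2, (n_pts, 3)) for _ in range(n_cams)])

# Ring communication topology: camera i talks only to i-1 and i+1.
neighbors = {i: [(i - 1) % n_cams, (i + 1) % n_cams] for i in range(n_cams)}
step = 0.4   # consensus step size (kept small enough for stability)

for it in range(50):
    new = estimates.copy()
    for i in range(n_cams):
        for j in neighbors[i]:
            new[i] += step / len(neighbors[i]) * (estimates[j] - estimates[i])
    estimates = new

spread = np.max(np.std(estimates, axis=0))
err = np.linalg.norm(estimates.mean(axis=0) - true_points) / np.sqrt(n_pts)
print(f"max disagreement across cameras: {spread:.4f}, RMS error vs truth: {err:.3f}")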

Hyperspectral Video

One area we have been interested in is video exploitation using video-rate hyperspectral sensors. Hyperspectral video cameras are much like conventional cameras but measure hundreds or thousands of spectral bands in place of the three RGB channels, offering additional benefits for detection, classification, and tracking tasks. The exploitation and automated analysis of visual input has been the goal of the computer vision community, while hyperspectral cameras have mostly been studied and exploited by the remote sensing research community. Video-rate hyperspectral cameras offer spatial, temporal, and spectral information together, and promise to combine the capabilities of, and bring together, both fields of research.


hyperspectral
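As a small example of how the extra spectral dimension can help detection, the sketch below matches every pixel of a synthetic hyperspectral frame against a known target signature using the spectral angle; the data, signature, and detection threshold are made up for illustration.

# Spectral-angle matching of each pixel against a known target signature.
import numpy as np

rng = np.random.default_rng(4)
H, W, B = 32, 32, 100                        # frame size and number of spectral bands
background = rng.normal(1.0, 0.1, (H, W, B))
target_signature = np.linspace(0.2, 2.0, B)  # assumed known target spectrum

# Plant the target signature in a small region of the frame.
frame = background.copy()
frame[10:14, 20:24] = target_signature + rng.normal(0, 0.05, (4, 4, B))

def spectral_angle(cube, ref):
    """Angle between each pixel spectrum and the reference spectrum (radians)."""
    dot = cube @ ref
    norms = np.linalg.norm(cube, axis=-1) * np.linalg.norm(ref)
    return np.arccos(np.clip(dot / norms, -1.0, 1.0))

angles = spectral_angle(frame, target_signature)
detections = angles < 0.1                    # small angle means a close spectral match
ys, xs = np.nonzero(detections)
print(f"{detections.sum()} detections, rows {ys.min()}-{ys.max()}, cols {xs.min()}-{xs.max()}")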


CONTACT    phil (at) pmburlina (dot) com




Copyright © P.Burlina 2009-2014