PhD project offered by the IMPRS-gBGC

" style="display:none">Anchor
" target="_blank"> , " target="_blank">

Project description

Terrestrial ecosystems are complex in structure and dynamic through time. This spatio-temporal heterogeneity embeds uncertainty in global biogeochemical models and limits our understanding of how ecosystems will respond to changes in climate and land-use. We need better quantification of ecosystem spatio-temporal heterogeneity to improve global model predictions and inform land-management and biodiversity conservation under global change. Remote sensing plays a critical role in mapping and monitoring the structure and composition of ecosystems through time. Three-dimensional (3D) laser scanning (LiDAR) has proven particularly useful for the structural representation of ecosystem elements. Recent improvements in sensor technology enable rapid 3D characterization at individual tree, branch and leaf scales. These advances hold large potential for advancing the field of global biogeochemical cycling – by improving estimates of vegetation biomass, mapping plant functional traits, modeling canopy architecture, and quantifying rates of growth and mortality.
Capturing dense 3D point-clouds has become easier and faster – but turning these point-clouds into ecologically meaningful information remains a major challenge in global change ecology. To realize the full potential of 3D remote sensing for terrestrial ecosystem research, we need to turn to the field of Computer Vision for extracting relevant features from raw point-cloud data.
We seek a candidate with a strong background in computer science and a keen interest in terrestrial ecology to help bridge the gap between 3D data and information. The main direction of research will be to apply, extend, and develop modern machine learning techniques for tree segmentation from unstructured 3D point clouds. Recent advances show that segmentation can be formulated as a semantic segmentation problem, opening it to the full range of machine learning techniques available in that area.
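To make the idea of semantic segmentation over point clouds concrete, the toy sketch below classifies individual points by hand-picked geometric features and a k-nearest-neighbour vote. Everything here is illustrative: the features (height above ground, radial distance from a trunk axis), the training data, and the classifier are simple stand-ins for the learned descriptors and models the project would develop.

```python
# Hypothetical sketch: per-point semantic labelling of a LiDAR cloud.
# Points are (x, y, z) tuples; labels are strings like "trunk" or "leaf".
import math

def features(point, ground_z, axis_xy):
    """Two toy geometric features for one point."""
    x, y, z = point
    height = z - ground_z                                 # height above ground
    radial = math.hypot(x - axis_xy[0], y - axis_xy[1])   # distance from trunk axis
    return (height, radial)

def knn_label(feat, training, k=3):
    """Label a point by majority vote of its k nearest labelled neighbours."""
    dists = sorted((math.dist(feat, f), lab) for f, lab in training)
    votes = [lab for _, lab in dists[:k]]
    return max(set(votes), key=votes.count)

# Toy training set: trunk points are low and near the axis,
# leaf points are high and far out.
train = [((0.5, 0.1), "trunk"), ((1.0, 0.2), "trunk"),
         ((5.0, 2.0), "leaf"), ((6.0, 2.5), "leaf"), ((5.5, 1.8), "leaf")]

query = features((0.1, 0.0, 0.8), ground_z=0.0, axis_xy=(0.0, 0.0))
print(knn_label(query, train))  # a low, near-axis point -> "trunk"
```

In a real pipeline the features would be richer local shape descriptors and the classifier a trained model, but the structure — features per point, then a learned labelling — is the same.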
The proposed PhD project will address the following research questions:
  1. Can we segment individual trees from 3D point clouds based on semantic segmentation, and how finely can we distinguish individual parts of a tree (trunk, branches, leaves) by selecting appropriate features from the point clouds?
  2. How can expert knowledge be integrated into the automatic analysis to support semantic segmentation?
  3. How can we compute an appropriate representation of an individual tree (for example, a skeleton or graph) that allows us to estimate parameters such as stem diameter, branch counts, and branching angles?
  4. Can we transfer machine learning techniques for learning to count [5] to 3D LiDAR data in order to estimate statistics of a recorded area, such as the number of trees, the fraction of woody material, or biomass?
  5. Are we able to classify individual tree species based on 3D point clouds?
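As a minimal illustration of research question 3, once trunk points have been isolated, a first stem-diameter estimate can be read off a thin horizontal slice of the cloud. Real pipelines fit circles or cylinders to the slice; this hypothetical sketch simply uses the mean distance of slice points to their centroid as the radius, and the synthetic trunk data are invented for the example.

```python
# Hypothetical sketch: stem diameter from a thin horizontal slice of
# trunk points (e.g. at breast height, 1.3 m).
import math

def stem_diameter(points, slice_z, thickness=0.1):
    """Estimate stem diameter (input units) from points whose z lies
    within a thin slice around slice_z."""
    ring = [(x, y) for x, y, z in points if abs(z - slice_z) <= thickness / 2]
    cx = sum(x for x, _ in ring) / len(ring)          # slice centroid x
    cy = sum(y for _, y in ring) / len(ring)          # slice centroid y
    radius = sum(math.hypot(x - cx, y - cy) for x, y in ring) / len(ring)
    return 2 * radius

# Synthetic trunk: 16 points on a circle of radius 0.15 m at z = 1.3 m.
trunk = [(0.15 * math.cos(a), 0.15 * math.sin(a), 1.3)
         for a in (i * math.pi / 8 for i in range(16))]
print(round(stem_diameter(trunk, slice_z=1.3), 2))  # -> 0.3
```

A skeleton or graph representation of the whole tree would generalize this: each edge of the skeleton carries such slice-based measurements, from which branch counts and branching angles follow.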
The whole project will follow an incremental approach, starting with tree detection, followed by individual tree segmentation, and finally reaching segmentation of tree parts including leaves. The main challenge, and the potential impact, arise not only from the task itself but also from applying machine learning techniques to the problem of processing 3D point clouds. The selected candidate can build upon results from the literature on the analysis of urban scenes [1,2] and on previous work of the Computer Vision Group on semantic segmentation [3,4].
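The density-based counting idea behind [5] can also be shown in miniature. In the project, a regressor learned from LiDAR features would predict a density map whose integral gives the object count; the hypothetical sketch below skips the learning step and builds the density directly from known tree positions, just to show why summing a density map recovers the count without delineating individual objects.

```python
# Hypothetical sketch of density-based counting: accumulate unit mass
# per object into a coarse grid; the sum over all cells equals the
# object count, even when individual objects overlap or are hard to
# separate. In [5] the per-cell density is predicted by a learned
# regressor rather than built from ground-truth positions.
def density_map(positions, grid_size, cell):
    grid = [[0.0] * grid_size for _ in range(grid_size)]
    for x, y in positions:
        col = min(int(x // cell), grid_size - 1)
        row = min(int(y // cell), grid_size - 1)
        grid[row][col] += 1.0
    return grid

trees = [(1.2, 3.4), (5.0, 5.1), (5.2, 5.3), (9.9, 0.5)]
grid = density_map(trees, grid_size=5, cell=2.0)
print(sum(sum(row) for row in grid))  # -> 4.0
```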

[1] Jixian Zhang, Xiangguo Lin, Xiaogang Ning: SVM-Based Classification of Segmented Airborne LiDAR Point Clouds in Urban Areas. Remote Sensing, 5, 3749–3775, 2013.
[2] Pouria Babahajiani, Lixin Fan, Moncef Gabbouj: Semantic Parsing of Street Scene Images Using 3D LiDAR Point Cloud. IEEE International Conference on Computer Vision Workshops (ICCVW), 2013.
[3] Björn Fröhlich, Eric Bach, Irene Walde, Sören Hese, Christiane Schmullius, Joachim Denzler: Land Cover Classification of Satellite Images Using Contextual Information. ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 1–6, 2013.
[4] Björn Fröhlich, Erik Rodner, Joachim Denzler: Semantic Segmentation with Millions of Features: Integrating Multiple Cues in a Combined Random Forest Approach. Asian Conference on Computer Vision (ACCV), 218–231, 2012.
[5] Victor Lempitsky, Andrew Zisserman: Learning to Count Objects in Images. NIPS, 2010.

Working group & planned collaboration

The successful candidate will be based in the Computer Vision Group in the Department of Mathematics and Computer Sciences at the Friedrich Schiller University, and will collaborate closely with the Landscape Processes Group in the Department of Biogeochemical Processes at the Max Planck Institute for Biogeochemistry, Jena.

Requirements

Applications to the IMPRS-gBGC are open to well-motivated and highly-qualified students from all countries. Prerequisites for this PhD project are:
  • a Master’s degree in Computer Science
  • experience in computer vision and signal analysis, an interest in modern machine learning techniques, and programming skills
  • interest in global change ecology
  • very good oral and written communication skills in English

After you have been selected

The IMPRS-gBGC office will happily assist you with your transition to Jena.
The conditions of employment, including salary adjustments and duration, follow the rules of the Max Planck Society for the Advancement of Science and those of the German civil service. The gross monthly income amounts to about 2,000 EUR, which is sufficient to cover living expenses in Germany.
The Max Planck Society seeks to increase the number of women in those areas where they are underrepresented and therefore explicitly encourages women to apply. The Max Planck society is committed to increasing the number of individuals with disabilities in its workforce and therefore encourages applications from such qualified individuals.

Figure: Segmented tree from 3D LiDAR data.


>> more information about the IMPRS-gBGC + application