The sensitivity of ground-based gamma-ray telescopes is ultimately limited by their ability to reconstruct the properties of gamma rays from the particle showers they produce on interacting with the atmosphere, and to reject the far more numerous background of showers initiated by charged cosmic rays.
At present, array sensitivity is effectively “software limited” and has improved continuously over the past 20 years through advanced image reconstruction and classification algorithms and multivariate classification techniques such as boosted decision trees and neural networks. Modern machine learning techniques can clearly push this performance further. Yet telescope arrays typically observe under a wide range of conditions, so it is essential to integrate contextual data on telescope operation and on atmospheric conditions into any data analysis pipeline. This is challenging not only because of the heterogeneity of the contextual data, but also from a computational point of view.
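To illustrate the kind of multivariate classification mentioned above, the sketch below trains a boosted decision tree to separate gamma-like from hadron-like showers. The Hillas-style image parameters (width, length, amplitude) and the synthetic distributions are placeholders for illustration only, not real telescope data or the project's actual feature set.

```python
# Illustrative gamma/hadron separation with a boosted decision tree.
# Feature names and distributions are hypothetical stand-ins for
# real shower-image parameters.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# Synthetic "gamma" showers: compact, regular images.
gammas = np.column_stack([
    rng.normal(0.08, 0.02, n),   # width (deg)
    rng.normal(0.25, 0.05, n),   # length (deg)
    rng.normal(3.0, 0.5, n),     # log10(image amplitude)
])
# Synthetic "hadron" showers: broader, more irregular images.
hadrons = np.column_stack([
    rng.normal(0.14, 0.04, n),
    rng.normal(0.30, 0.10, n),
    rng.normal(2.8, 0.7, n),
])

X = np.vstack([gammas, hadrons])
y = np.concatenate([np.ones(n), np.zeros(n)])  # 1 = gamma, 0 = hadron

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

clf = GradientBoostingClassifier(n_estimators=100, max_depth=3)
clf.fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)
```

In a real analysis the classifier score would be cut on to retain gamma candidates; deep-learning approaches replace the hand-crafted parameters with the raw camera images.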
This PhD project sets out to develop data processing pipelines that combine deep learning techniques for gamma-ray telescope data with diverse types of contextual data to dramatically improve the telescopes’ sensitivity. To this end, we will draw on initial results from applying state-of-the-art machine learning to gamma-ray telescope data, as well as technical insights into the integration of static datasets into stream processing pipelines.
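A minimal sketch of the integration idea, joining a streamed sequence of telescope events with a static contextual dataset keyed by run. All field names and values here are hypothetical; a production pipeline would use a proper stream processing framework rather than plain generators.

```python
# Minimal sketch: enriching an event stream with static contextual
# data (e.g., per-run atmospheric conditions). Fields are hypothetical.
from typing import Iterable, Iterator

# Static contextual dataset: run id -> recorded observing conditions.
run_conditions = {
    1001: {"transparency": 0.92, "trigger_rate_hz": 310.0},
    1002: {"transparency": 0.77, "trigger_rate_hz": 285.0},
}

def enrich(events: Iterable[dict], context: dict) -> Iterator[dict]:
    """Join each streamed event with the static context for its run,
    dropping events whose run has no recorded conditions."""
    for event in events:
        conditions = context.get(event["run_id"])
        if conditions is not None:
            yield {**event, **conditions}

stream = [
    {"run_id": 1001, "image_amplitude": 1540.0},
    {"run_id": 1002, "image_amplitude": 980.0},
    {"run_id": 1003, "image_amplitude": 450.0},  # no context -> dropped
]
enriched = list(enrich(stream, run_conditions))
```

The design choice here is to treat the contextual data as a side input looked up per event, which keeps the streaming path stateless; the computational challenge noted above arises when the contextual data is large, heterogeneous, or itself time-varying.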