Extracting concepts directly from associated time series data is a relatively new problem with limited prior work. In clinical informatics, automated discovery of temporal relations from clinical narratives and doctor notes has been studied to uncover patterns of disease progression and patient condition. However, these works only extract temporal relations from the clinical notes and do not use the accompanying time series data. Recently, deep learning models have been explored for modeling diseases and patient diagnoses in the healthcare domain using doctor notes, but they cannot be applied to concept time series prediction tasks since they do not model the associated time series data. Text captioning from images and videos has also been studied successfully in the deep learning community; however, these works are not designed to handle the multivariate nature of medical time series. In this work, we address two tasks:
Concept Prediction Task: Given a patient’s medical multivariate time series, predict the set of all concepts for this patient (1D output)
Concept Localization Task: Given a patient’s medical multivariate time series, predict the concept time series for this patient (2D output)
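To make the input/output shapes of the two tasks concrete, the following is a minimal sketch; the dimensions `T`, `D`, and `C` are illustrative assumptions, not values from this work:

```python
import numpy as np

# Hypothetical dimensions (illustrative only): T time steps,
# D physiological variables, C distinct clinical concepts.
T, D, C = 48, 12, 20

# Input to both tasks: one patient's multivariate medical time series.
x = np.random.rand(T, D)

# Concept Prediction Task: a single multi-label indicator vector over
# all concepts observed for the patient (1D output).
y_concepts = np.zeros(C, dtype=int)

# Concept Localization Task: a binary matrix marking which concepts
# are active at each time step (2D output).
y_localized = np.zeros((T, C), dtype=int)

print(x.shape, y_concepts.shape, y_localized.shape)
```

Note that the localization target is strictly more informative: summing (or taking a maximum) over its time axis recovers the prediction target.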
Inspired by the recent success of deep learning in sequence modeling and classification tasks, we adapt these models to address the above tasks. We employ and compare the performance of feed-forward neural networks, convolutional neural networks, multi-layer LSTMs, and sequence-to-sequence models. In addition, we also use popular non-deep-learning models such as logistic regression and random forests for the concept time series prediction tasks and benchmark their performance against the deep learning models.
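A non-deep-learning baseline of the kind mentioned above can be sketched as follows: each patient's fixed-length (T, D) series is flattened into a single feature vector, and a random forest is fit on multi-label concept targets (the 1D prediction task). All sizes and the synthetic data are assumptions for illustration, not the paper's actual setup:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical sizes (illustrative only): N patients, T time steps,
# D variables, C concepts.
N, T, D, C = 100, 48, 12, 5

# Flatten each (T, D) series into one feature vector -- the standard
# way to feed a fixed-length sequence to a non-sequential model.
X = rng.random((N, T * D))

# Synthetic multi-label targets: one binary indicator per concept.
Y = rng.integers(0, 2, size=(N, C))

# RandomForestClassifier accepts a 2D multi-label indicator target
# directly, fitting one output per concept.
clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X, Y)

pred = clf.predict(X[:3])
print(pred.shape)  # one concept indicator vector per patient
```

Flattening discards the temporal ordering within each series, which is precisely the limitation that motivates comparing these baselines against sequence models such as LSTMs.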