ABSTRACT

In this talk, we present our ongoing work on improving neural dependency parsing for German. While recent neural methods have pushed dependency parsing results up to 92% labelled attachment score (LAS) on newspaper text, state-of-the-art parsers still struggle to identify core verbal arguments: frequent errors include confusing direct with indirect objects, and subjects with predicates. We therefore propose two approaches to address this problem: (1) we augment the labelling component of a parser with a decision history, and (2) we exploit subcategorisation frame (subcat frame) information. In the first part of the talk, we incorporate the labelling history for grammatical function labelling by replacing the local label classifier of Zhang et al. (2017) with LSTMs. We present different ways of encoding the history using different LSTM architectures, and show that our models yield significant improvements, resulting in an LAS for German that is close to the best result from the SPMRL 2014 shared task (without the reranker). In the second part, we hypothesize that subcat frame information, i.e., syntactic information about the types of argument a verb can take, can be used to improve parsing. We provide a proof of concept for this idea by extending a state-of-the-art parser for German (Dozat & Manning, 2017) with gold subcat frames, which yields a significant improvement for core arguments on the German dataset from the SPMRL 2014 shared task. We then report on work in progress towards predicting subcat frames and integrating this information into the parser.
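
To make the first idea concrete, the sketch below illustrates one possible way to condition a grammatical-function label classifier on the history of previously predicted labels via an LSTM. This is a minimal illustrative sketch, not the architecture used in the talk (which builds on Zhang et al., 2017, and Dozat & Manning, 2017): the class name, dimensions, and the greedy left-to-right labelling order are all assumptions made for the example.

```python
# Hypothetical sketch of a history-aware label classifier (PyTorch).
# All names and dimensions are illustrative, not the authors' implementation.
import torch
import torch.nn as nn


class HistoryAwareLabeler(nn.Module):
    def __init__(self, token_dim=400, label_set_size=40,
                 label_emb_dim=50, history_dim=100):
        super().__init__()
        # +1 embedding row for a <start> symbol before any label is predicted.
        self.label_emb = nn.Embedding(label_set_size + 1, label_emb_dim)
        # LSTM that encodes the sequence of previously predicted labels.
        self.history_lstm = nn.LSTM(label_emb_dim, history_dim, batch_first=True)
        # Scores a label from the dependent/head pair plus the encoded history.
        self.scorer = nn.Linear(2 * token_dim + history_dim, label_set_size)

    def forward(self, dep_reprs, head_reprs):
        """dep_reprs, head_reprs: (seq_len, token_dim) for one sentence."""
        seq_len = dep_reprs.size(0)
        start_idx = self.label_emb.num_embeddings - 1
        prev_label = torch.tensor([start_idx])
        hidden = None
        predictions = []
        for i in range(seq_len):  # greedy left-to-right labelling
            emb = self.label_emb(prev_label).unsqueeze(0)      # (1, 1, label_emb_dim)
            out, hidden = self.history_lstm(emb, hidden)       # carry history state
            feats = torch.cat([dep_reprs[i], head_reprs[i], out[0, -1]], dim=-1)
            label = self.scorer(feats).argmax(-1)              # pick the best label
            predictions.append(label.item())
            prev_label = label.unsqueeze(0)                    # feed decision back in
        return predictions


# Toy usage with random token representations for a 5-word sentence.
labeler = HistoryAwareLabeler()
deps, heads = torch.randn(5, 400), torch.randn(5, 400)
print(labeler(deps, heads))
```

The second idea could be sketched analogously by concatenating an embedding of the verb's subcat frame to the token representations before they enter the biaffine scorers.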