Question Answering over Knowledge Graphs (KGQA) aims to compute answers for natural language questions over a knowledge graph. Recent KGQA approaches adopt a neural machine translation (NMT) approach, where the natural language question is translated into a structured query language. However, NMT suffers from the out-of-vocabulary problem, where terms in a question may not have been seen during training, impeding their translation. This issue is particularly problematic for the millions of entities that large knowledge graphs describe. We instead propose a KGQA approach that delegates the processing of entities to entity linking (EL) systems. NMT is then used to create a query template with placeholders that are filled by entities identified from the text in an EL phase. This approach gives rise to what we call the "entity filling" problem, where we must decide which placeholders to replace with which entities. To address this problem, we propose a solution based on sequence labelling and constraints. Experiments for QA with complex questions over Wikidata show that our approach outperforms pure NMT approaches: while the task remains challenging, errors relating to entities in the translated queries are greatly reduced.
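To make the template-plus-placeholder idea concrete, the following is a minimal illustrative sketch, not the paper's actual implementation: the placeholder syntax (`<e1>`), the example template, and the `fill_template` helper are all assumptions introduced here. It shows the final step only, where an assignment of linked Wikidata entities to placeholders (the output of the entity-filling stage) is substituted into a generated SPARQL template.

```python
# Hypothetical sketch: an NMT model emits a SPARQL template containing entity
# placeholders, an EL system links mentions in the question to Wikidata IDs,
# and the entity-filling stage decides which ID goes into which placeholder.
# Placeholder syntax and template below are illustrative assumptions.

TEMPLATE = "SELECT ?x WHERE { <e1> wdt:P26 ?x . }"  # e.g. "Who is the spouse of <e1>?"

def fill_template(template: str, assignment: dict[str, str]) -> str:
    """Replace each placeholder with its assigned Wikidata entity ID."""
    query = template
    for placeholder, entity in assignment.items():
        query = query.replace(placeholder, f"wd:{entity}")
    return query

# Suppose EL resolved the mention "Barack Obama" to Q76 and the
# sequence-labelling step assigned it to placeholder <e1>:
print(fill_template(TEMPLATE, {"<e1>": "Q76"}))
# SELECT ?x WHERE { wd:Q76 wdt:P26 ?x . }
```

Keeping entity IDs out of the NMT vocabulary in this way means the translation model only has to learn query structure, while unseen entities are handled by the EL system.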