With the increasing use of automation, users tend to delegate more and more tasks to machines. Such complex systems are usually built on “black box” Artificial Intelligence (AI), which makes them difficult for users to understand. This is particularly true in the field of automated driving, where the level of automation is constantly increasing through the use of state-of-the-art AI solutions. We believe it is important to investigate Explainable AI (XAI) in the context of automated driving, since interpretability and transparency are key factors for increasing trust and safety. In this workshop, we aim to gather researchers and industry practitioners from different fields to brainstorm about XAI, with a special focus on human-vehicle interaction. The workshop will address questions such as “What kind of explanation do we need?”, “What is the best trade-off between performance and explainability?”, and “How granular should the explanations be?”.