End-to-end approaches to autonomous driving have gained popularity since NVIDIA demonstrated their capabilities with its PilotNet architecture in 2016. Although many research projects address end-to-end architectures, little effort has been devoted to the problems that arise when a vehicle with different physical properties uses an architecture that was not trained on it specifically. This leads to what we refer to as 'system discrepancies' and can be understood as a sim-to-real problem. In this thesis we implement NVIDIA's popular PilotNet and evaluate it against one such system discrepancy: an offset in the steering. This architecture maps an input image from the front-facing camera to an absolute steering wheel angle. We use a custom data set, generated in the autonomous driving simulator CARLA, whose images have minimal complexity with regard to image features. Within CARLA, we demonstrate that a steering offset negatively impacts the driving performance and displaces the vehicle laterally to a degree that would be unacceptable in a real-world scenario. We propose, implement and evaluate a prototype architecture called PilotNet∆ (PilotNet Delta) that is more robust against the steering offset and yields improved results with respect to the lateral offset on the road. PilotNet∆ uses a convolutional LSTM layer to map a sequence of images to a relative steering angle, i.e. the difference to the previous steering prediction.
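
The relative-angle formulation can be sketched as follows (a minimal illustration; the function name, the clamping limit, and the degree units are illustrative assumptions, not taken from the thesis): each network output δ_t is accumulated onto the previous command to recover the absolute steering angle.

```python
def accumulate_steering(deltas, initial_angle=0.0, limit=70.0):
    """Convert a sequence of relative steering predictions (here in degrees)
    into absolute steering commands by adding each delta onto the previous
    command and clamping to an assumed physical steering range."""
    angle = initial_angle
    commands = []
    for delta in deltas:
        angle = max(-limit, min(limit, angle + delta))
        commands.append(angle)
    return commands

# Example: three small left corrections, then one right correction
print(accumulate_steering([2.0, 2.0, 1.0, -3.0]))  # [2.0, 4.0, 5.0, 2.0]
```

Because each command is expressed relative to the previous prediction, a constant additive offset does not have to be matched by the network in absolute terms at every step, which is the intuition behind the increased robustness.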