In this work, the Basel Face Model 2017 (BFM) is examined with regard to generating training data for a regression network. The regression network infers BFM parameter vectors from input face images; these parameter vectors are a comparatively low-dimensional representation of faces that can subsequently be converted into a point cloud or mesh. The network is trained on different training sets, and the performance of the network, and thus the quality of the training data, is then evaluated via face recognition. The event space of the BFM is examined, and generative models are used to restrict it so that no invalid faces are created during data generation; limiting the event space to valid faces allows the generative models to produce only realistic faces. The generative models developed in this work comprise a selection of Gaussian mixture models and a generative adversarial network, which can additionally be conditioned on facial age and gender to further restrict the output. The main findings of this thesis are that the conventional generation of 100k training images with normally distributed parameter vectors performs worse than using uniformly distributed parameter vectors, and that restricting the BFM's event space with generative models produces realistic faces but reduces the diversity of the data. This loss of diversity may explain the somewhat poorer performance of the generative models.
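The sampling strategies compared above can be sketched in a few lines of NumPy. This is a minimal illustration, not the thesis's implementation: the coefficient dimensionality, the uniform range, the simulated "valid" vectors, and the single fitted Gaussian (a one-component stand-in for the Gaussian mixture models) are all assumptions made here for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_coeffs = 10   # illustrative only; the BFM's parameter space is much larger
n_train = 1000

# Hypothetical "valid" parameter vectors (e.g. fits of real face scans),
# simulated here as standard-normal coefficients.
valid = rng.standard_normal((n_train, n_coeffs))

# Conventional generation: normally distributed parameter vectors ...
normal_params = rng.standard_normal((100, n_coeffs))
# ... versus uniformly distributed values (range [-3, 3] chosen here
# arbitrarily for illustration).
uniform_params = rng.uniform(-3.0, 3.0, size=(100, n_coeffs))

# Minimal stand-in for a generative model that limits the event space:
# fit a single Gaussian to the valid vectors and reject candidates whose
# Mahalanobis distance is too large, i.e. implausible faces.
mu = valid.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(valid, rowvar=False))

def is_plausible(x, threshold=2.0 * np.sqrt(n_coeffs)):
    """Accept x only if it lies within a Mahalanobis ball around mu."""
    d = x - mu
    return np.sqrt(d @ cov_inv @ d) <= threshold

kept = np.array([v for v in uniform_params if is_plausible(v)])
print(f"{len(kept)} of {len(uniform_params)} candidates kept")
```

Rejection against a fitted density is the simplest way to restrict sampling to a plausible region; it also illustrates the diversity trade-off noted above, since the rejected candidates are exactly the ones far from the fitted distribution.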