Audio deepfake detection has advanced significantly, in particular thanks to the ASVspoof challenges. However, existing approaches primarily rely on binary classification, which provides no information about the origin of manipulated audio. In this paper, we address the problem of source tracing and propose two protocols to evaluate model performance in an open-set setting: (1) a few-shot identification protocol, in which K reference audio samples per class are provided, and (2) a verification protocol inspired by speaker verification. We classify either the entire generation system or its components, such as the acoustic model or the vocoder. Our models are trained both on an internal dataset and on the MLAAD source tracing dataset. Evaluation is carried out on five public datasets: three ASVspoof sets, MLAAD, and Blizzard23. Results show promising discrimination of unseen class attributes. Finally, we emphasize the need for a standardized ontology for source tracing in audio deepfake detection.