This paper presents a combined experiment in which the recognition of a prosodic phrase's position within a larger syntactic structure by human listeners is compared with recognition by artificial neural networks. Beyond the success rate, we are primarily interested in similarities between the error patterns of the two recognition modes. The results suggest that automatic recognition could help determine which of the selected parameters are relevant for human listeners, since it provides a linguistically interpretable outcome.