In this work, we present the SOMOS dataset, the first large-scale mean opinion score (MOS) dataset consisting solely of neural text-to-speech (TTS) samples. It can be employed to train automatic MOS prediction systems focused on the assessment of modern synthesizers, and can stimulate advancements in acoustic model evaluation. The dataset consists of 20K synthetic utterances of the LJ Speech voice, a public-domain speech dataset commonly used as a benchmark for building neural acoustic models and vocoders. Utterances are generated from 200 different TTS systems, including a variety of vanilla neural acoustic models as well as models which allow prosodic variations. An LPCNet vocoder is used for all systems, so that variation in the final samples depends only on the acoustic models. The synthesized utterances provide balanced and adequate coverage of domain, length, and phonemes. MOS naturalness evaluations are collected via crowdsourcing on Amazon Mechanical Turk. We present the design of the SOMOS dataset in detail, provide baseline results by training and evaluating state-of-the-art MOS prediction models, and highlight the challenges these models face when assigned to evaluate TTS samples.