A simple neural translator called RECONTRA (REcurrent CONnectionist TRAnslator) has recently been shown to successfully tackle simple text-to-text limited-domain Machine Translation tasks. In this approach, the vocabularies involved in the translations were represented by simple and clear local codifications. However, for large vocabularies, local representations would lead to networks with an excessive number of connections to be trained. Consequently, distributed representations of both the source and target vocabularies are required. This paper studies appropriate types of distributed codifications for representing large vocabularies in the RECONTRA translator.
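To make the scaling argument concrete, the sketch below contrasts a local (one-hot) codification with a dense distributed one and counts the input-to-hidden connections each implies. All sizes (VOCAB_SIZE, HIDDEN_UNITS, DIST_DIM) and the random codebook are hypothetical illustrations, not figures or methods from the paper, which studies which distributed codifications are actually appropriate.

```python
import numpy as np

# Hypothetical sizes for illustration only (not from the paper).
VOCAB_SIZE = 10_000   # |V|: words in one vocabulary
HIDDEN_UNITS = 100    # H: hidden units fed by the input layer
DIST_DIM = 30         # d: dimension of a distributed code, d << |V|

def local_code(word_index: int) -> np.ndarray:
    """Local codification: each word is a one-hot vector of length |V|."""
    v = np.zeros(VOCAB_SIZE)
    v[word_index] = 1.0
    return v

# Distributed codification: each word is a dense vector of length d.
# A random codebook stands in here; choosing a good codification is
# precisely the question the paper investigates.
rng = np.random.default_rng(0)
codebook = rng.standard_normal((VOCAB_SIZE, DIST_DIM))

def distributed_code(word_index: int) -> np.ndarray:
    return codebook[word_index]

# Trainable input-to-hidden connections implied by each representation:
print("local:      ", VOCAB_SIZE * HIDDEN_UNITS)  # 1,000,000
print("distributed:", DIST_DIM * HIDDEN_UNITS)    # 3,000
```

Under these assumed sizes, the distributed codification shrinks the input-to-hidden weight matrix by a factor of |V|/d, which is the reason local codifications become untrainable as the vocabulary grows.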