Automating the time-alignment of phonetic labels in speech facilitates research in language documentation, yet such phonetic forced alignment requires pretrained acoustic models. For low-resource languages, this raises the question of how, and on which data, the acoustic model should be trained. To align data from Panãra, an Amazonian indigenous language of Brazil, we investigated three approaches to forced alignment for low-resource languages using the Montreal Forced Aligner. First, we implemented a novel approach that manipulates acoustic model granularity, from phone-specific to increasingly broad natural-class categories, when training language-specific Panãra models. Second, we trained cross-language English models under two granularity settings. Third, we compared these models to a large, pretrained Global English acoustic model. Results showed that broadening phone categories can improve language-specific modeling, but cross-language modeling performed best.
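
The granularity manipulation described above can be sketched as a relabeling of the pronunciation lexicon before acoustic model training. The mapping and function names below are illustrative assumptions, not the study's actual phone inventory or pipeline:

```python
# Hypothetical sketch: collapse phone-level labels into broader
# natural-class categories before training an acoustic model.
# The phone-to-class mapping here is invented for illustration;
# it is not the Panãra inventory used in the study.
PHONE_TO_CLASS = {
    "p": "STOP", "t": "STOP", "k": "STOP",
    "m": "NASAL", "n": "NASAL",
    "s": "FRIC",
    "a": "VOWEL", "i": "VOWEL", "u": "VOWEL",
}

def broaden(pronunciation):
    """Map a phone-level pronunciation to natural-class labels.

    Phones missing from the mapping are passed through unchanged.
    """
    return [PHONE_TO_CLASS.get(phone, phone) for phone in pronunciation]

def broaden_lexicon(lexicon):
    """Rewrite a {word: [phones]} lexicon at natural-class granularity."""
    return {word: broaden(phones) for word, phones in lexicon.items()}
```

With a lexicon relabeled this way, the aligner trains fewer, broader acoustic categories, pooling scarce low-resource training data across phonetically similar sounds.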