Sentence boundary detection (SBD) is a foundational task in natural language processing that typically occurs early in the processing pipeline. Numerous studies report excellent performance of various SBD methods on the data sets they were evaluated on (e.g., Brown, WSJ). More recent work suggests that much prior research has interpreted the SBD task too narrowly, leading to overly optimistic estimates of SBD performance. It has also been observed that moving from edited, relatively formal language to less formal language (e.g., user-generated web content) degrades performance. We assess the performance of different SBD methods when applied to decisions of the US courts. Although the decisions are heavily edited, rigorously drafted documents, we observe a significant drop in performance as well. We analyze the causes of missed or erroneously predicted sentence boundaries and propose domain-specific techniques to mitigate the performance degradation.