Using electronic health record (EHR) data to inform decision support and to drive data-driven quality measures is an ongoing challenge, often requiring natural language processing (NLP) techniques to extract computable representations from unstructured free text. Although significant advances in NLP systems have increased the scope and power of these approaches, using these tools often requires resource-intensive collaboration between clinical experts familiar with the problem domain and NLP and computational experts capable of constructing the necessary models. The goal of this work is to develop tools that close this gap by providing interactive features for reviewing and revising NLP models. I will present the design of a prototype tool that combines novel text visualizations to help users interpret NLP results modeling Boolean variables extracted from clinical notes, revise models, and understand changes between revisions. The interactive NLP loop begins when a user starts reviewing a list of documents. Users can review the text alongside the features the system identified to predict each variable, and can then revise the predicted values by giving feedback. Our evaluation suggests that physicians with limited or no machine learning experience can use our tools to enhance models built from limited training data, leading to substantial improvements in model performance after as little as 30 minutes per variable.
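The review-and-feedback loop described above can be sketched in code. This is a minimal toy illustration, not the prototype's actual implementation: the `BooleanVariableModel` class, its keyword-count scoring, and the example notes are all hypothetical, standing in for whatever model and features the real system uses. It shows the essential cycle: the system predicts a Boolean variable and surfaces the features behind the prediction, the user reviews and corrects it, and the model is updated from that feedback.

```python
from collections import Counter

def featurize(note):
    # Toy feature extraction: bag of lowercased tokens from a note.
    return set(note.lower().split())

class BooleanVariableModel:
    """Hypothetical keyword model for one Boolean clinical variable,
    updated incrementally from user feedback."""
    def __init__(self):
        self.pos = Counter()  # token counts from notes labeled True
        self.neg = Counter()  # token counts from notes labeled False

    def learn(self, note, label):
        # Incorporate a labeled note (initial training or user feedback).
        target = self.pos if label else self.neg
        target.update(featurize(note))

    def predict(self, note):
        # Predict True when positive evidence outweighs negative evidence.
        score = sum(self.pos[w] - self.neg[w] for w in featurize(note))
        return score > 0

    def top_features(self, note, n=3):
        # Surface the most influential tokens so the user can
        # interpret why the model made its prediction.
        words = featurize(note)
        return sorted(words,
                      key=lambda w: abs(self.pos[w] - self.neg[w]),
                      reverse=True)[:n]

# Seed the model with a little labeled training data (invented notes).
model = BooleanVariableModel()
model.learn("patient reports chest pain and dyspnea", True)
model.learn("no acute distress, routine follow up", False)

# The user reviews a new document together with the model's evidence.
new_note = "chest pain resolved, no dyspnea today"
print(model.predict(new_note))       # True: symptom words dominate
print(model.top_features(new_note))  # features shown to the reviewer

# The user decides the prediction is wrong and gives corrective
# feedback; the model is revised and the prediction flips.
model.learn(new_note, False)
print(model.predict(new_note))       # False after revision
```

A real system would replace the keyword counts with a trained classifier and richer clinical features, but the interaction pattern (predict, explain, correct, retrain) is the same.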