Automated essay scoring (AES) is the use of specialized computer programs to evaluate and assign grades to essays written in an educational setting. These systems are mainly used to overcome issues of time, cost, reliability, and generalizability in writing assessment. An important characteristic of existing AES systems is that they are designed to score context-free essays, and they perform well at evaluating surface traits of writing. Another kind of essay-writing task is analytic text-based writing (RTA), which is strongly emphasized in the 2010 Common Core State Standards (CCSS). In this kind of task, students must write analytically in response to a text. In this talk we will discuss how one existing method of essay assessment performs when assessing RTA-based writing, and specifically how it performs on the more substantive dimensions of that writing. Although this method did well at assessing holistic measures, our results show that more specific features need to be considered to obtain better results on the different dimensions of RTA-based writing.