We present a methodology for developing functions that predict student scores on end-of-year state accountability exams from dynamic testing metrics derived from intelligent tutoring system log data. Our results confirm the findings of Heffernan et al. that metrics based on online tutoring logs yield better predictions than paper-and-pencil benchmark tests. Our approach provides a family of prediction functions to be used throughout the year, in order to give teachers and schools timely and valid feedback about student progress. Because the same dynamic testing metrics are used in every prediction function in the family, we can also begin to understand how the influence of these metrics on prediction changes over time.