Jesse Gleason
Communicative approaches to language teaching that emphasize the importance of speaking (e.g., task-based language teaching) require innovative, evidence-based means of assessing oral language. However, research has yet to produce an adequate assessment model for oral language (Chun 2006; Downey et al. 2008). Constrained by automatic speech recognition (ASR) technology, which compares non-native speaker discourse against native-like norms, most tests focus exclusively on accuracy while ignoring how examinees use language to make meaning. In order to offer stakeholders more trustworthy evidence of how examinees might use language in target language domains, a model anchored in systemic functional linguistics (SFL) is put forth. Specific examples illustrate how SFL might be used to evaluate test task types such as the story retell. Three examinees' responses are contrasted using genre analysis (Derewianka 1990) and transitivity analysis (Ravelli 2000) in order to demonstrate elements of their linguistic profiles that ASR-based assessment would overlook. In so doing, implications are drawn regarding the potential of SFL models to enhance automated scoring procedures by focusing on the meaning-form relations in the linguistic construction of narrative.