Does topic choice affect high-stakes L2 writing scores?
- Marian Amengual-Pizarro, University of the Balearic Islands, https://orcid.org/0000-0002-1645-3923
Abstract
This study investigates the potential effects of topic choice on test-takers’ L2 writing scores in a high-stakes context. Data were collected from 150 essays assessed by three qualified raters who served as judges in the July 2020 administration of the high-stakes English Test (ET) included in the Spanish University Admission Examination (SUAE). Although test-takers showed a clear preference for one writing topic over the other, the results did not reveal statistically significant differences between the average scores awarded to the two essay options. The data therefore indicate that topic choice did not affect L2 writing quality. Findings also show that topic choice had little impact on test-takers’ overall performance on the ET. Additionally, no differences in choice patterns were observed across test-takers’ proficiency levels, which suggests that topic choice may be more closely related to test-taker characteristics (motivation, interest, perceived relevance, etc.) than to the writing prompt itself. Lastly, the data point to potential interactions between rater characteristics and essay topics that may affect final writing scores.
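The abstract does not state which statistical procedure was used to compare the average scores awarded to the two essay options. Purely as an illustrative sketch (not the study's actual analysis), the snippet below shows how such a group comparison might be run with an independent-samples (Welch's) t-test; the score values, the 110/40 split of the 150 essays, and the choice of test are assumptions introduced here for illustration only.

```python
# Illustrative sketch only: comparing mean essay scores for two prompt options
# with Welch's independent-samples t-test. All numbers below are hypothetical
# and are NOT taken from the study reported in the abstract.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical scores on a 0-10 scale; the 110/40 split mimics a clear
# preference for one topic over the other among 150 test-takers.
scores_topic_a = rng.normal(loc=6.4, scale=1.5, size=110)
scores_topic_b = rng.normal(loc=6.2, scale=1.5, size=40)

# Welch's t-test does not assume equal group variances.
t_stat, p_value = stats.ttest_ind(scores_topic_a, scores_topic_b, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

if p_value >= 0.05:
    print("No statistically significant difference between the two topic options.")
else:
    print("Statistically significant difference between the two topic options.")
```

Under this (assumed) setup, a p-value at or above the conventional .05 threshold would correspond to the kind of non-significant difference between topic options reported in the abstract.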
License
In order to support the global exchange of knowledge, the Complutense Journal of English Studies provides unrestricted access to its content from the moment of publication in this electronic edition; it is therefore an open-access journal. The originals published in this journal are the property of the Complutense University of Madrid, and any full or partial reproduction must cite the source. All content is distributed under a Creative Commons Attribution 4.0 licence (CC BY 4.0), and this must be expressly stated where applicable. The summary and the complete legal text of the licence are publicly available.