Volume 11, Supplement 1, June 2014

(pp. 147–148)

Articles

One major development of computer technology involving English writing is automated essay scoring (AES). Previous research has investigated different aspects of AES in writing assessment, such as differences between human and automated scoring (Bridgeman, Trapani, & Attali, 2012) and the identification of students’ essay structure (Burstein & Marcu, 2003). This study addresses two research questions. First, how does automated scoring differ from human scoring in EFL writing? Second, what are EFL learners’ perceptions of AES and its effectiveness? The instruments used in this study include Criterion, an AES system developed by Educational Testing Service (ETS), and a survey. The findings suggest that AES and human scoring are weakly correlated. The study also finds that an AES system such as Criterion is subject to deliberate human manipulation and can suffer from the insufficient explanatory power of computer-generated feedback. The pedagogical implications and limitations of the study are also discussed.

This study reports on a one-and-a-half-year longitudinal case study that examined students’ behaviors and learning outcomes in an online learning environment using a blog visualization technique. Information visualization is a new and actively developing area of study; however, research that examines this topic in CALL remains scarce. This study was conducted in a blended-format undergraduate English course for science majors. Throughout the semester, the students wrote blogs and essays in English on Moodle, and a blog visualization technique was also implemented. Three methods were used for the analysis: pre-/post-writing proficiency tests, text analysis of blog and essay posts on Moodle, and a post-semester online questionnaire. The students consistently exhibited high work performance throughout the semester in terms of blog and essay posts. In addition, the students’ average writing proficiency score improved as a class community. However, two groups, the progressed and the regressed, reacted differently to the given learning environment, a phenomenon called “the to-do-or-not-to-do dilemma” in this study. The findings show that a learning environment that visually presents students’ work performance can contribute to the high performance of the entire group in an online learning environment; however, countermeasures against demoralization and regression should be implemented.

Since 2003, podcasting has quickly established itself and attracted a large worldwide audience, predicted to reach 37.6 million by the end of 2013. Mirroring this trend is its increasing use as an educational medium. Efforts in materials development and language education research have so far focused mostly on audio podcasting, even though the growing proliferation of mobile devices with video capability has also made video podcasting (vodcasting) a viable pedagogical option. Given the relative novelty of vodcasting, it is not surprising that little research has been carried out on the use of this medium, and there is as yet little empirically supported insight into the use of vodcasts or into students’ preferences and perceptions. This paper outlines the design and development of a course vodcast for beginning learners of German at a university in Singapore, as well as the findings of an accompanying study based on a mixed research design employing a questionnaire with quantitative and qualitative items and focus group discussions. The analysis of the data reveals that students had the necessary technical and Internet resources to receive and view the vodcast. The access rate was high, with 85.9% of students having viewed at least one unit of the non-compulsory vodcast and 65.0% at least half of the six units, but these figures are lower than those reported in earlier studies of audio podcasts at NUS. Students’ perceptions of the vodcast’s design and quality were positive, although the mean ratings were slightly lower than those reported in previous studies of audio podcasts. Students would like to see improvements to the technical quality and design of the vodcast units, which they felt did not facilitate mobile use. A slim majority of the students preferred their course audio podcast to the vodcast, partly because the audio podcast had broader coverage and more learning content, but also because it was easier to access while on the move or performing other activities; the vodcast required more attentional capacity and was more distracting, making multitasking difficult.

Previous studies have shown that field independent (FI) learners dislike collaborative activities, while field dependent (FD) learners prefer to work with others. In addition, FI learners prefer an online learning environment, while FD learners feel disoriented in cyberspace. Because research on computer-supported collaborative learning seldom investigates learners with different cognitive styles, it is worth examining how FI and FD learners perceive the learning experience when collaborative activities are implemented in an online learning environment. To fill this gap in the existing literature, 29 FI and 32 FD students at two universities participated in this study; they were asked to provide online peer feedback in dyads and to create a group e-book. The instruments included questionnaires and the Group Embedded Figures Test. The researchers used descriptive statistics, t-tests, and the constant comparative method to analyze the data. The results showed that, for the FI students, collaborating with four to five members to create a group e-book was more challenging than cooperating with just one group member for peer feedback. Even for FD students, using unfamiliar technological tools to interact with unfamiliar students could still be rather awkward, especially when the communication was asynchronous. Some pedagogical implications are provided to conclude this study.

Yu-Chun Wang and Chien-Tzu Chou
Evaluation Criteria for English Listening and Speaking E-learning Courses (pp. 223–237)

Some principles or criteria are provided to learners when they use English learning websites or CALL materials (Economides, 2003; Jamieson & Preiss, 2005; Johnson, Hornik, & Salas, 2008; Liu, Liu, & Hwang, 2011; Wang & Chen, 2009; Yang & Chan, 2008). However, little research has been conducted to establish a set of evaluation criteria for English e-learning courses, especially for English listening and speaking. The main purpose of this study is to construct a multi-dimensional set of criteria that English teachers can use to evaluate the quality of e-learning English listening and speaking courses. These criteria can assist English teachers in designing effective English listening and speaking courses to improve students’ English listening and speaking ability. The developmental research applied in this paper constructed and refined the evaluation criteria through a literature review, a multi-stage procedure, expert reviews, and document analysis (George & Mallery, 2003). The evaluation guidelines were based on three aspects: a) general information about the e-learning course, b) English teaching, and c) the teaching of listening and speaking. To achieve this goal, the researcher used a four-stage procedure to refine and form the evaluation criteria. In the first stage, 98 preliminary criteria were developed based on research related to general information about e-learning courses, English teaching, and the teaching of English listening and speaking. The second stage gathered experts’ opinions on the preliminary criteria through an online Google Docs form with a five-point Likert scale. The third stage gathered both experts’ and learners’ opinions according to the results of stage two. The last stage finalized the criteria based on quantitative and qualitative surveys. In total, 90 items were finalized in the criteria for evaluating English listening and speaking e-learning courses.