When I first saw what the topic was going to be, I was reminded of a withdrawn student of mine who, having returned from his summer holiday, proudly told me that he had voluntarily taken on the role of interpreter and helped a foreign tourist communicate (successfully!) with a souvenir vendor. So I half expected us to reminisce fondly and at some length about our success stories. As it turned out, unsurprisingly, the chat was much more focused, and quite a lot of ground was covered despite the relatively small number of participants. The moderator, @theteacherjames, made sure we stayed on track and helped us agree on what the track was.

It was agreed that “measurable evidence” in the opening question refers to formal modes of assessment (@esolcourses) and that the entire question could be rephrased as, “How do we know when the students are learning?” (@josipa74).

The first suggestion was that observation of learner progress – in terms of increased confidence and production of new language – indicates the teacher is doing something right (@esolcourses). It is, however, worth bearing in mind that progress may not be due (just) to instruction.

Observing (and possibly recording) students to see if they are doing particular tasks promptly and successfully was another suggestion (@MajaJerkovic).

Photo taken from http://flickr.com/eltpics by Georgia Psarra, used under a CC Attribution Non-Commercial license, http://creativecommons.org/licenses/by-nc/3.0/

It was pointed out that students often give us no tangible or verbal clues as to whether instruction has been successful (@HanaTicha), which raised the question of whether this means we have failed as teachers. It was generally agreed that a lack of tangible clues does not equal failure, as some students are shy and learning is not always a linear process. We also need to exercise caution when taking students’ enthusiasm as an indicator of successful instruction – their confidence might, in fact, mask a lack of improvement and/or make them appear more competent than quieter students (@theteacherjames).

Understandably, part of the discussion revolved around exams. Formal assessment was felt to be inadequate as the sole means of gauging the success of either teachers or learners. It was pointed out that students may perform worse under pressure and that their scores often do not fully reflect what they have learned.

Portfolio-based assessment was suggested as an alternative to exams (@esolcourses), as well as peer assessment and self-assessment. There seemed to be some uncertainty as to whether portfolio-based assessment is applicable in different courses, but eventually those practicing it in various settings confirmed that it is. I was interested to find out how others felt about students using the ELP (European Language Portfolio) for self-assessment, as our language school had used it quite successfully with adult learners.

@HanaTicha felt that the success of instruction could only be judged relative to the teacher’s objectives and beliefs, and said she considered her teaching successful if students developed critical thinking skills, confidence and learner independence. This prompted the question of who usually decides the objectives (@josipa74). Some felt (regretfully) that it was mostly teachers and the syllabus that were responsible for setting goals, while others advocated an approach in which goals were negotiated with students, recommending that a syllabus out of step with student needs be adjusted. It was noted that we might be underestimating learners’ power to decide their own goals.

The issue of time as a relevant factor was also discussed. @theteacherjames said that it can take a very long time to notice tangible improvement, and that time is a luxury teachers rarely have. This led to observations on how long it usually takes to see improvement in adult learners attending language schools, as opposed to primary school students and learners on intensive courses. It was also noted that the more advanced learners are, the longer it takes for progress to become evident.

Some other indicators of successful instruction were mentioned: how well business English students do in real-life meetings or presentations compared to before (@mary28sou), students paying for another semester or more in a language school (@Ven_VVE), and high attendance levels at lectures, learner satisfaction and positive feedback (@HanaTicha).

Towards the end of the discussion it proved difficult to stay away from the topic of standardized tests. It was suggested that their continued use may be ascribed to the fact that they are easy to administer and convenient for teachers. Even though they can be useful for institutions, they are not tailored to individual learner needs. @LeaSobocan made the interesting point that tests may be too easy and may not reflect learners’ future needs. Several disadvantages were noted, and the majority of participants concluded that such tests (and, to an extent, all tests) result in a lack of freedom for both teachers and learners.

While I was writing up the summary, it dawned on me that parts of the chat had touched upon the dichotomy between formative and summative assessment, even if not in so many words. To my mind, this article http://www.eltcommunity.com/elt/docs/DOC-1812 provides a useful overview of examples of both types of assessment.

About the author

Summary contributed by Vedrana Vojkovic and reposted here with her kind permission. Vedrana is @Ven_VVE on Twitter.

The Actual Tweets