
Listening: Problems from Learners’ Perspectives

Annie McDonald is an EAP teacher at the University of Chester and has taught EFL/ESP/EAP to secondary, university and adult students for over 30 years in Turkey, Brazil, Spain and England. She has co-authored English Result (OUP, 4 levels) and the Authentic Listening Resource Pack (Delta, 2015), and co-founded http://www.hancockmcdonald.com with Mark Hancock. She enjoys teaching listening. Email: anniebmcdonald@talktalk.net

Students often complain about the difficulty of listening to authentic texts and talk freely about their perceived needs, especially in post-listening activity discussions. Apart from speed and accent, they usually mention two things. The first is decoding, a bottom-up process or mental activity that involves identifying small units of speech, for example, analysing or deciphering sounds and syllables and linking them to words, phrases and sentences. The second is memory: they claim they have problems holding small, fleeting pieces of information long enough to work on them and break them down into individual words before they disappear.

Students are often acutely aware of how little they understand unscripted or unrehearsed speech created in real time. For example, one Saudi student told me that he was aware of how little he understood when struggling to follow conversations on a bus. Others expressed their worries about whether or not they would be able to function in both social and academic contexts. With so much audio material available on the internet, all students are becoming even more aware of what they can’t do; they lack confidence and tend to experience feelings of low self-efficacy.

A prerequisite for dealing with the listening challenges of authentic texts is having some understanding of the characteristics of spontaneous speech, and we need to pay systematic attention to such characteristics in the classroom. To this end we can now consult A Syllabus for Listening – Decoding (Cauldwell, 2018), which provides a thorough description of spoken language. Cauldwell catalogues, details and exemplifies, for example, how

  • there might be more than one way a word is pronounced, that is, there might be different ‘soundshapes’ for the same word
  • words which commonly occur together (clusters) often run or blend into one another and contain well-known features of connected speech, including, for example, consonant death (where consonants are either partly or completely elided)
  • features beyond those of connected speech can occur, such as the complete loss of a syllable within a word

A Syllabus for Listening – Decoding also describes innovative classroom activities and provides access to sound files. It is an extremely helpful tool for teachers wishing to bring activities into the classroom for students who need to give more attention to their bottom-up processes (decoding) rather than relying solely on top-down processes, that is, using larger units of text (for example, information about a word) to identify smaller parts such as phonemes. Working on the sound substance can help students develop their ability to deal with the two types of listening challenge they tend to report: decoding and memory issues.

 

Decoding challenges

Perhaps one of the most repeated comments I’ve heard from students in post-listening discussions about the difficulties posed by authentic texts involves recognising sounds belonging to distinct words or groups of words. This accords with the research findings of both Goh (1999) and Gao (2014), who report on Chinese students’ perceptions of listening difficulties. In Gao’s research, over three quarters of students referred to not knowing ‘how the pronunciation of words changes in connected speech’.

In self-reflection reports, my students have commented that words tend to sound different from their dictionary forms, and they say that they have problems breaking down the stream of speech into individual words. According to Cutler (2012), lexical segmentation (locating the beginnings and ends of words, while taking account of pronunciation variations in the individual words that make up a chunk or group of words) is a complex activity. It is also one of an impressive range of mental tasks an expert listener carries out effortlessly as they track various forms of information to locate word boundaries in their native language.

In the classroom, the ability to locate word boundaries tends to be taken for granted, with students being left to assume that there will be ‘white spaces’ (or pauses) between spoken words, that they will hear words in their citation forms and that strings or chunks will be made up of individual words which are carefully articulated. With word boundaries hard to determine and students not alerted to this as one of the reasons that spoken word recognition is difficult, non-expert listeners invariably attribute failure to their own inability rather than to the inherent characteristics of spontaneous speech.

We have already seen that the characteristics of spontaneous speech can create various obstacles for a student expecting words to be said as if detached from each other, and I suggest that, as part of a different approach to the teaching of listening, it seems almost churlish not to share information about pronunciation in spontaneous speech with our students. Doing this might also go some way to offering a part solution to another problem that students often report, that is, not being able to hold what has been said in their memories long enough to understand meaning.  

 

Memory challenges

I was listening to a news report the other day about a road accident involving a bus, and I understood that nobody on the bus was hurt but one person was taken to hospital. This did not seem to make sense, and when revisiting the text in my working memory, I accessed a fleeting trace of ‘a pedestrian’, a phrase said very quickly and in a low key. I had resolved my misunderstanding because my instinct was to ‘think again’, something that many students might not automatically do, even though they would do so when listening to their own language.

Expert listeners employ several mental tasks simultaneously to make sense of what they are hearing, and one of these involves exploiting the working memory, a functional part of our short-term memory. Spoken language is fleeting; it might leave a trace or vanish in a flash. Even though working memories are limited, they do allow us to temporarily store a small amount of information (for about 10–15 seconds) for processing and manipulation. Short-term memories, on the other hand, are storage places where we retain fixed information, that is, information without the option of review, organisation or change.

Students often cite memory problems, for example, remembering what they have heard while simultaneously processing incoming information, or not understanding the next part of a text because they are still concentrating on what was previously said. Field (2008) comments that ‘if decoding is uncertain and makes heavy demands upon attention, there is no room in their working memory to accommodate new incoming textual information’, and this is something that students are aware of. Also, Vandergrift (2007) posits that much of what lower-level learners hear may be lost, given fast speech and an inability to process information within time limitations.

Decoding makes heavy demands on attention, and this has consequences for non-expert listeners dealing with spontaneous speech. A non-expert listener may be struggling to decode individual phonemes or syllables, which leaves them with very little by way of working memory resources. To complicate matters, it is not unusual to find students putting further strain on their working memories by turning to translation as they listen. While it is obviously beyond the remit of a language teacher to work on improving memories, activities such as dictation and transcription can help. Raising awareness of the complexities of spontaneous speech facilitates more efficient decoding over time, and this can help free up working memory resources. Post-listening discussions also provide the opportunity to discuss the advantages of resisting the compulsion to translate, especially when students are engaged in classroom activities designed to develop specific listening processes.

 

Dictation and transcription: windows into listeners’ minds

How can we possibly talk about the results and challenges of listening if what is heard isn’t available for inspection in the classroom? One solution is to use dictation (done by the teacher or from an audio) and the resultant student transcription as tools for diagnosis. Together they provide evidence of students’ understanding, which can be compared with the audio script and diagnosed. In other words, the result of what has happened inside a listener’s head will be available for analysis and discussion. Transcriptions can be written on paper (possibly with write-in lines to help with lexical segmentation) or even tapped into mobile devices, with predictive text turned off, of course.

There are various ways of carrying out dictation activities. For example, we can

  • present the text in a series of small chunks or tone groups and pause for students to write what they think was said;
  • pause the audio periodically for students to write the last four or five words they heard;
  • insert pauses into (authentic) audio using Audacity (an audio editing tool), say after every four or five words, again for students to write what they heard; and
  • pause and ask students to write what they remember and then to offer predictions of what the speaker might say next.
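For teachers comfortable with a little scripting, the pause-insertion step can also be automated rather than done by hand in Audacity. The sketch below is a hypothetical helper (the function name and parameters are my own, not from any tool mentioned in this article); it uses Python’s standard wave module to copy a WAV recording while inserting a silent gap after every few seconds of audio, a rough, time-based stand-in for pausing after every four or five words:

```python
import wave

def insert_pauses(src_path, dst_path, interval_s=3.0, pause_s=2.0):
    """Copy a WAV file, inserting pause_s seconds of silence after every
    interval_s seconds of audio. This is a time-based approximation:
    in Audacity you would place the pauses by ear at word-group boundaries."""
    with wave.open(src_path, "rb") as src:
        rate = src.getframerate()
        chunk_frames = int(interval_s * rate)          # frames per spoken chunk
        frame_bytes = src.getsampwidth() * src.getnchannels()
        silence = b"\x00" * (int(pause_s * rate) * frame_bytes)
        with wave.open(dst_path, "wb") as dst:
            dst.setparams(src.getparams())             # header patched on close
            while True:
                frames = src.readframes(chunk_frames)
                if not frames:
                    break
                dst.writeframes(frames)
                dst.writeframes(silence)               # students write during this gap
```

The gapped file can then simply be played straight through while students transcribe; lengthening pause_s gives slower writers more time.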

It is interesting to look at a selection of students’ slips of the ear, as these offer a flavour of the phoneme, syllable and word-boundary decoding challenges non-expert listeners might encounter. You might like to ponder the following and consider how the differences between what was said and the students’ reasonable interpretations arose.

Words:

  • seminar understood as cinema
  • passenger understood as passage
  • while understood as wild
  • brain understood as vein
  • qualified understood as calling

Word clusters:

  • a night owl understood as a next hour
  • what the whole understood as what the hell
  • gets on well understood as get some well  
  • a lot smaller than understood as lots more than
  • do my housework understood as do my coursework

Dictation and transcription activities bring the results of listening into the classroom for inspection. As Sheppard & Butler (2017) observe, ‘Greater knowledge about what learners perceive when they listen could help language teachers better tailor their instruction to student needs.’ Once students have compared their transcriptions with the audio transcript, teachers can present follow-up practice activities on difficult areas, and these will serve to help students develop their decoding skills for future listening experiences. Students perceive the benefit of such follow-up activities and are highly motivated to engage with the tasks.

Last year, with a new group, after a dictation activity and a comparison and discussion of different types of word-linking using the audio script, a French student commented, ‘Now I understand why I don’t understand’. He, along with the others, was beginning to be empowered to attribute listening difficulties and perceived personal failure to the nature of spontaneous speech rather than to his own inabilities. Shaking off the mantle of learned helplessness maintains and generates motivation; it gives non-expert listeners a greater chance of future listening successes and increases their listening confidence.

 

References

Audacity [computer software]. Retrieved from https://www.audacityteam.org/

Cauldwell, R. T. (2018). A Syllabus for Listening – Decoding. Birmingham, UK: Speech in Action.

Cutler, A. (2012). Native Listening. London: The MIT Press.

Field, J. (2008). Listening in the Language Classroom. Cambridge: Cambridge University Press.

Gao, L. P. (2014). An exploration of L2 listening problems and their causes. Retrieved from http://eprints.nottingham.ac.uk/28415/1/Liping%20final%20thesis%20%2031-10-2014

Goh, C. (1999). How much do learners know about the factors that influence their listening comprehension? Hong Kong Journal of Applied Linguistics, 4(1), 17–42.

Sheppard, B., & Butler, B. (2017). Insights into student listening from paused transcription. CATESOL Journal, 29(2), 81–107.

Vandergrift, L. (2007). Recent developments in second and foreign language listening comprehension research (State-of-the-Art Article). Language Teaching, 40(3), 191–210.

 
