The High Cost of Misplacing University Students in Intensive English Programs

When English communication skills are not thoroughly and accurately assessed, the costs for students, teachers, and Intensive English Program (IEP)/English as a Second Language (ESL) programs are higher than most people realize. The topic has not been sufficiently studied, but when time, resources, and budgets are tight, there is little room for error.

The High Cost for Students

The cost of misplacement is perhaps greatest for students. Most students are in ESL programs because they need proficient English skills in order to study their chosen major. If a student is placed too low, they will spend more time and money than necessary. If they are placed too high, they may fail and, again, spend more time and money repeating classes.

The High Cost for Teachers

There are two “costs” for teachers. First, teacher evaluations are crucial: they are often tied to compensation and even employment opportunities. A student who is frustrated by their misplacement may express that displeasure in the teacher’s evaluation, even when the quality of the content and the ability of the teacher are excellent. Second, misplaced students can disrupt the educational experience for all students and the teacher’s ability to deliver instruction. For students who are placed too high, teachers will have to adapt and scaffold their teaching to accommodate them.

The High Cost for Administration

The cost for administrators is more literal and persistent because student misplacement can directly affect the program’s ROI. When students pay for courses they don’t feel they need (because, for example, they were placed too low), there is little incentive for them to stay. They may move to a school with lower English requirements that they can complete more quickly. Administrators aim to retain as many students as possible to ensure a consistent and predictable revenue stream. They also want students to move out of remedial courses and into the degree-bearing stream as quickly as possible, because those courses are more profitable.

Click here to read the entire article.



Is Your Content Challenging Your Learners?

Sara Davila

Learning Without Progress

I worked overseas for a number of years in a variety of settings, spending the longest time in Korea with students at almost every point on their language learning journey, from kindergarten to university. One thing that always fascinated me was how much time learners devoted to language study compared with how little progress they made over the years. When I asked my A2-level university class how many years they had spent studying English, a majority of students reported roughly 10 years of English study, many in private schools or with private tutors. It was an alarming amount of study devoted to learning a language, with little progress to show for it. At the time I found myself asking why and dug in a bit more to understand the problem. Countless hours of research, interviews, and analysis of course materials later, I came to the conclusion that my students were never challenged beyond what they could already do. Once they had achieved a certain ability, much like Bill Murray in the movie Groundhog Day, every English class was a constant repetition of information my students had already learned. With this in mind, it becomes even more important for teachers to have a sense of their learners’ level of ability so that they can provide content that will appropriately challenge learners in the classroom.

Content Creation and Challenge

In my last blog I spent a lot of time talking about communicating students’ ability to perform in English. To recap, using the Common European Framework of Reference (CEFR), we can give a quick, easy-to-understand description of learner performance using a validated, publicly available scale. When talking to colleagues and peers in the field I no longer say my students are “low-intermediate.” I say they are “B1.” For those in the know, this provides a great deal more information about what a teacher can expect students to be able to do in the classroom. The Global Scale of English (GSE) allows me to get even more specific about the skills and abilities of my students by providing a data-driven, teacher-calibrated bank of descriptors in three distinct categories: General Adult English, Academic English, and Professional (Business) English. Continue reading

Communicating Performance:
What does “intermediate” mean to you?

Sara Davila

A few years ago, one of my colleagues asked to observe one of my sophomore conversation classes. I’d been talking in the break room about work I was doing to incorporate more collaborative task-based activities with a truncated, or completely eliminated, presentation. Basically, the students walked in, got into their learning teams, took the activity package for the day, and got to work. This intrigued many and prompted a few requests for observation. After colleagues watched the class, I got nothing but praise, but also heard the following: “I could never do this with my students. You’re working with intermediate learners.” This was actually really interesting to me at the time, as I had just started using the Common European Framework of Reference (CEFR) to better evaluate my students. I clarified that my students were at best A2 learners working toward A2+, not quite intermediate, but closer to high-beginner. This led to an entirely different discussion about how we talk about students’ level of ability and the words we use to communicate proficiency.

One of the greatest challenges in language teaching continues to be communicating student performance. Given that the name of the language teaching game is communication, this is perhaps the biggest irony. While the underlying pedagogy of language teaching has changed radically in the last 30 years, many teachers still describe their students in terms of “beginner,” “intermediate,” or “advanced.” These general catchall terms only occasionally function as indicators of performance, largely because they are open to interpretation. One teacher’s “beginner” is another’s “intermediate.” In the U.S. especially, the challenge of communicating student ability is hindered by a lack of universal reporting systems between EAPs, IEPs, colleges, and universities. It’s a challenge for everyone involved with language acquisition: teachers, students, and administrators. Where should a student go to receive instruction that will be optimal for their language needs?

The few measures we have that would communicate what a student can do with language aren’t perfect. Far too often I meet a teacher who was faced with a classroom of students who all had exceptional scores on language tests but were completely incapable of authentic communication in a classroom environment. A test alone is simply not enough to indicate ability to perform. On top of that, tests fail to help us map a student’s strengths and weaknesses, which can be used to create a learning journey that supports the learner’s needs.

It was this very challenge that led to research conducted by the Council of Europe, which was used to create one of the first universal measures for communicating learner performance in language proficiency (though not strictly performance in English language proficiency). That tool is called the Common European Framework of Reference, or the CEFR (Council of Europe, 2001). For anyone who interacts with language, this is a truly amazing resource. Continue reading

H is for Hypotheses


Dr. Ken Beatty

“I’m a researcher! Why has no one ever told me?”

Teachers are inherently researchers, driven by natural curiosity to understand their students’ problems and to consider ways of addressing them. Sometimes they apply old approaches and methods that may have been key to their own first or second language acquisition. Sometimes teachers become creative and devise new approaches and methods. In doing so, they tend to follow the scientific method:

  • ask a question
  • research the question
  • construct a hypothesis (a guess)
  • test the hypothesis with an experiment
  • analyze the data and draw a conclusion
  • share results
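For readers who like to see the later steps made concrete, the “test the hypothesis” and “analyze the data” stages can be sketched in a few lines of code. This is a minimal illustration with hypothetical scores (the numbers are invented, not from the article): one group takes the original narrow test, the other a broader “demonstrate what you know” test, and we compare the groups’ average scores.

```python
# A minimal sketch of the "analyze the data and draw a conclusion" step,
# using hypothetical scores from two groups of students (invented numbers).
from statistics import mean, stdev

# Group A took the original narrow test; Group B took a broader test.
narrow_test_scores = [62, 58, 71, 65, 60, 68, 55, 63]
broad_test_scores = [74, 69, 81, 70, 77, 72, 66, 79]

def summarize(label, scores):
    # Report the average and spread of a group's scores.
    print(f"{label}: mean={mean(scores):.1f}, sd={stdev(scores):.1f}")

summarize("Narrow test", narrow_test_scores)
summarize("Broad test", broad_test_scores)

# A simple comparison of means; a real study would also check whether
# the difference is statistically significant (e.g., with a t-test).
difference = mean(broad_test_scores) - mean(narrow_test_scores)
print(f"Mean difference: {difference:.1f} points")
```

A difference in means alone doesn’t settle the question, which is why the final steps of the method above, drawing a conclusion and sharing results, still require the teacher’s judgment.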

For example, a teacher asks a question: Why are students doing poorly on tests? She is surprised because she thinks they know (or should know) the content on which they’re being tested. Doing some research (starting online), she finds a variety of variables that could be responsible:

  • The tests are held after lunch; maybe students are too sleepy after eating.
  • The tests are one hour long; maybe students need more time to demonstrate what they know.
  • The tests are written; maybe students don’t perform as well when writing because most of the teacher’s evidence of their abilities is based on their spoken output.
  • The tests are too narrow; students have to study one or more chapters of material, but are only assessed on a small portion.

The teacher expands her research to directly ask her students what they think. She decides this last problem, of the tests being too narrow, is the possible pedagogical culprit.

Based on this research and student feedback, the teacher formulates a hypothesis. It’s messy at first because she isn’t exactly sure how to word it to cover all contingencies, but it basically looks like this:

Students asked to demonstrate what they know do better on tests than students who are asked to recall a subset of what the teacher expects them to know.

Hmm. It’s a bit vague, but that’s okay at this point because designing an experiment will help to make it clearer. It can be refined as the teacher goes along. She decides to use the class’s current area of study, the ten most common irregular verbs. This is the content for which the teacher expects her students to demonstrate mastery:

Base form    Past tense    Past participle
say          said          said
make         made          made
go           went          gone
take         took          taken
come         came          come
see          saw           seen
know         knew          known
get          got           got/gotten
give         gave          given
find         found         found

Continue reading

Australia’s Largest Provider of Education and Training
Uses Versant English Placement Test

Navitas is a leading global education provider that offers an extensive range of educational services to students and professionals, including university programs, creative-media education, professional education, English language training, and settlement services. University Programs is the largest division of Navitas; it prepares international and domestic students for tertiary study through pre-university and university pathway programs. The following case study describes the challenges and solutions involved in administering English assessments to over 10,000 university students from 80 countries each year.

“Testing More Than 10,000 Students Annually is a Massive Undertaking”

Navitas had been assessing the English language skills of students seeking an education in an English-speaking country for many years using a customized test. Not only was the test laborious to administer and difficult to scale, but it also lacked the credibility of having a third party mark, manage, and moderate results. Navitas needed an objective test to satisfy both university partners and parents.

Navitas turned to Pearson to help create an online test that would meet the highest level of excellence for English language testing. When Pearson launched the Versant™ English Placement Test (VEPT) with Navitas in 2011, it was the first globally validated test that assessed the four modalities of English: speaking, listening, reading and writing. Continue reading