Pearson Global Scale of English

Will Pearson take the English Certification Crown from ETS?

Earlier this month Pearson launched its new Global Scale of English, or GSE for short. According to Pearson English

“there has never been a globally recognised standard in English – no single way of recognising and quantifying the level of an individual’s English”

which is, of course, something the company aims to change with its new product.


Kirsten Winkler is the founder and editor of EDUKWEST. She also writes about Social Media, Digital Society and Startups.

  • bnleez

    Always appreciate your perspective Kirsten. Here’s mine…

    “there has never been a globally recognised standard in English – no single way of recognising and quantifying the level of an individual’s English”

    Literally, this is true. But for all intents and purposes, most course books and common English proficiency exams that I’ve come across within the last 10 years (e.g., TOEIC, TOEFL, etc.) have all referenced the Common European Framework (CEF). I consider alignment to the CEF a decent enough standard that it pretty much negates this argument.

    “The release states that GSE has been in development for the past 25 years and has been tested on over 10.000 students in 130 countries.”

    Actually, testing 10,000 students in 130 countries seems a little on the low side, considering they’ve been working on it for 25 years. It would be interesting to compare this evidence with ETS’s. I also wonder whether Pearson will publish this evidence so that potential customers can make a more informed decision.

    “The investigation proved that students with poor English competencies had passed the TOEIC exam which is needed to obtain student visa extensions.”

    Actually, this is more a problem of proctoring than of exam validity, reliability, or test-item bias. Certainly, if ETS ignores the problem, validity and reliability issues will continue, but I doubt they are going to allow that to happen.

    It seems to me that if Pearson’s goal is to provide “one precise, numeric, universal scale for businesses, governments and academic institutions, as well as for the 1 billion plus people estimated to be learning English worldwide”, then they need to provide evidence that their exam is more valid, reliable, and/or unbiased than the competition. Or, more likely, at least provide enough evidence that they are as valid, reliable, and unbiased but less expensive. Either of these considerations (or both) would have to be compelling enough for institutions (which are often slow to change) to switch tests.

    Also, it sounds as if they want their universal scale to replace the CEF as well. What’s more important for exam users: exam scores, or how those scores align with the CEF (or other existing standardized scales)? Perhaps I’m wrong, but this sounds easy to say, hard to do. 🙂

    “Pearson GSE replaces the very broad levels of beginner, intermediate and advanced with a granular scale between 10-90, enabling the learner to assess more precisely her current level of English.”

    I’m not sure I follow. I agree that the range for the TOEFL is a bit narrow (weak for beginner learners, better for advanced), but the results for the TOEIC range from 10-990 with five separate categories: orange, brown, green, blue, and gold. I’d be interested in learning more about how GSE offers a more “granular scale” than the TOEIC.

    Regardless, competition is good (go for it Pearson and others!). It will be interesting to see how this all plays out.

  • Brett Laquercia

    Hi Kirsten, this is an interesting article, thank you.

    I thought you might be interested in learning about the widely used Proficiency Guidelines created by the American Council on the Teaching of Foreign Languages (ACTFL). In addition to its work establishing language proficiency standards, ACTFL provides assessments that are aligned to these national standards and can also be mapped to the CEFR; ACTFL assessments are among the most valid and reliable instruments available to the field. For this reason, they are relied upon for such diverse purposes as certifying K-12 world language teachers, bilingual teachers, translators and interpreters; awarding college credit; certifying government and Fortune 500 employees for jobs requiring multilingual skills; and, more generally, measuring student progress and outcomes at all levels of academia. I think you will find the way ACTFL describes the Proficiency Guidelines to be right on point relative to your article:

    The ACTFL Proficiency Guidelines are a description of what individuals can do with language in terms of speaking, writing, listening, and reading in real-world situations in a spontaneous and non-rehearsed context. For each skill, these guidelines identify five major levels of proficiency: Distinguished, Superior, Advanced, Intermediate, and Novice. The major levels Advanced, Intermediate, and Novice are subdivided into High, Mid, and Low sublevels. The levels of the ACTFL Guidelines describe the continuum of proficiency from that of the highly articulate, well-educated language user to a level of little or no functional ability.

    These Guidelines present the levels of proficiency as ranges, and describe what an individual can and cannot do with language at each level, regardless of where, when, or how the language was acquired. Together these levels form a hierarchy in which each level subsumes all lower levels. The Guidelines are not based on any particular theory, pedagogical method, or educational curriculum. They neither describe how an individual learns a language nor prescribe how an individual should learn a language, and they should not be used for such purposes. They are an instrument for the evaluation of functional language ability.

    The ACTFL Proficiency Guidelines were first published in 1986 as an adaptation for the academic community of the U.S. Government’s Interagency Language Roundtable (ILR) Skill Level Descriptions. This third edition of the ACTFL Proficiency Guidelines includes the first revisions of Listening and Reading since their original publication in 1986, and a second revision of the ACTFL Speaking and Writing Guidelines, which were revised to reflect real-world assessment needs in 1999 and 2001 respectively. New for the 2012 edition are the addition of the major level of Distinguished to the Speaking and Writing Guidelines, the division of the Advanced level into the three sublevels of High, Mid, and Low for the Listening and Reading Guidelines, and the addition of general level descriptions at the Advanced, Intermediate, and Novice levels for all skills.

    Another new feature of the 2012 Guidelines is their publication online, supported with glossed terminology and annotated, multimedia samples of performance at each level for Speaking and Writing, and examples of oral and written texts and tasks associated with each level for Reading and Listening.

    The direct application of the ACTFL Proficiency Guidelines is for the evaluation of functional language ability. The Guidelines are intended to be used for global assessment in academic and workplace settings. However, the Guidelines do have instructional implications. The ACTFL Proficiency Guidelines underlie the development of the ACTFL Performance Guidelines for K-12 Learners (1998) and are used in conjunction with the National Standards for Foreign Language Learning (1996, 1998, 2006) to describe how well students meet content standards. For the past 25 years, the ACTFL Guidelines have had an increasingly profound impact on foreign language teaching and learning in the United States.