Saturday 29 June 2013

Recruitment by Quiz?

An international agency has launched a new web-based "experts' roster". The organisation has an important mission and I have the kind of expertise they look for, so it seems natural to register.
The opening page of the roster looks friendly and clear. I complete a few text boxes with basic data on my life and work.
The following step is tedious -
I am asked to upload a form with full details of my professional career and educational history. A 24-year-old straight from university probably won't take much time to fill in such a form. If you have 24 years of professional experience, you can easily spend half a day looking up the exact dates of every job you have ever held and the precise addresses of all your employers. This is the kind of thing I would be ready to do if I knew my bid had made it to a client's shortlist - not at a time when it is not even clear whether the organisation offers any assignments I find interesting. But fortunately I have worked for the agency before, so I only need to update the old form saved on my hard disk.
Next step: I must choose my areas of expertise, from a list of 10. For each area of expertise, I complete a self-assessment tool, which instantly visualises my skills on a spiderweb-type chart. Cute!
After "self assessment" comes "questionnaire". I click on the box and ten multiple-choice questions appear. IT IS A TEST! Well, why not. The "questionnaire" I have opened is on training. Here is a sample question:
An organisation has just finished evaluating the performance of two instructors it used over the past year. It found that one instructor who mixes role-play, small group activity, and oral reports from learners has been consistently more effective than the other who only teaches by lecturing. What should be recommended fo [typo in original - I suppose it stands for "to" or "for"] the less effective instructor?
  • Implement a variety of standard instructional methods.
  • Provide a rationale for the use of the lecture technique.
  • Consider the group dynamics for each of the methods.
  • Recommend revisions and changes to existing course material.
Right. I would say, all of the above - I would start by asking the instructor why she/he prefers the lecture technique, then explain why other "instructional methods" tend to be more effective than lectures (what are "standard instructional methods", anyway?), and how group dynamics can be used to support learning. And I would have a look at the course materials to see whether they could be redesigned to encourage more participatory teaching. Do you see any wrong answer in that list?
But I have only one choice. So I pick the "standard instructional methods", assuming that this term refers to, well, the usual repertoire of 21st century teaching methods, including questions, group work, role play and so forth.
Another question reads: An organisation's restructuring has resulted in learners, different from those originally scheduled, to be sent to a three-day training class. The trainer realises this on the first day of training on gender and notices that the learners' knowledge varies from no exposure to very experienced. Which of the following would be the BEST indicator of the course's success in this situation?
  • Each participant reported enjoying the course.
  • Each participant achieved the course objectives.
  • Each participant found the course content challenging.
  • Each participant reported that they would recommend the course to others.
Well... If I had to make a single choice among all these valid answers, I would tick off the second statement. But then, maybe there are plenty of very experienced participants who have achieved the course objectives even before the course starts, and sort of waste their time learning nothing new... So should I pick "each participant found the course content challenging"? But then, what if "challenging" is used here in a negative sense, meaning that participants found the course way too hard to follow? And what is meant by "success", anyway - that everyone leaves the course with the required level of knowledge, or that everyone has learned something new in the course?
Questions upon questions. I am growing increasingly uncomfortable about the idea of having my professional competencies judged on the basis of 10 out-of-context questions that use unclear, undefined terminology.
I take the test several times and come up with vastly different results, ranging from 4 to 9 "correct" answers out of 10. This is upsetting - after all, I am a highly experienced, much appreciated expert in the field this is supposed to quiz me about. Maybe the person who has designed it needs a little help with psychometric testing techniques? A good place to start is this Wikipedia article (retrieved on 1 July 2013). The central standards of educational testing are validity and reliability. Reliability means that the test measures things consistently - across individuals, across contexts and over time. Sloppily designed questions and vague expressions without definitions pose massive threats to reliability. If I take the test several times and the results vary massively, in no intelligible way, then there must be something wrong with its reliability.
Validity is about the test actually measuring what it is supposed to measure - in this case, training skills. My impression is that the current test reveals mainly its designer's difficulties in coming up with an effective test design - not my training skills. A good way to ensure a test is valid is to pre-test it with a crowd of people - if the results make no sense, then you need to redesign it. And maybe 10 questions are really not enough to capture complex skills.
In any case, at the end of my quiz exercise, I have decided I prefer not to be on a roster that appears to reduce highly complex skills to a lucky tick-box game. But it seems I can't remove my profile. Oh well! There is a "contact us" link. That is where I'll go next.
