Expertise
Assessment and psychometric expertise
Our assessment and psychometric expertise is used to produce influential, groundbreaking work on a range of assessments, including high-profile, high-stakes licensing exams.
Some examples of the work we have done are outlined below.
New assessment methodology for licensing exams in England and Wales
The Qualified Lawyers Transfer Scheme (QLTS) is an assessment that enables lawyers from other jurisdictions, and barristers of England and Wales, to qualify as solicitors of England and Wales.
Kaplan, working with the Solicitors Regulation Authority, has been solely responsible for the design, development and delivery of this exam since its inception in 2010.
The QLTS was made up of new assessments testing applied knowledge through multiple-choice tests, and skills and applied knowledge through practical oral and written exercises, including the use of standardised clients. It drew partly on testing methodologies used in medicine and in other jurisdictions, but also contained new elements.
Essential to the design was ensuring that key quality criteria for reliability and accuracy were met.
The new assessment proved so successful that its design was highly influential in the design of the new Solicitors Qualifying Exam (SQE), introduced in 2021.
Piloting the development of a new licensing exam
In 2018, Kaplan was appointed as the sole provider by the profession's regulatory body, the Solicitors Regulation Authority (SRA), to design, develop and deliver the new Solicitors Qualifying Exam (SQE). This new centralised assessment was to be the sole route to qualification as a solicitor of England and Wales.
Key criteria that the new exam had to meet were validity, reliability, precision, cost-effectiveness and fairness. The exam was to include a multiple-choice component to examine applied legal knowledge and a skills component to examine legal skills and applied legal knowledge. However, many aspects of the design needed testing in a pilot before final decisions were taken.
For example, issues that the pilot needed to resolve about the multiple-choice test (SQE1) included the number of papers and questions. Stakeholders were particularly concerned that, where a paper covered more than one subject, candidates might pass through significant compensation between subjects, with strong performance in one subject offsetting weak performance in another.
Detailed statistical analysis of the pilot underpinned these decisions. For instance, subject scores were correlated against paper scores to assess the extent of compensation between subjects. Data on the reproducibility of test scores (reliability, using coefficient alpha, and precision, using the standard error of measurement (SEM)) were calculated and, using generalisability theory, modelled for different test lengths and evaluated against current best assessment practice.
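To illustrate the kind of reliability statistics described above, here is a minimal sketch of coefficient alpha, the SEM, and a projection of reliability at different test lengths. It uses the Spearman-Brown formula as a simple stand-in for a full generalisability-theory analysis, and the function names and data are illustrative only, not drawn from the actual pilot.

```python
import numpy as np

def cronbach_alpha(scores):
    # Coefficient alpha for an (n_candidates, n_items) matrix of item scores.
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)          # sample variance per item
    total_var = scores.sum(axis=1).var(ddof=1)      # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def sem(scores):
    # Standard error of measurement: SD of totals * sqrt(1 - reliability).
    totals = np.asarray(scores, dtype=float).sum(axis=1)
    return totals.std(ddof=1) * np.sqrt(1 - cronbach_alpha(scores))

def projected_alpha(alpha, length_ratio):
    # Spearman-Brown: reliability projected for a test scaled by length_ratio
    # (e.g. 2.0 doubles the number of items, 0.5 halves it).
    return length_ratio * alpha / (1 + (length_ratio - 1) * alpha)

# Illustrative item-level data: 4 candidates x 4 dichotomous items.
scores = [[1, 1, 1, 0], [1, 0, 1, 1], [0, 0, 0, 0], [1, 1, 1, 1]]
print(cronbach_alpha(scores), sem(scores), projected_alpha(cronbach_alpha(scores), 2))
```

Lengthening a test raises projected reliability and lowering it does the reverse, which is how reliability can be modelled for different candidate test lengths before committing to a final paper structure.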
Read more: Case study 2: the SQE1 pilot