
Is it possible to impose responsible AI ethics in assessments?

10 June 2024


AI is a topical subject in the assessments community and for good reason.

It is changing how we approach assessment design, development, and delivery. But is there a way to do this responsibly and uphold assessment ethics?

Assessments need to meet certain criteria, and responsible AI offers a way to satisfy those requirements.

But how do we translate the standards set by responsible AI into assessment reality?

Nikki Bardsley, Head of Client Solutions & Quality at Kaplan Assessments, recently chaired the e-Assessment Association’s (eAA) AI Special Interest Group event ‘Establishing Ethics and Responsible AI Standards in Assessments’ which explored these topics.

In this blog, we’ll hear from Nikki, explore whether it is possible to embed responsible AI ethics in assessments, and discuss how we use AI in our own approach to assessments.

To learn more about how we use AI in our professional qualifications, contact Kaplan Assessments today.


What is responsible AI in assessments?

AI can be used in assessments for automated scoring and plagiarism detection, but, used carelessly, it can do more harm than good.

For example, the training data used to build these automated tools can introduce bias that affects a candidate’s test score and the overall fairness of the qualification.

That’s where responsible AI (RAI) comes in.

RAI is:

An approach to designing and developing AI tools and systems based on a set of ethical principles.

These can include trust, safety, and transparency. The aim is to reduce AI bias and other risks that come from using AI, while amplifying its benefits.

Fortunately, this approach is already being used by organisations in the assessments sector, such as Duolingo.

How Duolingo has implemented responsible AI ethics in their tests

At the eAA AI Special Interest Group, Dr Jill Burstein, Principal Assessment Scientist at Duolingo, discussed ‘Responsible AI Standards for Assessments’.

She explained how the Duolingo team used a human-centred AI (HCAI) approach to design their Duolingo English Test (DET) - which is used by more than 5,000 programs in more than 100 countries.

It won’t come as a surprise that language tests are classed as high-stakes, as they can change the course of someone’s life. Because of this, Duolingo used RAI to mitigate risks to test scores by:

  • Evaluating accuracy
  • Reducing bias
  • Accurately detecting cheating

To achieve this, they worked with stakeholders from a range of backgrounds, such as:

  • Machine learning experts
  • Psychometricians
  • Language assessors
  • External AI ethics experts from computer science

This collaborative approach to AI in assessments has allowed the team to implement AI rigorously within the qualification, ensuring regulatory requirements are still met without jeopardising the assessment of candidate competence.

The team also publicly disseminated the DET standards - the first for an assessment program!

By doing this, the team demonstrates professional responsibility and helps other organisations by sharing ideas on how to approach AI ethics in assessments.

However, as AI development advances, it’s important to update assessment standards and practices to ensure they remain relevant and valid.

Are responsible AI principles already within assessments?

When designing assessments, there are numerous regulatory requirements to meet, such as:

  • Reliability
  • Validity
  • Robustness
  • Fairness

Interestingly, many of these assessment requirements overlap with the principles of responsible AI.

Therefore, can we surmise that it is the technology that’s new, rather than the principles and ethics?

If you are trying to find ways to bring responsible AI into your qualification design, you may find you already have the core values in place; it’s just the technology you need help with.

How to implement a responsible AI framework in your assessments

It’s no secret that when designing and developing professional qualifications, the candidate should always be at the forefront.

For example, if you design assessments to be neuro-inclusive and maintain that focus on the candidate, you are already applying the principles of responsible AI.

Here at Kaplan Assessments, we work with you to ensure that the design, development, and delivery of professional qualifications are tailored to your workforce, preventing skills gaps.

Having chaired the ‘Establishing Ethics and Responsible AI Standards in Assessments’ event, Nikki left inspired, with new ideas for our approach to AI in assessment development.

She says:

“Here at Kaplan Assessments, RAI is pivotal in the design and delivery of qualifications and assessments.

“For example, GenAI (Generative AI) can be used to provide inspiration to a question author but cannot be relied upon to create questions that are fit for purpose or without bias.

“This highlights how essential it is to keep human expertise in the loop, providing critique and assuring the assessment remains fair, valid, and reliable.

“The Special Interest Group emphasised to me that in a sector where regulation is absolutely essential to protect the integrity of high-stakes assessments, there should be no difference in our principles in design and delivery with or without the use of GenAI.”

Is collaboration the answer to responsible AI ethics within professional assessments?

AI can be extremely difficult to implement within professional qualifications, especially when we consider its rapid pace of development.

According to the International Trade Administration, the UK’s AI market is worth over £16.8 billion and is projected to grow to more than £800 billion by 2035!

For smaller organisations, implementing AI within day-to-day tasks can be even harder.

Therefore, should we all follow the same approach as Duolingo and share our practices to ensure responsible AI use in assessments globally?

Kara McWilliams, Vice President of Product Innovation and Development and AI Labs at ETS, also spoke at the e-Assessment Association’s AI Special Interest Group.

She informed attendees that one of ETS’s core principles is global collaboration and impact: essentially, complete transparency about RAI approaches.

We can’t help but ask the question: Will organisations want to share their information?

If there were a set of global guidelines that could be adapted to different businesses (depending on size or goals), we could all ensure best practice for candidates across the globe.

This would not only prevent skills gaps but also minimise the risks of AI and maintain ethical responsibility for everyone.

Kaplan Assessments can help you maintain responsible AI in your assessments

The eAA AI Special Interest Group is just one of the many events that Kaplan Assessments is involved in this year.

If your workforce needs the skills required for AI use, did you know that our expert team can design, develop, and deliver tailored assessments to meet that need?

We can collaborate with you to ensure your professional qualification delivers complete competence in the skills required for your sector, preventing any skills gaps.

If you’re interested in finding out more about how we can help your workforce, get in touch with our expert team today.
