
Can professional assessments match the pace of AI development?

7 May 2024


There is currently a lot of discussion about AI development and its impacts on assessments - and for good reason.

The developments in artificial intelligence are accelerating at an exponential rate.

More and more companies are using AI to fulfil mundane, everyday tasks, but do professionals have the competency to utilise the technology accurately?

As developments in artificial intelligence persist, this skills gap will only widen.

Therefore, it's imperative that professional qualifications assess candidates' competency in AI use.

But how can regulatory bodies keep qualifications relevant despite the rapid pace of AI development, while maintaining ethical standards in assessment regulations?

That’s what we’re trying to determine.

Fortunately, Kaplan Assessments designs and develops tailored professional qualifications to ensure professionals have the skills required for modern working.

Get in touch with our team today to find out more.

The ethics of using AI in assessments

It’s no secret that candidates are penalised for using AI to complete qualifications.

Although it's not usually feasible in a controlled test environment due to rigorous security settings, candidates can potentially use AI in their qualifications if the assessment is project- or portfolio-based. Another opportunity arises if the assessment is remotely proctored - essentially taken from home on the candidate's own computer.

If this option is chosen, security checks are undertaken before assessment day and the candidate’s behaviour is observed throughout - but it’s still not 100% secure.

If candidates are found to have used AI to answer questions, there can be serious repercussions, such as:

  • A reduction in marks - affecting the overall grade
  • A mark of zero for the assessment - again, impacting the final grade
  • In extreme cases, expulsion

The reasons why the use of AI to answer assessments is forbidden are pretty obvious - especially in terms of fairness.

But speaking of fairness, the use of AI raises further questions, such as: how fair is it to use AI to create assessments when using it to complete them is a penalised offence?

It doesn’t seem very fair to us…

Plus, when it comes to marking assessments, should AI be used?

When we think about AI bias, the answer is apparent. It’s not often discussed, but AI is not neutral when assessing what it’s presented with. Trained on data that reflects existing cultural and societal prejudices, it can reproduce those biases in its judgements.

When we consider the fairness both expected and required within regulatory frameworks, AI bias falls short of the ethical standards qualifications demand.

Having said that, can we say for certain that humans are completely fair and neutral?

It’s definitely food for thought…

The pace of AI development and assessments

The aim of professional assessments is to ensure competency in real working environments.

How far does AI reflect the practical applications that workers face day-to-day?

According to a 2023 report by McKinsey, one-third of respondents said their business regularly uses AI for at least one function.

With more and more industries adopting AI, a skills gap will quickly emerge that needs to be addressed.

It leaves the question: What is needed from modern-day professional assessments?

For example, should part of the assessment be dedicated to testing the competency of AI use or tools specific to that industry?

But with the lightning pace of AI development, will qualifications be able to keep up?

Especially when we consider that assessments may only be reviewed every three years, it’s likely that they will quickly become outdated.

Perhaps one way regulatory bodies could use AI to their advantage is to automate a continual determination of assessment relevancy.

Humans would most likely still need to verify what has been flagged as outdated, but this approach could save a great deal of time and resources.
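As a purely hypothetical sketch of what such flagging could look like - the topics, the deprecated-terms list, and the keyword-matching approach are all invented for illustration, and a real system would need far richer signals:

```python
def flag_outdated_topics(syllabus_topics, deprecated_terms):
    """Flag syllabus topics that mention terms a review board has marked
    as outdated, so a human examiner can double-check them.

    Illustrative only - simple keyword matching, not a real product.
    """
    flags = []
    for topic in syllabus_topics:
        hits = [term for term in deprecated_terms if term.lower() in topic.lower()]
        if hits:
            flags.append((topic, hits))
    return flags

# Invented example syllabus and deprecated-terms list.
syllabus = [
    "Manual ledger reconciliation",
    "Cloud bookkeeping tools",
    "Fax-based client onboarding",
]
deprecated = ["fax", "manual ledger"]
print(flag_outdated_topics(syllabus, deprecated))
# → [('Manual ledger reconciliation', ['manual ledger']),
#    ('Fax-based client onboarding', ['fax'])]
```

The output would then form a shortlist for human reviewers, rather than triggering any automatic change to the assessment.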

How industries are utilising developments in artificial intelligence

35% of global businesses are using AI - with many more expected to start taking advantage of automated tasks in the next few years.

We’ve taken a look at how developments in artificial intelligence are impacting three of the biggest industries:

  • Accounting
  • Law
  • Cybersecurity

How will AI affect accounting professionals?

There are numerous ways accountants can implement AI to their advantage.

For example, various repeatable cloud-based accounting tasks can easily be automated, including:

  • Bookkeeping
  • Data entry
  • Auditing
  • Reporting

Automating these tasks allows accounting professionals to focus on strategy and new opportunities.

Another way the talent in this sector can use artificial intelligence to their benefit is by having it analyse large data sets.

As many AI tools are open source, data security could be a concern - but there is real potential to identify patterns that may not be instantly recognisable to humans, prevent human error, and detect fraud.
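To give a flavour of the kind of pattern-spotting described above - not any specific accounting product - a basic statistical outlier check over transaction amounts might look like this (the sample data and threshold are invented for the example):

```python
from statistics import mean, stdev

def flag_outliers(amounts, threshold=3.0):
    """Flag transaction amounts that sit unusually far from the average.

    A very simple stand-in for the checks an AI-assisted audit tool
    might run; real fraud detection is far more sophisticated.
    """
    mu = mean(amounts)
    sigma = stdev(amounts)
    if sigma == 0:
        return []  # all amounts identical - nothing stands out
    return [a for a in amounts if abs(a - mu) / sigma > threshold]

# Invented sample data: routine invoices plus one suspicious payment.
invoices = [120.0, 135.5, 128.0, 131.2, 125.9, 129.4, 9850.0]
print(flag_outliers(invoices, threshold=2.0))  # → [9850.0]
```

The value of automating even a crude check like this is scale: software can scan every transaction, where a human auditor can only sample.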

As we can see, accounting professionals can employ AI to their advantage. However, in such a high-stakes industry, it’s vital they have the skills required to use the automation accurately, as the consequences of errors could be disastrous.

How will AI affect the legal industry?

Legal professionals can utilise AI in various ways, such as drafting contracts and creating skeletons of judgments.

That’s not to say that those who work in the legal sector will have their jobs replaced by artificial intelligence. Rather, in a similar vein to accountancy, professionals in the legal industry can use AI to enhance efficiency.

It’s fair to say that the clients lawyers work with may also have an understanding of AI and will start asking questions.

Therefore, professionals need a thorough grasp of how AI works in this industry so they can discuss it with clients - not only its functions for the sector, but also the risks involved in using it.

The rapid pace of AI development means that upskilling is required to ensure complete competency within the field.

How will AI affect the cybersecurity sector?

Most of us enjoy the advantages of AI in cybersecurity in our day-to-day lives - such as facial recognition software in place of passwords.

However, as great as cloud-based computing is for sectors like finance and accounting, it has created new threats and vulnerabilities. These include:

  • Data breaches
  • Malware and ransomware
  • Phishing attacks

AI is a great tool for the cybersecurity industry to automate defence strategies against these cyber threats, such as secure authentication and sifting through data to more quickly and easily identify malicious activity.
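To make "sifting through data" concrete, here is a minimal, hypothetical sketch - not a real security product - that flags source IPs with repeated failed logins (the event format and threshold are assumptions made for the example):

```python
from collections import Counter

def flag_suspicious_ips(events, max_failures=5):
    """Flag IP addresses with an unusually high number of failed logins.

    A toy version of the log-sifting an automated defence tool might do;
    real systems correlate many more signals than a single counter.
    """
    failures = Counter(ip for ip, succeeded in events if not succeeded)
    return sorted(ip for ip, count in failures.items() if count >= max_failures)

# Invented log data: (source IP, login succeeded?)
log = [("10.0.0.5", True)] + [("203.0.113.9", False)] * 6 + [("10.0.0.7", False)]
print(flag_suspicious_ips(log))  # → ['203.0.113.9']
```

As with the auditing example, the point is speed and coverage: software can watch every login event in real time, flagging candidates for a human analyst to investigate.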

However, the rapid development of AI within cybersecurity means professionals need to continually develop their skills to ensure they have the competence required to defend against these ever-evolving threats.

Kaplan Assessments can ensure your workforce has the skills required to keep up with AI development

With the ever-increasing pace of development in artificial intelligence, it’s never been more important to prioritise professional development - preventing a skills gap from weakening your workforce.

Yet with the fast-paced nature of AI developments, can professional qualifications keep up to date?

One way to ensure this is innovation within assessments, but how do we make sure the innovation isn’t taken too far? Qualification regulations need to remain fair and unbiased, and AI complicates these boundaries.

Kaplan Assessments can provide tailored solutions to ensure your professionals have the necessary skills to utilise AI within your industry with accuracy.

We design, develop, and deliver professional assessments for various sectors - will you be next?

Reach out to the Kaplan Assessments team to find out how we can help your team’s professional development.
