So last week I was delivering a Train the Trainer course to an audience made up of maintenance engineering instructors and assessors. I always like to open a course with a discussion or an icebreaker game of sorts, and I started this particular course with a deceptively simple question to the participants:
"What is competence?"
It is a simple question, but most of the participants struggled to answer. Some said it depends on the amount of experience. Others said it hinges on whether you hold official certificates. A few argued that it depends on whether you have a Part-66 licence. All reasonable starting points — but none of them captured the full picture.
I did not give the team an answer just yet. Instead, I posed the next question:
"How do you assess competence?"
In other words, how do you gauge whether someone is competent or not? This is where one of the participants half-jokingly told me, "George, this is the form with the checkboxes we fill in every two years or so — that is the competency assessment. If you fill that form, you are deemed to be competent." When asked, most of the other participants agreed.
And that answer made me think. Have we really managed in our industry to reduce competency assessment to a checkbox-filling exercise conducted every two years?
Although there are organisations out there that do take competency assessment seriously, others treat it as an additional administrative burden: another paperwork exercise stacked on top of the mountains of documentation that already need filling in. Those thoughts inspired this article, in which I discuss what competency actually means, how it should be assessed, and why getting it wrong carries real safety consequences.
What Is Competence? The EASA Definition
Let us start with what the regulation actually says. According to GM1 to Annex II (Part-145) Definitions, competency is defined as:
"A combination of individual skills, practical and theoretical knowledge, attitude, training, and experience."
Read that definition carefully. It identifies six distinct elements: individual skills, practical knowledge, theoretical knowledge, attitude, training, and experience. This is not a vague aspiration; it is a structured, multi-dimensional definition that deliberately recognises competence as a holistic state. A person is not competent simply because they have attended a training course, nor because they hold a licence, nor because they have twenty years of experience. Competence emerges when all of these elements combine and are demonstrable.
This distinction matters because many organisations, often unintentionally, equate one or two of these elements with the whole. A completed training record becomes a proxy for competence. A Part-66 licence becomes proof of competence. Years on the job become evidence of competence. But the regulation is telling us something more nuanced: competence is the sum of all these parts working together. Remove one, and you do not have competence — you have a gap.
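To make the "remove one and you have a gap" point concrete, here is a minimal sketch of the definition as a data model (Python 3.10+). Only the six element names come from the GM1 definition; the class, field comments, and example evidence are my own illustration, not anything the regulation prescribes.

```python
# A sketch of the GM1 definition as a data model: competence only exists
# when evidence is held for every element. The class and example values
# are illustrative; only the six element names come from the regulation.
from dataclasses import dataclass, fields

@dataclass
class CompetenceProfile:
    """Evidence held for each element; None means the element is missing."""
    individual_skills: str | None = None      # e.g. observed on-the-job performance
    practical_knowledge: str | None = None    # e.g. supervised task completion
    theoretical_knowledge: str | None = None  # e.g. knowledge test results
    attitude: str | None = None               # e.g. observed safety behaviour
    training: str | None = None               # e.g. initial/recurrent training records
    experience: str | None = None             # e.g. logged experience records

    def gaps(self) -> list[str]:
        """Competence requires every element; anything missing is a gap."""
        return [f.name for f in fields(self) if getattr(self, f.name) is None]


# A licence, training, and years served are not enough on their own:
profile = CompetenceProfile(
    theoretical_knowledge="Part-66 module exams passed",
    training="type training completed",
    experience="12 years line maintenance",
)
print(profile.gaps())  # ['individual_skills', 'practical_knowledge', 'attitude']
```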
Competence Across Industries — Aviation Is Not Alone
Aviation is not the only safety-critical industry that wrestles with defining and assessing competence. Looking at how other sectors approach this challenge can sharpen our perspective.
In the oil and gas industry, the International Association of Oil and Gas Producers (OGP) defines competence as a person's ability to accurately and reliably meet the performance requirements for a defined role. Their definition explicitly includes a behavioural element — the ability to apply personal skills and knowledge in typical workplace situations, and critically, the ability to recognise personal limits and seek help when appropriate. The ISO standard for the petroleum industry, ISO/TS 17969, reinforces this by framing competence as the ability to apply knowledge and skills to achieve intended results, adding the important caveat that continuing application of competence can be affected by the work environment, including pressures, relationships, and conflicts that influence attitude and performance.
In healthcare, competency frameworks similarly emphasise the combination of knowledge, clinical skills, professional attitudes, and the demonstrated ability to perform under real-world conditions. A surgeon is not deemed competent solely because they passed their exams; they must demonstrate practical proficiency under supervision before being allowed to operate independently.
Within aviation itself, ICAO defines a competency framework as a selected group of competencies for a given aviation discipline, where each competency has an associated description and observable behaviours. ICAO's approach through Competency-Based Training and Assessment (CBTA) has shifted the focus from time-served to competence-demonstrated, particularly in pilot training through Evidence-Based Training (EBT).
Notice the common thread across all of these industries. Competence is never defined as a single attribute. It is always a combination of knowledge, skills, behaviour, and demonstrated performance. The EASA Part-145 definition sits comfortably within this international consensus. But where aviation maintenance sometimes falls short is in the translation from definition to practice — particularly in how we assess competence.
What Does the Regulation Actually Require?
This is where many organisations get it wrong, not because the regulation is unclear, but because they have not read it carefully enough — or they have read it and reduced it to its most convenient interpretation.
AMC1 145.A.30(e) sets out the competency assessment objectives with considerable clarity. It states that the procedure should require that planners, mechanics, specialised services staff, supervisors, certifying staff, and support staff — whether employed or contracted — are assessed for competency before unsupervised work commences, and that competency is controlled on a continuous basis.
That last phrase deserves emphasis: on a continuous basis. Not every two years. Not once during onboarding and then never again. Continuously.
The AMC goes on to specify that competency should be assessed by evaluation of on-the-job performance and/or testing of knowledge by appropriately qualified personnel, alongside a review of records for basic training, organisational training, task training, and product type and differences training, as well as experience records. The result of this assessment should determine the scope of tasks the individual is authorised to perform, supervise, or sign off, and whether there is a need for additional training.
For a proper competency assessment, the organisation should confirm four critical points. First, that adequate initial and recurrent training has been received by the staff and recorded, to ensure continued competency throughout the duration of employment or contract. Second, that all staff can demonstrate knowledge of, and compliance with, the maintenance organisation's procedures as applicable to their duties. Third, that all staff can demonstrate an understanding of safety management principles, including human factors related to their job function. And fourth, that job descriptions contain sufficient criteria to enable the required competency assessment; in other words, you cannot assess competence if you have not clearly defined what competence looks like for each role.
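As a rough illustration of how those four points could be made checkable rather than merely tickable, here is a minimal sketch. The record structure and key names are hypothetical inventions of mine; only the four check descriptions paraphrase the AMC.

```python
# A sketch of the four verification points as a checkable list rather
# than a tickable form. The record keys are hypothetical; the check
# descriptions paraphrase AMC1 145.A.30(e).
def assessment_findings(record: dict) -> list[str]:
    """Return the points that are not satisfied for a given individual."""
    checks = {
        "initial and recurrent training received and recorded":
            record.get("training_current", False),
        "knowledge of and compliance with organisational procedures demonstrated":
            record.get("procedures_demonstrated", False),
        "understanding of safety management and human factors demonstrated":
            record.get("sms_hf_demonstrated", False),
        "job description defines assessable competency criteria":
            record.get("job_criteria_defined", False),
    }
    return [point for point, satisfied in checks.items() if not satisfied]


mechanic = {
    "training_current": True,
    "procedures_demonstrated": True,
    "sms_hf_demonstrated": False,  # e.g. human factors refresher overdue
    "job_criteria_defined": True,
}
for finding in assessment_findings(mechanic):
    print("Finding:", finding)
```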
Mapping the Requirements to the Elements of Competence
Here is where the picture comes together, and where you can start to see why a checkbox form falls so far short. Let us map the AMC requirements against each element in the EASA definition of competence; a condensed sketch after the breakdown pulls the mapping together.
Individual Skills — The AMC calls for evaluation of on-the-job performance. This is the practical, observable demonstration of skill. You cannot assess individual skills through a written test alone; you need to watch the person work. This is why the regulation provides for having the person work under the supervision of another qualified person for a sufficient time to arrive at a conclusion.
Practical Knowledge — This is assessed through the same on-the-job evaluation. Can the person apply what they know in a real maintenance environment? Do they know how to use the tools, follow the procedures, and handle the aircraft documentation correctly? Practical knowledge is not what someone knows in theory; it is what they can do with their hands and their judgment under working conditions.
Theoretical Knowledge — This is where training records and testing come in. Has the person completed the required basic, organisational, and task-specific training? Can they pass a knowledge assessment that verifies they understand the regulatory framework, the applicable maintenance data, and the technical principles behind their work?
Attitude — This is perhaps the most overlooked element and the hardest to assess through a form. Attitude encompasses the person's approach to safety, their willingness to follow procedures, their response when they encounter something they do not understand, and their openness to reporting errors. The AMC's requirement that staff demonstrate an understanding of safety management principles and human factors directly addresses this element. A person who knows the rules but consistently cuts corners does not meet the competency standard, regardless of their technical ability.
Training — The AMC explicitly requires that adequate initial and recurrent training has been received and recorded. Training is not just about having attended a course; it is about having received the right training for the specific job function, and having that training refreshed and updated over time.
Experience — The AMC requires a review of experience records and acknowledges that a written confirmation from a previous approved maintenance organisation can be taken into consideration. Experience provides context and depth that training alone cannot deliver. However, experience without current training or a proper attitude is insufficient.
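The sketch below condenses the element-by-element mapping into a simple lookup table. The keys on the left come from the GM1 definition; the evidence descriptions on the right paraphrase the AMC. The table itself is only an illustrative summary, not regulatory text.

```python
# The element-to-evidence mapping above, condensed into a lookup table.
# Keys are the GM1 elements; values paraphrase the AMC's evidence sources.
EVIDENCE_FOR_ELEMENT = {
    "individual skills":     "evaluation of on-the-job performance",
    "practical knowledge":   "on-the-job evaluation under real working conditions",
    "theoretical knowledge": "training records and testing of knowledge",
    "attitude":              "observed safety behaviour; SMS and human factors understanding",
    "training":              "initial and recurrent training records",
    "experience":            "experience records, incl. confirmation from a previous AMO",
}

for element, evidence in EVIDENCE_FOR_ELEMENT.items():
    print(f"{element:<22} <- {evidence}")
```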
The Competency Assessment Procedure — Getting Beyond the Checkbox
AMC2 145.A.30(e) goes further and lays out what a competency assessment procedure should actually look like. The procedure should specify:

- who is responsible for the assessment process;
- when assessments take place;
- how to give credit from previous assessments;
- how to validate qualification records;
- the means and methods for the initial assessment;
- the means and methods for continuous control of competency, including how to gather feedback on performance;
- the aspects of competency to be observed in relation to each job function;
- the actions to be taken if the assessment is not satisfactory; and
- how to record the assessment results.
Read that list again. It is comprehensive, detailed, and deliberately designed to prevent competency assessment from becoming a superficial exercise. The regulation envisions an active, ongoing process — not a biennial form. It expects organisations to gather feedback on performance, to observe specific competency aspects relevant to each job function, and to have clear corrective actions when someone does not meet the standard.
The regulation also recognises that the depth of assessment should be proportional. For someone recruited from another approved maintenance organisation, a written confirmation from the previous organisation could reduce the duration of the assessment. For someone new to the industry, the assessment may take several weeks. The key is that the organisation makes a genuine judgment about competence based on evidence, not on assumption.
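To make this less abstract, here is a hedged sketch of how such a procedure could be represented, together with the proportionality idea from the paragraph above. All names, roles, durations, and example values are assumptions of mine for illustration; the AMC defines what the procedure must cover, not these specifics.

```python
# A sketch of the AMC2 procedure contents as a configuration object,
# plus the proportionality idea. Every value here is an invented
# example, not a regulatory figure.
from dataclasses import dataclass

@dataclass
class AssessmentProcedure:
    responsible_role: str                    # who owns the process
    initial_methods: list[str]               # means/methods for initial assessment
    continuous_methods: list[str]            # means/methods for continuous control
    observed_aspects_by_function: dict[str, list[str]]
    unsatisfactory_actions: list[str]        # what happens on an unsatisfactory result
    records_location: str                    # how/where results are recorded


def initial_assessment_weeks(new_to_industry: bool, prior_amo_confirmation: bool) -> int:
    """Illustrative proportionality: written confirmation from a previous
    AMO can shorten the supervised period; a newcomer needs longer.
    The week counts are invented, not regulatory values."""
    if prior_amo_confirmation:
        return 1
    return 6 if new_to_industry else 3


procedure = AssessmentProcedure(
    responsible_role="Quality Manager",
    initial_methods=["on-the-job observation", "knowledge test", "records review"],
    continuous_methods=["supervisor feedback", "recurrent training results"],
    observed_aspects_by_function={
        "certifying staff": ["CRS decisions", "use of maintenance data"],
    },
    unsatisfactory_actions=["additional training", "temporary scope limitation"],
    records_location="individual competency file",
)
print(initial_assessment_weeks(new_to_industry=True, prior_amo_confirmation=False))  # 6
```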
Why the Checkbox Culture Is a Safety Risk
When competency assessment is reduced to a checkbox exercise, several things happen — none of them good.
First, the organisation loses visibility of actual workforce competence. If everyone is assessed as competent by default, the training needs analysis becomes meaningless. You cannot identify gaps if your system is designed to confirm that no gaps exist.
Second, it creates a false sense of compliance. The forms are filled, the files are complete, and the auditor finds the records in order. But the substance behind those records may be hollow. This is precisely the kind of systemic weakness that Safety Management Systems are designed to catch — yet if the SMS is itself running on superficial data, the feedback loop breaks down.
Third, and most critically, it is a safety risk. Aircraft maintenance is unforgiving. A mechanic who lacks current knowledge of a modification, a supervisor who does not understand the latest revision of a procedure, or a certifying staff member whose practical skills have degraded — any of these scenarios can lead to an error that compromises airworthiness. The competency assessment exists to catch these issues before they become incidents.
With the full integration of SMS into Part-145 now a regulatory reality — and with competent authorities shifting from evaluating plans to evaluating performance — organisations can no longer afford to treat competency assessment as an administrative formality. The regulators are looking for demonstrated effectiveness, not just documented intent.
Making Competency Assessment Work in Practice
So what does a meaningful competency assessment look like? It does not require an enormous bureaucracy. It requires intentionality and a genuine commitment to the process.
In practice, it comes down to a handful of disciplined habits:

- Start with robust job descriptions that clearly define the competency criteria for each function. If you do not know what good looks like for a role, you cannot assess whether someone meets the standard.
- Implement a structured initial assessment for all new personnel, whether new hires or staff moving into a new role, covering theoretical knowledge, practical skills, and an observed period of supervised work.
- Build continuous assessment into daily operations through supervisor feedback, peer observation, and performance monitoring (a simple sketch of this follows the list).
- Use recurrent training not as a box to tick, but as a genuine opportunity to refresh knowledge and address identified gaps.
- Treat the competency assessment as a living document, updated whenever significant changes occur: new aircraft types, revised procedures, regulatory amendments, or organisational restructuring.
- Establish clear, non-punitive actions for cases where competency is found to be lacking (retraining, mentoring, or temporary scope limitations) rather than ignoring the finding or manipulating the result to avoid inconvenience.
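As a final illustration, here is a minimal sketch of what continuous control could look like as data rather than as a biennial form: observations accumulate, and repeated unsatisfactory observations on an aspect raise a training need instead of being ignored. The staff names, aspects, and threshold are invented assumptions for illustration only.

```python
# A sketch of continuous control of competency: supervisor observations
# accumulate over time, and repeated unsatisfactory observations on an
# aspect raise a training need. Names, aspects, and the threshold are
# invented for illustration.
from collections import defaultdict

observations: dict[str, list[tuple[str, bool]]] = defaultdict(list)

def record_observation(staff: str, aspect: str, satisfactory: bool) -> None:
    """Log one supervisor observation against a named competency aspect."""
    observations[staff].append((aspect, satisfactory))

def training_needs(staff: str, threshold: int = 2) -> set[str]:
    """Aspects observed as unsatisfactory at least `threshold` times."""
    misses: dict[str, int] = defaultdict(int)
    for aspect, satisfactory in observations[staff]:
        if not satisfactory:
            misses[aspect] += 1
    return {aspect for aspect, count in misses.items() if count >= threshold}

record_observation("J. Smith", "use of maintenance data", satisfactory=False)
record_observation("J. Smith", "use of maintenance data", satisfactory=False)
record_observation("J. Smith", "tool control", satisfactory=True)
print(training_needs("J. Smith"))  # {'use of maintenance data'}
```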
The Bigger Picture
Competency and competency assessment are not merely regulatory requirements to be endured. They are the mechanism through which an organisation ensures that the people working on aircraft are genuinely capable of doing so safely. When done properly, competency assessment protects the individual — by ensuring they are not placed in situations beyond their capability — and protects the organisation — by providing a defensible record that its workforce meets the regulatory standard.
The next time you fill in a competency assessment form, or you are asked to design a competency assessment procedure, remember the definition: a combination of individual skills, practical and theoretical knowledge, attitude, training, and experience. Ask yourself honestly — does your process genuinely evaluate all of these elements? Or has it quietly become the form with the checkboxes?
If the answer makes you uncomfortable, that is the starting point for improvement. And that improvement is not optional. It is what keeps aircraft safe and your organisation compliant.
Frequently Asked Questions
What is competency in aviation maintenance under EASA Part-145?
Under EASA Part-145, competency is defined as a combination of individual skills, practical and theoretical knowledge, attitude, training, and experience. This definition is found in GM1 to Annex II (Part-145) Definitions and establishes that competence is not a single attribute but a holistic state requiring all six elements to be present and demonstrable.
How often should competency be assessed in a Part-145 organisation?
AMC1 145.A.30(e) requires that competency is controlled on a continuous basis, not solely at fixed intervals. While many organisations conduct formal assessments periodically, the regulation expects an ongoing process that includes on-the-job observation, performance feedback, and recurrent training — complemented by structured formal reviews.
What should a Part-145 competency assessment procedure include?
AMC2 145.A.30(e) specifies that the procedure should define who is responsible for assessments, when they take place, how to validate qualifications, the methods for initial and continuous assessment, which competency aspects to observe per job function, corrective actions for unsatisfactory results, and how results are recorded.
How does competency assessment relate to SMS in Part-145?
Competency assessment feeds directly into the Safety Management System. Properly conducted assessments provide data on workforce capability, which informs hazard identification, risk assessment, and training needs analysis. Conversely, a superficial competency assessment undermines the SMS by masking competency gaps that could become safety risks.