Compliance-o-meter: From abstraction to structured granular assessment
– Vinod Kothari and Payal Agarwal | corplaw@vinodkothari.com
In risk assessment, effectiveness testing, compliance management, and other areas requiring qualitative assessment, one often makes abstract statements such as: we have very effective controls; we have strong risk management practices; we follow the best practices in compliance management. Very often, these are pure abstractions. How do we adopt a structured approach that allows a more granular, methodical way of benchmarking ourselves?
Unlike quantitative parameters, there are no set methods or approaches for qualitative assessment. However, every qualitative assessment can be backed by: identifying the elements to be studied; the ingredients or check-points within each element; the weights of the respective elements in the overall assessment framework; and the assignment of scores, based on the weights and the observations for each check-point, eventually arriving at an aggregate score. That is, a purely qualitative assessment may be converted into a score sheet.
One may create one’s own methodology; here is a suggested one. Before proceeding, we may submit that the same methodology used for effectiveness assessment may also be used for risk assessment: a high score in an effectiveness assessment is a positive indicator, while a high score in a risk assessment signals a threat.
The suggested assessment methodology involves:
- Identification of elements: Every assessment can be decomposed into the elements that underlie it. Take a very simple example: the quality of board minutes prepared in a large company. The quality is purely an abstraction, which can be granularly split into, at the least, the timeliness of minuting, comprehensiveness, ease of understanding, compliance with the law and standards, etc. Similarly, in assessing the effectiveness of controls on insider trading, one may decompose the overall control into several elements such as identification of UPSI, sharing of UPSI, management of Designated Persons (DPs), codes and policies, etc. Note that the more granular the elements, the better the final result.
- Weights of the elements: The next point to understand is whether each of the elements is equally weighted, or whether they have differential relevance or importance in the overall matter being assessed. For example, if the subject matter of assessment is “quality of minuting”, compliance with law and standards may be perceived as carrying a higher weight than, say, comprehensiveness or ease of understanding. The task of assigning weights may, once again, become qualitative – therefore, it is necessary to have a methodical approach towards the weights as well. The weights may be determined based on, in descending order: whether the element may result in penal consequences or reputational loss; whether it may undermine controls, or the correctness or reliability of the subject matter; and whether it is good to have but not a must-have.
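The descending-order criteria for weights can be sketched in code. The 3/2/1 numeric mapping below is an assumption, chosen to match the 1-3 weight range used in the worked example later in this note; the function name and flags are illustrative, not from the article.

```python
# Hypothetical sketch: mapping the suggested weight criteria to numeric weights.
def assign_weight(penal_or_reputational: bool,
                  undermines_controls_or_reliability: bool) -> int:
    """Return a weight, in descending order of severity of the criterion met."""
    if penal_or_reputational:
        return 3  # may result in penal consequences or reputational loss
    if undermines_controls_or_reliability:
        return 2  # may undermine controls or reliability of the subject matter
    return 1      # good to have, but not a must-have

# e.g. "compliance with law and standards" in minuting:
assign_weight(True, False)   # 3
# e.g. "ease of understanding":
assign_weight(False, False)  # 1
```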
- Ingredients or check-points for each element: The check-points for each element are an even more granular list of the activities, processes, policies, etc., that make up the respective element. For instance, in the context of PIT (insider trading) controls, the check-points under DP management may include the manner of categorising DPs, the periodicity of updating the list of DPs, maintenance of the DP database, etc.
- Scores: Once the base work of creating the assessment list is done, actual scores are assigned based on the company's level of performance on each check-point. Depending on whether the assessment is a risk assessment, a compliance assessment or a process review, a scoring parameter may be created, for instance:
| Scoring parameter | Score |
| --- | --- |
| Not compliant/ no practice exists | 0 |
| Meeting minimum compliance/ practice | 1 |
| Good practices (indicates industry practice) | 2 |
| Gold practices (indicates leadership practices) | 3 |
- Weighted score: The score allotted to each check-point has to be multiplied by the weight assigned to that check-point, to arrive at the weighted score of the respective check-point. For instance, assuming there are five check-points in an element, the weighted scores can be derived as below:
| Check-points | Weights | Scores | Weighted score |
| --- | --- | --- | --- |
| A1 | 3 (maximum) | 1 | 3 |
| A2 | 2 | 0 (minimum) | 0 |
| A3 | 3 | 3 (maximum) | 9 |
| A4 | 3 | 2 | 6 |
| A5 | 1 (minimum) | 2 | 2 |
| Total | 12 | | 20 |
- Maximum score and actual score: The weighted scores obtained against the check-points of an assessment element sum up to the actual score of that element. This is then compared against the maximum score for the element and expressed as a percentage. For instance, in the table above, the actual score of element ‘A’, made up of check-points ‘A1’ to ‘A5’, sums up to 20. The maximum score obtainable for element ‘A’ is the maximum score for a check-point (3) multiplied by the maximum weight (3), i.e. 9, multiplied by the total number of check-points (5), i.e. 45. The percentage score of the element is then (actual score/ maximum score)*100, which works out to (20/45)*100, i.e. approximately 44.4% in this example.
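The weighted-score arithmetic above can be sketched in a few lines of code, reproducing the worked example for element ‘A’. The function name and constants below are illustrative assumptions, not terminology from the article.

```python
# Sketch of the weighted-score arithmetic described above.
MAX_SCORE = 3   # "Gold practices" on the 0-3 scoring scale
MAX_WEIGHT = 3  # highest weight on the 1-3 weighting scale

def element_score(weights, scores):
    """Return (actual, maximum, percentage) for one assessment element."""
    actual = sum(w * s for w, s in zip(weights, scores))
    maximum = MAX_SCORE * MAX_WEIGHT * len(weights)
    return actual, maximum, round(100 * actual / maximum, 1)

# Element 'A' with check-points A1..A5, as in the table above:
weights = [3, 2, 3, 3, 1]
scores = [1, 0, 3, 2, 2]
actual, maximum, pct = element_score(weights, scores)
# actual = 20, maximum = 45, pct = 44.4
```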
- Radar chart: Once the scores are assigned, and the percentage score for each element has been calculated, the same can be expressed in the form of a radar chart. Below is an example of a compliance radar:
In such a chart, a score of 0-25 is the area of non-compliance, depicting lapses in meeting the minimum legal requirements; 26-50 is the area of meeting minimum compliance with the law; 51-75 indicates that the company is moving towards general industry practices; and a score beyond 75 shows that the company is adopting leadership practices in the respective compliance area.
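The four bands of the compliance radar can be expressed as a simple classification of the percentage score. The function name and band labels below are paraphrased from the text and are illustrative assumptions.

```python
# Sketch: classifying an element's percentage score into the radar bands
# described above (0-25, 26-50, 51-75, above 75).
def compliance_band(pct: float) -> str:
    if pct <= 25:
        return "non-compliance"       # lapses in minimum legal requirements
    if pct <= 50:
        return "minimum compliance"   # meeting the minimum required by law
    if pct <= 75:
        return "industry practice"    # moving towards general industry practice
    return "leadership practice"      # adopting leadership practices

compliance_band(44.4)  # "minimum compliance", per the worked example above
```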
A risk assessment chart may be similarly constructed, wherein a higher score indicates a higher level of risk. Also see our article on Compliance Risk Assessment.