Natural use of artificial intelligence – Regulatory review on use of AI in lending transactions

– Aditya Iyer | Manager – Legal – finserv@vinodkothari.com

I. Introduction

Lenders appear to be increasingly leveraging Artificial Intelligence (‘AI’) to optimize their lending functions (e.g., to reduce turnaround time, reduce the margin of error, automate certain tasks, etc.). ‘AI’ here is being used to denote “a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment”.[1]

AI-reliance may have a legitimate place in the lending space (particularly in reducing time-consuming, manual operational tasks); however, lenders must ensure that AI is used in a measured and sustainable fashion, compatible with the prudential norms and regulatory guardrails placed on them. Lenders may also be unaware of the regulatory concerns associated with the use of AI/ML software, and of the potential non-compliances that could result from its unrestrained use.

In this article, we draw on Indian and global sources to review the growing use of AI in the lending sector, particularly with regard to credit underwriting and borrower analytics, and comment on the applicable laws and regulatory compliances to be ensured. Although the regulations cited here may be specific to the NBFC sector, the principles would apply pari materia to other lenders.

II. Review of functions

A.     Credit underwriting

Credit underwriting refers to the use of credit information, the borrower’s financial statements and disclosures, and the borrower’s risk profile in order to make a credit decision. In the case of digital lending, Para 7.1 of the Guidelines on Digital Lending[2] requires REs to capture the economic profile of the borrower before extending loans.

NBFCs’ use of AI for credit underwriting may involve document verification, assessment of the borrower’s risk profile, repayment ability and likelihood of delinquency, and, in some cases, decision-making. As the RBI Working Group on Digital Lending Report[3] (‘Working Group Report’) identified, LSPs and DLAs are able to scour through “hundreds of alternative data variables, sometimes combined with traditional credit history” to assess the borrower’s creditworthiness and risk profile.

Here, the use of AI for credit assessment can help with “thin-file” borrowers who may not have much of a credit history to rely on. This would also help lenders implement the best practices suggested by the Credit Information Reporting Master Directions[4], which require that Credit Institutions should not reject first-time borrowers merely for want of a credit history.

In this regard, however, two things are important to note. The first is that the AI models are only as good as the datasets they have been trained on. Indeed, as has been observed by RBI officials in 2024,

“Overreliance on historical data or algorithms may lead to oversights or inaccuracies in credit assessment, particularly in dynamic or evolving market conditions…It is incumbent upon the supervised entities to keep the rule engines and models calibrated from time to time taking into account real time learnings and emerging scenarios”.[5]

The second is that while AI can aid the decision-making process, it cannot, by any means, substitute for it. Over-reliance on the AI’s assessment would, in substance, be the same as delegating this function to the AI/ML software (even where there is some manual authorisation by the credit officer). From a regulatory standpoint, this may attract the same concerns as the outsourcing of credit decisioning (which, under Annex XIII of the SBR Directions, is not allowed).
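The distinction between AI as a decision aid and AI as the decision-maker can be made concrete in the lender’s systems themselves. The sketch below (all names and fields are hypothetical, for illustration only) shows one way a lender might structure its workflow so that the model’s score is only an input, and every decision carries the officer’s own recorded rationale for audit purposes:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class CreditRecommendation:
    """Model output, treated strictly as an input to a human decision."""
    borrower_id: str
    model_score: float      # e.g. modelled probability of delinquency
    model_version: str      # recorded so the decision remains auditable

@dataclass
class CreditDecision:
    recommendation: CreditRecommendation
    approved: bool
    officer_id: str         # the human accountable for the decision
    rationale: str          # the officer's own reasons, not merely the score
    decided_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

def record_decision(rec: CreditRecommendation, approved: bool,
                    officer_id: str, rationale: str) -> CreditDecision:
    # A blank rationale would amount to rubber-stamping the model's output,
    # which is, in substance, outsourcing the credit decision.
    if not rationale.strip():
        raise ValueError("Officer must record independent reasons")
    return CreditDecision(rec, approved, officer_id, rationale)

rec = CreditRecommendation("B-1001", model_score=0.12, model_version="uw-v3")
decision = record_decision(rec, approved=True, officer_id="CO-42",
                           rationale="Stable income; score consistent with bureau data")
print(decision.officer_id, decision.approved)
```

The design point is that the system refuses to record a decision without the officer’s independent reasoning, preserving an audit trail of who decided and why, alongside (not instead of) the model’s output.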

Guardrails on the use of AI for credit underwriting

EU AI Act

In addition to the aforementioned RBI guidelines, a very instructive guardrail emerges from the EU Artificial Intelligence Act (EU AI Act)[6], which lays down a legal framework for the use of artificial intelligence in the European Union.

Under Annex III of the EU AI Act, AI software that undertakes credit underwriting or is used to assess the creditworthiness of persons is categorised as ‘High-Risk AI’, and hence such AI: (i) shall be designed and developed in such a way as to allow for effective human oversight; (ii) shall be designed in such a way that its operation is sufficiently transparent to enable deployers to interpret the system’s output and use it appropriately (see our comments on the duty of explanation below); and (iii) shall be designed having regard to cybersecurity, accuracy, and robustness.

RBI Regulations & Recommendations

Guidance may also be taken from the recommendations provided under the aforementioned Working Group Report, which had recommended that the data to be used for training the AI models must be “extensive, accurate, and diverse”.

Further, lenders may often contract with third-party AI service providers for such software. To the extent this qualifies as IT Outsourcing (which, in the case of SaaS models, it likely would[7]), lenders should ensure that appropriate due diligence has been conducted on the service providers, that the use and processing of data is in accordance with applicable law (e.g., the IT Act and the DPDPA), and that the service provider has the necessary infrastructure to safeguard borrower data. AI systems handling customers’ personal data are attractive targets for cyberattacks, which may place sensitive borrower information in the hands of third parties.[8]

From a regulatory standpoint, the ultimate responsibility for the outsourced activity would still rest with the Regulated Entity (‘RE’) (see Para 4(a) of the IT Outsourcing Directions[9]). It would therefore be in the RE’s best interest to conduct strict due diligence before selecting a service provider, and to ensure the provider has the necessary IT infrastructure in place to prevent a breach. Such an obligation also flows to REs from the CIC Rules, which require credit institutions (such as banks and NBFCs) to take certain measures to protect credit information. REs may also vet and audit the same as part of their Vulnerability Assessment and Penetration Testing (VA/PT). That said, the RE may not always have the capability to thoroughly vet third parties, owing to the information asymmetry between the developer and the RE.

Indeed, this concern was also highlighted in the United States Department of the Treasury’s report on the ‘Uses, Opportunities, and Risks of Artificial Intelligence in the Financial Services Sector’, which states: “respondents also noted that financial firms face challenges because they may not receive from model vendors and developers access to the type of information needed to assess risks and develop controls”.[11]

A solution to this, as captured under the EU AI Act, is to cast the AI compliance obligations upon the provider of AI tools (and ‘provider’ has been widely construed: under Article 25, a distributor, importer, deployer, or other third party may be deemed a provider in certain circumstances).

Lastly, under the CIC Regulations[12], an obligation of disclosure is cast upon specified users (i.e., any credit institution, CIC, or other entity notified by the RBI for the purposes of obtaining credit information): where a specified user denies a borrower credit on the basis of a credit information report, the specific reasons for the rejection shall also be communicated to the borrower in writing (see Regulation 10).

However, in the case of black-box AI models (where there are limited insights into the reasons behind the model’s output), it may not be possible for lenders to do so. In this regard, the Working Group Report observed that “Lenders should also assume the “duty of explanation” and ensure that outputs from such algorithms are explainable, transparent, and fair by knitting ethical AI design to fabric of FinTech.” Reference may also be made to the U.S. CFPB’s guidance circular published on May 26, 2022, clarifying that creditors (subject to U.S. equal credit opportunity laws) may not “make use of complex algorithms when doing so means they cannot provide the specific and accurate reasons for adverse actions”.[13]
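One practical way to meet the duty of explanation is to use an interpretable scoring model whose feature contributions can be translated directly into specific rejection reasons. The sketch below is purely illustrative (the feature names, weights, and baseline values are invented, and a real scorecard would be far richer): it ranks the features that pulled an applicant’s score below a baseline and maps them to plain-language adverse-action reasons.

```python
# Illustrative only: deriving specific adverse-action reasons from an
# interpretable (linear) scoring model. All names and numbers are invented.
WEIGHTS = {                 # positive contribution -> higher creditworthiness
    "months_since_last_delinquency": 0.04,
    "debt_to_income_ratio": -3.0,
    "credit_history_months": 0.02,
}
BASELINE = {                # a reference applicant, used as comparison point
    "months_since_last_delinquency": 24,
    "debt_to_income_ratio": 0.35,
    "credit_history_months": 60,
}
REASON_TEXT = {
    "months_since_last_delinquency": "Recent delinquency on record",
    "debt_to_income_ratio": "Debt obligations high relative to income",
    "credit_history_months": "Limited length of credit history",
}

def adverse_action_reasons(applicant: dict, top_n: int = 2) -> list:
    """Rank features by how far they pull the score below the baseline."""
    contributions = {
        f: WEIGHTS[f] * (applicant[f] - BASELINE[f]) for f in WEIGHTS
    }
    # Most negative contributions first: these drove the adverse decision.
    negatives = sorted((c, f) for f, c in contributions.items() if c < 0)
    return [REASON_TEXT[f] for _, f in negatives[:top_n]]

applicant = {
    "months_since_last_delinquency": 3,
    "debt_to_income_ratio": 0.55,
    "credit_history_months": 12,
}
print(adverse_action_reasons(applicant))
```

Because each reason traces to a named input and its weight, the lender can communicate the specific grounds of rejection in writing, which a black-box model cannot readily support.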

B.     Borrower analytics

The use of AI for borrower analytics is connected to the credit-underwriting function (in that it may aid the credit underwriting), but it is also emerging as a separate business function on its own. For instance, lenders with several lending arms in group companies may be able to leverage their access to borrower data by setting-up a unit to analyse this data and generate insights on borrowers (data harvesting). This may also be done by entities acting as LSP for multiple lenders.

For the purposes of this article, the term “borrower analytics” is an umbrella term to describe the use of the borrower’s credit information for a variety of purposes, including (i) generating demographic-based insights (e.g., delinquencies in a region, recovery strategies, etc.); and (ii) generating a “report” or credit score from the borrower’s data and sharing the same with other regulated entities. These insights may either be used by the lender themselves, or shared with LSPs / other REs. To this extent, the Working Group Report had flagged instances of “unbridled sharing” of borrower data without considering the privacy issues. The following overt regulatory concerns emerge with this practice –

  1. Unauthorised sharing of credit information: Under the Credit Information Companies (Regulation) Act, 2005 and CIC Regulations, the credit information of a borrower, that has been retrieved from a Credit Information Company, may only be disclosed/shared with a credit institution, or a “specified user”. In cases where the credit information of a borrower is being obtained by an RE, and a “scorecard” is being generated with that data, sharing of that data (even in a processed form) would, in essence, amount to doing indirectly what cannot be done directly.

Reference may also be made to Chapter VI of the CIC Regulations, which require collection, publishing and disclosure of credit information to be restricted to the Credit Institution.

  2. Sharing of the borrower’s data: In our view, sharing of borrower data (other than the credit information mentioned above, e.g. data relating to the borrower’s demographics and repayment history) with third parties for the purposes of their analytics function (i.e. to analyse delinquencies and generate recovery strategies) may only be done if the borrower data is depersonalised, or if the borrower’s explicit consent has been obtained. In the case of digital personal data collected from the borrower, the consent norms under the DPDPA (including the form of notice, the right to withdraw consent and seek deletion, the need to receive affirmative, unambiguous consent, etc.) would be squarely applicable to the processing of that data in any form. However, even in the case of sharing non-digital personal data, in our view these norms would apply pari materia.
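What depersonalisation might look like in practice can be sketched as follows. This is a minimal illustration only (the field names are hypothetical, and the actual standard of depersonalisation must be taken from applicable law and regulatory guidance, not from this sketch): direct identifiers are dropped, the borrower identifier is replaced with a one-way salted hash so records remain linkable for cohort analysis without exposing the identifier, and location is generalised to a coarser region.

```python
import hashlib

# Fields that directly identify the borrower and must never be shared.
DIRECT_IDENTIFIERS = {"name", "phone", "pan"}

def depersonalise(record: dict, salt: str) -> dict:
    """Return a copy of the record suitable for sharing for analytics."""
    out = {}
    for key, value in record.items():
        if key in DIRECT_IDENTIFIERS:
            continue                    # drop direct identifiers entirely
        if key == "borrower_id":
            # One-way salted hash: records stay linkable across datasets
            # without revealing the underlying identifier.
            digest = hashlib.sha256((salt + str(value)).encode()).hexdigest()
            out[key] = digest[:16]
        elif key == "pincode":
            out["region"] = str(value)[:3]   # generalise to a coarser area
        else:
            out[key] = value            # e.g. delinquency bucket, loan status
    return out

record = {"borrower_id": "B-1001", "name": "A. Borrower",
          "phone": "9999999999", "pan": "ABCDE1234F",
          "pincode": "400001", "dpd_bucket": "30-60"}
shared = depersonalise(record, salt="rotate-this-salt-per-release")
print(sorted(shared))
```

Note that hashing alone is pseudonymisation, not anonymisation: re-identification may still be possible from the remaining attributes, which is why generalisation of quasi-identifiers (such as pincode) and consent, where required, remain necessary.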

Further, as regards both the sharing of credit information and other borrower data highlighted above, it is worth highlighting that the Supreme Court of India has held that citizens have a constitutional right to privacy[14], and has stated that “Informational Privacy…therefore recognises that an individual may have control over the dissemination of information that is personal to him. Unauthorized use of such information therefore leads to infringement of this right”. This right has also been held to be horizontally applicable[15] (i.e. may be enforced against non-state actors such as NBFCs and Banks), and hence, even in cases where there may be no specific regulatory compliances for REs, the REs would still need to ensure that the borrower’s right to privacy is not being vitiated by their activities/business models.

III. Conclusion and Key Takeaways

One way to ensure that the REs remain on the right side of compliance is to have the AI-function/use vetted beforehand from a legal and regulatory standpoint. However, as a handy rule of thumb, the following guardrails may be ensured:

  • In the case of credit underwriting with AI models, the process of credit decisioning may not be delegated to the AI tool (nor may there be over-reliance on it). Where credit decision-making is based on black-box AI models, such that the credit officer is unable to trace the rationale and metrics behind the AI’s outputs, this would, in our view, attract the same concerns associated with outsourcing the credit decisioning, and hence must be avoided. Black-box AI models are also to be avoided from the standpoint of ensuring that the credit decision is “auditable”.
  • To this extent, lenders should also ensure that they are able to meet the duty of explanation, consistent with their obligations under the CIC Regulations.
  • Third parties providing the underwriting solutions should be thoroughly vetted by the REs before contracting with them. Here, reference may be made to the IT Outsourcing Directions for due diligence and compliance.
  • Finally, sharing of the borrowers’ credit information should at all times be within the contours of the CIC Act, CIC Regulations, and CIC Rules, and should be restricted to the credit-institution’s functions. Any (other) sharing of borrower data should also be subject to depersonalisation, and meeting the consent and notice requirements under the DPDPA.

[1] Financial Stability Institute, ‘Regulating AI in the financial sector: recent development and main challenges’, available at: https://www.bis.org/fsi/publ/insights63.pdf (last accessed in April 2025).

[2] Reserve Bank of India, Guidelines on Digital Lending, available at: https://rbi.org.in/Scripts/NotificationUser.aspx?Id=12382&Mode=0 (last accessed in April 2025).

[3] Reserve Bank of India, Report of the Working Group on Digital Lending including Lending through Online Platforms and Mobile Apps, available at: https://rbi.org.in/Scripts/NotificationUser.aspx?Id=12382&Mode=0.

[4] Master Direction – Reserve Bank of India (Credit Information Reporting) Directions, 2025, available at: https://rbidocs.rbi.org.in/rdocs/notification/PDFs/125MD0601257105ED8375BB487AAA4C45F3B88AD0C5.PDF.

[5] Swaminathan J: Embracing meaningful assurance for sustainable growth of the NBFC Sector, available at:  https://www.bis.org/review/r240523c.htm (last accessed in April 2025)

[6] Regulation (EU) 2024/1689, available at: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32024R1689 (last accessed in April 2025).

[7] In the author’s view, this is because the definition for “Outsourcing of IT Services” under Para 3(iv) of the IT Outsourcing Directions includes in its ambit cloud computing services, and Appendix I of the said Directions pertaining to cloud computing services, also regulates Software as a Service (SaaS).

[8] Financial Stability Institute, ‘Regulating AI in the financial sector: recent development and main challenges’, available at: https://www.bis.org/fsi/publ/insights63.pdf (last accessed in April 2025)

[9] Reserve Bank of India, Master Direction on Outsourcing of Information Technology Services, available at: https://www.rbi.org.in/scripts/BS_ViewMasDirections.aspx?id=12486.

[11] United States, Department of the Treasury – Report on the uses, opportunities, and risks, of artificial intelligence in the financial services sector, available at: https://home.treasury.gov/system/files/136/Artificial-Intelligence-in-Financial-Services.pdf (last accessed in April 2025).

[12] The Credit Information Companies Regulations, 2006.

[13] Consumer Financial Protection Bureau, Circular 2022-03, available at: https://www.consumerfinance.gov/compliance/circulars/circular-2022-03-adverse-action-notification-requirements-in-connection-with-credit-decisions-based-on-complex-algorithms/.

[14] Justice K.S. Puttaswamy (Retd.) and Anr. v. Union of India and Ors., AIR 2017 SC 4161.

[15] Ibid; and Justice K.S. Puttaswamy (Retd.) v. Union of India, AIR 2018 SC (Supp) 1841.
