19 July 2025

Regulating AI in the Securities Market

I have a piece with Parker Karia and Varun Matlani on the regulation of AI in securities markets in today's Financial Express:



In one of the first measures undertaken by an Indian regulator towards regulating the use of AI, SEBI issued a consultation paper last month seeking feedback on its proposals to regulate the use of AI / ML in the securities market. 

Broadly speaking, and as defined in the consultation paper, AI refers to technologies that allow machines to “mimic human decisions to solve problems”. ML is a subset of AI, and refers to the automatic learning of rules to perform a task by analysing relevant data. 

Currently, SEBI requires market infrastructure institutions such as stock exchanges, clearing corporations, and depositories, as well as intermediaries such as mutual funds, to report to SEBI on the AI / ML systems they employ, thereby giving SEBI an insight into their use cases. 

Use cases of AI 

SEBI has identified that AI / ML is being used for various purposes. For instance, stock exchanges are leveraging AI for sophisticated surveillance and pattern recognition, and brokers are deploying it for product recommendations and algorithmic order execution. Further, AI is also used for customer support. 

Based on who creates an AI / ML system, such systems can be classified into two categories: those built in-house, and those sourced from a third party. In this context, it is also important to remember that AI / ML systems can be integrated with each other, as well as with existing systems. Further, the capabilities of AI are expanding rapidly, with AI systems making near-accurate predictions in finance and generating model portfolios that could, before long, give a fund manager a run for their money.

Proposed regulatory framework

In a forward-looking approach, SEBI’s consultation paper proposes that guidelines be framed around five core principles: a model governance framework, investor protection, testing mechanisms, fairness and bias, and data and cyber security.  

Importantly, SEBI has proposed that services provided by third parties would be deemed to be provided by the concerned intermediary, which would thus be liable for any violation of securities laws. Further, SEBI has extended the applicability of the investor grievance mechanism to AI / ML systems as well. 

Regulatory Lite Framework 

SEBI has proposed a ‘regulatory lite framework’ that distinguishes between AI / ML systems that have an impact on clients and those used for internal business operations. Further, even if the AI / ML system is outsourced, intermediaries will remain liable. The real challenge for intermediaries lies in building the sophisticated internal teams, robust audit trails, and technical capacity needed to manage AI / ML systems. In this context, it is worth considering whether SEBI should revisit this approach and borrow a leaf out of its own playbook. 

Earlier this year, in February, SEBI introduced a revised framework for safer participation of retail investors in algo trading. Given that several entities were providing various algo strategies to customers, with the consequent risk this entailed, SEBI decided to introduce a new class of regulated entities, viz. Algo Providers. While they are not directly regulated by SEBI, Algo Providers have to become agents of stock brokers and be registered and empanelled with the stock exchange(s). 

A similar approach can be evaluated in respect of AI / ML systems, and a new class of persons, i.e., ‘AI Providers’, can be introduced. While it is not necessary that SEBI directly regulate such persons, doing so could result in better oversight and understanding of the evolving nature of the AI industry and its nexus with, and impact on, the securities market. Further, liability can be affixed on the person or entity actually responsible if an AI / ML system goes wrong, especially if the intermediary had no role in the violation. The alternative results in a cascading round of litigation: the investor would sue the intermediary, which in turn would seek to recover losses from the third-party vendor (AI Provider). While the investor grievance mechanism is proposed to be extended to AI / ML systems, introducing a new class of semi-regulated players in the securities market could better foster growth in a transparent and accountable manner, with appropriate oversight. 

Leveraging the Regulatory Sandbox

SEBI’s proposal includes testing requirements at the time of commencement as well as on an ongoing basis, to ensure that AI / ML systems are working in the expected manner. In this regard, a key reform which could further propel the growth of AI / ML systems is to allow players to access the regulatory sandbox framework to test their products and systems. This would result in heightened scrutiny of key AI / ML systems, and allow SEBI to work with emerging players in the AI industry. It would also provide SEBI with key data points, thereby aiding in evolving best practices across the board. Such a framework would help SEBI become a proactive regulator rather than one that merely reacts to technological developments, and would be the first step in transforming SEBI into a regulator whose frameworks lay the foundation for further innovation and advancement. This approach would allow the regulator to be an enabler rather than impose roadblocks to new technology.

Risks of AI in the Securities Market

The paper highlights potential dangers of AI. The regulator explicitly flags the threat of generative AI being used for market manipulation through deepfakes and misinformation, and the systemic concentration risk if the industry leans too heavily on a few dominant AI Providers. The identification of concentration risk is particularly salient, as there is a danger of unregulated technology providers becoming systemic chokepoints for the industry. Further, since there are only a handful of foundational models, the risk of synthetic data loops emerges: if everyone uses the same AI model, trained on the same data, this may give rise to collusive behaviour and herding.

There is much to applaud in SEBI’s proposal of a principle-based, regulatory-lite framework, which reflects the regulator’s intention to adapt to innovation in technology that will shape the financial markets of the future. At the same time, there are steps it can take to not only regulate, but design a regulatory framework that is ahead of the curve and supports growth and innovation.
