Kartikay Singh & Yashaswini Chauhan
Introduction
Ever wondered why we colloquially say “teen tigada kaam bigada”? As it turns out, this idiom captures a surprisingly complex truth rooted in classical physics, through what scientists famously call the “three-body problem”.
In simple terms, when two celestial bodies interact under mutual gravitational force, their motion is stable and predictable, often forming neat elliptical orbits. The laws are clear, the outcomes foreseeable. But introduce a third celestial body into this system, and order quickly gives way to complexity. The interactions become chaotic, long-term behaviour turns unpredictable, and no general solution can reliably describe the system.
As patent lawyers, we find it hard to ignore how closely this mirrors what is currently unfolding in patent law with respect to Artificial Intelligence (“AI”).
Until recently, patent systems worldwide were largely comfortable operating within a familiar two-body framework: human inventorship and statutory patentability standards. The tests for novelty, inventive step, and sufficiency of disclosure were well understood, even if occasionally debated. Enter AI as the third element, and suddenly the system starts behaving less like a stable orbit and more like legal chaos.
Undeniably, drafting patents for AI-related inventions is starting to feel much like the chaotic three-body problem. Stakeholders in the patent domain are struggling to juggle their own three bodies: the predictability, patentability, and reproducibility of AI-related inventions, each sensible on its own but chaotic together. Introducing a third body turns order into unpredictability: what looks inventive may not be reproducible, what is reproducible may not be predictable, and what is predictable may be obvious. The law hasn’t changed and the tests are familiar and well established, yet once we try to solve the three-body problem of predictability, patentability, and reproducibility, balance becomes elusive. Just when you think you’ve solved it, the third factor pulls the system off course; teen tigada, once again, kaam bigada.
Through this note, we examine this emerging legal three-body problem: how the introduction of AI into the innovation ecosystem has disrupted the delicate balance between predictability, patentability, and reproducibility, leaving patent practitioners, examiners, and policymakers grappling with questions that the existing framework was never designed to answer.
First Body: Patentability & Subject-Matter Eligibility
The Office of the Controller General of Patents, Designs and Trade Marks’ recent Guidelines for Examination of Computer Related Inventions, 2025 (“CRI Guidelines”) have provided welcome clarity on the patentability limb, addressing the ‘what’ aspect and explaining that AI-assisted inventions are not categorically excluded under Section 3(k) of the Patents Act, 1970 (“Patents Act”), provided they meet the standard patentability criteria and demonstrate a technical effect through tangible inventive applications. The Indian Patent Office (“IPO”) has sought to address the patentability of AI-assisted subject matter by extensively defining which categories of such inventions will be patentable.
However, the CRI Guidelines do not address the ‘how’ aspect, i.e., reproducibility and predictability, and the level of disclosure required under Section 10(4) of the Patents Act. We assess these aspects below.
Second Body: Predictability and the Black-Box Conundrum
The CRI Guidelines simply state that the specification “should not limit the description only to its functionality but also specifically and clearly describe the implementation of the invention.” For AI-inventions, the CRI Guidelines state that the nature of the disclosure “shall be such that it enables reproducibility without undue experimentation by a person skilled in the art”. They provide a non-exhaustive list of examples encouraging inventors to clarify the logic of transforming input to output, mention the correlation between input and output, and elucidate the steps and functions related to pre-processing. However, none of these examples and requirements accounts for the increasingly and characteristically opaque nature of modern AI systems, known as the “black box problem”. The problem refers to the inability to understand or interpret how complex AI systems, especially deep learning models, reach specific decisions or outputs despite knowing the inputs. The CRI Guidelines expect inventors to describe the inner workings of these systems when even the inventors themselves have no visibility into them. The questions of how much an inventor must disclose, and where the conceptual bar for the average person skilled in the art (in the context of reproducibility) should be set, remain unanswered, risking the entire structure of AI patenting becoming unworkable.
Third Body: Reproducibility and Written Description-Enablement Conundrum
Generally, for an invention to be patentable, it has to meet the enablement requirement, i.e., it must be reproducible by a person possessing average skill in, or average knowledge of, the relevant art without undue experimentation. To be reproducible, the invention must be sufficiently disclosed in a manner which “fully and particularly” describes it and discloses “the best method of performing the invention” known to the applicant. However, the threshold for who is a person skilled in the art differs between patentability and reproducibility. The former prescribes a higher threshold: a person needs to be skilled in the art to understand an invention’s novelty (e.g., a technical person with a higher level of skill or knowledge in the field), whereas for the latter, the demand of sufficient disclosure is only met if it enables reproducibility by a person possessing average skill in, or knowledge of, the relevant art without undue experimentation. This mismatch introduces a destabilising force into the patents orbit: what should the level of disclosure be in a patent application relating to AI products and solutions (“AI-inventions”)?
A. Who is an average person skilled in the art?
Central to these questions is the definition of the “average person skilled in the art,” and how it interacts with the disclosure requirement itself. In India, this hypothetical person, a legal fiction, has not been defined. Under Section 64(1)(h) of the Patents Act, a patent can be revoked if the complete specification does not “sufficiently and fairly” describe the invention and its workings so as to enable a person possessing “average skill in, and average knowledge of the art to which the invention relates”.
Indian jurisprudence has examined what constitutes a person of average skill and knowledge in the relevant technical field on a case-to-case basis. For instance, in Caleb Suresh Motupalli v. Controller of Patents [CMA (PT) No.2 of 2024], the Court described the skilled person as a software engineer with expertise in AI and allied fields, or a team of experts well-versed in AI and black-box modernisation techniques.
While the case-to-case approach may have worked for more traditional fields, it may be difficult to gauge what the “average” skill or knowledge of a person should be when it comes to AI-inventions. This is because such inventions rely on processes spread across many different stages, such as model design and training, all of which require different levels of expertise and competency across multiple disciplines. This large variance in skill level makes it harder to design a yardstick for average skill or knowledge, especially with no established contours for what constitutes sufficient disclosure in such applications. For instance, a basic programmer is not equivalent to a machine-learning (“ML”) engineer, and even among ML practitioners there is a vast range of competencies and skill levels. Because disclosure obligations hinge on what such a person would understand, the legal fiction must be carefully calibrated.
If the bar is set too low, applicants will be required to disclose an impractically exhaustive level of detail. This is especially problematic because the “black box” problem of AI can make the internal workings of the system impossible to explain or reverse engineer. Imposing this expectation would make compliance impossible and could stem the incentive to disclose new AI-inventions altogether. However, setting the bar too high will conflict with Section 10(4)’s requirements, as vague descriptions, though appearing sufficient, will make reproducibility impossible without undue experimentation, leaving the skilled person to connect the dots themselves. Judicial history shows that patent applications have been denied and patents revoked for insufficient disclosure. To understand the issue, it is important to examine what constitutes sufficient disclosure for AI-inventions.
B. What would constitute sufficient disclosure for AI-inventions?
To reiterate, a full and particular description of the specification, explaining the best method of performing an invention is the core of reproducibility under Section 10(4) of the Patents Act.
For decades this framework worked smoothly, because most inventions could be described with concrete steps, components, or formulas. But this assumption comes under strain with AI-inventions, whose complex inner workings are often not known even to the inventors. This raises the question of what must be disclosed to enable a person of average skill and knowledge to reproduce the AI-invention, given that the law does not accommodate the inexplicability of some aspects of an AI model. The disclosure must therefore strike a fine balance: detailed enough to enable reproducibility, but not so broad or vague as to obscure how the invention works. Not only India, but other jurisdictions too have struggled to achieve this balance.
For instance, two patent applications, drawing priority from the same Patent Cooperation Treaty application (which allows inventors to seek patent protection in multiple countries simultaneously), were treated differently in the United States (“US”) and Europe (“EU”). Even though both applications were for the same technology, i.e., a method for determining cardiac output, the US application was successful while the EU application was rejected for lack of sufficient disclosure. As per the European Patent Office (“EPO”), the specification was found lacking as it failed to disclose which data was used to train the AI system. This example is not indicative of a larger trend; rather, it reveals how patent offices around the world are choosing to tackle sufficiency of disclosure on a case-to-case basis. Overarchingly, however, the EPO appears to have stricter written disclosure requirements than the US.
Our view
India must find a middle path between the leniency of the US model and the stricter requirements of the EU model as AI becomes more prevalent and more of its aspects become common knowledge to those skilled in the art. Generally, Indian courts have dealt with the issue of sufficiency of disclosure on a case-to-case basis. However, this could present problems given the wide-ranging nature of both AI-inventions and the average persons skilled in the relevant art. The task ahead is a balancing act: setting the threshold of average skill in this context while preserving the integrity of disclosure obligations without discouraging inventors.
In the absence of clear statutory or judicial guidance on the contours of sufficient disclosure for AI, uncertainty will persist. The uncertainty surrounding whether an AI-invention’s specification truly satisfies Section 10(4) of the Patents Act, or who the appropriate average skilled person should be, will make enforcement difficult. To establish infringement, courts examine patent claims in detail. Written disclosures and the enablement requirement form an important part of these claims and hold critical evidentiary value. The lack of clear and well-defined disclosures can weaken infringement claims and allow defendants to argue independent development. This is compounded by the absence of a settled understanding of who the average person skilled in the art is, as sufficiency is assessed through this lens. In such circumstances, inventors may find themselves increasingly disincentivised from engaging with the patent regime altogether.
In the interim, trade secret protection could present a more practicable alternative for the applicant. Unlike patents, trade secrets do not require public disclosure and are far more accommodating of the inherent opacity of AI systems. For many AI applicants, especially those working with proprietary training data or models whose internal logic cannot meaningfully be articulated, trade secrets may offer clearer and more secure protection than navigating an uncertain patent landscape. At the same time, India need not abandon the patent pathway entirely.
However, as these competing forces continue to pull in different directions, India will remain engaged in its own version of the three-body problem. While the CRI Guidelines offer partial clarity on the patentability of computer- and AI-related inventions and provide a workable middle ground with a checklist for patent examination of AI-related inventions, it would be prudent to also enumerate a series of baseline points and pre-requisites for all applicants. This could reduce prosecution time and the variance in how AI applications are assessed. As it stands, however, key questions of predictability and reproducibility remain largely unresolved, leaving the patent regime in imbalance. For now, the patent system appears to be locked in a waiting game.