Advancing AI Governance: Industry Recommendations on Copyright Laws
In light of the rapid development of artificial intelligence (AI), the Indian government has been urged to reassess its stance on the legality of utilizing copyrighted materials for training AI models. Industry representatives have strongly advocated for public consultations to review potential amendments to the Copyright Act, emphasizing the need for clear definitions surrounding the concept of “fair dealing” within the current copyright framework.
Public Consultation and Legal Clarity
The Ministry of Electronics and Information Technology (MeitY) released a report on AI governance guidelines in January, which has since prompted more than 100 industry submissions detailing various recommendations. Notably, the National Association of Software and Service Companies (Nasscom) highlighted the need for legal clarity on the use of copyrighted content in AI training, suggesting that a better-defined fair dealing exception could alleviate concerns over copyright infringement.
Guidelines for AI-Generated Works
In addition to clarifying copyright exceptions, the industry has suggested that the government develop guidelines addressing the ownership and authorship of AI-generated works, ensuring that both creators and developers are adequately protected. These measures are anticipated to foster legal certainty, thus encouraging AI innovation while safeguarding intellectual property rights.
Global Perspectives on AI Regulation
The discourse over copyright and AI training is not confined to India; similar discussions are occurring worldwide, particularly in countries like the United Kingdom, the United States, Hong Kong, and Singapore. The Coalition for Responsible Evolution of AI (CoRE-AI) has called for a reevaluation of existing copyright laws to better address the unique challenges posed by generative AI technologies. This coalition stresses the importance of balancing privacy with transparency in AI model training.
Shifting from ‘Opt-Out’ to ‘Opt-In’ Mechanisms
Another significant concern raised by stakeholders is the current “opt-out” approach, which places the responsibility for protecting intellectual property on creators. The Consumer Unity and Trust Society (CUTS) suggests a shift to an “opt-in” protocol, requiring explicit consent from creators before their works can be used for AI training. This change aims to lift the burden from creators while fostering a more equitable framework for AI development.
Legal Cases and Industry Responses
Globally, there have been several copyright infringement claims associated with AI training. In the United States, notable cases include the New York Times’s lawsuit against Microsoft and OpenAI over the use of its content without authorization. Similarly, in India, the Delhi High Court is currently hearing a case brought by news agency ANI against OpenAI. In these disputes, AI firms typically invoke the “fair use” doctrine as a defense for their practices.
Creating an Enabling Framework
Nasscom advocates for establishing an enabling AI governance framework within India, addressing both real and speculative harms associated with AI technologies. They emphasize that a country’s posture regarding AI infrastructure significantly affects its role in global AI governance. This includes promoting the development of local AI data centers and foundational models, which are now predominantly established in the U.S. and China.
Roadmap for Future AI Infrastructure
To support this growing sector, stakeholders have called for robust planning around the power supply needed to build AI infrastructure. The recent report, authored by a governmental advisory subcommittee, advocates a cohesive AI governance strategy enforced by an inter-ministerial committee.
Addressing AI Risks and the Role of Deepfakes
Industry submissions also stressed the importance of creating an AI incident database to address systemic risks, envisioned as a knowledge repository focused on risk mitigation rather than punitive measures. Caution was advised on regulating deepfakes, given their dual-use potential in both positive and negative contexts. There is consensus on the need for continued research and development to set clear standards while addressing privacy concerns related to content traceability.