Is there a right balance between regulation and innovation? – QHN


Drug discovery has traditionally been an intricate and resource-intensive process, encompassing multiple stages such as target identification and validation, hit identification, and lead optimisation. The journey for a drug to reach the market typically spans 12 to 18 years and costs billions of dollars. The odds of success are also modest, with only about 10% of potential candidates advancing to the clinical trial phase. The severity of this challenge became a wake-up call for us, especially during Covid, when the world raced to develop vaccines for a billion-plus population.

Insilico, a company that harnesses the power of AI for drug discovery, moved into Phase 1 clinical trials just two and a half years after commencing a project to discover a drug for a relatively rare respiratory ailment that causes a gradual decline in lung function; it is now moving to Phase 2 clinical trials. Had Insilico adhered to traditional methods, the endeavour would have consumed more than $400 million and stretched over a gruelling six-year period. By leveraging the power of Generative AI, Insilico accomplished the same feat at a fraction of the cost and time. This remarkable achievement underscores the transformative potential of AI, not only in terms of economic efficiency but also in accelerating the delivery of life-saving innovations to those in need.

AI’s role in economic growth is multifaceted and undeniable. Today, as most countries grapple with slowing economic growth, driven by slowdowns in labour force growth and productivity due to ageing populations, harnessing AI’s prowess is not just a choice but a necessity. AI optimises resource allocation, enhances decision-making, and fuels innovation, unlocking unprecedented economic potential, efficiency, and competitiveness, something the world needs now more than ever, provided we leverage it right.

And therein lies the biggest dilemma. How do we harness the full power of AI while ensuring adequate and fair oversight that mitigates its substantial risks and promotes its responsible development?

The key question that everyone is asking, and no one has an answer to – is there a right balance between regulation and innovation?

On one hand, overregulation can completely stifle innovation; on the other, the absence of appropriate safeguards can lead to significant unintended consequences.

This is a question nasscom has been debating with the tech industry for a while now. We believe there is an urgent need for government and industry to work together to jointly develop a pro-innovation regulatory approach that maximises the benefits of AI while minimising its risks as far as possible.

Here are a few design principles that we believe can help in the development of such a framework.

1. Regulation is a shared responsibility, with both the industry and government playing pivotal roles. We cannot simply shift the responsibility elsewhere. Effective regulation demands collaboration at three distinct levels:

a. Industry Initiative – Self-Regulation: The journey commences within the industry itself, where self-regulation is a fundamental cornerstone. Every company must proactively demonstrate its commitment to responsible AI development and utilisation. Transparency in showcasing the steps taken to achieve this responsibility is paramount, as is accountability.

b. National Oversight: At the national level, regulation becomes a vital instrument to ensure that AI serves as a catalyst for positive change and inclusive growth within a country’s borders. National regulations set the stage for AI to be a force for good, nurturing innovation while safeguarding against potential harms.

c. International Harmonisation: The significance of international cooperation cannot be overstated, primarily for two compelling reasons. Firstly, AI transcends geographical boundaries, operating on a global scale. Secondly, harmonization is imperative to prevent regulatory disparities from turning into a competitive advantage for one nation over another. In a world interwoven by AI technologies, harmonized international regulations ensure that fairness and ethical principles guide AI’s global evolution.

2. Stress-Testing Existing Legal Regimes: It is important to remember that AI does not create entirely new behaviour; it significantly amplifies our ability to do both good and bad. Before making new laws specifically for AI, it is prudent to examine the laws we already have. Many existing regulations, such as those for consumer protection, data privacy, and intellectual property, already address issues like fraud, misinformation, and bias. By involving multiple stakeholders, we can identify areas where existing laws need to be adjusted for AI. For example, we can update road transportation laws to accommodate self-driving cars based on real-world evidence and market conditions. These assessments also let us modernise outdated regulations and clarify legal uncertainties. We suggest that regulators analyse these gaps and issue clear guidelines, explaining how current laws apply to AI, with input from the public. Recognising best practices and adapting them to specific industries is also helpful.

3. Tech-Neutral Regulation – When crafting new AI legislation, it is vital to keep it technology-neutral. The emphasis should be on the results and impacts of AI implementations, rather than on specifying particular technologies or methods. Before creating new laws, it is crucial to have a clear understanding of the objectives and the problems we aim to solve. Instead of starting from scratch, lawmakers should explore whether existing laws can be adapted to address AI-related issues. For instance, laws addressing revenge pornography and impersonation might need modifications to cover AI-generated deepfakes. It is also essential to prevent duplication by avoiding overlaps with other ongoing regulatory efforts, such as digital market regulations. This ensures that our regulatory framework is efficient and effective.

4. Adopting a Risk-Based Approach – When drafting new AI regulations, governments should adopt a risk-based approach. Prioritize regulation for high-risk AI applications, especially those involving legal rights and citizen interests, such as public sector services, banking, healthcare, and education. Tailor oversight and requirements to the specific risks associated with each AI application. Contextualising risk assessments to local market conditions is critical. Collaboration between industry, academia, and regulators can establish risk inventories and mitigation strategies. Regulations should promote transparency, accountability, and user rights.

5. Coordination between Existing Regulators – Rather than creating entirely new AI regulatory bodies, existing regulators can play a central role in AI oversight, as they understand how AI impacts their respective sectors. Encourage cross-sectoral cohesion by establishing national principles for AI development and use, ensuring they are ‘whole-of-government’ supported. Existing international principles, such as the OECD AI Principles, offer a strong foundation. Mechanisms for inter-regulatory collaboration, like cooperation forums or multi-regulator sandboxes, can facilitate the pooling of resources and evidence collection.

6. Prioritising Capacity Development in Regulatory Bodies – Effective and agile enforcement requires government officials to have technical expertise. Modernising procurement frameworks, enabling public officials to be transparent in their use of AI tools, and conducting training programmes can build the necessary technical capacity.

7. Flexibility is key: Regulations should be flexible and adaptable to the evolving AI landscape. Continuous monitoring and updates are necessary to address emerging challenges and ensure that regulations remain relevant. We need to apply a regulatory sandbox approach to AI regulation so we can adapt as we learn more. 

8. Human-Centric Design – Last but not least, the starting point for any regulation should be prioritising the safety of human life, health, property, and the environment when designing, developing, and deploying AI systems.

In conclusion, AI’s role as a strategic driver for economic growth cannot be overstated. It has the potential to revolutionise industries, boost productivity, and improve the quality of life. However, to fully unlock AI’s potential, we must embrace pro-innovation regulation that upholds ethical standards, promotes transparency, leverages existing regulatory frameworks, and encourages collaboration between industry and government, and across countries.

The key to harnessing AI’s potential for economic growth lies in striking the right balance, one that ensures AI continues to drive economic progress while safeguarding against potential risks.
