
AI & Tech Risks and Regulation – Navigating the Evolving AI, IP and Privacy Landscape

With guest speakers: Susan Athey, Marco Iansiti, Mehul Desai, Andy Gandhi, Emily Chissell, and James Mickens

On October 1st, Keystone gathered members of the academic, business, and legal community for a discussion on AI & Tech Risks and Regulation – Navigating the Evolving AI, IP, and Privacy Landscape.

Guest speakers included Susan Athey (Chief Scientific Officer at Keystone, Economics of Technology Professor at Stanford GSB, and former Chief Economist of the DOJ Antitrust Division), Marco Iansiti (David Sarnoff Professor of Business Administration at Harvard Business School and Keystone Senior Advisor), Mehul Desai (Managing Director, Global Head of Resilience at HSBC), Andy Gandhi (Senior Partner at Keystone), Emily Chissell (Senior Principal), and James Mickens (Gordon McKay Professor of Computer Science at Harvard University).

The discussion, held under the Chatham House Rule, examined the speed of AI adoption across organizations and industries. Some applications, such as customer-facing chatbots and coding assistants, have spread rapidly and already had a measurable impact on industries and labor markets; coder salaries fell in 2025 for the first time as AI developer tools gained traction. Despite the speed of those changes, participants cautioned that broader organizational transformation will likely be slower. Rather than disappearing entirely, the bottlenecks in software development are shifting: coding itself may no longer be the primary constraint, but challenges remain in testing, governance, cybersecurity, and compliance. This is especially true for legacy IT systems, where different components are tightly interconnected. Because so many applications and processes depend on one another, even a small change in one part can create bugs or failures elsewhere. These interdependencies, combined with the high costs of replacing or migrating old systems, make large-scale upgrades slow and risky.

The discussion also emphasized the growing role of technologists in shaping regulation, litigation, and governance around AI. Because AI systems are complex and often opaque, technical experts are increasingly essential in helping courts, regulators, and policymakers understand how these systems work, where accountability lies when harms occur, and what kinds of safeguards are technically feasible. Participants noted that while tech firms may claim certain compliance measures are impossible, technologists can offer counterbalancing assessments of what is actually achievable, helping ensure that regulatory frameworks are both realistic and effective.

Speakers emphasized that the skills most in demand will also likely shift. As AI dramatically reduces the cost of writing and maintaining code, the real skill scarcity will no longer be in coding itself but in deciding what systems and solutions should be built, how to evaluate their effectiveness, and how to ensure they align with business needs, governance, and compliance. AI was described as a general-purpose technology, much like the steam engine, electricity, computing, or the internet, with the capacity to reshape every industry. But, as with those earlier technologies, transformation will not come from the core innovation alone. Instead, it will depend on a series of complementary changes in business models, organizational design, and regulatory frameworks that determine how the technology’s power is applied and scaled across the economy.

We hope the session provided fresh perspectives and useful takeaways for participants. Please contact us to make sure you are included in future events.

