Op-Ed: California’s AI Bill Veto Proves It—Shaping Policy Needs Experts, Not Just Politicians

By Michael Lee, Chief Strategy Officer, and Mark Masongsong, CEO, UrbanLogiq

Effective AI regulation should mitigate risk while encouraging innovation and the use of AI to help solve real-world challenges. However, recent legislative attempts, like California's AI bill (SB 1047), seem to miss this balance entirely. While the bill's intent, protecting the public from potential harm caused by advanced AI systems, was commendable and necessary, its lack of input from those who fundamentally understand AI was clear. Governor Gavin Newsom's veto of the bill was the right move.

It’s not the size that matters

SB 1047 targeted only "large-scale" AI models, determined by the cost and computational power required to train them. This assumes that only the largest models pose significant risks, but when it comes to AI, size doesn't matter; context does. Risk is determined by the application and environment, not just model size. Ignoring the potential dangers of smaller, specialized models stifles innovation and, as Newsom noted, "could give the public a false sense of security about controlling this fast-moving technology."

SB 1047 also failed to differentiate between high-risk deployments (e.g., critical infrastructure, public safety) and low-risk applications. From UrbanLogiq's work building AI solutions for governments since 2016, we know that both large-scale and smaller AI models can pose similar risks depending on how they're used.

Industry has to be at the table

California's bill underscores a widespread challenge in AI governance: the technology is evolving at an unprecedented pace, while policymakers move to regulate without the urgency or the practical insights needed to guide effective decision-making. This issue isn't unique to California; Canada is confronting it too. Canada currently has no regulatory framework specific to AI. The proposed Artificial Intelligence and Data Act (AIDA) has yet to pass, and with an election year approaching, no political party is prioritizing the discussion or adequately engaging industry voices.

People in the AI industry who understand both how AI works and the challenges and opportunities governments face with it must be at the table. Since 2016, UrbanLogiq has been working on how to use this technology responsibly to make a real, positive impact on society. Our solutions, whether predicting fire risk in communities or analyzing road safety, equity, and amenity gaps, provide actionable insights that empower cities to make data-driven decisions and improve overall quality of life.

Our approach at UrbanLogiq aligns closely with the European Union's AI Act, which emphasizes a nuanced, risk-based classification of AI systems within a comprehensive legal framework focused on their intended use and potential impact on fundamental rights, health, and safety. The same thinking underpins our own Algorithmic Controls policy, which evaluates algorithms based on their benefits, risks, and the environment in which they will operate.

We fundamentally believe our solutions are effective only because they are developed with a deep understanding of each government client's specific needs, regulatory constraints, and operational realities.

One size does not fit all

AI regulation shouldn't be a one-size-fits-all solution; the dynamic nature of the technology demands a flexible and adaptive approach. Without practitioner and industry input, regulations risk being either overly restrictive, stifling beneficial innovation, or too lax, leaving the public vulnerable to unforeseen risks and undesirable outcomes. Effective frameworks must evolve alongside the technology, balancing innovation with accountability to ensure public safety.

Blending policy-making expertise and knowledge of public sector needs with the technical knowledge of those who build and deploy AI systems lays the foundation for policy that makes sense and can be kept current as the technology evolves.

Governments face complex problems and need new ways of tackling them, and AI has immense potential to offer meaningful solutions. We know firsthand how it can be applied to drive impactful change. From infrastructure planning to emergency management to enhancing public safety, AI has the power to transform how governments operate and how they make decisions that shape quality of life for generations. But for the reward to outweigh the dangers, it must be implemented thoughtfully and comprehensively, with a focus on trust, safety, security, auditability, and accountability. Effective AI regulation isn't just about managing risks; it's about enabling governments to harness AI's potential to create real, positive change for communities.

The path forward is about responsibility and partnership

Let's be clear: this isn't just a California issue or a Canada issue; it's a global one. Policymakers worldwide need to engage directly with AI developers, data scientists, and organizations like ours that work within government systems to design regulations that are as dynamic and adaptable as AI itself. We don't have to choose between stifling progress and risking public safety and security; with the right voices at the table, we can achieve both innovation and accountability.

AI has incredible potential to drive positive change, but we must approach it with a sense of responsibility and partnership. As AI continues to transform industries, governments, and societies, the need for thoughtful, inclusive regulation becomes even more pressing. The way forward is a collaborative approach that brings the voices of all stakeholders into shaping legislation; a truly effective AI governance framework requires collaboration between policymakers and practitioners. The GovAI Coalition, spearheaded by the City of San José, CA, is an emerging example to follow: it brings government officials and industry practitioners together to learn from one another and shape AI policy and frameworks, fostering open dialogue and ensuring diverse perspectives are heard. This inclusive approach is essential to crafting regulation that's grounded in practicality and informed by real-world insights.

The future of AI regulation is not a choice between innovation and safety; we can achieve a framework that supports both. It's time to build AI legislation that reflects the technology's transformative potential, guided by both those who enforce it and those who understand it best.