Innovation vs. Safety? AI Regulation Can—and Must—Achieve Both

By Mark Masongsong, CEO and Co-Founder of UrbanLogiq, and Michael Lee, Chief Strategy Officer

When AI regulation misses the mark, innovation stalls, and real-world problems go unsolved. California’s SB 1047 was a case in point—and Governor Newsom was right to hit pause.

The disconnect between government and industry is a persistent problem—and even within industry, needs and priorities often differ. It’s not enough to invite industry voices to the table; they need to be the right voices. While well-intentioned, the bill clearly lacked input from practitioners who understand AI’s real-world applications in the public sector. 

At UrbanLogiq, we’ve been working with governments since 2016 to deploy AI solutions responsibly, giving us a unique perspective on what effective regulation requires.

When It Comes to AI, Context Is King

SB 1047 targeted only “large-scale” AI models, determined by the cost and computational power required to train them. This assumes that only the largest models pose significant risks, but when it comes to AI, size matters less than context. Risk is determined by the application and the environment, not just model size. Ignoring the potential dangers of smaller, specialized models stifles innovation and, as Newsom noted, “could give the public a false sense of security about controlling this fast-moving technology.”

SB 1047 also failed to differentiate between high-risk deployments (e.g., critical infrastructure, public safety) and low-risk applications. In our work building AI solutions for governments, we have seen how both large-scale and smaller models can pose similar risks depending on how they are used.

The Right Industry Input Is Essential for Effective AI Governance

California’s bill underscores a widespread challenge in AI governance: the technology is evolving at an unprecedented pace, while policymakers are moving to regulate without the urgency and practical insight needed to guide effective decision-making. This challenge isn’t unique to the United States; Canada is confronting it too. Canada currently has no regulatory framework specific to AI. The proposed Artificial Intelligence and Data Act (AIDA) has yet to pass, and with an election quickly approaching, no political party is prioritizing the discussion or adequately engaging the right industry voices.

AI is one of our biggest hopes for addressing real-life issues like the housing crisis and affordability problems—issues that demand immediate action and innovative solutions.

But there is no sugarcoating it: the challenges governments face in regulating AI are significant. While inviting industry to the table is essential, a big hurdle is that often only the largest players have the resources to be in the right rooms and rub elbows with the right people. That creates a risk of regulatory capture, where policies disproportionately favour large corporations, leaving smaller innovators out while failing to address broader public needs.

The appetite for meaningful progress is there, and progress is possible, but it requires collaboration with those who prioritize the public good over self-interest; otherwise, the process risks looking like lobbyists pushing for regulations that favour their deep-pocketed clients.

AI Regulation Needs Flexibility and Nuance

AI regulation shouldn’t be a one-size-fits-all solution; the dynamic nature of the technology demands a flexible and adaptive approach. Without practitioner and industry input, regulations risk being overly restrictive, stifling beneficial innovation, or leaving the public vulnerable to unforeseen risks.

Governments face complex problems, and there is immense potential for AI to offer meaningful solutions. From infrastructure planning to emergency management to enhancing public safety, AI can transform how governments operate and make decisions that shape the quality of life for generations.

But for the rewards to outweigh the dangers, AI must be implemented thoughtfully and comprehensively, with a focus on trust, safety, security, auditability, and accountability.

Our Approach 

For almost a decade, we’ve been working on how to use this technology responsibly to make a real, positive impact on society. Our solutions, whether predicting fire risk in communities or analyzing road safety, equity, and amenity gaps, provide actionable insights that empower cities to make data-driven decisions and improve overall quality of life.

Our approach at UrbanLogiq aligns closely with the European Union’s AI Act, which emphasizes a nuanced, risk-based classification of AI systems within a comprehensive legal framework focused on their intended use and potential impact on fundamental rights, health, and safety. This matches our own Algorithmic Controls policy, which evaluates algorithms based on their benefits, risks, and the environment in which they will operate.

We fundamentally believe our solutions are only effective because they are developed with a deep understanding of each government client’s specific needs, regulatory constraints, and operational realities. 

[Image: Four people seated on stage in front of a banner that reads “Gov AI Coalition Summit.”]

An Example to Follow: The Gov AI Coalition

A truly effective AI governance framework requires collaboration between policymakers and the right practitioners. The Gov AI Coalition, spearheaded by the City of San José, CA, is an emerging example to follow, bringing government officials and industry practitioners together to learn from one another and shape AI policy and frameworks. We recently attended and sponsored the inaugural Gov AI Coalition Summit, and it was rewarding to witness the public sector’s willingness to step up, learn from one another, and set a strong foundation for AI’s responsible implementation.

While the summit’s emerging theme of “for government by government” highlights public sector leadership, true progress requires collaboration with industry and the public.

Over two days, the summit reinforced the importance of balancing risk and innovation, fostering transparency, and ensuring equity through accurate data. Governments are ready to embrace AI responsibly, but success depends on partnerships that combine public sector knowledge with private sector expertise. Initiatives like the Gov AI Coalition demonstrate that trust, collaboration, and shared purpose are essential for AI to deliver meaningful outcomes for public good. So how do we harness this willingness and momentum and turn it into AI regulation that works? 

[Images: Team members in front of a banner that reads “Gov AI Coalition Summit”; at right, two men speaking with another attendee at an exhibit booth at the conference.]

A Collaborative Path Forward 

Let’s be clear: this isn’t just a California issue or a Canadian issue; it’s a global one. Policymakers worldwide need to engage directly with AI developers, data scientists, and organizations like ours that work within government systems. Regulations must be as dynamic and adaptable as AI itself.

We don’t have to choose between stifling progress and risking public safety. With the right voices at the table, we can achieve both innovation and accountability. 

AI has incredible potential to drive positive change, but we must approach it with responsibility and partnership. The need for thoughtful, inclusive regulation has never been more pressing.

The future of AI depends on collaboration, nuance, and trust. Let’s seize this moment to build regulation that works—for innovation, for safety, and for society.

Navigating the Future of AI in Government

Founded and led by former public servants, UrbanLogiq combines a deep understanding of government with expertise in AI. That dual perspective keeps our insights timely, relevant, and rooted in the practical realities of the public sector.

Subscribe for Our Guide to AI Regulation, Compliance, and Legislation.

Our content simplifies AI governance, giving public servants actionable guidance and a centralized source of information to stay informed. We provide the tools and knowledge you need to navigate the evolving world of AI with confidence.