California Governor’s AI Bill Veto Sparks Debate
California Governor Gavin Newsom’s decision to veto Senate Bill 1047, which aimed to introduce first-in-the-nation safety regulations for artificial intelligence, has ignited a fresh debate over AI governance, innovation, and public safety. While supporters of the bill argued it was necessary to ensure AI developers adhere to safety protocols, the decision reflects the complex balance between fostering innovation and protecting the public from the risks associated with this rapidly advancing technology.
Background
Senate Bill 1047, authored by state senator Scott Wiener, sought to impose certain requirements on AI developers before they could proceed with building advanced AI models. The bill emerged as Congress continues to lag on federal AI regulations, leaving a significant regulatory vacuum in the U.S. Meanwhile, the European Union has taken the lead with its AI Act, prompting many in the tech sector to call for similar safety measures domestically. Proponents of SB 1047 believed California, a global hub for AI innovation, was uniquely positioned to fill this gap.
However, the bill faced significant opposition from industry giants like Google, Meta, and OpenAI, which argued that the proposed regulations could stifle innovation and create unnecessary roadblocks for developers. Despite these concerns, some in the tech industry, including Elon Musk and the AI company Anthropic, cautiously supported the bill, acknowledging the importance of responsible AI governance.
Newsom’s Justification
In a statement accompanying his veto, Governor Newsom acknowledged the bill’s good intentions but emphasized that its approach was overly broad. According to Newsom, SB 1047’s standards applied to all AI systems, regardless of the risk or sensitivity of the environment in which they were deployed. He argued that treating basic AI systems and high-risk models with the same level of scrutiny could hinder innovation in non-critical areas while not adequately addressing the real threats AI could pose.
Instead, Newsom pointed to ongoing efforts to develop science-based, empirical guidelines for AI regulation. He emphasized working with top AI researchers, including Fei-Fei Li, and industry leaders to develop a more precise framework for regulating AI. He also committed to revisiting the issue with California’s legislature in the near future.
Implications for AI Regulation
Newsom’s decision highlights the tension between innovation and regulation in the AI space. Supporters of the veto, such as Google and OpenAI, have praised Newsom for maintaining California’s role as a leader in AI innovation. They argue that overly restrictive regulations could slow progress and hinder the development of useful AI tools, which could benefit various industries and societal needs.
However, critics, including Senator Wiener, have expressed disappointment, framing the veto as a missed opportunity for California to lead the way on AI safety, just as it did with net neutrality and data privacy. Nonprofit organizations, such as Accountable Tech, went even further, accusing Newsom of caving to Big Tech interests, leaving the public exposed to unregulated AI tools that could threaten democracy, civil rights, and the environment.
What This Means for AI
The veto of SB 1047 underscores the ongoing debate about how best to regulate AI without stifling innovation. As AI continues to evolve, lawmakers, researchers, and industry leaders face the challenge of developing policies that allow for technological progress while mitigating the potential risks of unregulated AI.
Governor Newsom’s commitment to working with experts to create a science-based framework is a promising step forward. However, the path to responsible AI governance is far from clear. With the federal government lagging on AI regulation and other regions, such as the EU, pushing forward with comprehensive rules, the question remains: How will the U.S. balance innovation and safety in the AI era?
As California continues to play a pivotal role in AI development, the state’s regulatory decisions will likely influence the broader national and global landscape of AI governance.
Tech Spending in 2022 Forecast
Gartner expects sustained growth in IT services, driven by an increased focus on investing in business outcomes rather than buying finished products: 9.8% growth in 2021 and a compound annual growth rate (CAGR) of 8.68% over the 2020-2025 period, second only to software at 11.9%.
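To make the CAGR figure concrete, here is a minimal sketch of how a compound annual growth rate such as Gartner's 8.68% relates a starting and ending spend level; the dollar figures in the example are hypothetical placeholders, not Gartner data.

```python
# Minimal CAGR sketch. The spending figures are illustrative placeholders,
# not Gartner's numbers; only the 8.68% rate comes from the forecast above.

def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate over the given number of years."""
    return (end_value / start_value) ** (1 / years) - 1

def project(start_value: float, rate: float, years: int) -> float:
    """Value after compounding `rate` annually for `years` years."""
    return start_value * (1 + rate) ** years

spend_2020 = 1000.0                      # hypothetical IT services spend, $bn
spend_2025 = project(spend_2020, 0.0868, 5)
print(f"Projected 2025 spend: {spend_2025:.1f}")               # ~1516.2
print(f"Implied CAGR: {cagr(spend_2020, spend_2025, 5):.2%}")  # 8.68%
```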
Also noticeable in Gartner’s forecast is a sharp near-$100 billion rise in spending on devices in 2021, most of which is attributable to the rapid shift to remote working during the pandemic, as companies reacted to the initial shock and stabilized their operations.
That reactive phase is now largely over, Gartner says, with most companies preparing for the 'next normal': exiting phase 3 of the analyst firm's COVID response model ('rebounding to the future') and moving into phase 4 ('accelerating opportunities').
Source: Gartner, ZDNet
The Impact of AI in the UK
The UK was the crucible of the Industrial Revolution and is one of the crucibles of the Intelligence Revolution. It is home to world-beating artificial intelligence (AI) companies and world-class academic centres of AI research. It is well placed to reap great overall economic benefits from the development of AI, but it is not yet clear how those benefits will be shared.
A number of high profile recent studies have predicted high levels of automation in the UK in the coming years as artificial intelligence and related technologies disrupt the economy. The Industrial Revolution drove automation of repetitive physical work; the Intelligence Revolution is having the same effect on a widening range of intellectual tasks, meaning that more and more jobs can potentially be performed by robots and computers.
How Can the UK Win the AI Race?
A revolution in AI technology is occurring. AI will define this century. This presents a huge opportunity for the UK and if we act now, we can lead from the front. That is why we identified AI and data as one of the UK’s four Grand Challenges in the Industrial Strategy and why we are mobilising all of government to seize this opportunity to make the UK a global leader in this technology that will change all of our lives.
The government and UK business must take action to keep the UK at the frontier of AI advancement. The UK is an AI academic powerhouse, publishing nearly 25,000 research papers on the topic in the past ten years. This puts the UK fourth in the world when it comes to AI research. Our experts give their take on the opportunities we can grasp as a nation, and the hurdles we need to clear to keep Britain in contention.
The Economic Impact of AI
According to the report by PwC, despite discussion on social media about the things AI will be able to achieve, most studies on its economic impact have focused on the risks that artificial intelligence poses to employment. More recently, some researchers have recognised the potential of this automation to boost productivity, leading to more efficient production of goods, more affordable products, and higher real incomes.
Our study aims to take further steps towards capturing the full economic potential of AI and the opportunities that it presents. In addition to the more traditionally examined productivity channel, we identify and measure impacts on the household consumption side of the economy through product enhancements resulting from AI.
Strategy for AI Technologies in the UK
Scottish standards should interface with and influence UK and international data ethics frameworks, codes and standards. Businesses should put in place robust internal or external governance frameworks, codes of conduct, training, Key Performance Indicators, and active customer and social dialogue. Businesses should share and learn from best practice wherever it is to be found.
The cluster of Scottish data companies that ScotlandIS is starting to develop will explore opportunities to support this. To instil good values in new tech companies at the key start-up stages, Scottish tech incubators should consider common standards and shared resources. We believe that Scotland and the UK should continue to work closely with the EU, Norway and Switzerland on the Coordinated Plan on Artificial Intelligence. Scotland should also develop alliances with other small countries with shared goals and values (e.g. the Nordic-Baltic states, Ireland). These partnerships could seek to increase the influence of those countries on technology and its global regulation, or offer attractive propositions for investment.
Enterprise Plans to Deploy AI
According to the report by MMC Ventures, AI is advancing across a broad front. Enterprises are using multiple types of AI application, with one in ten enterprises using ten or more. The most popular use cases are chatbots, process automation solutions and fraud analytics. Natural language and computer vision AI underpin many prevalent applications as companies embrace the ability to replicate traditionally human activities in software for the first time.
Companies prefer to buy, not build, AI. Nearly half of companies favour buying AI solutions from third parties, while a third intend to build custom solutions. Just one in ten companies are prepared to wait for AI to be incorporated into their favourite software products.
The Impact of AI on Life in the UK
According to the report by Microsoft, AI is here. Whether that fills you with excitement, unease, or a combination of the two, there can be no denying that a new era of intelligent computing has begun – and is set to transform many aspects of our personal and professional lives. It’s already happening. From digital personal assistants, such as Cortana and Alexa, to the algorithms that allow the likes of eBay and ASOS to make suggestions based on our previous behaviour, AI is the emerging power behind daily life in the UK.
Meanwhile, applications such as chatbots and robotic process automation (RPA) are also having a significant impact on operating practices in workplaces across both the public and private sectors. As Microsoft’s Chief Technology Officer, Enterprise, Norm Judah explains: “AI is about augmenting human ingenuity. Whether you’re a seller, a marketer, a lawyer or something else, AI will change the way you make decisions. It can help you navigate vast amounts of data and give you advice and recommendations about how to proceed. What you do with that advice is up to you.”
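As a rough illustration of the “suggestions based on previous behaviour” mentioned above, the sketch below ranks items by how often they are bought alongside what a user already owns; the products and purchase history are invented for the example and are not drawn from the Microsoft report.

```python
# Minimal sketch of behaviour-based recommendations: suggest items that
# co-occur with what a user has already bought. Toy data, purely illustrative.
from collections import Counter
from itertools import combinations

purchase_history = [
    {"trainers", "socks", "cap"},
    {"trainers", "socks"},
    {"trainers", "cap"},
    {"jacket", "cap"},
]

# Count how often each pair of items appears in the same basket.
co_occurrence = Counter()
for basket in purchase_history:
    for a, b in combinations(sorted(basket), 2):
        co_occurrence[(a, b)] += 1
        co_occurrence[(b, a)] += 1

def recommend(owned: set[str], top_n: int = 2) -> list[str]:
    """Rank unowned items by how often they co-occur with owned ones."""
    scores = Counter()
    for item in owned:
        for (a, b), count in co_occurrence.items():
            if a == item and b not in owned:
                scores[b] += count
    return [item for item, _ in scores.most_common(top_n)]

print(recommend({"trainers"}))  # ['cap', 'socks']: bought most often with trainers
```

Real recommendation engines use far richer signals and models, but the same principle of scoring items against past behaviour sits underneath them.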
Role of AI in Financial Services
Within financial services there have been many innovations that have changed traditional banking over time, reimagining the way the industry operates as well as the nature of its jobs. The financial services industry has a history of using quantitative methods and algorithms to support decision making. These are a foundation of AI systems, and the industry is therefore primed for AI adoption and well positioned to benefit from these technologies.
AI can build on human intelligence by recognising patterns and anomalies in large amounts of data, which is key in applications such as anomaly detection (e.g. fraudulent transactions). AI can also scale and automate repetitive tasks in a more predictable way – including complex calculations, for example for determining risk.
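As a deliberately simple illustration of the anomaly-detection idea, the sketch below flags transactions that sit far from the historical mean; the amounts and the three-sigma threshold are assumptions made for the example, and production fraud systems rely on far richer features and models.

```python
# Minimal sketch of anomaly detection on transaction amounts: flag values
# whose z-score against historical data exceeds a threshold.
# All figures are made up for illustration.
from statistics import mean, stdev

historical = [23.5, 41.0, 18.2, 55.9, 30.1, 27.8, 44.3, 36.6, 25.0, 49.9]
incoming = [32.0, 980.0, 41.5]   # 980.0 is the obviously unusual transaction

mu, sigma = mean(historical), stdev(historical)

def is_anomalous(amount: float, threshold: float = 3.0) -> bool:
    """Flag a transaction whose z-score exceeds the threshold."""
    return abs(amount - mu) / sigma > threshold

for amount in incoming:
    if is_anomalous(amount):
        print(f"Flag for review: {amount}")   # flags only 980.0
```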