California Governor’s AI Bill Veto Sparks Debate
California Governor Gavin Newsom’s decision to veto Senate Bill 1047, which aimed to introduce first-in-the-nation safety regulations for artificial intelligence, has ignited a fresh debate over AI governance, innovation, and public safety. While supporters of the bill argued it was necessary to ensure AI developers adhere to safety protocols, the decision reflects the complex balance between fostering innovation and protecting the public from the risks associated with this rapidly advancing technology.
Background
Senate Bill 1047, authored by state senator Scott Wiener, sought to impose certain requirements on AI developers before they could proceed with building advanced AI models. The bill emerged as Congress continues to lag on federal AI regulations, leaving a significant regulatory vacuum in the U.S. Meanwhile, the European Union has taken the lead with its AI Act, prompting many in the tech sector to call for similar safety measures domestically. Proponents of SB 1047 believed California, a global hub for AI innovation, was uniquely positioned to fill this gap.
However, the bill faced significant opposition from industry giants like Google, Meta, and OpenAI, which argued that the proposed regulations could stifle innovation and create unnecessary roadblocks for developers. Despite those concerns, some in the tech world, including Elon Musk and the AI company Anthropic, cautiously supported the bill, acknowledging the importance of responsible AI governance.
Newsom’s Justification
In a statement accompanying his veto, Governor Newsom acknowledged the bill’s good intentions but emphasized that its approach was overly broad. According to Newsom, SB 1047’s standards applied to all AI systems, regardless of the risk or sensitivity of the environment in which they were deployed. He argued that treating basic AI systems and high-risk models with the same level of scrutiny could hinder innovation in non-critical areas while not adequately addressing the real threats AI could pose.
Instead, Newsom pointed to ongoing efforts to develop science-based, empirical guidelines for AI regulation. He emphasized working with top AI researchers, including Fei-Fei Li, and industry leaders to develop a more precise framework for regulating AI. He also committed to revisiting the issue with California’s legislature in the near future.
Implications for AI Regulation
Newsom’s decision highlights the tension between innovation and regulation in the AI space. Supporters of the veto, such as Google and OpenAI, have praised Newsom for maintaining California’s role as a leader in AI innovation. They argue that overly restrictive regulations could slow progress and hinder the development of useful AI tools, which could benefit various industries and societal needs.
However, critics, including Senator Wiener, have expressed disappointment, framing the veto as a missed opportunity for California to lead the way on AI safety, just as it did with net neutrality and data privacy. Nonprofit organizations, such as Accountable Tech, went even further, accusing Newsom of caving to Big Tech interests, leaving the public exposed to unregulated AI tools that could threaten democracy, civil rights, and the environment.
What This Means for AI
The veto of SB 1047 underscores the ongoing debate about how best to regulate AI without stifling innovation. As AI continues to evolve, lawmakers, researchers, and industry leaders face the challenge of developing policies that allow for technological progress while mitigating the potential risks of unregulated AI.
Governor Newsom’s commitment to working with experts to create a science-based framework is a promising step forward. However, the path to responsible AI governance is far from clear. With the federal government lagging on AI regulation and other regions, such as the EU, pushing forward with comprehensive rules, the question remains: How will the U.S. balance innovation and safety in the AI era?
As California continues to play a pivotal role in AI development, the state’s regulatory decisions will likely influence the broader national and global landscape of AI governance.
Interview: Stampede, Bitcoin and AI
Last week, I had the opportunity to do my first Facebook and YouTube Live Stream with Gary A. Fowler, venture capitalist, CEO, and Co-founder of GSD Venture Studios.
We not only discussed the ‘Connectivity of the Global Private Capital Markets in a Decentralised World’ but also had a candid conversation ranging from the Calgary Stampede to Scotland to Bitcoin and the role of artificial intelligence (AI) in the emerging world of decentralised finance.
It was great fun, and in case you missed it, click below to listen to the recording.
Growing UK’s AI Industry
This demonstrates a steady increase in the number of enrolments, at Masters and Doctorate level at least. However, estimates of both potential and predicted growth in the use of AI in the UK suggest that significant increases in numbers at both levels would be needed for that growth to be realised in practice.
Applying forecast worldwide percentage growth rates in AI to UK enrolment numbers between now and 2020 suggests that a significant increase would be needed. The low, medium and high growth rate scenarios used are 15%, 36% and 62% respectively. This methodology is crude, but it goes some way to illustrating the scale of demand; see the table below.
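As a rough illustration of that crude methodology, here is a minimal worked sketch, assuming a purely hypothetical baseline of 1,000 enrolments and annual compounding over a three-year horizon; neither figure comes from the report, which applied the scenario rates to actual UK enrolment numbers.

```python
# Illustrative sketch only: applying the three scenario growth rates to a
# hypothetical baseline enrolment figure. The baseline (1,000) and the
# assumption of annual compounding over three years are placeholders,
# not figures from the report.
baseline_enrolments = 1_000          # hypothetical starting enrolment count
years = 3                            # e.g. from "now" to 2020 in the original text
scenarios = {"low": 0.15, "medium": 0.36, "high": 0.62}

for name, rate in scenarios.items():
    projected = baseline_enrolments * (1 + rate) ** years
    print(f"{name:>6}: {projected:,.0f} enrolments needed")
```

Even the low scenario implies roughly a 50% increase over three years once compounding is taken into account, which is the scale-of-demand point the comparison is making.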
Demand for talent already outstrips supply, and average remuneration for data scientists and machine learning experts has increased substantially.
As above, AI in the UK is already used in a broad range of organisations and sectors. However, to fully realise the potential of AI in the UK, adoption will need to extend into additional sectors and different categories of organisation, supported by a mixed ecosystem of AI provider companies, small, medium-sized and large. All of these different organisations will need access to similar sets of skills, whether by hiring directly or by contracting for services.
Can we trust AI in UK Government?
Civica recently sat down with central government leaders to discuss whether the public sector is prepared for the artificial intelligence revolution and the ethics behind the technology. Steve Thorn, Executive Director at Civica, shares his views from the event.
By 2035, AI is estimated to add £630 billion to the UK economy. In many ways, AI is already a key feature of our everyday lives, and its capabilities are expanding faster than ever. From spotting lung cancer before a doctor can identify it to better predicting traffic routes, as demonstrated by Highways England, AI is undoubtedly already improving UK citizens’ lives, but its adoption doesn’t come without challenges across all sectors. The UK government is no exception.
Co-founder of London AI lab DeepMind placed on leave
The co-founder of Google’s London-based artificial intelligence lab DeepMind has been placed on leave amid controversy over some of its work.
Mustafa Suleyman is taking a period of absence, it was reported on Wednesday night. DeepMind, which Google paid around £400m for in 2014, is seen as a leading force in AI research but has been criticised over its work with the NHS.
Mr Suleyman, who founded the company in 2010 alongside chief executive Demis Hassabis, has been one of DeepMind’s public faces since it was bought. He is the lab’s head of applied AI and led the development of the company’s healthcare arm until it was transferred to Google last year.
Earth AI Competitors, Revenue and Employees
EARTH AI, the mineral targeting start-up whose technology can predict the location of new ore bodies far more cheaply, quickly, and precisely than previous methods, announced on 16 August a funding round of up to AUS$2.5 million from Gagarin Capital, the VC firm specialising in AI, and Y Combinator. Previously, EARTH AI raised AUS$1.7 million in two seed rounds from AirTree Ventures, Blackbird Ventures and high-net-worth angel investors. The new round will help the company continue to pursue its mission of fundamentally improving the efficiency of mineral exploration with the help of cutting-edge technology.
More specifically, EARTH AI’s technology uses machine learning techniques on global data, including remote sensing, radiometry, geophysical and geochemical datasets, to learn the data signatures related to industrial metal deposits (from gold, copper, and lead to rare earth elements), train a neural network, and predict where high-value mineral prospects will be.
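The article does not detail Earth AI’s actual pipeline, but a minimal sketch of the general approach it describes, learning deposit signatures from co-registered geoscience layers and ranking map cells by predicted prospectivity, might look like the following; the synthetic data, feature layers and model choice are assumptions for illustration only.

```python
# Illustrative sketch only: a toy prospectivity model trained on
# co-registered geoscience layers (remote sensing, radiometrics,
# geophysics, geochemistry). Feature layers, synthetic data and model
# choice are assumptions, not Earth AI's actual pipeline.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Each row is one map cell; columns are values sampled from different
# geoscience layers at that cell (synthetic data stands in for real layers).
n_cells = 5000
features = rng.normal(size=(n_cells, 4))  # e.g. radiometrics, gravity, magnetics, geochemistry
labels = (features[:, 0] + 0.5 * features[:, 2]
          + rng.normal(scale=0.5, size=n_cells) > 1.5).astype(int)
# labels: 1 = cell contains a known mineral occurrence, 0 = barren

model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0),
)
model.fit(features, labels)

# Rank unexplored cells by predicted prospectivity, highest first.
new_cells = rng.normal(size=(10, 4))
prospectivity = model.predict_proba(new_cells)[:, 1]
print(np.argsort(prospectivity)[::-1])
```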
Artificial Intelligence for Energy Efficiency
As energy systems worldwide continue to decarbonise and decentralise, there is an increasing need to manage and predict the distributed constituents of the system, such as renewables, EVs and battery storage, the letter says.
AI will be “essential” to managing this system and the data that comes with it, the letter continues, adding that London is “the European capital for AI and acts as a base to 750+ AI companies – double the total of Paris and Berlin combined”.
How Artificial Intelligence Can Help Protect Children
A London start-up developing child safety technology that can stop children sending dangerous messages even before they finish typing has raised millions from investors and sealed a deal with the German government to get its app onto the phones of thousands of children.
SafeToNet, which develops software that can limit what children access on their smartphones and prompt them if they are in a dangerous situation, raised £7m in cash at a £50m valuation from angel investors and current investor West Hill Capital. This brings its total raised to around £20m.
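SafeToNet’s model is not described here, but the basic idea of scoring a message as it is typed and intervening before it is sent can be sketched as follows; the keyword list and threshold are placeholders standing in for whatever classifier the real product uses.

```python
# Illustrative sketch only: screening a message while it is being typed
# and intervening before it is sent. The keyword list and threshold are
# placeholders, not SafeToNet's actual model.
RISKY_TERMS = {"hate", "hurt", "kill"}  # hypothetical examples
THRESHOLD = 1

def risk_score(partial_text: str) -> int:
    """Count risky terms in the text typed so far."""
    words = partial_text.lower().split()
    return sum(1 for w in words if w in RISKY_TERMS)

def on_keystroke(partial_text: str) -> str:
    """Called after every keystroke; decide whether to warn before sending."""
    if risk_score(partial_text) >= THRESHOLD:
        return "warn"   # prompt the child before the message can be sent
    return "allow"

if __name__ == "__main__":
    print(on_keystroke("you are my best friend"))  # allow
    print(on_keystroke("i will hurt you"))         # warn
```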
Existential risk from artificial general intelligence
Artificial Intelligence (AI) systems are becoming smarter every day, beating world champions in games like Go, identifying tumours in medical scans better than human radiologists, and increasing the efficiency of electricity-hungry data centres. Some economists are comparing the transformative potential of AI with other “general purpose technologies” such as the steam engine, electricity or the transistor.
But current AI systems are far from perfect. They tend to reflect the biases of the data used to train them and to break down when they face unexpected situations. They can be gamed, as we have seen with the controversies surrounding misinformation on social media, violent content posted on YouTube, or the famous case of Tay, the Microsoft chatbot, which was manipulated into making racist and sexist statements within hours.
The Financial Sector Must Embrace Transparency in Artificial Intelligence to Ensure Fairness
The recent news that the FCA is partnering with the Alan Turing Institute to explore the explainability of AI in financial services is a welcome development. While financial institutions are increasingly using AI to improve efficiency and productivity, there is little transparency in how the underlying neural networks make decisions, putting organisations at risk of inaccurate and even fraudulent decisions. Even worse, it is far more difficult for financial institutions to audit bad decisions made by AI than to audit human decisions.
In exploring how to make AI more transparent and explainable, the FCA needs to address a number of issues to reduce the threat of unaccountable AI decision-making across the financial sector.