On September 29, 2024, California Governor Gavin Newsom vetoed a significant AI safety bill that would have required companies developing artificial intelligence to assess and report on the technology’s potential risks before deployment. The bill’s regulations were more far-reaching and stringent than those of any previous attempt at AI legislation. For one, the bill would have made companies legally liable for severe harms caused by their AI systems. It would also have required a “kill switch” capable of shutting down any system that was misused or went rogue. The bill’s subject, AI, including generative AI (technology that can create just about anything in response to a prompt), has spurred excitement as well as fears about its implications for the job market and its potential to surpass human ability. Newsom acknowledged the importance of regulating AI but cited concerns over both the bill’s feasibility and its potential unintended consequences for California’s economy, paying special attention to Silicon Valley’s tech industry and its standing in the global AI race. He expressed support for a balanced approach to AI governance that would protect public safety without stifling innovation, and he hinted that such a rapidly advancing field may need federal rather than state-level oversight.
The legislation, which had garnered widespread support from advocacy groups concerned about AI’s societal impacts, sought to make California a leader in AI safety standards. However, major tech companies and smaller startups alike voiced opposition, arguing that the bill lacked clarity and could stifle innovation. Powerful industry players, including the venture capital firm Andreessen Horowitz, OpenAI, Google, and Meta, lobbied against the proposed restrictions, arguing that they would impose burdensome requirements and hinder technological progress.
Proponents, on the other hand, viewed the bill as a proactive step toward ensuring responsible AI development. They criticized the veto as a setback, arguing that companies racing to build increasingly powerful technology face virtually no binding US regulation, especially as legislation in Congress to establish safeguards and AI oversight has stalled. The debate around the bill highlights the ongoing struggle to establish comprehensive AI regulation amid rapidly advancing capabilities and concerns over their potential risks.
Newsom’s veto reflects the challenges of regulating AI at the state level, especially in a tech-centric state like California. Because California is home to many of the world’s largest and most advanced AI companies, such as OpenAI (the creator of ChatGPT), any regulatory bill it enacts would carry significant national and even global implications for the industry. A few states, like Colorado and Utah, have already enacted laws addressing AI-related consumer protection concerns, such as bias in employment and health-care decisions. However, the small number of states that have passed their own legislation suggests that consistent AI regulation across the country may require increased federal involvement.
At Concord Academy, AI policy is evolving alongside the national discourse. Currently, the school recognizes the need to strike a balance between students’ intellectual integrity and the educational benefits of AI. The concern is that, as the technology improves, using AI to create or modify school assignments could undermine the collaborative work that is so essential to the academic program. To maintain a clear boundary between AI-generated work and original student work, AI software should not be used unless a teacher instructs otherwise, and even then, students must explicitly state and acknowledge their use of AI. For written work in particular, editing software is acceptable, but students are advised to minimize its influence. Amid increased national discussion of how AI permeates daily life, CA encourages departments to experiment with new technology and continue to adapt to its influence on student learning.