
Hey there, readers! Today we’re diving into the impact that VP Kamala Harris could have on U.S. AI regulation if she were to become president. With President Joe Biden endorsing her as the Democratic Party’s nominee, let’s explore what this could mean for the future of AI policy.

Harris has been vocal about the need for stronger AI regulation, emphasizing that the well-being of consumers, community safety, and democratic stability should take priority over profit. Experts in AI policy anticipate that a Harris administration would likely maintain the AI regulations established under Biden, which focus on increased government oversight, safety measures, and mandatory testing and disclosures for large AI systems.

Lee Tiedrich, an AI consultant, believes that Harris could continue Biden’s work by emphasizing multilateralism and government oversight in AI policy. Sarah Kreps, a professor specializing in AI, suggests that while Harris may not roll back existing safety protocols, she could potentially take a less top-down regulatory approach to address industry concerns.

Krystal Kauffman, a research fellow, hopes that Harris will involve data workers in policy discussions to ensure a more inclusive approach to AI regulation. By considering the voices of those who label and curate the data behind AI systems, Harris could craft policies that better address the challenges these workers face, such as low pay, poor working conditions, and mental health strain.

Switching gears to recent developments in the tech world, Meta released its largest text-generating model, Llama 3.1 405B, which will be integrated into various Meta platforms. Adobe also introduced new Firefly tools for graphic designers, offering more AI-powered features in Photoshop and Illustrator.

In the education sector, an English school received a reprimand for using facial recognition technology without proper consent from students. Cohere, a generative AI startup, raised $500 million from investors like Cisco and AMD, focusing on customized AI models for enterprises. Additionally, TechCrunch interviewed Lakshmi Raman, the director of AI at the CIA, to discuss the agency’s use of AI and responsible deployment practices.

On the research front, a new architecture called test-time training (TTT) models has emerged, with its creators claiming it can outperform traditional transformers in efficiency and scalability. This approach could pave the way for more powerful generative AI applications in the future.
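To give a flavor of the core idea, here is a deliberately tiny sketch. The premise behind TTT models is that the layer's hidden state is itself a small model that keeps learning, via gradient steps on a self-supervised loss, while it processes the input at inference time. The code below is a simplified illustration of that concept, not the paper's actual architecture; the function name `ttt_layer`, the reconstruction loss, and the single linear inner model are all assumptions made for brevity.

```python
import numpy as np

def ttt_layer(tokens, dim, lr=0.1):
    """Toy test-time training layer: the 'hidden state' is a small
    linear model W that takes one gradient step per input token.
    This is an illustrative sketch, not the published TTT design."""
    W = np.zeros((dim, dim))          # inner model acting as learnable memory
    outputs = []
    for x in tokens:                  # process the sequence token by token
        pred = W @ x
        # self-supervised inner loss: 0.5 * ||W x - x||^2 (reconstruction)
        grad = np.outer(pred - x, x)  # gradient of that loss w.r.t. W
        W = W - lr * grad             # one test-time gradient step
        outputs.append(W @ x)         # output comes from the updated model
    return np.array(outputs)

# Usage: a short random "sequence" of 4-dimensional tokens
rng = np.random.default_rng(0)
seq = rng.normal(size=(8, 4))
out = ttt_layer(list(seq), dim=4)
print(out.shape)  # (8, 4)
```

The appeal, as its proponents describe it, is that this kind of per-token update is linear in sequence length, whereas a transformer's attention cost grows quadratically.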

As the AI industry continues to evolve, we’re also seeing controversies arise, such as the restrictive licensing terms Stability AI attached to its openly released image model, Stable Diffusion 3. The company faced backlash for imposing limits on commercial use and on fine-tuning models with Stable Diffusion 3 images. In response to the criticism, Stability AI adjusted the licensing terms to allow more liberal commercial use.

Overall, the landscape of AI policy, technology, and ethical considerations is constantly shifting. With leaders like Kamala Harris potentially shaping the future of AI regulation, it’s essential to stay informed on the latest developments and debates in the field. Let’s continue to monitor these trends and engage in discussions that promote responsible AI practices for the benefit of society as a whole.