AI Safety Accord and Risks Posed by AI

AI Cybersecurity, Regulations and Privacy Week in Review

Hello and welcome to today’s newsletter! 

The latest news covers:

  • More companies commit to the White House AI safety accord

  • More writers sue OpenAI over copyright infringement

  • Why IT leaders should deploy a Generative AI infrastructure now

  • Advocates urge Schumer to tackle AI’s climate impact

  • How Mercedes-Benz leverages the power of ChatGPT

  • Generative AI on Google Cloud: New training content, from introductory to advanced

Please subscribe to The AI Collective Word today to receive your free newsletter directly in your inbox and join our ever-growing network.

Share it with a friend or colleague if you find it helpful.

RISK AND SECURITY MANAGEMENT

Robots have been killing humans for a long time, whether by accident or by design. Examples include a Ford worker crushed by a robot in 1979, a Kawasaki employee pushed into a machine in 1981, and a patient who died during robotic surgery in 2018. As artificial intelligence becomes more advanced, so does the risk of harm from robots.

The Biden-Harris Administration is working with AI companies to promote responsible AI development. It has secured voluntary commitments from 15 companies to follow safety, security, and trust principles, and it is also preparing an Executive Order and pursuing legislation to regulate AI and protect Americans' rights.

AI regulation is a hot topic, with many competing opinions and proposals. Some experts want to limit or pause the development of advanced models, while others see AI as a great benefit. The challenge is to balance innovation against potential harm while keeping pace with fast-moving technology, which calls for a more flexible, iterative approach to decision-making.

REGULATIONS

Five U.S. writers have sued OpenAI for using their works to train ChatGPT, claiming the company copied their writing without permission and infringed their copyrights. They are seeking monetary damages and an injunction against OpenAI.

Top tech leaders met with US lawmakers to discuss the regulation of artificial intelligence. Tesla CEO Elon Musk said there was "overwhelming consensus" for a "referee" for AI, while OpenAI CEO Sam Altman warned about the potential pitfalls of the technology. AI can create human-like answers, but it can also be inaccurate, cause job losses, enable fraud, and misuse data.

While the federal government continues to move slowly on tech legislation, states have been moving faster to put guardrails on what AI companies can and cannot do. Although no state has yet passed AI rules, 25 states, along with Puerto Rico and the District of Columbia, had introduced AI bills in 2023 as of July, according to the National Conference of State Legislatures.

California, home to many AI companies and tech investors, has passed sweeping laws targeting the tech sector. Its legislature signed off on online safety rules for kids and passed an extensive electronics right-to-repair law.

PLATFORM ENGINEERING

Building an AI model involves several steps, from understanding the business problem and defining success criteria, through data discovery and cleaning, to model training, building, and iteration. Each step requires careful planning and execution to ensure the model meets the project's objectives and benefits the organization. The choice of algorithm, and whether a simpler heuristic could solve the problem instead, are also important factors to consider.
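
To make those steps concrete, here is a minimal, hypothetical sketch in Python with scikit-learn; the dataset ("churn.csv"), features, and success threshold are illustrative assumptions, not details from the article.

    # Hypothetical example of the build-measure-iterate loop described above.
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import f1_score
    from sklearn.model_selection import train_test_split

    TARGET_F1 = 0.80  # 1. Success criterion agreed with the business up front.

    # 2. Data discovery and cleaning (assumed numeric features and a "churned" label).
    df = pd.read_csv("churn.csv").dropna()
    X, y = df.drop(columns=["churned"]), df["churned"]
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

    # 3. Train, evaluate, iterate until the success criterion is met.
    model = RandomForestClassifier(n_estimators=200, random_state=42)
    model.fit(X_train, y_train)
    score = f1_score(y_test, model.predict(X_test))
    print(f"F1 = {score:.2f} (target {TARGET_F1}); " + ("done" if score >= TARGET_F1 else "iterate on features or model"))

If the threshold is not met, the iteration step might mean revisiting feature engineering, trying a different algorithm, or falling back to a simple heuristic.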

To succeed with generative AI, organizations need a strong culture of innovation and an operating model that enable them to capture its value. According to a McKinsey survey, top innovators are taking five actions to leverage generative AI and gain a competitive edge.

Generative AI is a hot topic for many enterprises, but it also poses challenges to data privacy and security. Nutanix offers GPT-in-a-Box, a scalable and flexible product that combines cloud infrastructure, services, and large language models to help organizations deploy generative AI projects quickly and safely.

SUSTAINABILITY

MIT researchers have built a system that uses both light and electrons for faster, greener computing. A new abstraction connects photonic hardware to the electronic parts of a computer, yielding the first photonic computing prototype for real-time machine-learning inference. Photonic computing uses light to perform analog computations and can boost standard computers through photonic-electronic SmartNICs; the researchers demonstrated its power for deep neural networks with "Lightning", a photonic-electronic SmartNIC.

ChatGPT is a large language model (LLM) that can generate realistic dialogue. However, LLMs have a high environmental cost due to their energy and water consumption. This article explains the sources of this problem and some possible solutions.

A letter to Senate Majority Leader Chuck Schumer, signed by many groups concerned about the environment, tech, and hate speech, argues that AI could worsen climate change in two ways: by using too much energy and by spreading disinformation. It asks Schumer to require companies to disclose the environmental impact of their AI models and to curb AI-enabled climate denialism.

USE CASES

Generative AI is a powerful technology that can create new content from data, but not everyone can use it effectively. Some industries, such as healthcare and marketing, have found ways to apply it to their problems, such as validating diagnoses and targeting leads. However, other sectors are still struggling to find the best use cases for generative AI tools.

In this podcast episode, Philipp Skogstad, CEO of Mercedes-Benz R&D North America, talks about how Mercedes-Benz used generative AI to create a voice assistant for its vehicles. He explains the vision, impact, challenges, and future of generative AI in the automotive sector. He also discusses the data privacy and security aspects of using gen AI, and how Mercedes-Benz tested the voice assistant with 900,000 beta users.

RESOURCES

The Deep Learning Lecture Series is a joint project of DeepMind and the UCL Centre for Artificial Intelligence. It covers various topics in Deep Learning, a powerful artificial intelligence technique that can learn from raw data and solve many problems. The lectures are given by DeepMind researchers and aim to share the latest advances and responsible innovations in this field. The series consists of 12 videos, each 1 to 1.5 hours long.

Google Cloud offers various training options to help you learn about generative AI and how it can benefit your business. You can choose from two learning paths or individual courses and labs, depending on your level and interest.

The first learning path offers an introduction to Generative AI, LLMs, and Responsible AI. It is designed for those with no technical experience. Time to complete: 2 hours.

The second learning path is more comprehensive and offers a deeper dive into Generative AI. It is aimed primarily at developers. Courses are still being added, including a mix of labs and badges.

This is a free, open-source course on Deep Reinforcement Learning, a fascinating topic in Artificial Intelligence. You will learn the theory and practice of Deep RL, use well-known libraries, train agents in various environments, share them on the Hub, and take part in challenges. You will also earn a certificate of completion and get to know the Hugging Face team and community. The course is self-paced.
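
To give a flavor of the hands-on side, here is a minimal sketch of training an agent, assuming Gymnasium and Stable-Baselines3 (v2 or later), two of the well-known libraries this kind of course builds on; the CartPole environment and the hyperparameters are illustrative choices, not the course's own exercises.

    # Illustrative sketch: train and roll out a PPO agent on CartPole.
    import gymnasium as gym
    from stable_baselines3 import PPO

    env = gym.make("CartPole-v1")              # classic control environment
    model = PPO("MlpPolicy", env, verbose=0)   # PPO with a small MLP policy
    model.learn(total_timesteps=10_000)        # short training run for illustration

    # Quick evaluation rollout with the trained policy.
    obs, _ = env.reset()
    for _ in range(200):
        action, _ = model.predict(obs, deterministic=True)
        obs, reward, terminated, truncated, _ = env.step(int(action))
        if terminated or truncated:
            obs, _ = env.reset()
    env.close()

The course then goes further, showing how to share trained agents like this one on the Hugging Face Hub.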

Thank you for reading! Please send me feedback and share the newsletter with others.

The AI Collective Word is sent every Tuesday and Thursday. Think your friend or colleague should know about us? Forward this newsletter to them. They can also sign up here.