AI Insights: Securing Research, AIOps Advancements, and Green AI Initiatives

AI Cybersecurity, Regulations and Privacy Week in Review


Dear reader, here’s your weekly recap of the latest news on AI security, regulations, and privacy.

In this issue, we cover:

  • How the NVIDIA AI Red Team helps secure ML research

  • Latest AI regulation updates from international governing boards

  • Meta has new GenAI features that respect users’ privacy

  • IBM AIOps uses GenAI and language models to fix IT issues faster

  • How MIT LLSC is tackling the carbon footprint of AI models

Please subscribe to The AI Collective Word today to receive your free newsletter directly in your inbox and join our ever-growing network.

Share it with a friend or colleague if you find it helpful.

RISK AND SECURITY MANAGEMENT

The NVIDIA AI Red Team is a group of security experts who work on improving the security of data science and AI. They contribute to open-source projects, create tools, give talks, organize events, and train others.

They analyzed the Meta Kaggle for Code dataset, a large corpus of ML code from Kaggle competitions, and found that many researchers follow insecure coding practices.
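One widely known insecure pattern in shared ML code is deserializing untrusted files with pickle, which can execute arbitrary code on load. This sketch is illustrative only (it does not reproduce the NVIDIA team's specific findings) and contrasts the risky habit with a code-free format:

```python
import pickle
import json

# Insecure habit: pickle.loads on untrusted bytes can run arbitrary code.
class Exploit:
    def __reduce__(self):
        # Any callable can be smuggled into a pickle; this one is harmless
        # on purpose, but it could just as easily be os.system.
        return (print, ("arbitrary code ran during unpickling",))

payload = pickle.dumps(Exploit())
pickle.loads(payload)  # side effect: the print() above executes

# Safer for plain configs and metadata: a format with no code execution.
config = {"lr": 0.01, "epochs": 10}
blob = json.dumps(config)
assert json.loads(blob) == config
```

The same reasoning is why model-weight formats that store only tensors, rather than pickled Python objects, are generally preferred for sharing artifacts.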

They also noticed that adversarial training and testing are not widely used in ML research, which could make the models vulnerable to attacks. They suggest that researchers should pay more attention to security hygiene and adversarial robustness in their ML projects.
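Adversarial testing can be illustrated with the classic fast gradient sign method (FGSM): nudge each input feature by a small epsilon in the direction that increases the model's loss. The toy linear model and numbers below are hypothetical, chosen only to show the mechanic:

```python
import math

# Toy linear "model": score = w . x, with logistic loss on label y in {-1, +1}.
w = [2.0, -1.0, 0.5]
x = [1.0, 1.0, 1.0]
y = 1

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def score(v):
    return sum(wi * vi for wi, vi in zip(w, v))

def loss(v):
    return -math.log(sigmoid(y * score(v)))

def grad_x(v):
    # d/dx of -log(sigmoid(y * w.x)) = -(1 - sigmoid(y * w.x)) * y * w
    g = -(1.0 - sigmoid(y * score(v))) * y
    return [g * wi for wi in w]

# FGSM: step each feature by epsilon in the sign of its loss gradient.
eps = 0.5
g = grad_x(x)
x_adv = [xi + eps * (1.0 if gi > 0 else -1.0) for xi, gi in zip(x, g)]

assert loss(x_adv) > loss(x)  # the crafted perturbation increases the loss
```

Adversarial training then folds such perturbed inputs back into the training set, so the model learns to stay correct in a small neighborhood around each example.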

Space Force stops using generative AI tools over data risks. A memo says the web-based AI tools pose data aggregation risks and need approval from the Chief Technology and Innovation Office. The memo also says generative AI will revolutionize the workforce and the mission, and a task force is working on how to use it responsibly.

AI Risks - Schneier

AI is a powerful technology that will shape our future, but there are many different views and interests on how to deal with its risks and benefits. Some people focus on long-term scenarios, some on current problems, and some on business or security goals. These factions often clash or ignore each other, making it hard to have a clear and unified vision for AI. We need to pay attention to the hidden agendas and assumptions behind their arguments and think carefully about the implications of our choices.

REGULATIONS

Latest updates on AI regulations from various international governing boards.

Google announced a new policy to protect customers who use its generative AI products, such as Duet AI and Vertex AI, from legal risks. The policy covers both the training data and the output of the underlying models. Google said it would take responsibility for any copyright infringement claims unless the customer intentionally tried to violate the rights of others.

PRIVACY

Meta has new generative AI features built with privacy and transparency in mind. The features use a large language model, Llama 2, trained on public, licensed, and Meta data, but not on your private posts or messages. You can learn more about your privacy and Meta’s generative AI features on their website.

PLATFORM ENGINEERING

AI can prevent IT failures that cost millions of dollars. IBM AIOps Insights uses large language models and generative AI to find and fix problems quickly. It can alert IT experts, summarize incidents, identify causes, and recommend solutions.

SUSTAINABILITY

Google shows carbon emissions for flights to help customers choose wisely. The computing industry, however, lacks such transparency, even though it emits more carbon than airlines. AI models are especially energy-intensive, and the MIT Lincoln Laboratory Supercomputing Center (LLSC) is finding ways to reduce their impact without compromising performance. The LLSC hopes to inspire more green-computing research and awareness in the field.

USE CASES

ChatGPT is a text generator that can chat and answer questions, but it has some drawbacks. It may produce wrong or biased content because it learned from unverified and outdated data. It can also make up facts that don't exist. Businesses need to be careful when using ChatGPT for sensitive or critical tasks. OpenAI offers a ChatGPT Enterprise plan that gives more control, privacy, and features than the free or Plus versions.

RESOURCES

The Elements of AI is a course that teaches the basics of AI without math or coding. It covers what AI is, how it works, and how it affects our lives. It also helps you think critically about AI news and claims. You can earn a LinkedIn certificate or credits for completing the course, which is self-paced, runs about six weeks, and takes 5–10 hours a week.