Deepfake Threats, AI Principles, and AI as a Tool for Opportunity
AI Cybersecurity, Regulations and Privacy Week in Review


Hello and welcome to today’s newsletter!
The latest news covers:
Microsoft researchers accidentally expose confidential data
Cybersecurity information sheet on deepfakes
Global cooperation to ensure AI is a tool for opportunity
AI-Supported Automated Code Remediation at Scale
How to monitor and observe LLMs
Walmart taps employees for Gen AI use cases
Please subscribe to The AI Collective Word today to receive your free newsletter directly in your inbox and join our ever-growing network.
Share it with a friend or colleague if you find it helpful.
RISK AND SECURITY MANAGEMENT

Microsoft AI researchers exposed 38 terabytes of private data, including secrets, keys, passwords, and internal Teams messages, via the company's AI GitHub repository. The leak was caused by an overly permissive Azure Storage shared access signature (SAS) URL that granted full access to the entire storage account. Microsoft said it has fixed the issue and found no evidence of unauthorized exposure of customer data.
The NSA, FBI, and CISA have published a joint Cybersecurity Information Sheet (CSI) on synthetic media threats, such as deepfakes, that can spread false information and cause harm. Synthetic media has proliferated rapidly and poses a challenge to national security and critical infrastructure. The CSI provides best practices for preparing for, identifying, defending against, and responding to deepfake threats.
The UK's Online Safety Bill, which imposes stricter rules on social media platforms to protect children from harmful content, was passed by Parliament. Meta, the owner of Facebook and Instagram, plans to roll out end-to-end encryption for its messaging services, which the government says could help abusers evade detection. Meta argues that encryption enhances online security and privacy and that it has other ways to prevent and combat abuse.
U.S. President Joe Biden addressed the U.N. General Assembly on Tuesday, calling for global cooperation on regulating AI. He said AI has both great potential and peril and that it should be used as a tool of opportunity, not oppression. He also said the U.S. is working to ensure AI technologies are safe and governed by rules and policies. His speech coincided with a meeting of U.S. senators and tech CEOs to discuss AI legislation.
McAfee Scam Protection is a new tool that uses AI to detect and block phishing scams in messages. It alerts users of dangerous URLs before they open them and works with emails, texts, social media, and web browsers. It is free for existing McAfee customers and helps them stay safe from cyberattacks.
Generative AI is transforming DevSecOps, but also raising security concerns. New products use natural language processing to automate SBOM analysis and vulnerability management. DevSecOps teams are using generative AI to write and test code but also feel pressured to adopt it despite the risks.
REGULATIONS

The UK's CMA has proposed seven principles to regulate AI models that could be monopolized by Big Tech and harm consumers and businesses. The principles include accountability, transparency, access, diversity, and flexibility. The CMA will consult with AI developers and other stakeholders before applying the principles to its new digital market powers.
Google Cloud Next conference highlighted how generative AI can help with cybersecurity. The company announced new features and capabilities based on its AI-first approach. Google aims to simplify complex AI models with a chat interface that can create new content.
The Authors Guild sued OpenAI for using their books to train ChatGPT, a chatbot that can generate text. The lawsuit is part of a wave of legal actions against AI companies that scrape data from the internet. OpenAI said it respects authors' rights and is talking to them, while the Authors Guild said authors need to control how their works are used by AI.
The EU is trying to agree on its AI rules by the end of 2023. The main sticking points are biometric surveillance and copyright. The EU lawmaker leading the negotiations has urged more flexibility from the member states.
PRIVACY
OpenAI, a company backed by Microsoft, is under investigation by a Polish watchdog for allegedly violating EU data protection laws with its ChatGPT chatbot. The chatbot is also facing lawsuits in the US for privacy breaches. The company has not commented on the allegations. OpenAI's ChatGPT is a popular consumer app that uses generative AI to create conversations.
PLATFORM ENGINEERING
AI-Supported Automated Code Remediation at Scale Is Catalyst for Mila, Moderne Partnership - Business Wire
Mila and Moderne are teaming up to apply AI to code security and maintenance. The partnership combines Mila's deep learning research with Moderne's automated refactoring systems to help developers update and fix codebases faster and more easily. Moderne has hired a research engineer from Mila to explore AI models for code analysis and transformation.
AI startups face a challenge: their models are becoming commodities. To stand out, they need to focus on the quality and diversity of their datasets. However, many of them lack a proper tech stack to generate and process the data they need. VCs should look beyond the models and evaluate the company's data infrastructure. This is crucial for the success and sustainability of AI ventures, especially in biotechnology.
Google Bard is an AI assistant that can integrate with Google apps and double-check its answers with web content. It supports over 40 languages and has a new feature to avoid confabulation or hallucinations. However, the double-check button is not always reliable, and Bard Extensions may raise privacy concerns.
Microsoft Copilot is a new AI companion that helps you with any task. It uses the web, your work data, and your PC context to provide better assistance. It will be available in Windows 11, Microsoft 365, Edge, and Bing.
Every Programmer Should Know #1: Idempotency - Berkan Sasmaz
Idempotency is a key concept for programmers, especially for distributed systems. It means that an operation can be repeated without changing the outcome. This helps avoid problems like double-charging customers due to network issues. Idempotency is one of the methods to make systems more reliable and robust.
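The double-charging scenario above can be sketched in a few lines. This is a minimal illustration, not a production payment API: the `charge_customer` function, the in-memory store, and the key names are all hypothetical; real systems would persist idempotency keys in a durable store such as a database.

```python
import uuid

# In-memory record of completed operations, keyed by idempotency key.
# A real service would use durable storage so retries survive restarts.
_processed: dict[str, dict] = {}

def charge_customer(idempotency_key: str, customer_id: str, amount_cents: int) -> dict:
    """Charge a customer at most once per idempotency key.

    Retrying with the same key returns the cached result instead of
    charging again, so a network timeout followed by a client retry
    cannot double-charge the customer.
    """
    if idempotency_key in _processed:
        return _processed[idempotency_key]  # replay: no second charge
    result = {
        "charge_id": str(uuid.uuid4()),
        "customer_id": customer_id,
        "amount_cents": amount_cents,
        "status": "charged",
    }
    _processed[idempotency_key] = result
    return result

first = charge_customer("order-42", "cust-1", 1999)
retry = charge_customer("order-42", "cust-1", 1999)  # simulated retry after timeout
assert retry == first  # the same charge is returned, not a new one
```

The client generates the key once per logical operation (for example, per checkout), so however many times the request is repeated, the observable outcome is the same.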
The Amazing Ways Snowflake Uses Generative AI For Synthetic Data And Natural Language Queries - Bernard Marr
Generative AI can create synthetic data that mimics real data without using any real information. This can solve many problems related to data collection, security, and bias. Snowflake is a company that uses generative AI to produce and offer synthetic datasets for various domains.
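As a toy illustration of the idea (not Snowflake's actual method, which uses generative models rather than simple statistics), here is a minimal sketch of producing synthetic values that track the distribution of real data without copying any real record:

```python
import random
import statistics

def synthesize(real_values, n, seed=0):
    """Generate n synthetic values matching the mean and standard
    deviation of the real data, without reusing any real record."""
    rng = random.Random(seed)
    mu = statistics.mean(real_values)
    sigma = statistics.stdev(real_values)
    return [rng.gauss(mu, sigma) for _ in range(n)]

real = [52.1, 48.9, 50.3, 49.7, 51.0]  # e.g. sensitive measurements
fake = synthesize(real, 1000)
# The synthetic sample tracks the real distribution's statistics.
assert abs(statistics.mean(fake) - statistics.mean(real)) < 1.0
```

Generative approaches extend this principle to high-dimensional, structured data, where matching simple summary statistics is not enough.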
MLOPS AND NLP
LLM Monitoring and Observability — A Summary of Techniques and Approaches for Responsible AI - Towards Data Science
This article is about how to monitor and observe large language models (LLMs) in production. The author reviews different software tools and methods for LLM evaluation, tracking, and monitoring. The author also plans to explore academic research on LLM monitoring in the future.
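As a minimal sketch of the kinds of signals such tooling records per call (the `fake_model` stand-in, the field names, and the crude refusal check are illustrative assumptions, not the article's API):

```python
import time

def monitored_call(model_fn, prompt, log):
    """Wrap an LLM call and record basic observability signals.

    model_fn is any callable prompt -> response. Here we log latency,
    prompt/response sizes, and a crude refusal flag; real tooling would
    also track token counts, cost, and evaluation scores.
    """
    start = time.perf_counter()
    response = model_fn(prompt)
    log.append({
        "latency_s": time.perf_counter() - start,
        "prompt_chars": len(prompt),
        "response_chars": len(response),
        "refused": "i can't" in response.lower(),
    })
    return response

# Stand-in model for demonstration; a real deployment would call an LLM API.
def fake_model(prompt):
    return "Echo: " + prompt

log = []
monitored_call(fake_model, "Summarize idempotency.", log)
assert log[0]["prompt_chars"] == len("Summarize idempotency.")
```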
SUSTAINABILITY

ChatGPT is an AI product that can generate human-like text. It was trained on a huge corpus of text using a supercomputer in Iowa. The supercomputer required large amounts of water and electricity to run, increasing Microsoft's environmental footprint.
USE CASES

AI Here & Now - Gartner
AI technology is a strategic priority for many enterprises, but it also poses significant challenges and risks. How can leaders invest wisely in generative AI (GenAI) and prepare their talent and policies for the future? This GBQ edition provides insights and guidance on these critical issues.
Walmart is using its 50,000 U.S. campus employees to find new ways to use its generative AI tool, My Assistant. The tool can help them with tasks like writing and summarizing documents. Walmart wants to learn from the employees' experiences and feedback to scale up the best uses of the tool. The company hopes that GenAI will transform its business and customer service.
AI is a powerful tool for industrial processing plants. It can increase production, profitability, and competitive advantage by finding hidden patterns and insights from data. However, AI requires enablers such as network design, control systems, and historical data capture, as well as agile methodology and people development.
Evaluate model options for enterprise AI use cases - TechTarget
The article compares four common types of AI: machine learning, deep learning, simple AI, and generative AI. Each has its own strengths and limitations, and choosing the right one depends on the use case and the data available.
RESOURCES
This course teaches how to make deep learning models more efficient for resource-limited devices. It covers techniques such as model compression, pruning, quantization, neural architecture search, distributed training, data/model parallelism, gradient compression, and on-device fine-tuning. It also covers application-specific acceleration techniques and quantum machine learning.
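As a toy illustration of one of those techniques, here is a minimal sketch of symmetric linear (int8-style) quantization in plain Python. Real frameworks quantize tensors per-channel with calibration data, but the core scale-and-round idea is the same:

```python
def quantize(weights, num_bits=8):
    """Symmetric linear quantization of floats to signed integers."""
    qmax = 2 ** (num_bits - 1) - 1              # e.g. 127 for int8
    scale = max(abs(w) for w in weights) / qmax or 1.0
    return [round(w / scale) for w in weights], scale

def dequantize(q, scale):
    """Recover approximate float weights from quantized values."""
    return [v * scale for v in q]

weights = [0.42, -1.27, 0.05, 0.9]
q, scale = quantize(weights)                    # q holds small ints, 4x smaller than float32
restored = dequantize(q, scale)
# Each restored weight is within half a quantization step of the original.
assert all(abs(a - b) <= scale / 2 + 1e-9 for a, b in zip(weights, restored))
```

Storing 8-bit integers plus one scale factor instead of 32-bit floats cuts model size roughly 4x, at the cost of the bounded rounding error shown above.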
OpenAI & Kafka for IoT Streaming Data Analytics - SingleStore
Join us for a webinar on IoT streaming data analytics with OpenAI and Kafka. You will learn how to use the latest tools and technology for real-time streaming analytics and Generative AI LLMs. You will also see a live demo and code sharing of an IoT analytics application with OpenAI and Kafka. Don't miss this opportunity to get practical knowledge and valuable code snippets for your own IoT analytics projects. Duration: 60 minutes
Thank you for reading! Please send me feedback and share the newsletter with others.
The AI Collective Word is sent every Tuesday and Thursday. Do you think your friend or colleague should know about us? Forward this newsletter to them. They can also sign up here.