ChatGPT: The New Online Threat and How to Stop It

AI Cybersecurity, Regulations and Privacy Week in Review

Hello and welcome to today’s newsletter! 

The latest news covers:

  • ChatGPT: The New Online Threat and How to Stop It

  • UK Leads the Way in AI Safety Summit

  • Microsoft Offers Legal Shield for AI Copilot Users

  • Responsible AI: A Matter of Regulation and Ethics

I hope you enjoy this week’s newsletter!

Please subscribe to The AI Collective Word today to receive your free newsletter directly in your inbox and join our ever-growing network.

Share it with a friend or colleague if you find it helpful.

RISK AND SECURITY MANAGEMENT

ChatGPT and other LLMs pose a new threat to online businesses and their customers: they can scrape content, siphon web traffic, and expose sensitive data. Industries that depend on data privacy, unique content, and web-driven revenue are especially vulnerable. Businesses should take steps to keep their data from being harvested for LLM training.

REGULATIONS

UK government sets out AI Safety Summit ambitions - Gov UK

The UK government has announced its goals for the AI Safety Summit, a two-day event at Bletchley Park in November. The summit will gather experts from different sectors and countries to discuss how to prevent and mitigate the risks of powerful AI systems and how to use them for good. The UK aims to work with global partners to ensure safe and beneficial AI for everyone.

Microsoft will protect its AI Copilot customers from copyright claims, provided they follow the product's rules. The company announced the commitment in a blog post, explaining its reasoning and its respect for copyright holders, and said Copilot includes safeguards to prevent infringement.

There are growing concerns about how AI will affect society and jobs, and Law 144 aims to regulate AI in hiring and protect workers. Bennett, an AI expert who advocates human oversight of AI systems and explores how AI relates to arts and culture, argues that the US government should follow the EU's example and enact laws governing the use of AI, especially to protect children.

Congress is working on AI legislation with the SAFE Innovation Framework, a plan that aims to balance national security, accountability, democratic values, transparency, and innovation in AI. The framework will be informed by AI Insight Forums, which will gather experts from different sectors. Schumer, the main proponent of the plan, said innovation is the "north star" of the framework. Some AI policy experts praised Schumer's approach as "measured and mindful" of the benefits of AI.

PRIVACY

X has updated its privacy policy to collect more user data, including biometric and career information, Bloomberg reported. The company also plans to use this data and other public data to train its machine learning and AI models, according to the policy. Some speculate that X owner Elon Musk wants to use X as a data source for his AI venture, xAI, which he said would use public tweets for training.

Meta is developing generative AI tools and now lets users request deletion of some of the third-party data it uses for AI training. However, this does not cover data collected on Meta's own platforms, and the company may not honor every request.

PLATFORM ENGINEERING

Stack Overflow is launching OverflowAI, a generative AI initiative that will integrate its vast community knowledge into various products and platforms. OverflowAI will help users of Stack Overflow for Teams to build and access knowledge repositories, enhance search capabilities across multiple sources, and provide solutions within IDEs and Slack. OverflowAI aims to improve developer productivity and learning.

Generative AI and large language models (LLMs) can give enterprises a competitive edge in many fields, but they need reliable, complete, and consistent data to avoid inaccurate or misleading results. Bruno Aziza, an Alphabet executive, discussed how to prepare data and analytics infrastructure for LLMs at VB Transform. He suggested creating a "data ocean": an open repository that can manage and process any type of data with high performance and reliability.

Many founders and CEOs use AI to cut costs but limit it to narrow tasks. AI can do more, such as creating and testing new products. The author used ChatGPT to build an "AI team," saving time and money, and distilled five steps for using AI effectively, starting with defining clear goals and tasks.

ML AND NLP

Regression is a machine learning technique that predicts outcomes based on the relationships between variables. It uses algorithms to find patterns in the data and fit them to a line or a curve. Linear regression and logistic regression are two common types of regression. Regression is useful for answering quantitative questions and estimating values within or beyond the data points.
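The idea above can be sketched in a few lines of Python. This is a minimal, illustrative example of linear regression fit by ordinary least squares; the data points are made up for demonstration and are not from any article in this issue.

```python
# Minimal sketch of linear regression: fit a line to toy data by
# ordinary least squares, using only the Python standard library.

def fit_line(xs, ys):
    """Return (slope, intercept) of the least-squares line through the points."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Covariance of x and y, and variance of x (both unnormalized)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sxx = sum((x - mean_x) ** 2 for x in xs)
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Toy data: y is roughly 2*x + 1 with a little noise
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.1, 2.9, 5.2, 6.8, 9.1]

slope, intercept = fit_line(xs, ys)

# Estimate a value beyond the observed data points (extrapolation)
prediction = slope * 10 + intercept
```

The fitted line recovers a slope near 2 and an intercept near 1, matching the pattern the noisy data was generated from, and the same line can then estimate values inside or beyond the observed range, as the paragraph above describes.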

Microsoft wants to teach AI how to use any digital tool with a huge collection of APIs. This would make AI more powerful than ever, as it could do things like order pizza for you based on your preferences, health, and location. It could also use tools to enhance its own abilities and learn new skills.

ETHICS

Generative AI is a powerful technology that raises ethical concerns about bias, transparency, explainability, and trust. To address these issues, ethical AI frameworks should consider three elements: strategy, process, and technical controls; cultural norms and governance; and AI usage principles. These elements can help enterprises develop and use AI responsibly and holistically, especially for generative AI that democratizes AI access.

AI can benefit many industries, but it needs to be used responsibly. Responsible AI principles help organizations comply with laws, govern their AI use, and consider the impacts on people and society. These principles vary by context and domain, but they provide a common framework and language for different functions and departments. Without them, organizations may face ethical, legal, or security risks from AI.

SUSTAINABILITY

Data centers are essential for our online world, but they consume a lot of energy and resources. AI can make them more efficient but also increase their demand and emissions. The industry needs to balance the benefits and costs of AI for the climate.

AI is a powerful and popular technology, but it also has a high environmental cost. AI affects the climate both positively and negatively, depending on how it is used and developed. To build more sustainable AI, the tech sector needs to consider many factors, such as model size, hardware efficiency, and application impact.

Thank you for reading! Please send me feedback and share the newsletter with others.

The AI Collective Word is sent every Tuesday and Thursday. Think your friend or colleague should know about us? Forward this newsletter to them. They can also sign up here.