Strategic partnerships, Industry Trends and Latest Updates in AI Regulations

What's Happening This Week in AI

Happy Friday! In this edition, we bring you a diverse range of insights and updates from the world of artificial intelligence, machine learning, natural language processing, regulations, research, and more.

We hope you find this newsletter informative and insightful as you navigate the dynamic world of AI and its applications across industries.

Please subscribe to The AI Collective Word today to receive your free newsletter directly in your inbox and join our ever-growing network.

I hope you enjoy this week’s newsletter!

Share it with a friend or colleague if you find it helpful.

TRENDING NEWS

Partnerships are key to positioning organizations to excel in the AI space and to shaping their IT strategy. Here are recent key partnerships:

  • Meta, Microsoft, Intel, and Atos team up to form an Ethernet consortium to build AI-suited networks. - Reuters

  • Meta opens AI model to commercial use. - Reuters

  • OpenAI forms a partnership with the American Journalism Project to support local news. - OpenAI

  • McKinsey partners with startup Cohere to help clients adopt generative AI. - Reuters

  • Microsoft is offering Bing Chat Enterprise and Microsoft 365 Copilot in its partner program to accelerate AI transformation. - Microsoft Inspire, Microsoft

  • Qualcomm is working with Meta to enable on-device AI applications using Llama 2. - Qualcomm

  • Google is in talks with news organizations to explore AI tools for journalists. - Reuters

The global race for AI leadership is well underway. Here are the latest updates:

  • China applies generative AI rules only to public products.

  • Japan needs a surge in computing power to stay in the AI race.

Read more: Reuters

INDUSTRY TRENDS

A new study from MIT researchers explores how generative AI can help workers with writing tasks. Participants used ChatGPT, a chatbot that generates text based on keywords and tone, to write cover letters, emails, and analyses. The results showed that ChatGPT increased both the workers' productivity and the quality of their writing, highlighting the potential of generative AI for the future of work.

ZeroNorth is a start-up that uses AI to help ships burn less fuel and cut CO2 emissions. Its platform offers data-driven insights and simulations for shipping operations.

ML and NLP

The Keras team has released Keras Core, a new multi-backend framework. Some Keras Core features:

  • You can choose the fastest backend for your model.

  • You can export your model to any format.

  • You can use your model with any ecosystem tool.

  • You can share your model with others regardless of their preferred framework.
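As a brief sketch of how backend selection works in practice (this assumes the `keras_core` package and the `KERAS_BACKEND` environment variable from the release announcement; the model code is a hypothetical example and is left commented out so the snippet runs without Keras installed):

```python
import os

# Keras Core selects its compute backend from the KERAS_BACKEND
# environment variable, which must be set BEFORE the library is
# imported (values at release: "tensorflow", "jax", "torch").
os.environ["KERAS_BACKEND"] = "jax"

# With the backend chosen, the same model code runs unchanged, e.g.:
# import keras_core as keras
# model = keras.Sequential([
#     keras.layers.Dense(32, activation="relu"),
#     keras.layers.Dense(1),
# ])

print(os.environ["KERAS_BACKEND"])  # → jax
```

Because the backend is read once at import time, switching it means restarting the Python process, not reassigning the variable mid-session.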

MIT researchers have developed a technique to protect sensitive data encoded within machine-learning models. They introduced a privacy metric called Probably Approximately Correct (PAC) Privacy and built a framework around it. This framework determines the minimum amount of noise needed to safeguard the data from extraction, without requiring knowledge of the model's inner workings.
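To make the noise-calibration idea concrete, here is a generic noise-addition sketch. It is not the PAC Privacy algorithm itself; the `noise_scale` parameter and `noisy_release` helper are hypothetical stand-ins for the calibrated minimum noise the MIT framework would compute.

```python
import random

def noisy_release(statistic: float, noise_scale: float) -> float:
    """Release a model statistic with Gaussian noise added.

    Generic illustration only: PAC Privacy's contribution is finding
    the smallest noise scale that still prevents data extraction,
    without inspecting the model's inner workings.
    """
    return statistic + random.gauss(0.0, noise_scale)

# With zero noise the statistic is released unchanged; larger scales
# trade accuracy for stronger protection against extraction.
print(noisy_release(0.75, 0.0))  # → 0.75
```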

A new study compares the March 2023 and June 2023 versions of GPT-3.5 and GPT-4, two widely used large language models (LLMs). The researchers evaluate their performance on four tasks: solving math problems, answering sensitive questions, generating code, and visual reasoning. The results indicate that the behavior and performance of both models can vary significantly over time.

Another study explores the integration of language models and vision-language models into reinforcement learning (RL) agents. The researchers build a framework that uses language as a core reasoning tool to address RL challenges such as efficient exploration, reuse of experience data, skill scheduling, and learning from observations. The method is tested in a simulated robotic manipulation environment with sparse rewards, where a robot must stack objects. Results show significant improvements in exploration efficiency and data reuse over baselines, and demonstrate the ability to apply learned skills to new tasks and to imitate human experts from videos.

REGULATIONS

The UN held its first discussion of the risks of AI in New York this week. Here is where governments currently stand:

  • Seeking input on regulations: Australia, G7, Ireland, Israel

  • Planning regulations: Britain, EU, UN

  • Implemented temporary regulations: China

  • Investigating possible breaches: France, Italy, Japan, Spain

How judges, not politicians, could dictate America’s AI rules - MIT Technology Review

AI development and use in the US may face legal challenges from the FTC and creators who claim their data and work were used without consent or compensation. These lawsuits could affect how AI is made and deployed and how creators are paid for their work.

The EU is engaging with Asian countries to promote new rules for tech firms regarding AI and copyright disclosure. The EU aims to establish its proposed AI Act as a global benchmark for the technology, but some Asian countries show a preference for a more flexible regulatory approach or a "wait and see" stance.

A New York lawmaker, Clyde Vanel, used AI to "think up" and draft a new piece of legislation, potentially a first in U.S. politics. Vanel's staff employed Auto-GPT, an open-source tool using GPT-4, to autonomously research New York law and write a bill addressing a gap in the law.

Real-time cameras equipped with AI will be used during the Paris Olympics to detect suspicious activity like abandoned luggage and unexpected crowds. However, civil rights groups express concerns over potential threats to civil liberties, fearing the technology could become permanent beyond the experimental period allowed by the law.

Top AI companies OpenAI, Alphabet, and Meta Platforms have voluntarily committed to implementing safety measures for AI-generated content, as announced by the Biden administration. The companies pledged to test systems thoroughly before release, share information on risk reduction, and invest in cybersecurity. They also committed to developing a watermark system for AI-generated content to help users identify deep-fakes and protect against misuse, and they aim to ensure user privacy, mitigate bias, and deploy AI solutions for scientific advancement.

PLATFORM ENGINEERING

A recent developer survey polled 90,000 developers and technologists about their views on and experiences with AI and ML in their work. Here are some key findings:

  • Most developers are interested in using AI tools: 70% are already using or plan to use them in their development process.

  • Developers have a positive attitude towards AI tools: 77% feel favorable about them, but there are some variations among different groups.

  • Developers see productivity as the main benefit of AI tools: 33% rank it as the most important advantage of integrating AI into their workflow.

  • Developers partly trust the accuracy of AI tools: 42% trust the output of the AI tools they use, but concerns and challenges remain.

RESEARCH

AI platforms are gaining attention in hospitals for their potential diagnostic and treatment innovations. Three hospital-developed AI platforms show promise in changing medical practices: INav at Northwell Health, aiding in diagnosing pancreatic cancer faster; a machine-learning platform at UPMC, predicting high-risk surgical complications; and an AI-powered model at Mount Sinai Health System, identifying potential birth defects caused by certain medicines.

A study found that ChatGPT, an AI language model, generated clinical notes comparable to those written by senior internal medicine residents, suggesting its potential for everyday clinical practice.

Researchers have developed and validated a deep learning model that can estimate cardiac function and detect valvular disease using chest radiographs. The study aimed to understand the potential of chest radiographs for cardiovascular examination, particularly in classifying valvular disease and cardiac function. The deep learning model was trained and tested using data from multiple medical institutions in Japan, showing promise in simultaneously detecting various cardiac conditions using chest radiographs.

Advances in AI have the potential to spare thousands of men with intermediate-risk prostate cancer from unnecessary treatment and side effects. Currently, men with this type of cancer are offered both radiotherapy and hormone therapy, but AI can identify those who would not benefit from hormone therapy based on biological patterns in their cancer cells.

USE CASES

In the emergency response sector, AI is being used to assist 911 call-takers, helping with non-emergency tasks and triaging calls. This technology aims to ease the burden on call centers, particularly amid staffing shortages and increasing diversity among callers.

Tim Dickson, the CIO of Generac Power Systems, has led the company's transformation into an energy technology provider through predictive modeling with cloud, analytics, and AI.

RISK AND SECURITY MANAGEMENT

PoisonGPT - Mithril Security

The post demonstrates the importance of a secure LLM supply chain with model provenance to guarantee AI safety.

Generative AI has been misused by malicious actors to develop a cybercrime tool called WormGPT, aimed at facilitating sophisticated phishing and business email compromise attacks. Advertised on underground forums, this tool automates the creation of convincing fake emails personalized to recipients, increasing the success rate of attacks. WormGPT is built on the open-source GPT-J language model developed by EleutherAI and is positioned as a blackhat alternative to legitimate GPT models.

Canada's top cybersecurity official, Sami Khoury, has revealed that hackers and propagandists are leveraging AI to develop malicious software, craft convincing phishing emails, and spread disinformation online. Cyber watchdog groups have previously warned about the potential risks of AI, especially language processing programs like large language models (LLMs), which can generate realistic dialogue and documents.

ETHICS

OpenAI faces a class-action lawsuit over its use of online data to train its chatbot. A law firm claims that the company violated the copyrights and privacy of millions of internet users; the firm has experience with large-scale lawsuits on data issues. The case was filed in California on Wednesday. OpenAI did not comment.