The AI Collective Word
Trending news, latest in ML and NLP, Regulations and more
Weekly AI Updates

Happy Friday! In this edition, we bring you a diverse range of insights and updates from the world of artificial intelligence, machine learning, natural language processing, regulations, research, and more.
We hope you find this newsletter informative and insightful as you navigate the dynamic world of AI and its applications across industries.
Please subscribe to The AI Collective Word today to receive your free newsletter directly in your inbox and join our ever-growing network.
I hope you enjoy this week’s newsletter!
Share it with a friend or colleague if you find it helpful.
TRENDING NEWS

Elon Musk Presents His New Company xAI
Elon Musk has officially launched his new company, xAI. Staffed by industry veterans, xAI sets out to "Understand the Universe" and take on ChatGPT.
Read more: Reuters, TechRadar, Ars Technica, xAI
Shutterstock to Build Generative AI Tools with OpenAI
Shutterstock and OpenAI have announced a long-term partnership to use Shutterstock's data for training OpenAI's AI models. Shutterstock will also get early access to OpenAI's new tech and editing tools. The deal will also enable Giphy users to create and modify images with generative AI.
Read more: TechCrunch, Newswire, CNBC
INDUSTRY TRENDS
The TSA plans to roll out facial recognition to 430 airports soon, saying the technology is 97% accurate across all skin tones. The agency won't share data from its pilot program, and critics argue it is effectively forcing travelers to join the program.
ZeroNorth is a start-up that helps ships use less fuel and reduce CO2 emissions with AI. It has a platform that offers data-driven insights and simulations for shipping operations.
AI4NSL is a new program that teaches national security leaders about AI, machine learning, and data science. The course helps them use AI to get and process information from sensors and also warns them of the possible risks of AI for national security.
ML and NLP

An LLM Optimized for Spanish - TechCrunch
Meet Lince Zero: a Spanish instruction-tuned LLM released last week by Madrid-based AI startup Clibrain, which reckons it has spotted a gap in the generative AI race: models optimized for Spanish speakers.
NotebookLM Launched - Google
NotebookLM is a new experiment from Google Labs that uses a language model to enhance notetaking. It was formerly called Project Tailwind and was introduced at Google I/O as an AI-first project. It aims to help people learn faster by rethinking notetaking software. It is currently available to a limited number of users in the U.S.
AudioPaLM - An LLM That Can Speak and Listen - Google Research
AudioPaLM is a model that can understand and generate speech and text. It combines PaLM-2, a text-based model, and AudioLM, a speech-based model. AudioPaLM can keep the speaker's voice and tone when translating speech. It can also use text data to improve speech tasks. AudioPaLM beats other systems for speech translation and can do zero-shot translation for many languages.
This post is the first of a two-part series on how to customize large language models (LLMs) using instruction tuning. Instruction tuning is a technique that makes it easier to build LLMs for specific tasks and domains. We explain the benefits and challenges of instruction tuning, and introduce Metaflow, a framework that helps us apply it in a flexible way.
The article discusses how code-generating AIs are changing the software industry and making coding skills less relevant. It criticizes the phrase "learn to code" as dismissive and outdated advice for people who lose their jobs in other sectors.
H2O.ai Introduces h2oGPT: A Suite of Open-Source Code Repositories for Democratizing Large Language Models (LLMs) - Marktechpost
h2oGPT is a new open-source project by H2O.ai that aims to create and use LLMs based on GPTs. LLMs are powerful models that can understand and generate natural language, but they also have challenges such as bias, privacy, and ethics. h2oGPT provides tools and resources to address these challenges and enable responsible and ethical use of LLMs.
This paper reviews quantum neural networks (QNNs), which combine quantum computing and artificial neural networks. It covers the theory, architecture, and applications of QNNs, and analyzes their advantages and challenges. It also showcases some QNN applications in quantum chemistry, optimization, and pattern recognition, and outlines future research directions.
The authors present a new method for answering questions about surgical videos with both text and location. They use a Transformer model with a special embedding module to combine visual and textual features. They test their model on public surgical videos and show that it outperforms existing methods. They claim that their method can help medical students and junior surgeons learn from recorded surgeries. They also share their code online.
OntoChatGPT Information System: Ontology-Driven Structured Prompts for ChatGPT Meta-Learning - arXiv
This paper shows how to use an ontology and structured prompts to improve ChatGPT, a large language model (LLM) used in chatbots. It explains the models and methods behind the approach and tests it on Ukrainian-language use cases in the rehabilitation domain. It also shows that the approach works with other LLMs, such as Google's Bard with PaLM 2. In this way, chatbots can combine meta-learning, structured prompts, and ontologies to improve at NLP and dialogue across domains and languages.
REGULATIONS

The FTC is investigating OpenAI for possible consumer protection violations related to its AI chatbot ChatGPT. The agency wants to know how OpenAI handles risks of harming personal reputations and data with its AI models.
Over 150 European companies have written an open letter to criticize the EU's AI Act, which they say is ineffective and harmful for competition. The AI Act is a draft law that regulates artificial intelligence, including new technologies like large language models. The companies argue that the AI Act will undermine Europe's competitiveness and technological sovereignty.
Japan wants to adopt less strict AI rules than the EU to boost its economy and chip industry. The EU's rules may include disclosing the sources of AI-generated content. An EU official is visiting Japan to discuss AI and semiconductors.
AI policy is gaining momentum in the US, as lawmakers from both parties propose bills and hearings to regulate the technology. Senate majority leader Chuck Schumer has a grand strategy for AI regulation, based on some key principles. He wants Congress to act fast, especially to compete with China. Other agencies have also issued statements and guidelines on AI, especially generative AI.
The imperative for regulatory oversight of large language models (or generative AI) in healthcare - npj Digital Medicine
Large language models (LLMs) like GPT-4 and Bard can help with many tasks in healthcare, such as writing notes, getting approvals, reviewing papers, or chatting with patients. However, LLMs are not like other AI tools that are already regulated for medical use. They need careful oversight to ensure they are safe, ethical, and respectful of patient data and privacy. This paper gives some suggestions for how regulators can achieve this goal.
PLATFORM ENGINEERING
Harness, a CI/CD platform provider, has introduced several generative AI features to reduce DevOps bottlenecks. These include an AI Development Assistant that analyzes logs and suggests fixes, a natural language interface for defining cloud policies, and an automatic vulnerability detection and remediation tool. Harness plans to add more generative AI capabilities to its platform using large language models that are trained on public data sources.
AI vs. Predictive Analytics: A Comprehensive Analysis - Marktechpost
This article explains how AI and Predictive Analytics can help engineers design and optimize products and processes. It introduces the concept of AI and its methods, such as Machine Learning and Deep Learning. It also compares different types of analytics, such as descriptive, diagnostic, predictive, and prescriptive. The article shows how Deep Learning can use historical data to forecast the results of experiments or simulations.
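The predictive step the article describes, using historical data to forecast an outcome, can be sketched in a few lines. This is a minimal illustration with made-up numbers and a plain least-squares trend line, not the article's method:

```python
# Minimal sketch of predictive analytics: fit a least-squares trend line
# to historical observations, then forecast the next period.
# All data here are illustrative, not from the article.

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    a = mean_y - b * mean_x
    return a, b

# Hypothetical history: month index -> units produced
months = [1, 2, 3, 4, 5]
units = [100, 110, 119, 131, 140]

a, b = fit_line(months, units)
forecast = a + b * 6  # the "predictive" step: estimate month 6
print(round(forecast, 1))  # -> 150.3
```

Descriptive analytics would stop at summarizing `units`; the forecast line is what makes this predictive.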
What AI can do with a toolbox... Getting started with Code Interpreter - One Useful Thing
ChatGPT with Code Interpreter is a new mode that lets you use GPT-4 to write and run Python code, upload and download files, and solve problems with a large memory and a general-purpose toolbox. It is useful for coders and non-coders alike, as it can handle math, language, and other tasks that ChatGPT alone could not.
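To give a flavor of what runs under the hood, the scripts Code Interpreter writes for you are ordinary Python, like this sketch that parses an uploaded CSV and summarizes a column. The file contents here are a made-up stand-in for a real upload:

```python
# The kind of script Code Interpreter writes and executes for you:
# parse an uploaded CSV and summarize a numeric column.
import csv
import io
import statistics

# Stand-in for an uploaded file; the data are hypothetical.
uploaded = io.StringIO(
    "region,revenue\nNorth,120\nSouth,95\nEast,143\nWest,88\n"
)

rows = list(csv.DictReader(uploaded))
revenues = [float(r["revenue"]) for r in rows]

print("rows:", len(rows))                        # -> rows: 4
print("mean revenue:", statistics.mean(revenues))  # -> mean revenue: 111.5
print("max revenue:", max(revenues))             # -> max revenue: 143.0
```

The difference is that in Code Interpreter you describe the analysis in plain English and the model writes, runs, and debugs code like this for you.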
RESEARCH

Machine learning model detects heart attacks faster and more accurately than current methods - NewsMedical.net
Researchers built and validated a machine learning model for detecting occlusion myocardial infarction (OMI), a type of heart attack often missed by current methods. OMI can occur without ST-segment elevation (STE) on electrocardiograms (ECGs), a common sign of acute coronary syndrome (ACS), and its diagnosis is further complicated by limited biomarker-based testing and variable interpretation by clinicians. Trained on data from thousands of patients, the model corrected many diagnostic errors.
Insilico Medicine is a drug discovery company that uses generative AI to develop new therapies for diseases. One of its drug candidates, discovered with AI, is now in Phase 2 trials for a rare lung disease. The company used AI to speed up and reduce the cost of the preclinical process, from finding a target molecule to predicting clinical outcomes.
AI technology has shown great potential in improving the diagnosis and treatment of brain cancer, according to a recent study. AI algorithms can analyze large amounts of data, such as medical images and patient records, to help doctors make more accurate diagnoses, develop personalized treatment plans, and predict how patients will respond to treatment. The study recommends integrating AI into clinical workflows to make practical use of these benefits.
Generative AI is a technology that the US military is exploring for various purposes, such as training, planning, and decision-making. It can create realistic simulations, generate possible scenarios, and enhance autonomy and automation. However, it also poses ethical and security challenges that need to be addressed.
USE CASES
Disentangling and Operationalizing AI Fairness at LinkedIn - Cornell University
This database contains 200 case studies from 64 companies that show how they use machine learning to solve real-world problems and design effective ML systems. You can learn from their experiences and best practices across different domains and applications.
RISK AND SECURITY MANAGEMENT

Generative-AI apps & ChatGPT: Potential risks and mitigation strategies - The Hacker News
Generative-AI apps are popular tools for automating work tasks, but they also pose security risks when connected to core business systems. Users should be aware of the potential dangers of exposing sensitive data and credentials to third-party apps that may not be secure or trustworthy.
Securing AI Systems — Defensive Strategies - DataDrivenInvestor
This blog post introduces the concept of FMEA to analyze the failure modes and effects of security incidents in AI solutions and apply the right defenses.
The risks of AI are real but manageable - Gates Notes
AI poses many risks, but we can cope with them. We have faced similar challenges before with other innovations, such as cars and the Internet. We can learn from the past and create rules and safeguards for AI. We should not be afraid of the unknown, but embrace the opportunities and benefits of AI.
8 Examples of AI Risk - Center for AI Safety
AI risk is the possibility that AI systems could cause or contribute to large-scale harm, such as bias, misinformation, cyberattacks, or autonomous weapons. AI risk could also be catastrophic or existential, meaning that it could threaten the survival or well-being of humanity. Some examples of catastrophic AI risks are:
- AI alignment: The challenge of ensuring that AI systems behave in ways that are aligned with human values and goals, especially when they become more intelligent than humans.
- AI takeover: The scenario where AI systems gain control over human society, either by coercion, deception, or manipulation.
- AI race: The situation where multiple actors compete to develop or deploy AI systems, leading to reduced safety standards, increased conflict, or accelerated technological change.
- AI accidents: The possibility that AI systems could cause unintended or unforeseen consequences, such as malfunctioning, hacking, or triggering other risks.
ETHICS
ChatGPT maker OpenAI faces a lawsuit over how it used people’s data - Washington Post
OpenAI faces a class-action lawsuit for using online data to train its chatbot. A law firm claims that the company violated the copyrights and privacy of millions of internet users. The firm has experience in large-scale lawsuits on data issues. The case was filed in California on Wednesday. OpenAI did not comment.
RESOURCES - Course

New skills for a new way of working - Microsoft
The Microsoft AI Skills Initiative is a new program that offers free online courses, grants, and events to help people learn how to use AI in their work. The initiative focuses on generative AI, a type of AI that can create new content and insights. The goal is to empower workers and communities to benefit from AI and overcome the challenges of information overload and skills gaps.
This course teaches you how to build applications with large language models (LLMs) using popular frameworks. You will learn how to use Hugging Face, LangChain, data embeddings, and vector databases for natural language processing (NLP) problems. You will also fine-tune LLMs with domain data, compare proprietary models, and address ethical and safety issues of LLMs. Lastly, you will deploy your models at scale with LLMOps best practices.
This course will help you create a production-ready LLM workflow!
6 weeks (4 to 10 hours/week), self-paced
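The embeddings-plus-vector-database pattern such courses teach can be sketched in plain Python. Real systems use a model (e.g. from Hugging Face) to produce embeddings and a vector database to index them; here a bag-of-words count vector and an in-memory list stand in, purely for illustration:

```python
# Toy sketch of embedding + vector search, the retrieval pattern
# used in LLM applications. The "embedding" is a bag-of-words count
# vector standing in for a real model's output.
import math
from collections import Counter

def embed(text, vocab):
    counts = Counter(text.lower().split())
    return [counts[w] for w in vocab]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

docs = [
    "transformers power modern language models",
    "vector databases index embeddings for search",
    "ships reduce fuel with routing software",
]
vocab = sorted({w for d in docs for w in d.lower().split()})
index = [(d, embed(d, vocab)) for d in docs]  # the "vector database"

query = embed("search embeddings with a vector index", vocab)
best = max(index, key=lambda pair: cosine(query, pair[1]))
print(best[0])  # -> vector databases index embeddings for search
```

In a production LLM app, the retrieved document would be inserted into the model's prompt as context; that step is omitted here.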
An Introduction to Statistical Learning - Statlearning ebooks
This book introduces statistical learning, a key skill for analyzing data. It covers important topics in a simple way. It is suitable for anyone who wants to use modern data analysis tools. It has two editions, one with R and one with Python, and has been translated into many languages. Each chapter has a lab with R or Python examples.
Generative AI with Large Language Models: Hands-On Training feat. Hugging Face and PyTorch Lightning - Jon Krohn (YouTube)
This training teaches how to use LLMs like GPT-4 for data science. It shows how LLMs can improve machine learning and data products, and boost data scientists' creativity. It uses Hugging Face and PyTorch Lightning for LLM development.
Intro to Artificial Intelligence - Udacity
This course introduces you to the fundamentals and applications of modern AI. You'll explore the history, possibilities and challenges of this exciting field that aims to enhance human capabilities. Est. time to complete: 4 months, Skill Level: Intermediate
The Building Blocks of Generative AI - Jonathan Shriftman
This post is about Generative AI infrastructure, the essential components and trends that enable generative solutions. The author has a background in conversational AI and is interested in the recent developments and acquisitions in this space. He will cover topics such as models, compute, frameworks, orchestration, fine-tuning, labeling, synthetic data, and AI observability. He hopes to simplify these concepts and use them for future growth investments.
Thank you for reading! Please send me feedback and share the newsletter with others.