Navigating Risks and Transformations in 2023
Your weekly AI Security news

Welcome to the AI Collective Word Newsletter!
Dear AI Enthusiasts,
In this edition of the AI Collective Word, we explore the dynamic landscape of artificial intelligence, highlighting efforts to mitigate AI risk and reshape the industry's practices.
Headlining our issue is Distributional, a startup founded by Scott Clark, dedicated to fortifying AI systems against risks such as bias, instability, and security threats.
We delve into the significance of AI governance frameworks, essential for secure AI deployment, and scrutinize the cybersecurity landscape shaped by GenAI in 2023.
Additionally, Google Cloud announces the General Availability of Duet AI, while Harvard Business Review outlines the five forces propelling the adoption of Generative AI.
Join us on a journey through the transformative realms of AI in this comprehensive newsletter.
The AI Collective Word Team
Distributional wants to develop software to reduce AI risk [TechCrunch]
AI is a powerful tool that can boost productivity, but it also comes with risks. Data quality, bias, security and privacy are some of the main challenges for AI implementation. How can enterprises test and evaluate their AI systems to prevent harm?
That's the question that Scott Clark, the co-founder of SigOpt and former Intel VP, wants to answer with his new startup, Distributional.
Distributional is a platform for AI testing and evaluation that helps AI product teams identify, understand and address AI risk before it affects their customers.
Distributional focuses on large language models, which are prone to hallucinations, instability, inaccuracy and integration issues. By using distributional understanding and testing methods, Distributional aims to ensure that AI systems are safe, reliable and secure.
How an AI governance framework can strengthen security [TechTarget]
AI is transforming businesses, but it also poses new security challenges. To deploy AI safely and effectively, organizations need to adopt AI governance frameworks. These are sets of policies, processes and best practices that guide secure AI deployment. AI governance frameworks help organizations conduct risk analysis, implement access control, and monitor and respond to incidents involving AI systems.
Non-Human Access is the Path of Least Resistance: A 2023 Recap [The Hacker News]
A Robot the Size of the World [Schneier] The Internet of Things (IoT) is a world-sized robot that senses, thinks, and acts. In 2023, we added large-language models (LLMs) like GPT to the thinking part. In 2024, we will connect them to sensors and actuators through APIs. They will interact with humans and the physical world in new ways, from controlling thermostats to forming groups.
Announcing General Availability of Duet AI for Developers and Duet AI in Security Operations [Google Cloud Security] Google Cloud announces the GA of Duet AI for Developers and Duet AI in Security Operations, two services that use AI to help users with various tasks. Duet AI for Developers helps with coding, testing, deploying, and managing applications on Google Cloud. Duet AI in Security Operations helps with detecting, investigating, and responding to security threats. Both services will soon use Gemini, Google's most advanced model. Duet AI will also be available in Google Workspace and integrated with other Google Cloud products.
5 Forces That Will Drive the Adoption of GenAI [Harvard Business Review] Generative AI is a hot topic for executives, but also a challenging one. How can they leverage the technology to create value and meet the expectations of various stakeholders? Gartner research reveals five forces that drive the adoption of generative AI: 1) Board and CEO expectations; 2) Customer expectations; 3) Employee expectations; 4) Regulatory expectations; and 5) Investor expectations. These forces require leaders to have a clear vision, strategy, and governance for generative AI, as well as new skills and business models.
2023 Cyber Review: The Year GenAI Stole the Show [Government Technology] The cybersecurity industry faced a unique year in 2023, as generative AI disrupted everything. The main topic of the year was GenAI, the technology that can create realistic content and deceive security systems. GenAI sparked debates, policies, and innovations across the world, and became the focus of every cyber conference. No other topic has ever dominated the industry like GenAI did in 2023.
Enjoyed this update? Please share The AI Collective Word with a friend or colleague.