The Safety of LLMs and the Impact of GenEng on the Developers' Role
AI Week in Review for Cybersecurity and Privacy


Happy Friday! In this edition, we bring you a diverse range of insights and updates from the world of artificial intelligence, machine learning, cybersecurity, regulations, privacy, and more.
This week:
Research into the safety of LLMs
Despite AI's potential, enterprises face adoption challenges
How and why businesses should create ChatGPT policies
International privacy laws and their impact on AI
GenEng and how it will change the developers’ role
We hope you find this newsletter informative and insightful as you navigate the dynamic world of AI.
Please subscribe to The AI Collective Word today to receive your free newsletter directly in your inbox and join our ever-growing network.
I hope you enjoy this week’s newsletter!
Share it with a friend or colleague if you find it helpful.
ML and NLP
This research delves into the safety of large language models (LLMs) like ChatGPT, Bard, and Claude. Previous studies identified "jailbreaks" - specific inputs that trigger unintended responses - but those required manual effort to design; this study demonstrates the automated generation of adversarial attacks. The attacks append carefully chosen character sequences to user queries, compelling the LLMs to follow harmful commands. Because these attacks are created automatically rather than handcrafted, they raise concerns about the safety of both open-source and closed-source LLMs, especially when such models operate autonomously. Addressing these threats may prove as difficult as the analogous adversarial-example problem in computer vision, suggesting the need for caution as these AI models gain prominence.
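To make the attack shape concrete, here is a deliberately toy sketch of what an automated suffix search can look like. Everything in it is an assumption for illustration: `query_model` is a placeholder scoring function (the actual study uses gradient-guided search over token sequences against open-source models), and simple random mutation stands in for the paper's optimization.

```python
import random
import string

ALPHABET = string.ascii_letters + string.punctuation

def query_model(prompt: str) -> float:
    """Hypothetical scoring stub. In the research, the score would come
    from the target model itself (e.g., likelihood of an affirmative
    reply); here it is random, purely to make the sketch runnable."""
    return random.random()

def random_search_suffix(base_prompt: str, suffix_len: int = 20, steps: int = 200) -> str:
    """Greedy random search over suffix characters - a crude stand-in
    for the gradient-guided token search used in the study."""
    suffix = random.choices(ALPHABET, k=suffix_len)
    best = query_model(base_prompt + "".join(suffix))
    for _ in range(steps):
        pos = random.randrange(suffix_len)       # pick a position to mutate
        old = suffix[pos]
        suffix[pos] = random.choice(ALPHABET)    # propose a replacement character
        score = query_model(base_prompt + "".join(suffix))
        if score > best:
            best = score                         # keep mutations that raise the score
        else:
            suffix[pos] = old                    # revert the rest
    return "".join(suffix)

print(random_search_suffix("Summarize this request: "))
```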
PRIVACY
AI hype far exceeds its use within enterprises - TechTarget
Enterprises are cautious about embracing generative AI due to concerns over costs, legal liabilities, and data privacy risks. Many organizations are focusing on understanding these factors and preparing their IT platforms for AI integration. Despite the excitement surrounding generative AI, high inflation and global pressures on investments make its adoption challenging. Enterprises are currently prioritizing spending on CRM and ERP for operational efficiency rather than AI, leading to a 13.5% increase in global software spending. Although 70% of executives are investigating generative AI's potential, technical challenges, including data formatting and GPU cluster availability, need to be overcome before wider adoption.
NYC subway using AI to track fare evasion - NBC News
New York City has quietly implemented surveillance software that employs AI to identify people evading subway fares. The system, which hasn't been officially acknowledged, is operated by a third-party company called AWAAIT and aims to aid law enforcement in curbing fare evasion. The software was used in seven subway stations in May, with plans to expand to about two dozen more by the end of the year. While officials claim the AI system is used for tracking fare evasion and not directly flagging offenders to the police, privacy advocates express concerns about the growing surveillance landscape in the city.
REGULATIONS

U.S. senators sharpen focus on AI regulations - TechTarget
U.S. senators are soliciting proposals for AI regulation from industry experts. Ideas put forward include creating an agency to oversee AI, requiring watermarking and transparency when AI is used, and mandating testing and auditing by third parties.
Experts emphasize the need for democratic governments to establish aligned rules for AI governance to counter China's influence and shape the global AI landscape. While Europe is moving ahead with the AI Act to classify AI systems by risk, the U.S. has relied on state-level regulations and corporate responsibility. The Biden administration's recent collaboration with tech giants like Microsoft, Meta, and Google to develop safe and transparent AI is seen as a step forward, but experts argue that the U.S. should clarify its regulatory approach to effectively negotiate AI governance at an international level. The challenge lies in coexisting with authoritarian regimes that use AI for control, and while concerns about stifling innovation exist, accountable and transparent AI regulation could position the U.S. strategically in the global AI arena.
OpenAI's ChatGPT and similar language models offer substantial potential for automating enterprise workflows. However, these models come with risks, such as generating factually inaccurate content and collecting user queries. To harness their benefits while mitigating risks, enterprises must develop acceptable use policies that outline responsible ChatGPT use, particularly concerning sensitive information. Clear guidelines should be set for AI experimentation, especially when handling proprietary data, and communication should be established company-wide to ensure secure and compliant usage. CEOs' engagement, understanding of risks, and commitment to responsible AI are crucial for a successful AI policy.
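As one concrete building block for such a policy, an enterprise might route prompts through a redaction step before they reach an external LLM service. The sketch below is a minimal illustration of our own devising - the patterns and placeholder labels are invented, and a real deployment would use a vetted data-loss-prevention tool rather than ad hoc regexes:

```python
import re

# Illustrative patterns only; real policies need vetted PII/DLP tooling.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(prompt: str) -> str:
    """Replace sensitive substrings with labeled placeholders before the
    prompt is sent to an external LLM service."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED_{label}]", prompt)
    return prompt

print(redact("Customer jane.doe@example.com, SSN 123-45-6789, asked about fees."))
# -> Customer [REDACTED_EMAIL], SSN [REDACTED_SSN], asked about fees.
```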
Several generative AI apps have been removed from Apple's China App Store ahead of China's upcoming regulations for generative AI services. Chinese developers received notices from Apple explaining that their apps were removed due to "content that is illegal in China." The new regulations, effective from August 15, require AI apps operating in China to obtain an administrative license. China has been actively regulating the generative AI space to ensure that apps using large language models like ChatGPT comply with the country's laws and regulations. This move may deter many developers, particularly smaller ones, from entering the market due to increased compliance requirements.
International Privacy Law and Artificial Intelligence - SSRN (Paper)
In the last five years, the global data management landscape has witnessed a surge in regulations, including comprehensive privacy laws, cybersecurity measures, and data localization policies. These laws significantly impact AI implementation and functionality due to their specific restrictions. The international policy landscape encompasses privacy, cybersecurity, and data localization (PCL) laws, which have grown increasingly stringent and tailored in their application. While targeting different objectives, these laws collectively restrict how data - particularly sensitive personal information (SPI) such as health and financial data - can be collected, processed, transferred, used, and accessed, limiting organizations' ability to develop AI capabilities.
Massachusetts securities regulators are launching an investigation into how investment firms use artificial intelligence in their interactions with investors. Secretary of State Bill Galvin has expressed concerns about the potential unchecked use of AI technology, worrying that it could harm investors if deployed without proper disclosure and conflict consideration. Letters of inquiry have been sent to several firms, including JPMorgan Chase, Morgan Stanley, Tradier Brokerage, US Tiger Securities, E*Trade, Savvy Advisors, and Hearsay Systems. The investigation follows the U.S. Securities and Exchange Commission's recent proposal to eliminate conflicts of interest from AI use on trading platforms.
PLATFORM ENGINEERING
Interest in generative AI is on the rise, with a 160% increase in content engagement year-over-year and 60% growth in the last quarter, as indicated by TechTarget reader data. These advancements in generative AI are expected to impact IT operations significantly. A study from TechTarget's Enterprise Strategy Group reveals that while 55% of organizations using observability practices employ AIOps, only 40% report that AIOps have simplified operations, highlighting challenges like poor data quality and incomplete model training. However, the integration of generative AI, particularly in incident analysis and root cause identification, holds promise for improving incident triage and resolution, potentially reducing mean time to detection (MTTD) and mean time to resolution (MTTR).
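For readers less familiar with those metrics, here is a minimal sketch of how MTTD and MTTR can be computed from incident records. The record format and timestamps are invented, and definitions vary in practice (some teams measure MTTR from occurrence rather than detection):

```python
from datetime import datetime

FMT = "%Y-%m-%d %H:%M"

# Hypothetical incident log; all fields and values are invented for illustration.
incidents = [
    {"occurred": "2023-08-01 02:00", "detected": "2023-08-01 02:45", "resolved": "2023-08-01 05:00"},
    {"occurred": "2023-08-03 10:00", "detected": "2023-08-03 10:10", "resolved": "2023-08-03 11:30"},
]

def minutes_between(start: str, end: str) -> float:
    return (datetime.strptime(end, FMT) - datetime.strptime(start, FMT)).total_seconds() / 60

# MTTD: mean time from occurrence to detection.
mttd = sum(minutes_between(i["occurred"], i["detected"]) for i in incidents) / len(incidents)
# MTTR: mean time from detection to resolution.
mttr = sum(minutes_between(i["detected"], i["resolved"]) for i in incidents) / len(incidents)

print(f"MTTD: {mttd:.1f} min, MTTR: {mttr:.1f} min")  # MTTD: 27.5 min, MTTR: 107.5 min
```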
The availability of generative AI, powered by large language models (LLMs), is transforming how enterprises view artificial intelligence. There's a shift occurring towards a "post-training era," where application developers will drive innovation by integrating LLMs into enterprise applications. This movement, called "Generative Engineering" or "GenEng," democratizes access to AI for a broader community of developers, allowing them to leverage generative AI's potential without becoming deep learning experts. This shift parallels the DevOps movement, with GenEng being led by developers who possess both traditional application development skills and proficiency in integrating generative AI technologies into applications for accurate and domain-specific outcomes.
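As a minimal illustration of the GenEng pattern - an application developer wrapping a general-purpose LLM behind a domain-specific function rather than training a model - here is a sketch using the 2023-era `openai` Python package's ChatCompletion interface. The model choice, prompt wording, and function name are our assumptions, not the article's:

```python
import openai  # pip install openai (this sketch uses the 2023-era ChatCompletion API)

openai.api_key = "YOUR_API_KEY"  # placeholder; load from a secret store in practice

def summarize_ticket(ticket_text: str) -> str:
    """Domain-specific wrapper: the application supplies the task framing
    and constraints; the LLM supplies the language capability."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # assumed model choice
        messages=[
            {"role": "system", "content": "You summarize IT support tickets in two sentences."},
            {"role": "user", "content": ticket_text},
        ],
        temperature=0.2,  # low temperature for consistent, conservative summaries
    )
    return response["choices"][0]["message"]["content"]

print(summarize_ticket("User reports the VPN client drops every 30 minutes since the 2.4.1 update."))
```

The point of the pattern is that no deep learning expertise appears anywhere in the code: the engineering effort goes into prompts, integration, and validation of outputs.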
USE CASES
Pros and cons of ChatGPT for finance and banking - TechTarget
ChatGPT and similar large language models (LLMs) offer potential benefits in the finance and banking sector but also pose challenges that must be addressed to ensure safety and compliance. Use cases where LLMs have demonstrated success include customer service, marketing, and human resources, where language-based tasks are predominant. LLMs can assist in tasks such as summarizing complex insights, streamlining underwriting processes, enhancing customer service, automating compliance, risk assessment, and management, improving personalization, automating document processing, translating legalese, and generating executive briefs. However, challenges such as data security, bias, transparency, and contextual understanding must be addressed to safely scale the use of LLMs in the finance sector.
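One way the transparency and contextual-understanding challenges surface in finance is verifying that generated text does not misstate figures. The sketch below is a crude, hypothetical check of our own devising - it flags any number in an LLM-generated brief that never appears in the source document:

```python
import re

NUMBER = re.compile(r"\$?\d[\d,]*(?:\.\d+)?%?")

def figures(text: str) -> set[str]:
    """Extract numeric figures (e.g., '4.5%', '$1,200') from a passage."""
    return set(NUMBER.findall(text))

def unverified_figures(source: str, summary: str) -> set[str]:
    """Flag figures in a generated summary that are absent from the source -
    a crude guard against numeric hallucination, not a complete solution."""
    return figures(summary) - figures(source)

source = "Q2 net interest income rose 4.5% to $1,200 million."
summary = "Net interest income rose 4.5% to $1,300 million in Q2."  # wrong figure
print(unverified_figures(source, summary))  # {'$1,300'}
```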
RISK AND SECURITY MANAGEMENT
Defending Against AI Threats - FBI Podcast
The FBI is exploring the use of artificial intelligence (AI) to anticipate and defend against threats, as well as to keep the American people safe. AI has the potential to impact various aspects of society, from convenience to security. While AI offers benefits, it can also be exploited by cybercriminals for malicious purposes. The FBI is concerned about the spread of AI into the physical world and its use in cybercrimes, economic espionage, and disinformation campaigns. The FBI emphasizes the importance of collaboration between public and private sectors to address AI-related challenges and ensure security.
The Good, the Bad and the Ugly of Generative AI - Security Week
Generative AI, like ChatGPT, has both positive and negative aspects that need to be considered. Negativity bias, our tendency to focus on negatives, often leads to concerns about the potential downsides of AI. However, there are opportunities to leverage generative AI for positive outcomes. It can enhance workflows, such as aiding software development with code recommendations, but challenges arise when it relies on outdated data, leading to ineffective recommendations. Threat actors can exploit gaps in AI-generated content, leading to misinformation and potentially malicious outcomes. Despite the challenges, generative AI has the potential to transform security operations by enhancing efficiency through natural language processing and machine learning, while human expertise remains crucial for analysis, intuition, and risk management. It's important to focus on specific use cases to harness generative AI's potential effectively.
Google has introduced the Secure AI Framework (SAIF), a conceptual framework aimed at establishing industry security standards for the responsible development and deployment of artificial intelligence (AI), particularly generative AI. The framework draws inspiration from established security best practices in software development and addresses risks unique to AI systems, such as model theft, data poisoning, malicious inputs, and confidential information extraction. SAIF aims to ensure that AI models are secure-by-design and encourages collaboration across public and private sectors to safeguard advancements in AI technology.
IBM Security's 2023 Cost of a Data Breach Report highlights the advantages of integrating artificial intelligence (AI), automation, and threat intelligence into cybersecurity strategies. The report reveals that organizations with advanced integration of these technologies into their security operations (SecOps) teams experience shorter breach lifecycles, resulting in lower breach costs and greater cyber resilience. The study emphasizes that AI can improve breach detection, containment, and response, leading to cost savings and faster identification of breaches by SecOps teams. Despite the benefits, only a fraction of enterprises have extensively integrated AI and automation, leaving a significant gap in cybersecurity defenses.
ETHICS
Ryanair uses facial recognition to verify customers' identities when they book through online travel agents. NOYB, a digital rights group, has filed a complaint against Ryanair in Spain, arguing that the practice violates data protection rights. Ryanair says it needs to verify identities for safety and security reasons and that it offers alternative verification methods, though these are less convenient.
Researchers from Carnegie Mellon University, the Center for A.I. Safety, and the Bosch Center for AI have demonstrated that they can bypass the filters intended to prevent chatbots powered by generative AI models from producing toxic and harmful content. They tricked these models, including OpenAI's ChatGPT and Google Bard, into generating disinformation, hate speech, and other harmful content. The technique involves appending specific character sequences to prompts - sequences that the models' safety filters should have blocked - raising concerns about the safety and reliability of such models, especially when used autonomously. The discovery highlights the challenges enterprises face in ensuring the safe use of generative AI applications.