Category: Artificial Intelligence

Eric Killelea reports: It used to be easier. Christopher Delzotto remembers the days not so long ago when many online financial scams could be spotted just by reading them. They were full of misspellings, poor grammar and awkward phrasing — all signs that they were created in countries where the hackers’ first language isn’t English. The…
California Privacy Protection Agency publishes new draft regulations addressing AI, risk assessments, cyber audits
Philip N. Yannella, Gregory P. Szewczyk, and Timothy Dickens of Ballard Spahr write: The California Privacy Protection Agency (CPPA) recently published two new sets of draft regulations addressing a range of cutting-edge data protection issues. Although the CPPA has not officially started the formal rulemaking process, the Draft Cybersecurity Audit Regulations and the Draft Risk Assessment Regulations will serve…
Insights From The IBM 2023 Cost of a Data Breach Report
Joseph J. Lazzarotti of JacksonLewis writes: The annual Cost of a Data Breach Report (Report) published by IBM is reliably full of helpful cybersecurity data. This year is no different. After reviewing the Report, we pulled out some interesting data points: Is it beneficial to involve law enforcement in a ransomware attack? According to the Report, organizations…
Announcement: New category on DataBreaches.net
Given the explosion of news and analyses concerning artificial intelligence, a category for the topic has now been added to DataBreaches.net. To find articles prior to 2022, use the site’s search function for “Artificial Intelligence.”
FTC investigates OpenAI over data leak and ChatGPT’s inaccuracy
Cat Zakrzewski reports: The Federal Trade Commission has opened an expansive investigation into OpenAI, probing whether the maker of the popular ChatGPT bot has run afoul of consumer protection laws by putting personal reputations and data at risk. The agency this week sent the San Francisco company a 20-page demand for records about how it…
The criminal use of ChatGPT – a cautionary tale about large language models
From Europol: In response to the growing public attention given to ChatGPT, the Europol Innovation Lab organised a number of workshops with subject matter experts from across Europol to explore how criminals can abuse large language models (LLMs) such as ChatGPT, as well as how such models may assist investigators in their daily work. Their insights…