Farah Yousry reports on the aftermath of a ransomware attack by Hive that was previously reported on DataBreaches in 2021: As the second year of the pandemic was nearing an end, employees at Johnson Memorial Health hoped they could catch their breath after dealing with a weeks-long tsunami of COVID-19 hospitalizations and deaths. But on a…
NYS Secures $200,000 from Law Firm for Failing to Protect New Yorkers’ Personal Data
NYS Attorney General Letitia James announced a settlement: New York Attorney General Letitia James secured $200,000 from the law firm Heidell, Pittoni, Murphy & Bach LLP (HPMB) for failing to protect New Yorkers’ personal and healthcare data. HPMB’s poor data security measures made it vulnerable to a 2021 data breach that compromised the private information of approximately…
No need to hack when 682,000 medical records are leaking, Monday edition
On March 15, DataBreaches was contacted by a researcher who had found a “bunch of medical docs.” The files included patient intake evaluations, laboratory results, medical records requests, insurance information forms, treatment or consultation notes, and other files you would expect to see in a patient’s records. The patients all appeared to be in Texas,…
Updating: Cyberattack against CHRU Brest: what happened
In a March 11 post about non-U.S. hospitals that had been victims of cyberattacks, DataBreaches had noted a report about CHRU Brest. Valéry Rieß-Marchive of LeMagIT has an update and more details on the incident. The following uses machine translation from the original French: During a press briefing this Friday, March 24, the management of the…
Twitter takes legal action after source code leaked online
Dan Milmo reports: Twitter has revealed some of its source code has been released online and the social media platform owned by Elon Musk is taking legal action to identify the leaker. According to a court filing made on Friday, Twitter is demanding that GitHub, a code-sharing service, identify who released on the platform parts…
The criminal use of ChatGPT – a cautionary tale about large language models
From Europol: In response to the growing public attention given to ChatGPT, the Europol Innovation Lab organised a number of workshops with subject matter experts from across Europol to explore how criminals can abuse large language models (LLMs) such as ChatGPT, as well as how it may assist investigators in their daily work. Their insights…