> If an organisation is non-compliant with their agreement, we work with them to address any problems and conduct follow up audits to ensure they are fully resolved.
This feels like the right response to me. In most of these cases, we're talking about a data provider with reasonable governance controls in place, who grants access to a requester who says they'll use the data responsibly, then just does not.
If the requester is part of a large research university, it doesn't make sense to say "researchers in Study A violated the data use agreement, therefore hundreds of other researchers in studies B-Z must now erase the data they've already downloaded, and never apply for access to more data from the largest research data provider in the country ever again." Those other studies had nothing to do with the violation, so shouldn't be punished.
The institution should punish the offending individuals, and the data provider should blacklist those individuals, as well as carefully audit both the institution (for its education and oversight of its research teams) and the principal investigators of the offending study for some length of time.
The government has put all kinds of laws in place, but people find ways around them. It should be written into law that the spirit is this: if you fuck up, you pay. Orgs need to keep data safe as if life or death depended on it. In the end, it does. Data should be encrypted in flight and at rest, and only kept around as long as needed.
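The "only kept as long as needed" part can be sketched as a simple retention check. This is a minimal illustration, not anyone's actual system: the record layout and helper name are hypothetical, and encryption in flight (TLS) and at rest (disk or database encryption) are assumed to be handled by the transport and storage layers.

```python
import time

# Hypothetical retention window, e.g. a 90-day data-use agreement.
RETENTION_SECONDS = 90 * 24 * 3600

def purge_expired(records, now=None):
    """Return only the records still inside the retention window.

    Each record is assumed to carry a 'created_at' Unix timestamp;
    anything older than RETENTION_SECONDS is dropped.
    """
    now = time.time() if now is None else now
    return [r for r in records if now - r["created_at"] < RETENTION_SECONDS]

records = [
    {"id": "study-a", "created_at": time.time()},                    # fresh
    {"id": "study-b", "created_at": time.time() - 200 * 24 * 3600},  # stale
]
kept = purge_expired(records)
print([r["id"] for r in kept])  # only the fresh record survives
```

The hard part, as discussed below, isn't writing the purge loop; it's that nobody outside the organisation can verify it actually runs against every copy of the data.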
This is evil shit you’re saying (I’m being a bit dramatic, sure). Healthcare is made more expensive by all these rules about IT. Life or death depends on keeping costs down and making it seamless for doctors to share information with one another. It isn’t some catastrophe if a breach happens. They’ve happened before, yet people have not been getting their private health information published in the local newspaper.
Shouldn't be seamless, on the contrary. Full of seams. Go back to pencil and paper, better. That was a better system on all counts.
Like actually getting my private health information published in the local newspaper--in particular regarding my psychiatrization and torture--is much, much better than creepy standardized test cheaters knowing all my secrets and nobody else knowing anything. That's what I'm doing these days.
In fact I would say the system works far too well, it's very easy for doctors to gossip in their seamless channels and far too hard to stop my psych record--like it was a prison record or a warrant or some actual judgment by a worthy authority--getting around behind my back. I still can't see it myself. I'm expected to believe I have a doctor who advises people paying for my treatment and I've never met this asshole. Hundreds, even thousands of pages and I can't see any of it, it's like a Kick Me on my back, but a biography, a Kick Me epic. Jorge Barros Beck has a few hundred pages, Ximena Rojas Núnez has some, Stanford CAPS has some, that asshole I told you about, then "Clínica" Rayencura produced a couple hundred (they have a former trained journalist called Gonzalo, former junkie too, working to fill out accusative, slanderous dossiers on people, he doesn't say shit he just writes shit, only shit, lots of shit).
Can't wait for that leak!
At the same time, I wouldn't want anyone to be able to enforce certain parts of that. For example, to make sure that data was only kept around as long as needed, you'd need to be able to monitor the contents of all the computers that contained that data. This creates problems of its own, much larger than the original one. To a certain extent, we just have to trust researchers with sensitive data, and severely punish gross violations of that trust.
To be honest, I've heard of many more examples of organizations that put overly strict controls on their data. This happens because researchers are trying to walk a line between a requirement that they share their data and their (understandable) desire to keep their work to themselves as long as possible, so competing researchers can't publish on it first. A bad data governance committee fails much more often by allowing data contributors to be too strict with their data, though I agree that a data breach is a worse outcome, and avoiding it should be the highest priority.
> Orgs need to keep data safe like life or death depended on it. In the end, it does.
Then the parties injured can bring claims with the actual damages in hand. If the courts get clogged with such cases, we’ll have the evidence with which to legislate.
Jumping the shark by assuming hypothetical harms are real is how we supercharge needless bureaucracy.
> Then the parties injured can bring claims with the actual damages in hand.
This is basically impossible.
First you must prove the leak came from the party you are suing; most Westerners have had their data leaked by over ten organisations at this point: Equifax, high-profile hacks, etc.
Secondly, you need to prove damages. That is extremely difficult, because no-one will ever admit in court to using leaked data against you.
So the solution you are proposing is unworkable. We know, because we tested it: Equifax leaked data on hundreds of millions of people, and so far no one has been able to prove the damage.
The tricky bit is the interplay between security and criticality of need.
If I'm dying of an unknown condition in the ER, I really want minimal fences between my doctor and my data. So a careful balance has to be struck because sometimes patient need is served by breaking privacy. Bulletproof technical solutions could impede patient care.
Selective enforcement is a key method for... something. Some political thing. I forget the name.
This is the perfect area for a vigilante to regulate the market.
What tools will that vigilante use? Lawsuits? Bullets?
> Should NHS Digital curtail their access?
Depends. Will curtailed access harm them or harm their patients?
Perhaps the US needs national/federal GDPR/CCPA-style legislation?
Possibly, but the BMJ is the British Medical Journal. This report concerns UK data breaches.