What Kafka Can Teach Us
Daniel Solove
Professor, GW Law School + CEO, TeachPrivacy + Organizer, Privacy+Security Forum
Although Kafka shows us the plight of the disempowered individual, his work also paradoxically suggests that empowering the individual isn’t the answer to protecting privacy, especially in the age of artificial intelligence.
Kafka in the Age of AI and the Futility of Privacy as Control
In my short essay with Prof. Woodrow Hartzog, Kafka in the Age of AI and the Futility of Privacy as Control, 104 B.U. L. Rev. 1021 (2024), we discuss what Kafka’s work teaches about how best to regulate data privacy.
You can download the article for free on SSRN.
Listen to Woodrow Hartzog discuss our piece here.
My Forthcoming Book ON PRIVACY AND TECHNOLOGY
I'm thrilled to announce that you can now pre-order my new book, ON PRIVACY AND TECHNOLOGY (Oxford University Press, Jan 2025).
Daniel J. Solove is a law professor at the George Washington University Law School and a leading international expert on privacy law. Join the free weekly email newsletter for more great privacy analysis, cartoons, whiteboards, events and resources.
Learn more at TeachPrivacy.com. For education and events, check out the Privacy + Security Academy.
Corporate Governance Manager
1 month ago: An excellent read. I agree that the law of equity has huge potential in protecting us from ourselves, putting the controllers of our data in the position of traditional trustees, who have to look after our personal data better than their own, and better than we would do alone.
Adjunct Law Professor | Author | Historian | former CPO Northrop Grumman | former Legal and Policy leader Federal Government
2 months ago: Always enjoy your insights, especially as they use stories and literature to illustrate what may sometimes be abstract.
total privacy == total security
2 months ago: A similar argument could be made for cellular 911 services. A rush to push that out caused misunderstandings about local law enforcement boundaries and hospital intakes. A programming error, an architectural error, but still an error. Is it impossible to train a model not to include private data? Training a model without respecting privacy is an error, but most models seem to have a high error rate, so maybe that is private enough.