AI & Lawyers
Luiza Jarovsky
Co-founder of the AI, Tech & Privacy Academy, LinkedIn Top Voice, Ph.D. Researcher, Polyglot, Latina, Mother of 3. Join our AI governance training (1,000+ participants) & my weekly newsletter (40,000+ subscribers)
The latest developments in AI policy & regulation | Edition #110
*Personal request: dear readers, I'm moving this newsletter from LinkedIn to my website, where I have more autonomy and can build a closer relationship with readers. Please re-subscribe here. Once you receive the welcome message in your inbox, you can safely unsubscribe from this LinkedIn publication. Thank you!
Hi, Luiza Jarovsky here. Welcome to the 110th edition of this newsletter on AI policy & regulation, read by 27,400+ subscribers in 140+ countries. I hope you enjoy reading it as much as I enjoy writing it.
Summer AI training: Registration is open for the July cohorts of our Bootcamps on the EU AI Act and on Emerging Challenges in AI, Tech & Privacy. More than 800 people have attended our training programs, which are live, remote, and led by me. Don't miss them! *Students and independent professionals receive 20% off.
A special thanks to MineOS for sponsoring this week's free edition of the newsletter. Read their guide:
With the passage of the Colorado AI Act, the U.S. now has its first comprehensive AI regulation. Learn what the law entails, its similarities to and differences from the EU AI Act, and how to prepare before its enforcement begins on February 1, 2026. Read the MineOS guide for full details on this landmark regulation.
AI & Lawyers
Will AI replace lawyers? Not anytime soon. However, lawyers who are familiar with legal AI tools will be in a better position in the coming years. Below are 10 examples of popular AI tools focused on legal work and how they describe themselves, followed by some thoughts (*I have no connection with these companies, and they are listed in alphabetical order):
• Clio - "Simplify every aspect of your law firm, from billing to communication and document management"
• Everlaw - "Transform your approach to litigation and investigations with the world’s most advanced e-discovery software"
• Harvey - "Augment your workflows using domain-specific models trained by and for professional service providers"
• Lawgeex - "Our contract review automation solution is an industry-first, using patented AI technology to review and redline legal documents based on your predefined policies"
• LegalFly - "LegalFly’s Copilot streamlines your legal operations from contract reviews and drafting to discovery on thousands of legal and financial documents"
• LEXIS-AI - "Fast and accurate generative legal AI that uniquely combines Legal Research and Practical Guidance from LexisNexis"
• Luminance - "Built on a proprietary legal Large Language Model (LLM), Luminance uses next-generation AI to automate the generation, negotiation and analysis of contracts."
• Robin AI - "Robin AI empowers businesses and legal teams to fly through their contracts with advanced Legal AI tools for fast review and powerful searches"
• Spellbook - "Spellbook uses GPT-4 to review and suggest language for your contracts, right in Microsoft Word"
• Tonkean - "Elevate your legal operations intake with real-time data enrichment, adaptable form sequences, and an easy-to-use editor for building and modifying workflows."
These are just 10 examples. There are many more.
Thoughts:
• In the same way that lawyers are expected to know how to use the internet, search engines, cloud services, legal software, and many other tech tools, they will soon be expected to be familiar with legal AI tools and to integrate them into their work.
• I have not tested the AI tools above, and I cannot say whether they do a better job overall than humans doing the same work. However, they are becoming mainstream, and many law firms are starting to rely on these and similar AI tools. I already see law firms announcing their AI use as a "competitive advantage," and soon, clients will expect it too (also in the hope of potentially reducing billable hours).
• In this context, my view is that lawyers who are familiar with AI (especially legal AI tools), who know how to "pilot" it well, and, perhaps most importantly, who have the critical thinking to judge when to use it, when not to use it, why, how, and which tools to choose, will be in a much better position in the coming years.
AI & the job market
"Generative AI could expose the equivalent of 300 million full-time jobs to automation" (Goldman Sachs' report). Some thoughts on that and what to do next:
From the report:
"If generative AI delivers on its promised capabilities, the labor market could face significant disruption. Using data on occupational tasks in both the US and Europe, we find that roughly two-thirds of current jobs are exposed to some degree of AI automation, and that generative AI could substitute up to one-fourth of current work. Extrapolating our estimates globally suggests that generative AI could expose the equivalent of 300mn full-time jobs to automation."
"The good news is that worker displacement from automation has historically been offset by creation of new jobs, and the emergence of new occupations following technological innovations accounts for the vast majority of long-run employment growth. The combination of significant labor cost savings, new job creation, and higher productivity for non-displaced workers raises the possibility of a productivity boom that raises economic growth substantially, although the timing of such a boom is hard to predict."
Remember the 90s, when the commercial internet became available? Most of us know examples of people who quickly realized it was a big deal, made sure their small business or skill set was fully adapted to the new "online" reality, and became extremely successful.
It's happening again now with AI, probably on a bigger scale. As with the internet, AI is already changing how we work, communicate, do business, and so on. Even if you, like me, reject the excessive hype around some of the popular "use cases," none of us can afford to be naive: I need to keep track of what is going on, what others are doing, and how it can affect me, and I think you should too.
With that in mind, here are 7 simple steps anyone can use to think about this issue (read my article on the topic):
1. Learn about AI in general;
2. Understand how people in your field are using AI;
3. Understand how AI is already impacting or might soon impact your job, given current capabilities;
4. Look for support from workers' organizations and other people in your field to understand what is being done to support your profession;
5. Think about ways you could work with AI to increase your competitive advantage in the context of your skillset and your career;
6. Think about ways you could adapt the way you use your skills in case AI disruption affects the job market;
7. Think about other jobs/skill sets you could explore in case it becomes difficult to earn a living in your profession.
My final thoughts: job creation will definitely happen, and it will benefit "first movers."
New OECD report on AI, Data Governance & Privacy
The OECD published the report "AI, Data Governance and Privacy - Synergies and Areas of International Cooperation," a must-read for everyone in privacy and AI. Highlights & comments:
The section "Generative AI: a catalyst for collaboration on AI and privacy" has an interesting overview of the latest developments in the intersection between Privacy Enhancing Technologies (PETs) and AI. Quote: "Machine unlearning is another emergent subfield of machine learning that would grant individuals control over their personal data, even after it has been shared. Indeed, recent research has shown that in some cases it may be possible to infer with high accuracy whether an individual's data was used to train a model even if that individual's data has been deleted from a database. (...)" (page 20)
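To make the membership inference point concrete, here is a minimal, illustrative Python sketch (my own, not from the OECD report): models often assign lower loss to examples they were trained on, so a simple loss threshold can frequently distinguish training-set "members" from "non-members", which is why deleting a record from a database does not necessarily erase its trace from a trained model. The synthetic dataset, the small scikit-learn classifier, and the loss-based attack score below are illustrative assumptions only.

```python
# Hypothetical sketch of a loss-threshold membership inference test.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Synthetic data: first half is used for training ("members"), second half is held out.
X, y = make_classification(n_samples=400, n_features=200, n_informative=10, random_state=0)
X_mem, y_mem = X[:200], y[:200]
X_non, y_non = X[200:], y[200:]

# Weak regularization (high C) makes the model fit its training data tightly,
# which is exactly what the attack exploits.
model = LogisticRegression(max_iter=1000, C=100.0)
model.fit(X_mem, y_mem)

def per_example_loss(model, X, y):
    """Cross-entropy loss of the model on each individual example."""
    p = model.predict_proba(X)[np.arange(len(y)), y]
    return -np.log(np.clip(p, 1e-12, None))

loss_mem = per_example_loss(model, X_mem, y_mem)
loss_non = per_example_loss(model, X_non, y_non)

# Attack score: lower loss -> more likely a training-set member.
# AUC near 0.5 means the attack fails; clearly above 0.5 means membership
# can be inferred from the model's behavior alone.
scores = np.concatenate([-loss_mem, -loss_non])
labels = np.concatenate([np.ones(len(loss_mem)), np.zeros(len(loss_non))])
print("Membership inference AUC:", round(roc_auc_score(labels, scores), 3))
```

This is the intuition behind the report's point: even after a record is deleted from the source database, the trained model may still behave differently on that record, which is what machine unlearning research tries to address.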
The paper also covers:
• Mapping existing OECD principles on privacy and on AI: key policy considerations
• National and regional developments on AI and privacy
In the last 18 months, I've been writing about the intersection of AI & privacy and some of its unsolved legal challenges.
One of these issues is the practical interpretation of the legitimate interest requirements in the context of AI. Current AI practices do not fit the traditional interpretation of this lawful basis (meaning that, currently, the whole AI industry has an uncertain legal status in the EU). On the issue of legitimate interest, the report states:
"(...)?most privacy and personal data protection frameworks require that there be a “lawful basis” for both collecting and processing data. While most laws generally provide for a series of such legal bases, in practice, the legal basis known as “legitimate interests” is the one which is considered the most suitable in the context of Generative AI. This requires that the interest pursued by the AI developer, provider or user (e.g. in developing or implementing a model) be legitimate, that the data processing at stake is effectively needed to meet this legitimate interest, and that it does not create disproportionate interference with the interests and rights of data subjects. As mentioned earlier, striking the right balance between these different interests can be complex in the current state of the art and calls for reinforced co-operation between the AI and the privacy community"
My personal comment is that data protection authorities have yet to take a firm stand on this issue, and the matter remains open.
This report is an interesting summary of the current understanding of the intersection of AI, privacy & data governance. Read it here.
To learn more about AI compliance and regulation, check out our AI training programs at the AI, Tech & Privacy Academy (two Bootcamps starting in July).
Excellent AI paper
Woodrow Hartzog published "Two AI Truths and a Lie," and it's a must-read for everyone in AI. Important quotes:
"The two AI truths and a lie are also part of a bigger story about technology and corporate greed. There are several different accounts of this story. Kate Crawford explains AI’s extractive nature by reference to the capitalist-colonial logics of classification that underpin it. Shoshana Zuboff sees digital extraction as the inevitable endgame of late capitalism. Julie Cohen sees digital platform extraction and manipulation as a way to remake social and political institutions to legitimize their financial gain. AI is just the latest tool to succumb to what Cory Doctorow has called the “enshittification” of digital platforms. Doctorow describes the inevitable degradation cycle of platforms as “first, they are good to their users; then they abuse their users to make things better for their business customers; finally, they abuse those business customers to claw back all the value for themselves. Then, they die.” Under this theory, companies deploying AI are going to try to avoid the four forces that discipline companies: competition, regulation, self-help, and workers. Any holistic regulatory response to AI and the broader story of technology and corporate greed must embolden these forces, or otherwise the cycle will continue."
"One of the most robust defaults lawmakers should consider is presumptive prohibitions absent justifications and demonstrably safe use. This could take the form of licensing regimes, pre-clearance regimes, and other legal frameworks deployed in contexts like healthcare devices and pharmaceuticals. Gianclaudio Malgieri and Frank Pasquale have proposed a system of “unlawfulness by default” for AI systems, which would be “an ex-ante model where some AI developers have the burden of proof to demonstrate that their technology is not discriminatory, not manipulative, not unfair, not inaccurate, and not illegitimate in its legal bases and purposes.” (...)"
"There’s much we don’t know about how AI systems will work to change our world. But there are a few things that lawmakers should count on. Companies will take everything they can for their own benefit, and we will get used it. People can benefit from AI systems and still be individually and collectively worse off overall. Unless lawmakers create rules to respond to extraction, normalization, and selfdealing, companies will use AI systems to permanently impoverish our lives."
Read the full paper here.
Australia publishes AI Framework
The Australian government published its "National framework for the assurance of AI in government," a must-read for everyone in AI, public administration & policymaking. Quotes: "Existing decision-making and accountability structures should be adapted and updated to govern the use of AI. This reflects the likely impacts upon a range of government functions, allows for diverse perspectives, designates lines of responsibility and provides clear sight to agency leaders of the AI uses they are accountable for. Governance structures should be proportionate and adaptable to encourage innovation while maintaining ethical standards and protecting public interests." (page 7)
"During system development governments should exercise discretion, prioritising traceability for datasets, processes, and decisions based on the potential for harm. Monitoring and feedback loops should be established to address emerging risks, unintended consequences or performance issues. Plans should be made for risks presented by obsolete and legacy AI systems. Governments should also consider oversight mechanisms for high-risk settings, including but not limited to external or internal review bodies, advisory bodies or AI risk committees, to provide consistent, expert advice and recommendations." (page 8)
"Governments should also consider internal skills development and knowledge transfer between vendors and staff to ensure sufficient understanding of a system’s operation and outputs, avoid vendor lock-in and ensure that vendors and staff fulfill their responsibilities. Due diligence in procurement plays a critical role in managing new risks, such as transparency and explainability of ‘black box’ AI systems like foundation models. AI can also amplify existing risks, such as privacy and security. Governments must evaluate whether existing standard contractual clauses adequately cover these new and amplified risks." (page 10)
The framework also demonstrates how governments can practically apply Australia’s eight AI Ethics Principles to their assurance of AI. The eight principles are:
1. Human, societal and environmental wellbeing
2. Human-centred values
3. Fairness
4. Privacy protection and security
5. Reliability and safety
6. Transparency and explainability
7. Contestability
8. Accountability
Read the framework here.
Are you looking for a speaker in AI, tech & privacy?
I would welcome the opportunity to:
• Give a talk at your company;
• Speak at your event;
• Coordinate a training program for your team.
Here's my short bio with links. Get in touch!
AI copyright lawsuit
The Center for Investigative Reporting (behind Mother Jones & Reveal) sues OpenAI & Microsoft for copyright infringement. Quotes:
"Defendants copied, used, abridged, and displayed CIR’s valuable content without CIR’s permission or authorization, and without any compensation to CIR. Defendants’ products undermine and damage CIR’s relationship with potential readers, consumers, and partners, and deprive CIR of subscription, licensing, advertising, and affiliate revenue, as well as donations from readers."
"Protecting these unique voices is one of the fundamental purposes of copyright law. Since the founding of the United States, the Copyright Clause of the U.S. Constitution promises to “promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries.” The Copyright Act similarly empowers Congress to protect works of human creativity that persons have worked hard to create, encouraging people to devote substantial effort and resources to all manner of creative enterprises by providing confidence that creators’ works will be shielded from unauthorized encroachment and that creators will be properly compensated."
"When they populated their training sets with works of journalism, Defendants had a choice: to respect works of journalism, or not. Defendants chose the latter. They copied copyrighted works of journalism when assembling their training sets. Their LLMs memorized and at times regurgitated those works. They distributed those works and abridgements of them to each other and the public. They contributed to their users’ own unlawful copying. They removed the works’ copyright management information. They trained ChatGPT not to acknowledge or respect copyright. And they did this all without permission."
Read the lawsuit here.
AI Governance is HIRING
Below are 10 AI Governance positions posted in the last few days. Bookmark, share & be an early applicant:
1. Zurich Australia (Australia) - AI Governance Lead - apply
2. Siemens Energy (Portugal) - AI Governance Consultant - apply
3. Barclays Bank US (US) - AI Governance and Oversight - apply
4. Visa (US) - Lead System Architect, AI Governance - apply
5. Zurich Insurance (Spain) - AI Governance Expert - apply
6. AXA UK (UK) - AI Governance Lead - apply
7. AstraZeneca (Sweden) - Head of Data & AI Governance - apply
8. E.ON Digital Technology (Germany) - AI-Governance Associate Consultant... 
For more AI governance and privacy job opportunities, subscribe to our weekly job alert. Good luck!
If you are transitioning to AI governance or focusing on upskilling to land your dream job in AI, check out the AI, Tech & Privacy Academy's 4-week Bootcamps (two upcoming cohorts in July). More than 800 people have joined our AI programs - don't miss them!
Insights on AI governance, compliance & regulation
If you are interested in AI governance, compliance, and regulation, you can't miss my conversation with Barry Scannell in July. Topics we'll cover:
• What are some of the unspoken challenges behind the EU AI Act?
• What AI compliance issues are companies currently ignoring, and what should they focus on?
• What essential skills should aspiring AI governance professionals aim for?
• Questions sent by the audience (reply to this email and let me know)
To participate live and receive the recording, register here.
To watch my previous live sessions, visit my YouTube channel.
Reminder: upcoming AI training programs
The EU AI Act Bootcamp (4 weeks, live online)
Tuesdays, July 16 to August 6 at 10am PT
Emerging Challenges in AI, Tech & Privacy (4 weeks, live online)
Wednesdays, July 17 to August 7 at 10am PT
Subscribe to the AI, Tech & Privacy Academy's Learning Center to receive info on our AI training programs and other learning opportunities.
I hope to see you there!
Upskill & advance your career
Choose a paid newsletter subscription, and besides receiving this free weekly edition on AI policy & regulation, get exclusive access to:
Thank you for reading!
If you have comments on this week's edition, reply to this email or write to me, and I'll get back to you soon.
Have a great day.
Luiza