Artificial Intelligence in the Legal Profession: What It Speeds Up, Where It Creates Risk, and How to Use It Responsibly

  • Adv. Amitai Aviad
  • Feb 23
  • 5 min read

Clients now encounter artificial intelligence before they ever speak with a lawyer: drafts generated at the click of a button, "legal opinions" displayed on a screen, or document summaries that promise to save time and cost. In a field where documents sit at the centre of almost every matter (an agreement, a statement of claim, a demand letter, or business correspondence), it is easy to see why these tools have quickly become part of daily practice. The practical question is not whether they exist, but how to use them without undermining what clients need most: accuracy, accountability, and confidentiality.

It is important to start with the right baseline. Most mainstream AI tools do not “verify truth.” They generate text based on patterns and probabilities. That makes them strong at producing an initial draft, organising material, and suggesting alternative wording. They are far weaker where a matter depends on a correct fact, a precise reference to authority, or strict alignment with a particular legal regime and specific circumstances. Law is an area in which a small error can have outsized consequences, and that is exactly why a conservative, structured workflow is required.

In day-to-day practice, these tools can offer real value at the early stages. In contract work, for example, they can assist with a rapid first-pass review: identifying missing provisions, pointing to internal inconsistencies, and flagging areas that frequently generate disputes, such as limitation of liability, indemnities, confidentiality, and termination. Even here, the benefit is early detection rather than decision-making. Determining what is reasonable in a particular transaction, and what level of risk the client is prepared to accept, remains a human judgment based on experience, commercial context, and risk management.

In legal research, AI is useful mainly as a navigation aid. It can help clarify the question, suggest search terms, and raise avenues for comparison. Research itself, however, must rest on verified sources. Anyone who relies on automatically generated text without checking primary materials and recognised legal databases may end up with an argument that reads persuasively but is wrong, or with quotations and citations that do not exist. In legal work, “sounds right” is not an acceptable standard.

In litigation and case management, AI most often appears through volume: tagging documents, sorting correspondence, identifying recurring themes, and producing internal summaries. In cases involving thousands of documents, this can be a genuine step change in efficiency. But anything that leaves the firm (whether submitted to a court, sent to the other side, or delivered to a regulator) must meet a professional standard grounded in independent verification. In the United States, careless use has already led to sanctions, including where filings contained fabricated or distorted citations linked to reliance on AI tools. The message that has emerged is straightforward: responsibility does not shift to the tool; it remains with the lawyer who signs and files.

That brings the discussion to confidentiality. Clients reasonably expect sensitive information (personal, medical, commercial, or strategic) to be protected. Entering information into an external tool can create exposure, even where that was never the intention. As a result, professional practice is moving toward a "minimum necessary" approach: avoid entering identifying details unless required, use anonymisation, review terms of use and security safeguards, and define in advance what must never be provided to such tools. This is not theoretical caution. It is operational discipline designed to prevent harm.
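
What "minimum necessary" can look like in practice is easiest to show with a deliberately simplified sketch (Python, purely for illustration; the two patterns below are hypothetical stand-ins, and a real firm policy would cover many more categories of identifying detail and would be defined in advance, not improvised in code):

    import re

    # Hypothetical patterns; a real policy would also cover names, addresses,
    # case numbers, account details, and so on.
    EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")
    ID_NUMBER = re.compile(r"\b\d{9}\b")  # e.g. a nine-digit national ID

    def redact(text: str) -> str:
        """Replace obvious identifiers with placeholders before text leaves the firm."""
        return ID_NUMBER.sub("[ID]", EMAIL.sub("[EMAIL]", text))

    print(redact("Client 012345678 can be reached at dana@example.com."))
    # -> Client [ID] can be reached at [EMAIL].

The discipline matters more than the tooling: decide in advance what never goes into an external system, and make the redaction step routine rather than optional.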

Another risk area is bias. Where AI is used for recommendations, rankings, or filtering (such as assessing risk, analysing recurring arguments, or supporting internal organisational decisions), there is a concern that it may reproduce problematic patterns from historical data. The practical implication is clear: do not delegate "judgment" to the tool. Its role should be defined in advance, outliers should trigger review, and a human decision-maker must remain responsible and able to explain the basis for the decision.

It is notable that developments in the United States are not trending toward a blanket prohibition but toward a framework. The American Bar Association issued dedicated ethics guidance that places these tools within familiar duties: professional competence (understanding what the tool does and does not do), confidentiality, communication with the client, and candour to the tribunal. At the same time, courts and court systems have begun adopting internal usage policies that restrict the handling of confidential information and emphasise human supervision. Legislative initiatives are also being considered at the state level. In California, for example, a bill has been advanced that would codify duties of verification, restrict the entry of non-public information into public systems, and require reasonable steps to correct "hallucinations" and other errors.

A comparison to software development helps explain where this is heading. In that sector, AI tools accelerated coding, but they did not eliminate the need for controls: testing, review, and security practices. In fact, the faster code is produced, the greater the need for a process that catches errors before deployment. The legal parallel is direct: verification of facts, validation of citations, jurisdiction-specific alignment, and a clear separation between an internal working draft and a document intended for filing or transmission on the client’s behalf.
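
To borrow the software idiom directly, here is a minimal sketch of the "gate" idea (Python, illustrative only; the function name, the regular expression, and the workflow are hypothetical, not a description of any real firm's tooling). A draft that still contains citations no human has verified against primary sources is simply held back:

    import re

    # Illustrative only: a crude pattern for reporter-style citations such as
    # "573 F. Supp. 3d 102". A real workflow would check candidates against
    # recognised legal databases, not a regular expression.
    CITATION = re.compile(r"\b\d+\s+F\.(?:\s?Supp\.)?(?:\s?\dd)?\s+\d+\b")

    def unverified_citations(draft: str, human_verified: set[str]) -> list[str]:
        """Citations appearing in the draft that no human has yet verified."""
        found = {m.group() for m in CITATION.finditer(draft)}
        return sorted(found - human_verified)

    draft = "Plaintiff relies on 573 F. Supp. 3d 102 and 999 F.3d 456."
    verified = {"573 F. Supp. 3d 102"}  # checked by the responsible lawyer

    remaining = unverified_citations(draft, verified)
    if remaining:
        # The gate: a draft with unverified citations never leaves the firm.
        print("Hold for review:", remaining)

The point is not the code; it is that the check runs before anything leaves the firm, exactly as tests run before deployment.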

The practical working rule is therefore simple: treat AI output as a draft, not as a conclusion. A draft can save time and improve style, but it must be read critically, corrected, and completed by reference to the governing documents and binding authorities. Used this way, the tool can function as a force multiplier. Used without a process, it can become a risk multiplier.

For a diverse client base (individuals, small businesses, and companies), the result is a change both in expectations and in the service standard. Clients increasingly expect speed, automation, and written output earlier in the process. Alongside that, the standard of accountability becomes sharper: clients, courts, and regulators expect lawyers to explain what they relied on, what was checked, and what was submitted only after verification. This is where an experienced practice is measured: not by the ability to generate text quickly, but by the ability to combine efficiency with responsibility.

Nothing in the above constitutes legal advice, and each matter requires an individual assessment.

The references below reflect U.S. sources and published policy materials. The remainder of the article is general professional analysis. Any application to Israeli law and to the facts of a particular case requires case-specific review.

References (U.S.)

  • American Bar Association, announcement of the publication of Formal Opinion 512 on lawyers’ use of generative AI tools (29 July 2024). (americanbar.org)

  • "Generative Artificial Intelligence Tools" (ABA Formal Opinion 512) – publication containing the text and discussion of the opinion. (thebarexaminer.ncbex.org)

  • Mata v. Avianca, Inc., No. 1:22-cv-01461 (S.D.N.Y.), Order dated 22 June 2023 (court document available via an open repository). (law.justia.com)

  • United States Court of Appeals for the Fifth Circuit, Case No. 25-20086, Order (18 February 2026) – sanction order addressing fabricated/distorted citations connected to AI use and failure to verify. (ca5.uscourts.gov)

  • New York State Unified Court System, "Interim Policy on the Use of Artificial Intelligence" (effective October 2025) – internal usage policy for the court system. (nycourts.gov)

  • New York State Unified Court System, press release regarding adoption of the AI policy (10 October 2025). (nycourts.gov)

  • California SB 574 (bill text/draft) – proposed duties for lawyers, including restrictions on entering non-public information into public systems and reasonable steps to verify and correct errors/“hallucinations.” (legiscan.com)

  • NIST, Artificial Intelligence Risk Management Framework (AI RMF 1.0), January 2023 – voluntary framework for managing AI-related risk. (nysba.org)


Legal Disclaimer: The information in this article is provided for general and preliminary informational purposes only and does not constitute legal advice or a substitute for legal advice. You should not rely on the content without obtaining specific legal advice from an attorney practising in the relevant field before taking any action or making any decision. The information is accurate as of the date of publication and is based on the legal position known at that time.

We will be pleased to answer any questions and assist you. You may contact us via the phone icon (for mobile users), by telephone at +972-3-6236130, by email, through the contact form at the bottom of the page (left-hand side), or in any other way that is convenient for you.

 
