Pascal's Chatbot Q&As

The best path forward is incremental adoption, ensuring that AI augments human decision-making, with humans remaining the ultimate authority in complex, high-risk, and interpretive legal decisions.

GPT-4o: While AI can assist and enhance decision-making processes, fully automating the strategic decision-making process risks undermining the core human element of legal practice.

Question 1 of 2 for ChatGPT-4o: Please read the article “A ‘step too far’? Legal, other professional services disagree on whether AI should give advice” and tell me what it says

ChatGPT-4o’s answer:
The article “A ‘step too far’? Legal, other professional services disagree on whether AI should give advice” from Thomson Reuters Institute explores the differing perspectives across professional industries regarding the role AI should play in providing advice.

Key highlights include:

  • General Acceptance of AI: 78% of respondents from legal, tax & accounting, risk & fraud, and government sectors view AI positively. These industries, traditionally risk-averse, now see potential in AI, but opinions diverge on its application.

  • Varied Levels of Comfort: While using AI for administrative tasks is broadly accepted, professionals express reservations about AI making final decisions or offering strategic advice. For instance, only 17% of legal professionals consider it ethical for AI to give legal advice, whereas 57% in risk & fraud and 65% in tax & trade believe AI can offer advice.

  • Human vs AI: In legal services, the human touch is highly valued. AI is seen as useful for basic tasks but not for substantive legal work, due to issues like inaccuracies in AI-generated information. On the other hand, professions like tax, trade, and risk, which deal with more quantitative data, are more open to using AI for strategic input.

  • Client Expectations: Legal professionals might face pressure from clients who are more comfortable with AI. Companies may expect law firms to integrate AI into their advisory processes, and failure to do so might lead to a competitive disadvantage.

  • Defining Standards: The article emphasizes the importance of professional organizations defining clear guidelines around AI's role in providing advice, balancing ethical concerns with client expectations.

The article underscores the ethical and practical debates about the boundaries of AI’s involvement in advisory roles across different professional sectors.


Question 2 of 2 for ChatGPT-4o: When reading the article, what would you suggest as the best way forward? When and how can legal experts confidently decide to allow an A(G)I to take decisions, if you deem that wise at all?

ChatGPT-4o’s answer:
Based on the article’s insights, here are some suggestions for a cautious and strategic way forward regarding when legal experts might allow AI, particularly advanced AI (AGI), to take decisions:

1. Differentiate Between Types of Decisions:

Legal experts should clearly delineate between routine administrative decisions and strategic, interpretive legal decisions. AI can reliably handle administrative tasks such as document drafting, case research, and data analysis, provided there is robust oversight. However, allowing AI to make final legal decisions—especially those involving human rights, ethical considerations, or significant financial risks—should be approached with great caution.

Recommendation: AI should be allowed to make decisions in areas where the decisions are quantifiable, repeatable, and free from ambiguity (e.g., tax calculations or compliance checks). However, AI should not be permitted to make decisions requiring subjective judgment, especially where there is potential legal liability or ethical complexity.

2. Maintain Human Oversight for High-Risk Scenarios:

AI should remain a decision-support tool rather than the decision-maker in high-risk legal scenarios. Legal experts can use AI for suggestions, insights, and process efficiency, but should retain final responsibility for interpreting the output and making the decision.

Recommendation: Develop hybrid workflows where AI provides data-driven recommendations, but human professionals review, interpret, and make decisions in critical areas (e.g., trial strategies, contract negotiations, and final legal advice). This ensures accountability and maintains the critical “human touch” that legal work often requires.

3. Establish AI Ethics and Governance Frameworks:

Legal firms and regulatory bodies need to proactively establish ethical frameworks and governance policies that define AI’s role in decision-making. These frameworks should address transparency, accountability, and the potential for AI bias. Legal professionals should understand AI's limitations, particularly in nuanced legal matters.

Recommendation: Draft clear policies that stipulate when and how AI can be used for different categories of tasks and decisions, requiring regular audits of AI’s decision-making accuracy and fairness. These policies can also mandate human oversight for high-stakes decisions, ensuring that AI’s output is always subject to professional review.

4. Pilot AI in Low-Risk Environments First:

Before entrusting AI with more consequential decisions, firms should pilot AI in low-risk environments where there is little to no room for harm or liability. These test cases can help identify AI’s strengths, weaknesses, and areas where more human supervision is necessary.

Recommendation: Use AI to automate simple legal tasks like case filing, document review, and non-critical contract drafting. Over time, with proven success, firms may gradually expand AI’s role, but should continue to assess its performance against ethical and legal standards.

5. Client Collaboration and Transparency:

Since client expectations regarding AI use may vary, legal professionals should communicate openly about how AI is used in their work. Clients should be informed of the decision-making process and the role AI plays, ensuring transparency and building trust in the use of the technology.

Recommendation: Firms should establish clear communication with clients, ensuring that the use of AI for any advisory or decision-making role is transparent and mutually agreed upon. It should always be clear when AI is used and how decisions were reached, providing clients the opportunity to request additional human oversight if needed.

When Might It Be Wise to Allow AI to Take Decisions?

If/when AI has demonstrated a track record of consistent, unbiased, and accurate outputs in well-defined, low-risk areas, legal professionals could allow it to take certain autonomous decisions within a limited scope. For instance:

  • Regulatory compliance: AI could monitor and automatically ensure adherence to compliance laws in a pre-defined context.

  • Administrative approvals: For routine approvals, such as simple contract execution or non-controversial document processing, AI can autonomously execute tasks.

Even in these areas, however, legal experts should implement a fail-safe mechanism that allows human intervention when needed.

Final Consideration

Legal experts should be cautious about delegating full decision-making power to AI (or AGI) in contexts where interpretive flexibility, ethical discernment, and nuanced reasoning are required. While AI can assist and enhance decision-making processes, fully automating the strategic decision-making process risks undermining the core human element of legal practice, which often involves balancing competing interests, interpreting ambiguous laws, and understanding deeply human contexts.

Thus, the best path forward is incremental adoption, ensuring that AI augments human decision-making, with humans remaining the ultimate authority in complex, high-risk, and interpretive legal decisions.