Pascal's Chatbot Q&As

Which kinds of actions and tasks are perfect for LLMs or AI agents to perform, analyze and complete, but will never ever be completed without final verification and approval by the human(s) affected?

Asking AI services: which kinds of actions and tasks are perfect for LLMs or AI agents to perform, analyze and complete, but will never ever be completed without final verification and approval by the human(s) affected, simply because no third party is keen on taking responsibility (and especially the liability) for that? List all activities and tasks that will prove challenging to automate with AI.

ChatGPT-4o’s answer:
There are several activities and tasks that, while potentially suited to initial automation and analysis by LLMs or AI agents, will always require final verification and approval by humans due to the significant responsibility and liability involved. Here are some key areas:

  1. Medical Diagnosis and Treatment Recommendations:

    • Actions: Analyzing medical data, diagnosing conditions, suggesting treatment plans.

    • Reason for Human Verification: Liability for misdiagnosis or inappropriate treatment is high; ethical implications of medical decisions; patient safety.

  2. Legal Advice and Document Drafting:

    • Actions: Providing legal advice, drafting legal documents, predicting case outcomes.

    • Reason for Human Verification: Legal responsibility for incorrect advice or documents; potential for significant legal consequences.

  3. Financial Advice and Investment Decisions:

    • Actions: Offering investment advice, financial planning, executing trades.

    • Reason for Human Verification: Financial liability for poor investment outcomes; regulatory compliance; client trust.

  4. Contract Review and Negotiation:

    • Actions: Reviewing and drafting contracts, suggesting negotiation points.

    • Reason for Human Verification: Legal liability for contract terms; need for strategic decision-making; potential for significant financial impact.

  5. Hiring and HR Decisions:

    • Actions: Screening resumes, conducting initial interviews, making hiring recommendations.

    • Reason for Human Verification: Ethical considerations; risk of bias and discrimination; legal implications of employment decisions.

  6. Sensitive Customer Service Interactions:

    • Actions: Handling complaints, providing compensation, resolving complex issues.

    • Reason for Human Verification: Maintaining customer relationships; potential for significant brand impact; handling sensitive situations appropriately.

  7. Academic Grading and Admissions Decisions:

    • Actions: Grading assignments, evaluating applications, making admissions recommendations.

    • Reason for Human Verification: Ensuring fairness and accuracy; significant impact on students’ futures; ethical considerations.

  8. Creative Work (Writing, Art, Design):

    • Actions: Generating creative content, designing visuals, composing music.

    • Reason for Human Verification: Ensuring originality and quality; personal and subjective nature of creative work; intellectual property considerations.

  9. Critical Infrastructure Management:

    • Actions: Monitoring and controlling power grids, water supplies, transportation systems.

    • Reason for Human Verification: High stakes for public safety; significant impact of potential failures; regulatory requirements.

  10. Compliance and Regulatory Filings:

    • Actions: Preparing regulatory filings, ensuring compliance with laws and regulations.

    • Reason for Human Verification: Legal liability for non-compliance; ensuring thorough understanding and interpretation of regulations.

  11. Emergency Response and Disaster Management:

    • Actions: Coordinating emergency response, making real-time decisions in crises.

    • Reason for Human Verification: High stakes for human lives; need for rapid and adaptive decision-making; ethical considerations.

  12. Intellectual Property Evaluation and Management:

    • Actions: Evaluating patents, managing intellectual property portfolios.

    • Reason for Human Verification: High financial and strategic stakes; legal implications of IP decisions.

These tasks involve complex decision-making, ethical considerations, and potential legal and financial consequences that necessitate human oversight to ensure responsible and accurate outcomes.


Perplexity’s answer:
Several tasks and activities are well-suited for AI and Large Language Models (LLMs) to perform, analyze, and complete, but will likely always require human verification and approval due to liability concerns and the need for human oversight. Here's a list of such activities:

  1. Medical diagnosis and treatment recommendations: AI can analyze symptoms and medical data, but final decisions should be made by human doctors.

  2. Financial investment decisions: AI can provide market analysis and suggestions, but human approval is crucial for high-stakes financial choices.

  3. Legal document drafting and contract analysis: AI can assist in creating and reviewing legal documents, but lawyers must verify and take responsibility for the final content.

  4. Autonomous vehicle decision-making: While AI can control various aspects of driving, critical ethical decisions in emergencies may require human oversight.

  5. Criminal sentencing recommendations: AI can analyze case data, but judges should make final decisions due to ethical and legal implications.

  6. Hiring decisions: AI can screen resumes and suggest candidates, but human HR professionals should make final hiring choices to avoid bias and ensure cultural fit.

  7. Content moderation on social media platforms: AI can flag potentially inappropriate content, but human moderators should review edge cases and make final decisions.

  8. Building and infrastructure safety assessments: AI can analyze data and suggest improvements, but engineers and inspectors must verify and approve final safety measures.

  9. Educational curriculum development: AI can suggest content and learning paths, but educators should approve and customize curricula for their students.

  10. Environmental impact assessments: AI can process large amounts of data, but human experts should interpret results and make final recommendations.

  11. Military strategic planning: AI can analyze scenarios and suggest strategies, but human commanders must make final decisions due to ethical and geopolitical considerations.

  12. Pharmaceutical drug development: AI can assist in identifying potential compounds, but human researchers must verify results and conduct clinical trials.

  13. News article fact-checking: AI can flag potential inaccuracies, but human journalists should verify information and make editorial decisions.

  14. Credit approval decisions: AI can analyze financial data, but human bankers should review and approve high-stakes lending decisions.

  15. Emergency response planning: AI can suggest resource allocation, but human emergency managers should make final decisions in critical situations.

These tasks require a balance between AI capabilities and human judgment due to their complexity, ethical implications, and potential for significant real-world impact. While AI can greatly enhance efficiency and provide valuable insights, human verification remains crucial to ensure accountability, address nuanced situations, and maintain public trust in these critical areas.
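The triage pattern several of these items describe (content moderation, credit approval, fact-checking) can be sketched as a confidence-based router: the model acts alone only on clear-cut, low-stakes cases and sends everything else to a human queue. This is a minimal illustration, not a production design; the threshold value and domain tags are invented for the example.

```python
# Confidence-based triage: AI handles routine cases, humans handle the rest.
AUTO_THRESHOLD = 0.95   # model must be very confident to act without review
HIGH_STAKES = {"medical", "legal", "financial", "hiring"}


def route(case: dict) -> str:
    """Return 'auto' or 'human' for an AI-scored case.

    `case` carries a model confidence in [0, 1] and a domain tag.
    High-stakes domains always go to a human, regardless of confidence.
    """
    if case["domain"] in HIGH_STAKES:
        return "human"
    if case["confidence"] >= AUTO_THRESHOLD:
        return "auto"
    return "human"


cases = [
    {"id": 1, "domain": "spam_filter", "confidence": 0.99},
    {"id": 2, "domain": "spam_filter", "confidence": 0.62},  # edge case
    {"id": 3, "domain": "medical",     "confidence": 0.99},  # always human
]

for c in cases:
    print(c["id"], route(c))
# prints: "1 auto", "2 human", "3 human"
```

Note the design choice: confidence alone is never sufficient for high-stakes domains, which mirrors the answers above — a 99%-confident medical recommendation still goes to a clinician.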


Claude’s answer:
This is an insightful question about the limitations and challenges of AI automation. Here's a list of activities and tasks that are well-suited for AI analysis but will likely always require human verification and approval due to liability and responsibility concerns:

  1. Medical diagnosis and treatment plans

  2. Legal advice and court decisions

  3. Financial investment strategies and large transactions

  4. Safety-critical system designs (e.g., nuclear power plants, aircraft)

  5. Urban planning and major infrastructure projects

  6. Environmental impact assessments

  7. High-stakes policy decisions (government, corporate)

  8. Educational curriculum design and student evaluations

  9. Hiring and firing decisions in organizations

  10. Criminal investigations and sentencing recommendations

  11. Drug development and approval processes

  12. Risk assessments for insurance policies

  13. Merger and acquisition decisions for companies

  14. Military strategic planning and tactical decisions

  15. Cybersecurity threat analysis and response

  16. Crisis management in large organizations or governments

  17. Psychological evaluations and mental health treatment plans

  18. Child custody recommendations in family court

  19. Autonomous vehicle navigation in complex urban environments

  20. Content moderation decisions on social media platforms

These tasks are challenging to fully automate with AI for several reasons:

  1. Ethical considerations: Many of these decisions have significant ethical implications that require human judgment.

  2. Legal liability: In case of errors, it's unclear who would be held responsible if an AI system made the final decision.

  3. Complexity and context: These tasks often require understanding nuanced human contexts that AI may struggle with.

  4. High stakes: The consequences of errors in these areas can be severe, making human oversight crucial.

  5. Regulatory requirements: Many of these fields have strict regulations requiring human involvement.

  6. Public trust: People may not trust fully automated systems for critical decisions affecting their lives.

  7. Accountability: It's important to have human decision-makers who can be held accountable for outcomes.

  8. Handling exceptions: These tasks often involve unique situations that may fall outside an AI's training data.

  9. Emotional intelligence: Many of these tasks require empathy and emotional understanding that AI currently lacks.

  10. Interdisciplinary nature: These tasks often require integrating knowledge from multiple domains, which can be challenging for AI systems.

While AI can significantly enhance the efficiency and accuracy of these tasks, the final decision-making and responsibility will likely remain with humans for the foreseeable future. This creates a hybrid model where AI assists and augments human capabilities rather than fully replacing human involvement.
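The hybrid model described above — AI drafts, a human signs off before anything is executed — can be sketched as a simple approval gate in which execution is impossible without an explicit, recorded human decision. All class and field names here are illustrative assumptions, not a real library's API:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Callable, Optional


class Status(Enum):
    PENDING = "pending"      # AI has proposed; awaiting human review
    APPROVED = "approved"
    REJECTED = "rejected"
    EXECUTED = "executed"


@dataclass
class ProposedAction:
    """An AI-generated proposal that cannot run without human sign-off."""
    description: str
    execute: Callable[[], str]          # the side effect, deferred
    status: Status = Status.PENDING
    reviewer: Optional[str] = None      # who takes responsibility

    def approve(self, reviewer: str) -> None:
        self.status = Status.APPROVED
        self.reviewer = reviewer        # accountability is recorded

    def reject(self, reviewer: str) -> None:
        self.status = Status.REJECTED
        self.reviewer = reviewer

    def run(self) -> str:
        # The gate: execution requires prior explicit approval.
        if self.status is not Status.APPROVED:
            raise PermissionError(
                f"Cannot execute '{self.description}': status is {self.status.value}"
            )
        result = self.execute()
        self.status = Status.EXECUTED
        return result


# Example: an AI drafts a contract clause; a lawyer must approve it.
draft = ProposedAction(
    description="Send drafted indemnification clause to client",
    execute=lambda: "clause sent",
)

try:
    draft.run()                  # blocked: no human has signed off yet
except PermissionError as e:
    print(e)

draft.approve(reviewer="lawyer@firm.example")
print(draft.run())               # now permitted; the reviewer is on record
```

The key point the code makes concrete is accountability: the `reviewer` field ties every executed action to a named human, which is exactly what the liability argument in the answers above demands.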


Google Gemini’s answer:
Tasks Ideal for AI Assistance but Requiring Human Oversight

The following tasks are well-suited for AI assistance but necessitate human verification due to the potential consequences of errors:

High-Stakes Decision Making

  • Financial Investments: While AI can analyze market trends and provide recommendations, final investment decisions carry significant financial risk.

  • Legal Decisions: AI can assist in legal research and document analysis, but legal judgments involve complex ethical and societal implications.

  • Medical Diagnosis and Treatment: AI can support medical professionals by analyzing medical data, but final diagnoses and treatment plans require human expertise and empathy.

Tasks Involving Human Interaction and Judgment

  • Human Resources Decisions: AI can screen resumes and schedule interviews, but hiring and firing decisions involve subjective judgment and potential legal liabilities.

  • Customer Service: AI can handle routine inquiries, but complex customer issues often require human intervention and problem-solving skills.

  • Education and Assessment: AI can provide personalized learning recommendations and grade objective assessments, but evaluating subjective assignments and providing feedback requires human educators.

Tasks with Significant Societal Impact

  • Criminal Justice: AI can analyze crime data and predict potential crime hotspots, but decisions about arrests, sentencing, and parole require human oversight to ensure fairness and avoid biases.

  • Autonomous Vehicles: While AI can control vehicle functions under specific conditions, complex driving scenarios and unexpected events may require human intervention.

  • Autonomous Weapons Systems: Even with advanced AI, decisions about the use of lethal force should ultimately rest with humans.

Challenges in Automating Tasks

Several factors can hinder the full automation of tasks:

  • Data Quality and Bias: AI systems rely on high-quality data, which can be challenging to obtain and free from biases.

  • Complexity and Ambiguity: Many real-world tasks involve complex, ambiguous situations that require human judgment and adaptability.

  • Ethical Considerations: AI systems must be developed and used ethically, which can be difficult to ensure in all contexts.

  • Liability and Trust: Concerns about potential errors and negative consequences can limit the willingness to fully automate tasks.

  • Unpredictable Environments: Real-world environments are often dynamic and unpredictable, making it difficult for AI systems to anticipate all possible scenarios.

  • Lack of Common Sense: AI systems often lack common sense and understanding of the world, which can lead to errors in decision-making.

In conclusion, while AI can be a valuable tool for enhancing human capabilities, it is essential to maintain human oversight and judgment in critical areas to mitigate risks and ensure ethical outcomes.


MS Copilot’s answer:
Great question! There are several tasks and activities where AI, including large language models (LLMs), can provide significant assistance but will always require human verification and approval due to the associated responsibilities and liabilities. Here are some examples:

  1. Medical Diagnoses and Treatment Plans:

    • AI can analyze medical data and suggest diagnoses or treatment plans, but final decisions must be made by healthcare professionals to ensure patient safety and legal compliance.

  2. Legal Advice and Document Drafting:

    • AI can draft legal documents and provide preliminary legal advice, but lawyers must review and approve these to ensure accuracy and adherence to legal standards.

  3. Financial Planning and Investment Advice:

    • AI can analyze financial data and suggest investment strategies, but financial advisors need to verify and approve these recommendations to manage risk and comply with regulations.

  4. Autonomous Vehicle Navigation:

    • AI can control autonomous vehicles, but human oversight is necessary to handle unexpected situations and ensure safety.

  5. Content Moderation and Censorship:

    • AI can flag inappropriate content, but human moderators must review these decisions to avoid errors and ensure fairness.

  6. Hiring and Recruitment:

    • AI can screen resumes and suggest candidates, but human recruiters need to make the final hiring decisions to avoid biases and ensure a good fit.

  7. Creative Work (e.g., Writing, Art, Music):

    • AI can generate creative content, but human creators must review and refine this work to ensure quality and originality.

  8. Scientific Research and Experimentation:

    • AI can assist in data analysis and hypothesis generation, but scientists must validate and interpret the results to ensure accuracy and reliability.

  9. Policy Making and Governance:

    • AI can analyze data and suggest policies, but policymakers must make the final decisions to ensure they align with societal values and legal frameworks.

  10. Critical Infrastructure Management:

    • AI can monitor and manage critical infrastructure (e.g., power grids, water supply), but human operators must oversee these systems to handle emergencies and ensure reliability.

These tasks are challenging to fully automate because they involve complex decision-making, ethical considerations, and significant consequences if errors occur. Human oversight ensures that these factors are appropriately managed.