GPT-4o: Shakkarwar’s arguments may not align well with academic policies or standards, as they primarily lean on copyright law rather than academic integrity rules, conflating ownership rights with requirements for personal originality in educational assessments.

Asking AI services: Explain what's atypical or even wrong about the arguments used by the plaintiff.

ChatGPT-4o’s answer:
The arguments presented in this case by Kaustubh Shakkarwar, the plaintiff, raise some atypical points and logical inconsistencies. Here are key areas where his argument may be flawed or unconventional:

  1. Misapplication of Copyright Law and Plagiarism Standards:
    Shakkarwar’s argument centers around the idea that, to establish plagiarism, there must first be a violation of copyright law. However, plagiarism in academic settings is an ethical violation, not necessarily a legal one. Plagiarism policies are typically separate from copyright law, as they are focused on the originality and authenticity of a student's work rather than the legal ownership of content. Academic integrity guidelines often define the use of unauthorized assistance, like AI, as a breach of conduct, irrespective of copyright status.

  2. Contradiction in Defense:
    Shakkarwar claims both that there is no explicit prohibition on AI usage in his university’s policy and that the submission was his original creation without AI assistance. This presents a contradictory stance: arguing simultaneously that AI use should be permissible while denying having used AI at all. If the petition aims to prove his submission was not AI-generated, the argument regarding the permissibility of AI usage becomes irrelevant and dilutes his case.

  3. AI as a Tool Argument and Plagiarism:
    Shakkarwar suggests that AI is merely a tool, implying that its use should be acceptable like any other tool. While AI can indeed serve as an assistive tool, most academic institutions differentiate between permissible and impermissible tools, especially in exam settings where originality is key. Institutions often consider the use of generative AI as inherently different from tools like grammar checkers, as it can produce substantial, pre-formed content with minimal input. His argument doesn’t address the distinct academic standards that apply to exams versus general content creation.

  4. Copyright Misinterpretation:
    Shakkarwar appears to be conflating copyright ownership with academic integrity standards. He argues that because copyright law would assign authorship to him, the university’s plagiarism claim is invalid. However, copyright law does not protect against plagiarism; it only deals with the right to reproduce and distribute content. Plagiarism concerns the originality of ideas and adherence to institutional ethics, not ownership under copyright law.

  5. Evidentiary Demand for AI Detection:
    Shakkarwar challenges the university's decision by asserting they provided no "iota of evidence" that AI was involved in generating his answers. AI detection tools are not foolproof, but universities often apply them as part of broader assessments of academic honesty. His focus on the lack of evidence might make sense legally, but educational institutions generally have leeway to make determinations based on their policies and practices without needing forensic-level proof in every case.

  6. Lack of Precedent for AI Authorship in Exams:
    He also seeks a declaration that humans using AI are considered the authors of AI-generated work. This argument, while potentially relevant in copyright law discussions, has little bearing on academic standards, which prioritize authentic individual effort, especially in an exam context. The claim that authorship remains with the student using AI doesn't address the core issue of whether AI-generated answers fulfill the academic requirements for originality and independence in exams.

In summary, Shakkarwar’s arguments may not align well with academic policies or standards, as they primarily lean on copyright law rather than academic integrity rules, conflating ownership rights with requirements for personal originality in educational assessments.


Claude’s answer:
There are a few potential issues or atypical arguments in the student's petition against the Jindal Global Law School:

  1. Prohibition on AI-generated content:

    • The student argues that there is no explicit prohibition on using AI-generated content in exams. However, most educational institutions have academic integrity policies that prohibit submitting work that is not one's own, which would likely cover AI-generated content.

    • The student's claim that the university must state that "use of AI would amount to 'plagiarism'" to prosecute him is questionable. Academic integrity policies typically cover broader issues of academic dishonesty beyond just plagiarism.

  2. Copyright claims:

    • The student's argument that he holds the copyright over any AI-generated content because "AI is only a tool" is unconventional. Copyright generally vests in the actual author of a work, not in the mere user of a tool.

    • Relying on Section 2(d)(vi) of the Copyright Act, 1957 to claim ownership over AI-generated content is a novel and untested legal argument. This provision attributes authorship of computer-generated works to the person who causes the work to be created, but whether prompting a generative AI system meets that threshold remains unsettled.

  3. Lack of evidence:

    • Typically, the burden of proof would be on the student to demonstrate that the submitted work was his own original creation, not on the university to prove it was AI-generated.

    • The student's claim that the university has not presented any "iota of evidence" to support the allegation is unusual, as the university would likely have access to plagiarism detection tools or other evidence to justify its decision.

  4. Student's background:

    • The fact that the student is an LLM candidate in Intellectual Property and Technology Laws, has worked as a law researcher for the Chief Justice, and runs an AI platform related to litigation raises questions about his level of awareness regarding academic integrity policies and use of AI tools.

    • This background may undermine the student's claim that he was unaware of any prohibition on using AI-generated content in exams.

Overall, the student's arguments appear to be novel and unconventional, relying on legal interpretations that may not align with typical academic integrity policies and copyright principles. The university may have stronger grounds to uphold its decision based on established practices and policies regarding academic honesty and originality of student work.


Grok’s answer:
The arguments presented by Kaustubh Shakkarwar in his lawsuit against Jindal Global Law School for failing an exam due to alleged AI-generated content contain several aspects that can be considered atypical or potentially problematic:

  1. Lack of Clear Institutional Policy on AI Use:

    • Atypical Argument: Shakkarwar's contention that there is no explicit prohibition against using AI in exam submissions might hold if the university had not previously established norms or guidelines regarding the use of AI in academic work. However, students at most institutions are implicitly expected to uphold academic integrity, which includes the originality of their work.

    • Potential Issue: While the absence of an explicit rule might create ambiguity, academic institutions often uphold a broader principle of originality where the use of any external aid, including AI, could be seen as violating the spirit of these principles unless otherwise specified.

  2. Claiming Ownership Over AI-Generated Work:

    • Atypical Argument: Shakkarwar's argument that he should be considered the copyright holder of any AI-generated content he uses or modifies is innovative but legally untested in many jurisdictions.

    • Potential Issue: This claim might be legally complex because:

      • Copyright laws traditionally protect works by human authors. While AI might be a tool, the legal framework around AI-generated content remains underdeveloped, and courts might not accept this interpretation without more legal precedent.

      • If the AI was not created or trained by Shakkarwar, there could be questions about derivative works or the originality of the content if the AI's output is based on its pre-existing data or training.

  3. Denial of AI Use While Defending AI as a Tool:

    • Atypical Argument: Shakkarwar simultaneously denies using AI and defends the use of AI as a legitimate tool, suggesting a contradiction or at least a complex position.

    • Potential Issue: If he insists the work was his original creation without AI assistance, this could undermine his argument about the legitimacy of AI as a tool in academic settings. This dual stance might confuse the issue or potentially weaken his case if not carefully articulated.

  4. Lack of Evidence Requirement:

    • Atypical Argument: Demanding that the university provide "evidence" to substantiate its claim that his work was AI-generated might seem standard, but given the limitations of current AI detection technologies, such proof can be difficult to produce.

    • Potential Issue: AI-generated text can sometimes be identified by software, yet these tools aren't infallible. In practice, the burden of proof might fall on Shakkarwar to show that AI was not used and that the work was independently produced.

  5. Plagiarism vs. AI Use:

    • Atypical Argument: The argument that AI use does not constitute plagiarism unless copyright is violated introduces a nuanced legal debate into what might be considered straightforward academic integrity policies.

    • Potential Issue: While technically correct in legal terms, this might not align with academic standards where using AI to generate answers could be seen as undermining the educational goal of assessing a student's personal understanding and capabilities.

The case highlights the evolving intersection between technology and education, particularly how AI tools fit into existing legal and ethical frameworks. Shakkarwar's arguments push boundaries but might face challenges due to the lack of clear legal precedents on AI in educational contexts and potential contradictions in his defense strategy.


Perplexity’s answer:
The arguments presented by Kaustubh Shakkarwar in his lawsuit against Jindal Global Law School contain several atypical and potentially problematic aspects:

Contradictory Claims

Shakkarwar's primary argument is contradictory. He simultaneously claims that:

  1. He did not use AI to generate his exam answers.

  2. Even if he did use AI, it should be considered a legitimate tool, and he should retain authorship of the work.

This inconsistency weakens his overall case, as it appears he's trying to have it both ways.

Misapplication of Copyright Law

Shakkarwar cites Section 2(d)(vi) of India's Copyright Act, 1957, to argue that he would be the rightful author of any AI-generated work. However, this argument is problematic for several reasons:

  1. Irrelevance to Academic Integrity: Copyright law is not typically applicable to academic integrity issues. Universities have their own policies regarding plagiarism and academic honesty.

  2. Misinterpretation of the Law: The cited section of the Copyright Act predates the widespread use of AI in content creation. It's questionable whether this law can be directly applied to AI-generated content without further legal interpretation.

  3. Conflation of Authorship and Academic Integrity: Even if Shakkarwar were considered the legal author of AI-generated content, this doesn't necessarily mean using AI in an exam setting is academically acceptable.

Lack of University Policy as Defense

Shakkarwar argues that the university didn't explicitly prohibit AI-generated content. This argument is atypical because:

  1. Implicit Understanding: There's generally an implicit understanding in academia that exam answers should be the student's own work, not generated by external tools.

  2. Ethical Considerations: Relying on the absence of explicit rules ignores the ethical considerations of using AI in an exam setting.

Background and Expertise Contradiction

Shakkarwar's background as a law researcher and operator of an AI litigation platform potentially undermines his claim of ignorance about the implications of using AI in an academic setting.

Burden of Proof Argument

Shakkarwar claims the university hasn't provided substantial evidence of AI use. However, this argument overlooks the possibility that AI detection tools used by the university might be considered sufficient evidence in an academic context.

In conclusion, while Shakkarwar raises interesting points about the intersection of AI, copyright law, and academic integrity, his arguments are often contradictory or misapplied, making his case atypical and potentially flawed from a legal and academic standpoint.


Gemini’s answer:
Atypical Arguments in the Law Student's Case

The law student's argument against the university's decision to fail him for an AI-generated exam submission presents several atypical and potentially problematic claims:

  1. Lack of Explicit Prohibition: The student argues that the university's lack of a specific policy prohibiting AI-generated submissions means he cannot be penalized. This argument, while technically correct, ignores the broader academic integrity principles that universities uphold, which typically include honesty and originality in academic work.

  2. AI as a Tool, Not an Author: The student contends that AI is merely a tool and that the human using the tool is the true author of the work. While this argument might hold some merit in certain contexts, it's important to note that AI can generate highly sophisticated and original content, blurring the lines of authorship.

  3. Copyright and AI: The student's reliance on copyright law to defend his use of AI is questionable. Copyright law generally protects original works created by humans. While AI can generate original content, it's unclear whether such content would qualify for copyright protection, especially if it's merely a derivative of existing works or data.

  4. Burden of Proof: The student seems to shift the burden of proof onto the university, requiring it to provide definitive evidence of AI-generated content. However, in academic integrity cases, the student often bears the responsibility of demonstrating the originality and authenticity of their work.

Why These Arguments Might Be Problematic

These arguments, while technically sound in some respects, overlook the underlying ethical and academic concerns. They raise questions about the future of education in an age of AI, and the potential for misuse of technology to circumvent academic standards.

Ultimately, the court's decision will likely hinge on a balance between academic integrity, technological advancements, and the specific rules and regulations of the university. It will be interesting to see how the court addresses these complex issues and sets a precedent for future cases involving AI and academic integrity.