
Summary: Aidify replaces unreliable AI detection with a system that proves authorship by tracking how work is created, not just analyzing the final output. Its strength lies in verifiable process data (edits, timing, contributions), though it raises concerns around privacy and potential system gaming. Beyond education, this approach could become core infrastructure for trust and auditability across research, enterprise, legal, and creative industries.

@cbc

these teens built this program in… a month 👀 Stream Dragons’ Den now on @CBC Gem #DragonsDen #AI #AItools #Aidify

Source: TikTok

“Proof Over Prediction: Why Aidify Signals a Shift from AI Detection to Human Evidence”

by ChatGPT-5.5

The pitch for Aidify by Nico To and Edlyn To lands at a very specific fault line in the AI era: the collapse of trust in “AI detection” and the growing need for verifiable human provenance. Their core idea is deceptively simple but strategically powerful—stop trying to guess whether AI was used, and instead show how the work was actually created.

This shift—from probabilistic detection to behavioral evidence—has implications far beyond education.

The Core Idea: From Guessing to Reconstructing Work

At its heart, Aidify is not an AI detector. It is a process reconstruction system.

Instead of asking:

“Was this written by AI?”

Aidify asks:

“Can we see how this was written?”

The platform tracks:

  • Keystrokes and typing patterns

  • Edits, deletions, and rewrites

  • Copy-paste behavior

  • Time distribution across writing sessions

  • Contribution splits in collaborative work

This produces something far more robust than a percentage score—it creates a forensic timeline of authorship.
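As a rough illustration of how such a process log might be turned into a reviewable timeline, consider the sketch below. This is hypothetical, not Aidify's actual implementation; the event schema and the `suspicious_pastes` helper are invented for illustration:

```python
from dataclasses import dataclass

# Hypothetical event schema for a writing-process log.
@dataclass
class WritingEvent:
    t: float      # seconds since session start
    author: str   # who produced the event
    kind: str     # "type", "delete", or "paste"
    chars: int    # number of characters affected

def suspicious_pastes(events, min_chars=200):
    """Flag large paste events: a big block of text appearing at once
    breaks the incremental pattern of ordinary typing and is a natural
    candidate for human review."""
    return [e for e in events if e.kind == "paste" and e.chars >= min_chars]

events = [
    WritingEvent(12.0, "student", "type", 40),
    WritingEvent(95.5, "student", "delete", 12),
    WritingEvent(130.2, "student", "paste", 850),  # one large block at once
]

flagged = suspicious_pastes(events)
print(len(flagged), flagged[0].chars)  # 1 850
```

The point is not the specific heuristic but the shape of the evidence: discrete, timestamped events that can be replayed and inspected, rather than a single opaque score.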

The founders correctly identify the core weakness in current tools: AI detectors operate in a cat-and-mouse loop, where improving models constantly outpace detection systems. Aidify sidesteps this entirely by shifting the problem from output analysis to process verification.

Why This Matters: A Structural Weakness in AI Governance

Aidify exposes a deeper systemic issue:

We are trying to regulate AI outputs without controlling or understanding the process that generates them.

This is exactly the problem seen across:

  • AI copyright disputes

  • Model training opacity

  • Enterprise AI governance failures

  • Academic integrity enforcement

In all of these domains, output alone is insufficient evidence.

Aidify introduces a different paradigm:

Trust = reconstructability

This aligns closely with emerging provenance standards (e.g., C2PA) and with broader work on verifiability, attribution, and auditability in AI systems.
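One common way to make such a process record tamper-evident, in the spirit of provenance standards like C2PA, is to hash-chain successive draft snapshots so that history cannot be rewritten after the fact. A minimal sketch (the `chain_snapshots` function is illustrative, not part of Aidify or of any C2PA implementation):

```python
import hashlib

def chain_snapshots(snapshots):
    """Link successive draft snapshots into a hash chain: each entry
    commits to the draft text and to the previous hash, so altering any
    earlier snapshot breaks every later link in the chain."""
    prev = "0" * 64  # genesis value
    chain = []
    for text in snapshots:
        h = hashlib.sha256((prev + text).encode("utf-8")).hexdigest()
        chain.append(h)
        prev = h
    return chain

original = chain_snapshots(["Intro.", "Intro. Body.", "Intro. Body. Conclusion."])
tampered = chain_snapshots(["Intro.", "Intro. TAMPERED.", "Intro. Body. Conclusion."])

print(original[0] == tampered[0])  # True: first snapshot is identical
print(original[2] == tampered[2])  # False: one altered draft breaks later hashes
```

This is what "trust = reconstructability" looks like mechanically: the record does not prove the work is good, only that the recorded history is the history that actually occurred.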

Pros of the Aidify Approach

1. Moves Beyond the “AI Detection Illusion”

AI detection tools provide probabilistic guesses with no evidentiary grounding. Aidify replaces this with observable behavior, which is far harder to dispute.

2. Creates Audit Trails Instead of Scores

A timeline of edits and writing sessions functions as defensible evidence, not just a signal. This is crucial in high-stakes environments (education, legal, research).

3. Aligns Incentives Toward Authentic Work

Students are nudged toward genuine effort because:

  • Faking the process is often harder than doing the work

  • Effort becomes visible and measurable

4. Improves Collaboration Transparency

The group-work feature addresses a long-standing issue: free-riding in team assignments. Contribution becomes quantifiable.
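A naive version of such a contribution split can be computed directly from an edit log. The sketch below is hypothetical (the event format and `contribution_split` helper are invented), and character counts are only a crude proxy for the quality of a contribution:

```python
from collections import Counter

def contribution_split(events):
    """Aggregate net characters contributed per author from an event log.
    Deletions count negatively. Returns each author's share of the total."""
    totals = Counter()
    for author, kind, chars in events:
        totals[author] += chars if kind == "type" else -chars
    grand = sum(totals.values())
    return {author: round(n / grand, 2) for author, n in totals.items()}

log = [
    ("alice", "type", 600),
    ("bob", "type", 300),
    ("alice", "delete", 100),  # alice's net contribution: 500 chars
]
print(contribution_split(log))
```

Even this toy version makes free-riding visible in a way a finished document never can, which is the feature's real value.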

5. Low Cost, High Accessibility

A Chrome extension model enables:

  • Rapid adoption

  • Minimal infrastructure friction

  • Direct teacher-level distribution (bypassing slow institutional procurement)

6. Shifts the Narrative from Policing to Evidence

This is subtle but important:

  • AI detectors feel punitive

  • Process visibility feels fair and explainable

Cons and Risks

1. System Gaming Is Still Possible

As acknowledged in the pitch:

  • Users could simulate human typing

  • AI outputs could be manually retyped over time

This creates a new arms race—behavioral mimicry instead of textual mimicry.

2. Privacy and Surveillance Concerns

Tracking keystrokes and behavior raises serious issues:

  • Student monitoring and consent

  • Data storage and misuse risks

  • Psychological impact of constant observation

This could trigger resistance, especially in regions with strict data protection frameworks (e.g., GDPR).

3. False Sense of Security

A realistic risk:

“Visible effort” ≠ “original thinking”

A student could:

  • Heavily rely on AI

  • Then reconstruct the process manually

Aidify proves effort—not necessarily authentic cognition.

4. Institutional Friction

The Toronto District School Board example highlights a structural barrier:

  • Schools distrust small vendors

  • IT policies block extensions

  • Procurement favors large incumbents

This is a classic innovation-versus-bureaucracy mismatch.

5. Narrow Initial Use Case

Education is a strong entry point—but also a constrained one:

  • Budget limitations

  • Fragmented decision-making

  • Political sensitivity around AI in schools

Scaling beyond this niche is essential.

Where This Model Can Be Applied Next

The real opportunity lies in generalizing Aidify’s core principle:

“Make the creation process observable, auditable, and reconstructable.”

1. Legal and Compliance Work

  • Drafting contracts or legal briefs

  • Demonstrating authorship and reasoning trails

  • Auditable decision-making in regulated environments

This directly connects to AI governance frameworks and liability questions.

2. Scientific Research & Publishing

  • Tracking how research papers are written

  • Verifying originality and reproducibility

  • Linking drafts to data and sources

This is particularly relevant for organizations like John Wiley & Sons, where provenance and integrity are core assets.

3. Enterprise Knowledge Work

  • Auditing AI-assisted outputs

  • Tracking employee contributions

  • Ensuring compliance in AI-augmented workflows

This could integrate into enterprise AI governance systems.

4. Software Development

  • Tracking how code is written (human vs AI assistance)

  • Debugging authorship disputes

  • Improving accountability in collaborative coding

Especially relevant as AI-generated code becomes increasingly common.

5. Journalism and Media

  • Verifying how articles are produced

  • Combating misinformation and synthetic content

  • Creating transparent editorial workflows

6. Creative Industries

  • Music, writing, design: proving human contribution

  • Addressing AI training and authorship disputes

  • Supporting royalty attribution models

7. Financial and Audit Systems

  • Recording decision-making processes

  • Demonstrating compliance with regulations

  • Providing traceability for automated or AI-assisted decisions

8. Government and Public Sector

  • Transparent policy drafting

  • Audit trails for decision-making

  • Reducing corruption and opacity

Strategic Insight: This Is Not an EdTech Tool—It’s Infrastructure

Aidify looks like an educational tool, but structurally it resembles something much bigger:

A lightweight provenance layer for human-AI interaction

This aligns with a broader shift underway across the AI landscape:

  • From content access → intelligence systems

  • From outputs → processes

  • From trust claims → verifiable evidence

In that sense, Aidify sits in the same conceptual space as:

  • Provenance standards (C2PA)

  • AI audit trails

  • Usage tracking and attribution systems

Final Assessment

Aidify is not solving AI detection—it is rendering it obsolete.

Its real innovation is reframing the problem:

  • Not “Can we detect AI?”

  • But “Can we prove human work?”

That shift matters.

However, its long-term success will depend on whether it can:

  • Overcome privacy concerns

  • Expand beyond education

  • Integrate into broader AI governance ecosystems

If it does, Aidify could evolve from a clever Chrome extension into something far more consequential:

A foundational layer for trust in a world where human and machine outputs are indistinguishable.